KorQuAD 1.0

The Korean Question Answering Dataset




What is KorQuAD 1.0?


KorQuAD 1.0 is a dataset built for Korean Machine Reading Comprehension. The answer to every question is a sub-span of a paragraph in the corresponding Wikipedia article. The dataset is constructed in the same way as the Stanford Question Answering Dataset (SQuAD) v1.0.
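As a minimal illustration of the SQuAD v1.0-style layout that KorQuAD 1.0 follows, each answer is stored as a text string plus a character offset into its paragraph. The record below is hypothetical (the field names are the standard SQuAD v1.0 fields), shown at the paragraph level:

# Hypothetical paragraph-level record in the SQuAD v1.0 layout.
sample = {
    "context": "대한민국의 수도는 서울특별시이다.",
    "qas": [
        {
            "id": "sample-0",
            "question": "대한민국의 수도는 어디인가?",
            "answers": [{"text": "서울특별시", "answer_start": 10}],
        }
    ],
}

answer = sample["qas"][0]["answers"][0]
start = answer["answer_start"]
# The answer is a sub-span of the paragraph:
assert sample["context"][start:start + len(answer["text"])] == answer["text"]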






Getting Started


The full KorQuAD 1.0 dataset consists of 66,181 question-answer pairs over 10,645 paragraphs from 1,560 Wikipedia articles, split into a Training set of 60,407 and a Dev set of 5,774 question-answer pairs.
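A quick way to verify these counts locally is to walk the released JSON files. A minimal sketch, assuming the standard SQuAD v1.0 nesting (data → paragraphs → qas); the file names are assumptions, so point them at the files you actually downloaded:

import json

def korquad_stats(path):
    """Count articles, paragraphs, and QA pairs in a SQuAD v1.0-style JSON file."""
    with open(path, encoding="utf-8") as f:
        dataset = json.load(f)["data"]
    n_articles = len(dataset)
    n_paragraphs = sum(len(article["paragraphs"]) for article in dataset)
    n_qas = sum(len(p["qas"]) for article in dataset for p in article["paragraphs"])
    return n_articles, n_paragraphs, n_qas

# File names are assumptions; adjust to your local copies.
for split in ("KorQuAD_v1.0_train.json", "KorQuAD_v1.0_dev.json"):
    print(split, korquad_stats(split))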




An official evaluation script and a sample prediction file are provided for evaluating models. To run the evaluation, execute python evaluate-korquad_v1.0.py [path_to_dev-v1.0] [path_to_predictions].
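The prediction file follows the usual SQuAD convention: a single JSON object mapping each question id to the predicted answer string. A minimal sketch of writing one and invoking the script; the id values and file names below are hypothetical placeholders:

import json

# Placeholder: substitute your model's inference over the Dev set here.
predictions = {
    "0000000-0-0": "서울특별시",  # question id -> predicted answer (example values are hypothetical)
    "0000000-0-1": "1945년",
}

with open("predictions.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, ensure_ascii=False)

# Then evaluate against the Dev set (file names are assumptions):
#   python evaluate-korquad_v1.0.py KorQuAD_v1.0_dev.json predictions.json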




Once you have a model that performs satisfactorily on the Dev set, submit it to receive an official score and a place on the leaderboard. To preserve the integrity of test results, the Test set is not released publicly; instead, you submit your model so that it can be run on the Test set. A tutorial walking through the official evaluation process follows.






Leaderboard


Exact Match (EM) and F1 scores evaluated on the KorQuAD 1.0 Test set. (A sketch of how these metrics are computed follows the table.)


Rank | Reg. Date  | Model                                                              | Team                                                                | EM    | F1
-    | 2018.10.17 | Human Performance                                                  |                                                                     | 80.17 | 91.20
1    | 2020.01.08 | SkERT-Large (single model)                                         | Skelter Labs                                                        | 87.66 | 95.15
2    | 2019.10.25 | KorBERT-Large v1.0                                                 | ETRI ExoBrain Team                                                  | 87.76 | 95.02
3    | 2020.01.07 | SkERT-LARGE (single model)                                         | Skelter Labs                                                        | 87.25 | 94.75
4    | 2019.06.26 | LaRva-Kor-Large+ + CLaF (single)                                   | Clova AI LaRva Team                                                 | 86.84 | 94.75
5    | 2020.01.03 | SkERT Large (single model)                                         | Skelter Labs                                                        | 87.28 | 94.66
6    | 2019.06.04 | BERT-CLKT-MIDDLE (single model)                                    | Anonymous                                                           | 86.71 | 94.55
7    | 2019.06.03 | LaRva-Kor-Large + CLaF (single)                                    | Clova AI LaRva Team (LPT)                                           | 86.79 | 94.37
8    | 2020.01.02 | SkERT-Large (single model)                                         | Skelter Labs                                                        | 86.30 | 94.28
9    | 2019.03.15 | {BERT-CLKT} (single model)                                         | Anonymous                                                           | 86.22 | 94.08
10   | 2019.07.17 | KorBERT                                                            | Anonymous                                                           | 86.12 | 94.02
11   | 2019.05.07 | LaRva-Kor+ + CLaF (single)                                         | Clova AI LaRva Team (LPT)                                           | 85.35 | 93.96
12   | 2019.04.24 | LaRva-Kor+ (single)                                                | Clova AI LaRva Team (LPT)                                           | 85.25 | 93.94
13   | 2019.07.25 | Bert-Base-Kor-LEN (ensemble)                                       | ChangWook Jun                                                       | 85.51 | 93.46
14   | 2019.06.29 | BERT-DAL-Masking-Morp (single)                                     | JunSeok Kim                                                         | 85.15 | 93.20
15   | 2019.12.12 | HanBert-54k-N (single model)                                       | TwoBlock Ai                                                         | 81.94 | 92.93
16   | 2019.09.20 | ETRI BERT (single model)                                           | deepfine                                                            | 84.56 | 92.91
17   | 2019.05.24 | BERT fine-tuned (ensemble)                                         | Oh Yeon Taek                                                        | 83.99 | 92.89
18   | 2019.12.19 | HanBert-54k-ML (single model)                                      | TwoBlock Ai                                                         | 81.89 | 92.65
19   | 2019.06.19 | ETRI BERT + Saltlux ADAM API (single model)                        | Saltlux Inc. AI Labs, AIR team                                      | 84.15 | 92.64
20   | 2019.04.10 | BERT-Kor (single)                                                  | Clova AI LPT Team                                                   | 83.79 | 92.63
21   | 2019.03.29 | BERT insp. by GPT-2 + KHAIII (single)                              | Kakao NLP Team                                                      | 84.12 | 92.62
22   | 2019.06.19 | BERT-DA-Masking-Morph (single)                                     | JunSeok Kim                                                         | 84.20 | 92.59
23   | 2019.12.20 | HanBert-90k-N (single model)                                       | TwoBlock Ai                                                         | 81.61 | 92.48
24   | 2019.12.20 | HanBert-90k-ML (single model)                                      | TwoBlock Ai                                                         | 81.35 | 92.41
25   | 2019.09.10 | ETRI BERT (single model)                                           | deepfine                                                            | 83.48 | 92.39
26   | 2019.04.01 | BERT-Multilingual+CLAF+ReTK (single)                               | KIPI R&D Center1                                                    | 83.76 | 92.27
27   | 2019.01.30 | BERT LM fine-tuned + KHAIII + DHA (single)                         | Kakao NLP Team                                                      | 83.32 | 92.10
28   | 2019.12.04 | BERT+VA (single)                                                   | JoonOh-Oh                                                           | 83.68 | 92.00
29   | 2019.01.24 | BERT LM fine-tuned (single) + KHAIII                               | Kakao NLP Team                                                      | 82.14 | 91.85
30   | 2019.01.30 | BERT multilingual (ensemble)                                       | mypeacefulcode                                                      | 82.53 | 91.67
31   | 2019.03.28 | BERT KOR (ensemble)                                                | DeepNLP ONE Team                                                    | 82.68 | 91.47
32   | 2019.06.13 | {BERT-DA-Morph} (single)                                           | JunSeok Kim                                                         | 82.48 | 91.47
33   | 2019.06.03 | DynamicConv + Self-Attention + N-gram masking (single)             | Enliple AI and Chonbuk National University, Cognitive Computing Lab | 80.94 | 91.45
34   | 2019.06.03 | BERT_LM_fine-tuned (single)                                        | Anonymous                                                           | 82.04 | 91.40
35   | 2019.02.14 | BERT fine-tuned (single)                                           | GIST-Dongju Park                                                    | 82.27 | 91.24
36   | 2019.03.21 | BERT+KEFT (single)                                                 | KT BigData BU                                                       | 82.27 | 91.23
37   | 2019.12.01 | BERT (single)                                                      | JoonOh-Oh                                                           | 81.68 | 91.12
38   | 2019.02.22 | BERT/RPST (single)                                                 | Anonymous                                                           | 82.25 | 91.11
39   | 2019.03.08 | BERT + ES-Nori (single model)                                      | Chang-Uk Jeong @ RNBSOFT AI Chatbot Team                            | 81.94 | 91.04
40   | 2019.10.15 | {BERT-base-unigramLM(Kudo)} (single model)                         | AIRI@domyounglee                                                    | 78.55 | 91.04
41   | 2019.06.19 | BERT-Kor-morph (single)                                            | AIRI                                                                | 80.09 | 91.01
42   | 2019.04.08 | BERT (single)                                                      | Bnonymous                                                           | 80.58 | 90.75
43   | 2019.01.10 | EBB-Net + BERT (single model)                                      | Enliple AI                                                          | 80.12 | 90.71
44   | 2019.04.10 | Bert single-model                                                  | NerdFactory, AI research                                            | 81.63 | 90.68
45   | 2019.09.05 | {ETRI BERT} (single model)                                         | deepfine                                                            | 80.86 | 90.61
46   | 2019.07.11 | BERT-Fintent V1 + Utagger-UoU (single)                             | GDchain AI Lab                                                      | 79.45 | 90.38
47   | 2019.03.13 | BERT-Multilingual (single model)                                   | Initiative                                                          | 80.66 | 90.35
48   | 2019.05.08 | BERT-Multiling-morph (single)                                      | kwonmha                                                             | 79.35 | 90.34
49   | 2019.03.05 | BERT-multilingual (single model)                                   | HYU-Minho Ryu                                                       | 80.45 | 90.27
50   | 2019.09.18 | Mobile-BERT (18M Params & 36.6MB size) (single)                    | Enliple AI and Chonbuk National University, Cognitive Computing Lab | 81.07 | 90.25
51   | 2019.02.21 | Bert_FineTuning (Single model)                                     | Star Ji                                                             | 71.75 | 90.12
52   | 2019.05.08 | BERT-Multi-Kr (single)                                             | paul.kim                                                            | 71.86 | 89.83
53   | 2019.03.26 | BERT (single model)                                                | BDOT                                                                | 71.78 | 89.82
54   | 2019.06.17 | BERT-Multilingual                                                  | lyeoni, NEOWIZ AI Lab                                               | 71.47 | 89.71
55   | 2019.04.29 | BERT_Multi (Single)                                                | EunsongGoh                                                          | 71.40 | 89.49
56   | 2019.01.11 | BERT-Multiling-simple (single)                                     | kwonmha                                                             | 70.75 | 89.44
57   | 2019.02.19 | BERT multilingual finetune TPU (single)                            | jskim_kbnow                                                         | 71.19 | 89.20
58   | 2019.05.21 | Bert-Base-Multilingual (Single)                                    | ybigta KorQuAD                                                      | 70.50 | 89.14
59   | 2019.06.01 | {BERT-Multilingual fine-tuned+OKT} (single)                        | JunSeok Kim                                                         | 77.12 | 88.92
60   | 2019.05.04 | BERT-multilingual (single)                                         | Anonymous                                                           | 70.57 | 88.64
61   | 2019.04.26 | BERT-multilingual (single model)                                   | Tae Hwan Jung@graykode, Kyung Hee Univ                              | 69.86 | 88.49
62   | 2018.12.28 | BERT-Multilingual (single)                                         | Clova AI LPT Team                                                   | 77.04 | 87.85
63   | 2019.03.04 | DocQA (single)                                                     | CLaF                                                                | 75.63 | 85.91
64   | 2019.12.20 | DistilBERT-base-multilingual (default huggingface) (single model)  | Heeryon Cho                                                         | 66.88 | 85.72
65   | 2019.03.04 | BiDAF (single)                                                     | CLaF                                                                | 71.88 | 83.00
66   | 2019.12.19 | DistilBERT-base-multilingual (from huggingface) (single model)     | Anonymous                                                           | 62.90 | 81.29
-    | 2018.10.17 | Baseline                                                           |                                                                     | 71.52 | 82.99
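For reference, EM and F1 are the standard SQuAD-style answer-matching metrics. The sketch below shows the usual formulation (answer normalization plus token-level overlap); the official evaluate-korquad_v1.0.py is authoritative, and its normalization and tokenization details for Korean may differ (e.g., character-level matching), so treat this only as an approximation:

from collections import Counter

def normalize(s):
    # Minimal normalization: lower-case and collapse whitespace.
    # The official script's normalization may differ; this is an approximation.
    return " ".join(s.lower().split())

def exact_match(prediction, ground_truth):
    # EM: 1 if the normalized strings are identical, else 0.
    return float(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction, ground_truth):
    # F1: harmonic mean of precision/recall over overlapping tokens.
    pred_tokens = normalize(prediction).split()
    gt_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

# Per question, the score is the maximum over all gold answers;
# the leaderboard numbers are averages over the Test set, scaled to 0-100.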