
KorQuAD 1.0

The Korean Question Answering Dataset




What is KorQuAD 1.0?


KorQuAD 1.0 is a large-scale question-answering dataset built for Korean machine reading comprehension. We also analyzed the dataset to understand the distribution of answers and the types of reasoning required to answer each question. To ensure quality, the data-generation process benchmarks that of SQuAD v1.0.







Getting Started


KorQuAD 1.0 is a large-scale Korean dataset for the machine reading comprehension task, consisting of human-generated questions about Wikipedia articles. We benchmarked the data collection process of SQuAD v1.0 and crowdsourced 70,000+ question-answer pairs: 1,637 articles and 70,079 question-answer pairs in total. Of the articles, 1,420 are used for the training set, 140 for the dev set, and 77 for the test set; of the question-answer pairs, 60,407 are in the training set, 5,774 in the dev set, and 3,898 in the test set.
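
The split sizes above can be verified programmatically. The sketch below assumes KorQuAD 1.0 is distributed in the SQuAD v1.0 JSON schema (a top-level "data" list of articles, each with "paragraphs" containing "qas"); the file path is a placeholder for wherever you saved the download.

```python
import json

def count_korquad(path):
    """Count articles and question-answer pairs in a SQuAD-style JSON file."""
    with open(path, encoding="utf-8") as f:
        articles = json.load(f)["data"]
    n_articles = len(articles)
    n_qas = sum(len(para["qas"])
                for article in articles
                for para in article["paragraphs"])
    return n_articles, n_qas

# e.g. count_korquad("KorQuAD_v1.0_train.json") should report
# 1,420 articles and 60,407 question-answer pairs for the training set.
```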

Download a copy of the dataset (distributed under the CC BY-ND 2.0 KR license).

By submitting a model through Codalab, you agree that we may compute its test scores and publish them on the leaderboard. Submitted models, source code, etc. remain the property of the participant and are licensed under whatever terms the participant specifies.




To evaluate your models, we have also made available the evaluation script we will use for official evaluation, along with a sample prediction file that the script takes as input. To run the evaluation, use: python evaluate-korquad_v1.0.py [path_to_dev-v1.0] [path_to_predictions].
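
A prediction file is a single JSON object mapping each dev-set question id to one predicted answer string, the same shape the SQuAD v1.0 evaluator expects. A minimal sketch (the id strings and answers below are made up for illustration):

```python
import json
import os
import tempfile

# Hypothetical predictions: question id -> predicted answer span (a string).
predictions = {
    "6566495-0-0": "1839년",
    "6566495-0-1": "파우스트",
}

# Write the file the evaluation script will read. ensure_ascii=False keeps
# Korean text human-readable in the output file.
path = os.path.join(tempfile.gettempdir(), "predictions.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(predictions, f, ensure_ascii=False)
```

You would then pass this file as the second argument to the evaluation script.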




Once you have built a model that performs to your expectations on the dev set, you can submit it to get official scores. You are limited to one official attempt per week. To preserve the integrity of test results, we do not release the test set to the public. Instead, we require you to submit your model so that we can run it on the test set for you. Here's a tutorial walking you through official evaluation of your model.






Leaderboard


Here are the Exact Match (EM) and F1 scores evaluated on the test set of KorQuAD 1.0.
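
For intuition, the two metrics can be sketched as follows. This is a simplified, SQuAD-style token-level version; the official KorQuAD script differs in normalization details (and, when a question has multiple gold answers, takes the maximum score over them), so use the released evaluator for real numbers.

```python
from collections import Counter

def exact_match(prediction: str, ground_truth: str) -> bool:
    """EM: strict string equality after trivial whitespace normalization."""
    return prediction.strip() == ground_truth.strip()

def f1(prediction: str, ground_truth: str) -> float:
    """Harmonic mean of token-level precision and recall
    between the predicted and gold answer spans."""
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

EM rewards only exact answers, while F1 gives partial credit for overlapping spans, which is why F1 scores on the leaderboard run several points higher than EM.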


Rank | Reg. Date | Model | Team | EM | F1
- | 2018.10.17 | Human Performance | | 80.17 | 91.20
1 | 2023.06.27 | EXAONE-LM-v1.0 (single model) | LG AI Research | 89.71 | 96.23
2 | 2024.02.02 | MoBERT-Large V2.0 (single model, 355M) | ETRI XAI-NLP Team | 89.05 | 95.92
3 | 2022.12.13 | VAIV AI | VAIV Company AI Lab (Kisu Yang) | 88.28 | 95.79
4 | 2023.08.25 | MoBERT-Large V1.0 (single model, 355M) | ETRI XAI-NLP Team | 88.56 | 95.66
5 | 2020.08.24 | SDS-XFormer+ (single model) | Samsung SDS AI Research | 88.10 | 95.57
6 | 2022.03.18 | HAIQV-LM-Large V1.0 (single model) | Hanwha Systems/ICT NLP Part | 87.71 | 95.39
7 | 2020.07.13 | LGSP-LM-Large V2.0 | LG AI NLP Team | 87.46 | 95.39
8 | 2021.11.03 | SkERT-Large 2.0.0 (ensemble) | Skelter Labs | 87.94 | 95.25
9 | 2021.12.02 | InfoLab KorLM v0.4 (single model) | KAIST InfoLab | 88.17 | 95.24
10 | 2021.12.07 | SkERT-Large 2.0.1 (ensemble) | Skelter Labs | 87.58 | 95.18
11 | 2021.09.09 | InfoLab KorLM v0.3 | KAIST InfoLab | 87.79 | 95.16
12 | 2020.01.08 | SkERT-Large (single model) | Skelter Labs | 87.66 | 95.15
13 | 2020.11.09 | Americano (single) | SK Planet RB Dialogue Team (JunSeok Kim) | 86.81 | 95.13
14 | 2020.07.13 | BERT (single model) | Anonymous | 86.99 | 95.12
15 | 2021.04.08 | Summer is coming 1.1 (single model) | Anonymous | 87.84 | 95.08
16 | 2020.09.08 | Tubu the Destroyer 1.2 (single model) | Anonymous | 87.87 | 95.06
17 | 2023.03.18 | LDCC-LM (single model) | Lotte Data Communication AI Technical Team (Wonchul Kim) | 87.17 | 95.04
18 | 2019.10.25 | KorBERT-Large v1.0 | ETRI ExoBrain Team | 87.76 | 95.02
19 | 2021.06.02 | SF-Xformer-Large (single model) | Samsung Finance AI Center | 87.07 | 94.82
20 | 2021.03.29 | Summer is coming 1.0 (single model) | Anonymous | 86.81 | 94.81
21 | 2020.07.08 | BERT (single model) | Anonymous | 86.45 | 94.78
22 | 2021.05.20 | InfoLab KorLM v0.2 | KAIST InfoLab | 87.48 | 94.77
23 | 2020.01.07 | SkERT-LARGE (single model) | Skelter Labs | 87.25 | 94.75
24 | 2019.06.26 | LaRva-Kor-Large+ + CLaF (single) | Clova AI LaRva Team | 86.84 | 94.75
25 | 2022.05.03 | APplus (single model) | ActionPower | 86.92 | 94.71
26 | 2020.01.03 | SkERT Large (single model) | Skelter Labs | 87.28 | 94.66
27 | 2021.12.11 | mT5-Large v1.0 (single model) | Everdoubling & AISchool | 87.38 | 94.65
28 | 2019.06.04 | BERT-CLKT-MIDDLE (single model) | Anonymous | 86.71 | 94.55
29 | 2021.11.23 | TBA | Anonymous | 86.76 | 94.54
30 | 2022.10.15 | LDCC-LM (single model) | Lotte Data Communication AI Technical Team | 86.30 | 94.45
31 | 2021.04.10 | InfoLab KorLM v0.1 | KAIST InfoLab | 83.99 | 94.45
32 | 2019.06.03 | LaRva-Kor-Large + CLaF (single) | Clova AI LaRva Team (LPT) | 86.79 | 94.37
33 | 2021.11.26 | Aibril multilingual T5 - Large (single) | Aibril NLP AI team | 87.02 | 94.37
34 | 2021.10.20 | KonanNet v1.0 (single model) | Konan Technology Inc. | 86.07 | 94.33
35 | 2020.01.02 | SkERT-Large (single model) | Skelter Labs | 86.30 | 94.28
36 | 2019.03.15 | BERT-CLKT (single model) | Anonymous | 86.22 | 94.08
37 | 2019.07.17 | KorBERT | Anonymous | 86.12 | 94.02
38 | 2019.05.07 | LaRva-Kor+ + CLaF (single) | Clova AI LaRva Team (LPT) | 85.35 | 93.96
39 | 2019.04.24 | LaRva-Kor+ (single) | Clova AI LaRva Team (LPT) | 85.25 | 93.94
40 | 2020.05.18 | SDS-NET (single model) | Sanghwan Bae & Soonhwan Kwon | 85.81 | 93.92
41 | 2020.03.24 | ElBERT-v1.0 + MixTune + Data Augmentation (single) | Enliple AI Lab | 86.17 | 93.84
42 | 2020.05.26 | Opt (single model) | Anonymous | 85.68 | 93.77
43 | 2021.08.25 | NAMZ-ALBERT V2 (single) | Mediazen NAMZ AI Research Team and KISTI National Supercomputing Center | 85.12 | 93.57
44 | 2019.07.25 | Bert-Base-Kor-LEN (ensemble) | ChangWook Jun | 85.51 | 93.46
45 | 2020.05.27 | Baseline (single model) | Anonymous | 84.97 | 93.38
46 | 2021.02.10 | NAMZ-ALBERT (single) | Mediazen NAMZ AI Research Team and KISTI National Supercomputing Center | 84.66 | 93.36
47 | 2020.10.23 | Espresso (single) | SK Planet RB Dialogue Team (JunSeok Kim) | 84.35 | 93.35
48 | 2021.04.21 | Hansol-base-v1.1 (single model) | Hansol Inticube AI convergence LAB | 84.38 | 93.22
49 | 2019.06.29 | BERT-DAL-Masking-Morp (single) | JunSeok Kim | 85.15 | 93.20
50 | 2020.07.08 | ALBERT Large (single model) | Anonymous | 84.12 | 93.07
51 | 2020.10.12 | Cappuccino (single) | SK Planet RB Dialogue Team (JunSeok Kim) | 83.48 | 93.00
52 | 2019.12.12 | HanBert-54k-N (single model) | TwoBlock Ai | 81.94 | 92.93
53 | 2019.09.20 | ETRI BERT (single model) | deepfine | 84.56 | 92.91
54 | 2019.05.24 | BERT fine-tuned (ensemble) | Oh Yeon Taek | 83.99 | 92.89
55 | 2021.01.17 | ActionBasic (single model) | ActionPower | 83.76 | 92.70
56 | 2019.12.19 | HanBert-54k-ML (single model) | TwoBlock Ai | 81.89 | 92.65
57 | 2019.06.19 | ETRI BERT + Saltlux ADAM API (single model) | Saltlux Inc. AI Labs, AIR team | 84.15 | 92.64
58 | 2019.04.10 | BERT-Kor (single) | Clova AI LPT Team | 83.79 | 92.63
59 | 2019.03.29 | BERT insp. by GPT-2 + KHAIII (single) | Kakao NLP Team | 84.12 | 92.62
60 | 2019.06.19 | BERT-DA-Masking-Morph (single) | JunSeok Kim | 84.20 | 92.59
61 | 2019.12.20 | HanBert-90k-N (single model) | TwoBlock Ai | 81.61 | 92.48
62 | 2019.12.20 | HanBert-90k-ML (single model) | TwoBlock Ai | 81.35 | 92.41
63 | 2019.09.10 | ETRI BERT (single model) | deepfine | 83.48 | 92.39
64 | 2019.04.01 | BERT-Multilingual+CLAF+ReTK (single) | KIPI R&D Center | 83.76 | 92.27
65 | 2019.01.30 | BERT LM fine-tuned + KHAIII + DHA (single) | Kakao NLP Team | 83.32 | 92.10
66 | 2019.12.04 | BERT+VA (single) | JoonOh-Oh | 83.68 | 92.00
67 | 2019.01.24 | BERT LM fine-tuned (single) + KHAIII | Kakao NLP Team | 82.14 | 91.85
68 | 2019.01.30 | BERT multilingual (ensemble) | mypeacefulcode | 82.53 | 91.67
69 | 2021.11.23 | T5-base (single model) | RippleAI | 81.68 | 91.65
70 | 2021.03.31 | Hansol-Base-single-v1 (single) | Hansol Inticube AI convergence LAB | 82.40 | 91.57
71 | 2019.03.28 | BERT KOR (ensemble) | DeepNLP ONE Team | 82.68 | 91.47
72 | 2019.06.13 | BERT-DA-Morph (single) | JunSeok Kim | 82.48 | 91.47
73 | 2019.06.03 | DynamicConv + Self-Attention + N-gram masking (single) | Enliple AI and Chonbuk National University, Cognitive Computing Lab | 80.94 | 91.45
74 | 2019.06.03 | BERT_LM_fine-tuned (single) | Anonymous | 82.04 | 91.40
75 | 2020.06.20 | BERT+RNN (ensemble model) | KHY (Enliple AI NLP Challenge) | 82.22 | 91.39
76 | 2019.02.14 | BERT fine-tuned (single) | GIST-Dongju Park | 82.27 | 91.24
77 | 2019.03.21 | BERT+KEFT (single) | KT BigData BU | 82.27 | 91.23
78 | 2019.12.01 | BERT (single) | JoonOh-Oh | 81.68 | 91.12
79 | 2019.02.22 | BERT/RPST (single) | Anonymous | 82.25 | 91.11
80 | 2019.03.08 | BERT + ES-Nori (single model) | Chang-Uk Jeong @ RNBSOFT AI Chatbot Team | 81.94 | 91.04
81 | 2019.10.15 | BERT-base-unigramLM(Kudo) (single model) | AIRI@domyounglee | 78.55 | 91.04
82 | 2019.06.19 | BERT-Kor-morph (single) | AIRI | 80.09 | 91.01
83 | 2019.04.08 | BERT (single) | Bnonymous | 80.58 | 90.75
84 | 2020.06.20 | BERT+RNN (single model) | KHY (Enliple AI NLP Challenge) | 81.40 | 90.74
85 | 2019.01.10 | EBB-Net + BERT (single model) | Enliple AI | 80.12 | 90.71
86 | 2019.04.10 | Bert single-model | NerdFactory, AI research | 81.63 | 90.68
87 | 2020.02.12 | BERT-Multilingual (single model) | Anonymous | 81.09 | 90.61
88 | 2019.09.05 | ETRI BERT (single model) | deepfine | 80.86 | 90.61
89 | 2020.06.25 | BERT-Small + Transfer Learning + Adversarial Training (ensemble) | TmaxAI (Enliple AI NLP Challenge) | 81.73 | 90.55
90 | 2019.07.11 | BERT-Fintent V1 + Utagger-UoU (single) | GDchain AI Lab | 79.45 | 90.38
91 | 2019.03.13 | BERT-Multilingual (single model) | Initiative | 80.66 | 90.35
92 | 2019.05.08 | BERT-Multiling-morph (single) | kwonmha | 79.35 | 90.34
93 | 2019.03.05 | BERT-multilingual (single model) | HYU-Minho Ryu | 80.45 | 90.27
94 | 2019.09.18 | Mobile-BERT (18M Params & 36.6MB size) (single) | Enliple AI and Chonbuk National University, Cognitive Computing Lab | 81.07 | 90.25
95 | 2019.02.21 | Bert_FineTuning (single model) | Star Ji | 71.75 | 90.12
96 | 2019.05.08 | BERT-Multi-Kr (single) | paul.kim | 71.86 | 89.83
97 | 2019.03.26 | BERT (single model) | BDOT | 71.78 | 89.82
98 | 2020.04.26 | BERTbase (single model) | Anonymous | 71.63 | 89.76
99 | 2019.06.17 | BERT-Multilingual | lyeoni, NEOWIZ AI Lab | 71.47 | 89.71
100 | 2020.02.17 | Bert_Multi (multi model) | EunsongGoh | 66.73 | 89.62
101 | 2020.06.25 | BERT-Small + Transfer Learning + Adversarial Training (single model) | TmaxAI (Enliple AI NLP Challenge) | 80.40 | 89.59
102 | 2019.04.29 | BERT_Multi (single) | EunsongGoh | 71.40 | 89.49
103 | 2019.01.11 | BERT-Multiling-simple (single) | kwonmha | 70.75 | 89.44
104 | 2019.02.19 | BERT multilingual finetune TPU (single) | jskim_kbnow | 71.19 | 89.20
105 | 2019.05.21 | Bert-Base-Multilingual (single) | ybigta KorQuAD | 70.50 | 89.14
106 | 2020.06.19 | scBert (single model) | PNU, delosycho@gmail.com (Enliple AI NLP Challenge) | 79.17 | 88.99
107 | 2020.06.19 | scBert (single model) | PNU (Enliple AI NLP Challenge) | 79.27 | 88.98
108 | 2020.06.20 | SDAC (single model) | Team BS (Enliple AI NLP Challenge) | 78.71 | 88.95
109 | 2019.06.01 | BERT-Multilingual fine-tuned+OKT (single) | JunSeok Kim | 77.12 | 88.92
110 | 2020.06.20 | scBert (single model) | PNU, Sangyeon, delosycho@gmail.com (Enliple AI NLP Challenge) | 78.86 | 88.90
111 | 2020.06.19 | SDA (single model) | Team BS (Enliple AI NLP Challenge) | 78.42 | 88.79
112 | 2020.06.19 | BerT3Q ensemble T3Q-NLP | Team t3q.com (Enliple AI NLP Challenge) | 78.99 | 88.65
113 | 2019.05.04 | BERT-multilingual (single) | Anonymous | 70.57 | 88.64
114 | 2020.06.19 | 5959 (single model) | GYKIM (Enliple AI NLP Challenge) | 78.63 | 88.58
115 | 2019.04.26 | BERT-multilingual (single model) | Tae Hwan Jung@graykode, Kyung Hee Univ | 69.86 | 88.49
116 | 2020.06.25 | BERT-small-SeqBoost (single) | Yonsei Univ. / Korea Univ. (Enliple AI NLP Challenge) | 78.27 | 88.29
117 | 2018.12.28 | BERT-Multilingual (single) | Clova AI LPT Team | 77.04 | 87.85
118 | 2020.06.18 | predictions-200619 (single model) | RnDeep (Enliple AI NLP Challenge) | 76.94 | 87.50
119 | 2020.06.18 | BERT-Dep (single) | Virssist (Enliple AI NLP Challenge) | 77.30 | 87.45
120 | 2020.06.19 | [AI NLP] bert small | Enliple AI NLP Challenge | 76.68 | 87.43
121 | 2020.06.19 | BERT-Dep2 (single) | Virssist (Enliple AI NLP Challenge) | 76.99 | 87.33
122 | 2020.06.19 | korquad_v1.0_0619 | Enliple AI NLP Challenge | 76.81 | 87.33
123 | 2020.06.19 | [AI NLP] bert small | Enliple AI NLP Challenge | 76.53 | 87.17
124 | 2020.06.19 | BERT-small-SeqBoost (single) | Yonsei Univ. / Korea Univ. (Enliple AI NLP Challenge) | 73.27 | 87.11
125 | 2019.03.04 | DocQA (single) | CLaF | 75.63 | 85.91
126 | 2019.12.20 | DistilBERT-base-multilingual (default huggingface) (single model) | Heeryon Cho | 66.88 | 85.72
127 | 2019.03.04 | BiDAF (single) | CLaF | 71.88 | 83.00
128 | 2019.12.19 | DistilBERT-base-multilingual (from huggingface) (single model) | Anonymous | 62.90 | 81.29
- | 2018.10.17 | Baseline | | 71.52 | 82.99