2019/01/27
-----
Fig. BERT Tasks (image source).
-----
11 NLP Tasks
1. GLUE: The General Language Understanding Evaluation (GLUE) benchmark
1.1. MNLI: Multi-Genre Natural Language Inference
1.2. QQP: Quora Question Pairs
1.3. QNLI: Question Natural Language Inference
1.4. SST-2: The Stanford Sentiment Treebank
1.5. CoLA: The Corpus of Linguistic Acceptability
1.6. STS-B: The Semantic Textual Similarity Benchmark
1.7. MRPC: Microsoft Research Paraphrase Corpus
1.8. RTE: Recognizing Textual Entailment
2. SQuAD: The Stanford Question Answering Dataset
3. NER: CoNLL 2003 Named Entity Recognition (NER) dataset
4. SWAG: The Situations With Adversarial Generations (SWAG) dataset
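All of the tasks above are handled with the same input representation described in the BERT paper: a `[CLS]` token, the first segment, a `[SEP]` token, and, for sentence-pair tasks (e.g. MNLI, QQP, SQuAD question/passage), a second segment followed by another `[SEP]`, with segment ids distinguishing the two. A minimal sketch of that packing step (the helper name is hypothetical, not from the paper's code):

```python
def build_bert_input(tokens_a, tokens_b=None):
    """Pack one or two tokenized segments into BERT's input format:
    [CLS] A [SEP] (B [SEP]), with segment ids 0 for A and 1 for B."""
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"]
    segment_ids = [0] * len(tokens)
    if tokens_b is not None:
        # Second segment (sentence-pair tasks): append B [SEP] with id 1.
        tokens += tokens_b + ["[SEP]"]
        segment_ids += [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

# Single-sentence task (e.g. SST-2, CoLA): one segment.
toks_1, segs_1 = build_bert_input(["great", "movie"])
# Sentence-pair task (e.g. MNLI, QQP): two segments.
toks_2, segs_2 = build_bert_input(["the", "cat"], ["a", "feline"])
```

For classification tasks the final hidden state of `[CLS]` is fed to a task-specific output layer; for span tasks like SQuAD, start/end positions are predicted over the second segment.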
-----
Paper
# BERT
Devlin, Jacob, et al. "BERT: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
https://arxiv.org/pdf/1810.04805.pdf
-----