Avg score: 8.6
Rank: #226 (better than 2% of all models)
Context: N/A
Input $/1M: TBD
Output $/1M: TBD
Type: text-generation
License: Open Source
Benchmarks: 6 tested
Data updated today
About
DistilGPT2 is a distilled text-generation model with 2,664K downloads on Hugging Face.
Tested on 6 benchmarks with a 4.0% average. Top scores: MUSR (11.2%), IFEval (6.1%), BBH (HuggingFace, 2.8%).
Capabilities
- reasoning: 11.2 (#128 globally)
- math: 0.6 (#216 globally)
- knowledge: 1.7 (#225 globally)
- language: 6.1 (#151 globally)
- general: 2.8 (#69 globally)
Benchmark Scores
Tested on 6 benchmarks · Ranked across 5 categories
Score Distribution (all 231 models)
reasoning
MUSR: 11.2 · HuggingFace MuSR (Multi-Step Reasoning). Tests multi-hop reasoning requiring chaining multiple facts together.
math
MATH Level 5: 0.6 · HuggingFace evaluation of MATH Level 5 problems. Competition math requiring advanced reasoning and proof construction.
knowledge
MMLU-PRO: 2.1 · HuggingFace MMLU-Pro. A harder version of MMLU with 10 answer choices instead of 4 and more challenging questions.
GPQA: 1.2 · HuggingFace evaluation of GPQA (Graduate-Level Google-Proof Q&A). PhD-level science questions that cannot be easily searched.
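The numbers above can be cross-checked: assuming each category score is the plain mean of its benchmarks (consistent with the figures on this page), the knowledge score of 1.7 is the average of MMLU-PRO and GPQA, and the 4.0% overall average covers all six tested benchmarks. A quick sanity check in Python, using the scores listed on this page:

```python
# Benchmark scores as listed on this page (percent)
scores = {
    "MUSR": 11.2,          # reasoning
    "IFEval": 6.1,         # language
    "BBH": 2.8,            # general
    "MATH Level 5": 0.6,   # math
    "MMLU-PRO": 2.1,       # knowledge
    "GPQA": 1.2,           # knowledge
}

# Overall average across all six tested benchmarks
overall = sum(scores.values()) / len(scores)
print(round(overall, 1))  # 4.0

# The knowledge category averages its two benchmarks
knowledge = (scores["MMLU-PRO"] + scores["GPQA"]) / 2
print(round(knowledge, 2))  # 1.65, displayed as 1.7
```

Note the page's headline "avg score" of 8.6 is a separate composite and does not match this raw benchmark mean.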
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
distilbert-distilgpt2
Specifications
- Type: text-generation
- Context: N/A
- Released: Mar 2022
- License: Open Source
- Status: Active
Available On
- DistilBERT (pricing TBD)
Frequently Asked Questions
DistilGPT2 is an open-source text-generation model published by DistilBERT, released in March 2022. It has an average benchmark score of 8.6.