Better than 9% of all models
Context: N/A
Input $/1M: TBD
Output $/1M: TBD
Type: text
License: Open Source
Benchmarks: 4 tested
Data updated today
About
Tested on 4 benchmarks with 16.7% average. Top scores: GSM8K (21.3%), ARC AI2 (15.2%), MMLU (15.2%).
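As a sanity check, the 16.7% figure is the plain mean of the four benchmark scores reported on this page (GSM8K 21.3, ARC AI2 15.2, MMLU 15.2, Winogrande 15.2):

```python
# Recompute the reported average from the four listed benchmark scores.
scores = {
    "GSM8K": 21.3,
    "ARC AI2": 15.2,
    "MMLU": 15.2,
    "Winogrande": 15.2,
}

average = sum(scores.values()) / len(scores)
print(round(average, 1))  # 16.7
```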
Capabilities
- math: 21.3 (#155 globally)
- knowledge: 15.2 (#204 globally)
Benchmark Scores
Tested on 4 benchmarks · Ranked across 2 categories
[Score Distribution chart: this model's position among all 233 models, 0–100 scale]
math
GSM8K
21.3 · Grade school math word problems. 8,500 problems testing multi-step arithmetic reasoning. A foundational math benchmark.
knowledge
ARC AI2
15.2 · AI2 Reasoning Challenge. Grade-school science questions requiring multi-step reasoning. Easy and Challenge sets test different difficulty levels.
MMLU
15.2 · Massive Multitask Language Understanding. 57 subjects from STEM, humanities, and social sciences. The most widely cited knowledge benchmark.
Winogrande
15.2 · Commonsense coreference resolution. Tests understanding of pronoun references in ambiguous sentences.
Score legend: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
deepseek-coder-6-7b
Specifications
- Type: text
- Context: N/A
- Released: Jan 2024
- License: Open Source
- Status: benchmark-only
Frequently Asked Questions
DeepSeek Coder 6.7B is an open-source text AI model by DeepSeek, released in January 2024. Across the four benchmarks tested, it has an average score of 16.7.