
DeepSeek R1 Distill Qwen 1.5B

by DeepSeek · Released Jan 2025

Average score: 13.9 · Rank #215 · Better than 7% of all models
  • Context: N/A
  • Input $/1M: TBD
  • Output $/1M: TBD
  • Type: text-generation
  • License: Open Source
  • Benchmarks: 6 tested
About

A text-generation model from deepseek-ai, with 630K downloads on HuggingFace.

Tested on 6 benchmarks with a 10.4% average. Top scores: IFEval (34.6%), MATH Level 5 (16.9%), BBH (HuggingFace) (4.7%).

Capabilities
  • reasoning: 3.0 (#172 globally)
  • math: 16.9 (#178 globally)
  • knowledge: 1.4 (#227 globally)
  • language: 34.6 (#118 globally)
  • general: 4.7 (#61 globally)
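The capability scores appear to track the benchmark scores one-to-one (reasoning ↔ MuSR at 3.0, math ↔ MATH Level 5 at 16.9, language ↔ IFEval at 34.6, general ↔ BBH at 4.7). A plausible reading, not stated explicitly on the page, is that each capability is the mean of its category's benchmarks, with "knowledge" averaging MMLU-Pro and GPQA:

```python
# Sketch: a possible derivation of the "knowledge" capability score.
# Assumption (not confirmed by the page): a capability score is the mean
# of the benchmark scores in its category.
knowledge_benchmarks = {"MMLU-Pro": 2.1, "GPQA": 0.8}

knowledge = sum(knowledge_benchmarks.values()) / len(knowledge_benchmarks)
print(f"{knowledge:.2f}")  # prints 1.45, shown on the page rounded to 1.4
```

If that assumption holds, the low knowledge rank (#227) follows directly from the two weakest benchmark results.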
Benchmark Scores

Tested on 6 benchmarks · Ranked across 5 categories
[Score distribution chart across all 231 models]
MUSR: 3.0

HuggingFace MuSR (Multi-Step Reasoning). Tests multi-hop reasoning requiring chaining multiple facts together.

MATH Level 5: 16.9

HuggingFace evaluation of MATH Level 5 problems. Competition math requiring advanced reasoning and proof construction.

MMLU-Pro: 2.1

HuggingFace MMLU-Pro. Harder version of MMLU with 10 answer choices instead of 4 and more challenging questions.

GPQA: 0.8

HuggingFace evaluation of GPQA (Graduate-Level Google-Proof Q&A). PhD-level science questions that cannot be easily searched.
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
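The 10.4% average cited in the About section is consistent with a simple mean over the six benchmark scores (the four listed above plus IFEval and BBH). A quick check:

```python
# The six benchmark scores reported on this page.
scores = {
    "MUSR": 3.0,
    "MATH Level 5": 16.9,
    "MMLU-Pro": 2.1,
    "GPQA": 0.8,
    "IFEval": 34.6,
    "BBH": 4.7,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # prints 10.35, matching the stated 10.4% once rounded
```

The header's 13.9 average differs from this 10.4%, so it presumably reflects a different aggregation (e.g. category-level rather than benchmark-level averaging).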
Links
  • Documentation
  • Community
  • BenchGecko API: deepseek-ai-deepseek-r1-distill-qwen-15b
Specifications
  • Type: text-generation
  • Context: N/A
  • Released: Jan 2025
  • License: Open Source
  • Status: Active
Available On
  • DeepSeek: pricing TBD
DeepSeek R1 Distill Qwen 1.5B is an open-source text-generation AI model by DeepSeek, released in January 2025. It has an average benchmark score of 13.9.