
R1 0528

by DeepSeek · Released May 2025

Open Source
56.4
avg score
Rank #85
Better than 64% of all models
Context
164K tokens (~82 books)
Input $/1M
$0.50
Output $/1M
$2.15
Type
text
License
Open Source
Benchmarks
25 tested
Data updated today
About

May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but the model is open-sourced and ships with fully open reasoning tokens. It's 671B parameters in size, with 37B active...

Tested on 25 benchmarks with a 56.4% average. Top scores: Chatbot Arena Elo — Overall (1421.7), MATH level 5 (96.6%), OpenCompass — AIME2025 (89.0%).

Looking for similar performance at lower cost?
Qwen2.5 Coder 7B Instruct scores 56.0 (99% as good) at $0.03/1M input · 94% cheaper
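The ratios in the comparison above follow from the figures quoted on this page; a minimal sketch of the arithmetic (scores and prices taken from the listing):

```python
# Compare the two models on average score and input price, as quoted above.
r1_score, r1_input_price = 56.4, 0.50      # R1 0528: avg score, $ per 1M input tokens
qwen_score, qwen_input_price = 56.0, 0.03  # Qwen2.5 Coder 7B Instruct

relative_score = qwen_score / r1_score           # ~0.99 -> "99% as good"
savings = 1 - qwen_input_price / r1_input_price  # 0.94  -> "94% cheaper"

print(f"{relative_score:.0%} as good, {savings:.0%} cheaper")
```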
Capabilities
coding
58.0
#44 globally
reasoning
33.5
#77 globally
math
73.6
#27 globally
knowledge
56.9
#69 globally
speed
40.0
#42 globally
language
79.2
#53 globally
Benchmark Scores
Tested on 25 benchmarks · Ranked across 7 categories
Score Distribution (all 233 models)
Aider polyglot

Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

71.4
OpenCompass — LiveCodeBenchV6

OpenCompass Live Code Bench v6. Fresh competitive programming problems to evaluate code generation without memorization.

61.0
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

41.6
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

82.8
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

29.0
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

21.2
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

96.6
OpenCompass — AIME2025

OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.

89.0
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

66.4
Score legend: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Recently Happened
DeepSeek R1 0528 posted 87.2% on GPQA Diamond
Mar 24, 2026
Links
Documentation
Community
BenchGecko API
deepseek-r1-0528
Specifications
  • Type: text
  • Context: 164K tokens (~82 books)
  • Released: May 2025
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.003
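The ~$0.003 per-message figure is consistent with the listed per-token prices; a minimal sketch, assuming an illustrative message size of 500 input and 1,000 output tokens (the split is an assumption, not from this page):

```python
INPUT_PRICE = 0.50   # $ per 1M input tokens (from this page)
OUTPUT_PRICE = 2.15  # $ per 1M output tokens (from this page)

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one message at the listed per-million-token prices."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# Assumed typical message: 500 input tokens, 1,000 output tokens.
print(f"${message_cost(500, 1000):.4f}")
```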
Available On
DeepSeek — $0.50 / 1M input tokens
Share & Export
R1 0528 is an open-source text AI model by DeepSeek, released in May 2025. It has an average benchmark score of 56.4. Context window: 164K tokens.