
R1 0528 vs DeepSeek V3.2 Speciale

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

DeepSeek V3.2 Speciale wins 8 of 9 shared benchmarks, leading in speed, math, knowledge, language, and coding.

Category leads
speed · DeepSeek V3.2 Speciale
math · DeepSeek V3.2 Speciale
knowledge · DeepSeek V3.2 Speciale
language · DeepSeek V3.2 Speciale
coding · DeepSeek V3.2 Speciale
Hype vs Reality
R1 0528 · #53 by perf · no attention signal · QUIET
DeepSeek V3.2 Speciale · #6 by perf · #5 by attention · DESERVED
Best value
DeepSeek V3.2 Speciale · 2.2x better value than R1 0528
R1 0528 · 43.7 pts/$ · $1.32/M
DeepSeek V3.2 Speciale · 97.8 pts/$ · $0.80/M
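The pts/$ metric appears to divide an aggregate benchmark score by a blended $/M price; the listed prices roughly match the average of each model's input and output rates. A minimal sketch of that arithmetic — the aggregate scores used below (~57.7 and ~78.2) are back-solved from the displayed figures, not published values:

```python
def points_per_dollar(score: float, blended_price_per_m: float) -> float:
    """Value metric: aggregate benchmark points per blended $/M tokens."""
    return score / blended_price_per_m

# Aggregate scores here are hypothetical (reverse-engineered from the page).
r1 = points_per_dollar(57.7, 1.32)    # ~43.7 pts/$ for R1 0528
v32 = points_per_dollar(78.2, 0.80)   # ~97.8 pts/$ for DeepSeek V3.2 Speciale
ratio = v32 / r1                      # ~2.2x better value
```

The 2.2x headline is just the ratio of the two pts/$ figures.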
Vendor risk
Vendor flagged · both models come from the same vendor
DeepSeek · $3.4B · Tier 1 · Higher risk
Head to head
R1 0528 vs DeepSeek V3.2 Speciale

Artificial Analysis · Agentic Index
R1 0528 leads by +20.8 · R1 0528: 20.8 · DeepSeek V3.2 Speciale: 0.0
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"

Artificial Analysis · Coding Index
DeepSeek V3.2 Speciale leads by +13.9 · R1 0528: 24.0 · DeepSeek V3.2 Speciale: 37.9
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.

Artificial Analysis · Quality Index
DeepSeek V3.2 Speciale leads by +2.3 · R1 0528: 27.1 · DeepSeek V3.2 Speciale: 29.4

OpenCompass · AIME2025
DeepSeek V3.2 Speciale leads by +7.0 · R1 0528: 89.0 · DeepSeek V3.2 Speciale: 96.0

OpenCompass · GPQA-Diamond
DeepSeek V3.2 Speciale leads by +6.1 · R1 0528: 80.6 · DeepSeek V3.2 Speciale: 86.7

OpenCompass · HLE
DeepSeek V3.2 Speciale leads by +14.2 · R1 0528: 14.4 · DeepSeek V3.2 Speciale: 28.6

OpenCompass · IFEval
DeepSeek V3.2 Speciale leads by +11.7 · R1 0528: 80.0 · DeepSeek V3.2 Speciale: 91.7

OpenCompass · LiveCodeBenchV6
DeepSeek V3.2 Speciale leads by +19.9 · R1 0528: 61.0 · DeepSeek V3.2 Speciale: 80.9

OpenCompass · MMLU-Pro
DeepSeek V3.2 Speciale leads by +2.0 · R1 0528: 83.5 · DeepSeek V3.2 Speciale: 85.5
Full benchmark table
| Benchmark | R1 0528 | DeepSeek V3.2 Speciale |
| --- | --- | --- |
| Artificial Analysis · Agentic Index | 20.8 | 0.0 |
| Artificial Analysis · Coding Index | 24.0 | 37.9 |
| Artificial Analysis · Quality Index | 27.1 | 29.4 |
| OpenCompass · AIME2025 | 89.0 | 96.0 |
| OpenCompass · GPQA-Diamond | 80.6 | 86.7 |
| OpenCompass · HLE | 14.4 | 28.6 |
| OpenCompass · IFEval | 80.0 | 91.7 |
| OpenCompass · LiveCodeBenchV6 | 61.0 | 80.9 |
| OpenCompass · MMLU-Pro | 83.5 | 85.5 |
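The "wins 8 of 9 shared benchmarks" summary can be checked directly against the table; a minimal sketch:

```python
# Per-benchmark scores as (R1 0528, DeepSeek V3.2 Speciale), from the table.
scores = {
    "AA Agentic Index": (20.8, 0.0),
    "AA Coding Index": (24.0, 37.9),
    "AA Quality Index": (27.1, 29.4),
    "AIME2025": (89.0, 96.0),
    "GPQA-Diamond": (80.6, 86.7),
    "HLE": (14.4, 28.6),
    "IFEval": (80.0, 91.7),
    "LiveCodeBenchV6": (61.0, 80.9),
    "MMLU-Pro": (83.5, 85.5),
}

# Count benchmarks where V3.2 Speciale scores strictly higher.
v32_wins = sum(1 for r1, v32 in scores.values() if v32 > r1)
print(f"{v32_wins} of {len(scores)}")  # 8 of 9
```

The only loss is the Agentic Index, where DeepSeek V3.2 Speciale shows 0.0.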
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
| --- | --- | --- | --- | --- |
| R1 0528 | $0.50 | $2.15 | 164K tokens | $9.13 |
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens | $6.00 |
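The projected $/mo figures are consistent with 10M tokens per month split 3:1 between input and output; that split is an inference from the numbers, not stated on the page. A minimal sketch under that assumption:

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Monthly cost in dollars for total_m million tokens,
    with input_share of them billed at the input rate."""
    input_tokens_m = total_m * input_share          # e.g. 7.5M input tokens
    output_tokens_m = total_m - input_tokens_m      # e.g. 2.5M output tokens
    return input_tokens_m * input_per_m + output_tokens_m * output_per_m

r1 = projected_monthly_cost(0.50, 2.15)   # 9.125 -> displayed as $9.13
v32 = projected_monthly_cost(0.40, 1.20)  # 6.0   -> displayed as $6.00
```

Different input/output mixes change the ranking's magnitude but not its direction, since DeepSeek V3.2 Speciale is cheaper on both rates.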