R1 0528 vs DeepSeek V3.2 Speciale
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
DeepSeek V3.2 Speciale wins on 8/9 benchmarks
DeepSeek V3.2 Speciale wins 8 of 9 shared benchmarks and leads every listed category: speed, math, knowledge, language, and coding.
Category leads
speed · DeepSeek V3.2 Speciale
math · DeepSeek V3.2 Speciale
knowledge · DeepSeek V3.2 Speciale
language · DeepSeek V3.2 Speciale
coding · DeepSeek V3.2 Speciale
Hype vs Reality
Attention vs performance
R1 0528
#53 by perf · no attention signal
DeepSeek V3.2 Speciale
#6 by perf · #5 by attention
Best value
DeepSeek V3.2 Speciale
2.2x better value than R1 0528
R1 0528
43.7 pts/$
$1.32/M
DeepSeek V3.2 Speciale
97.8 pts/$
$0.80/M
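The "2.2x better value" headline is simply the ratio of the two points-per-dollar figures above; a quick sketch:

```python
# Points-per-dollar as listed above.
r1_pts_per_dollar = 43.7    # R1 0528
v32_pts_per_dollar = 97.8   # DeepSeek V3.2 Speciale

# "2.2x better value" is the ratio of the two figures.
ratio = v32_pts_per_dollar / r1_pts_per_dollar
print(f"{ratio:.1f}x")  # 2.2x
```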
Vendor risk
Mixed exposure
One or more vendors flagged
DeepSeek (vendor for both models) · $3.4B · Tier 1
Head to head
9 benchmarks · 2 models
R1 0528 · DeepSeek V3.2 Speciale
Artificial Analysis · Agentic Index
R1 0528 leads by +20.8 (DeepSeek V3.2 Speciale records 0.0, which may reflect a missing score rather than a measured result)
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
R1 0528
20.8
DeepSeek V3.2 Speciale
0.0
Artificial Analysis · Coding Index
DeepSeek V3.2 Speciale leads by +13.9
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
R1 0528
24.0
DeepSeek V3.2 Speciale
37.9
Artificial Analysis · Quality Index
DeepSeek V3.2 Speciale leads by +2.4
R1 0528
27.1
DeepSeek V3.2 Speciale
29.4
OpenCompass · AIME2025
DeepSeek V3.2 Speciale leads by +7.0
R1 0528
89.0
DeepSeek V3.2 Speciale
96.0
OpenCompass · GPQA-Diamond
DeepSeek V3.2 Speciale leads by +6.1
R1 0528
80.6
DeepSeek V3.2 Speciale
86.7
OpenCompass · HLE
DeepSeek V3.2 Speciale leads by +14.2
R1 0528
14.4
DeepSeek V3.2 Speciale
28.6
OpenCompass · IFEval
DeepSeek V3.2 Speciale leads by +11.7
R1 0528
80.0
DeepSeek V3.2 Speciale
91.7
OpenCompass · LiveCodeBenchV6
DeepSeek V3.2 Speciale leads by +19.9
R1 0528
61.0
DeepSeek V3.2 Speciale
80.9
OpenCompass · MMLU-Pro
DeepSeek V3.2 Speciale leads by +2.0
R1 0528
83.5
DeepSeek V3.2 Speciale
85.5
Full benchmark table
| Benchmark | R1 0528 | DeepSeek V3.2 Speciale |
|---|---|---|
| Artificial Analysis · Agentic Index | 20.8 | 0.0 |
| Artificial Analysis · Coding Index | 24.0 | 37.9 |
| Artificial Analysis · Quality Index | 27.1 | 29.4 |
| OpenCompass · AIME2025 | 89.0 | 96.0 |
| OpenCompass · GPQA-Diamond | 80.6 | 86.7 |
| OpenCompass · HLE | 14.4 | 28.6 |
| OpenCompass · IFEval | 80.0 | 91.7 |
| OpenCompass · LiveCodeBenchV6 | 61.0 | 80.9 |
| OpenCompass · MMLU-Pro | 83.5 | 85.5 |
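The 8-of-9 headline can be re-derived from the table; a minimal tally, with scores copied from the rows above as (R1 0528, DeepSeek V3.2 Speciale) pairs:

```python
# (R1 0528, DeepSeek V3.2 Speciale) scores per shared benchmark.
scores = {
    "Agentic Index":   (20.8, 0.0),
    "Coding Index":    (24.0, 37.9),
    "Quality Index":   (27.1, 29.4),
    "AIME2025":        (89.0, 96.0),
    "GPQA-Diamond":    (80.6, 86.7),
    "HLE":             (14.4, 28.6),
    "IFEval":          (80.0, 91.7),
    "LiveCodeBenchV6": (61.0, 80.9),
    "MMLU-Pro":        (83.5, 85.5),
}

# Count the benchmarks where the second model scores higher.
wins = sum(1 for r1, v32 in scores.values() if v32 > r1)
print(f"DeepSeek V3.2 Speciale wins {wins}/{len(scores)}")  # 8/9
```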
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| R1 0528 | $0.50 | $2.15 | 164K tokens | $9.13 |
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens | $6.00 |
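The listed figures are consistent with a 1:1 input/output average for the blended $/M price and a 3:1 input:output token split for the 10M-token monthly projection; both splits are inferred from the numbers here, not documented by the page. A sketch under those assumptions:

```python
def blended_per_m(inp: float, out: float) -> float:
    # Blended $/M appears to be a simple 1:1 average of the two rates (assumed).
    return (inp + out) / 2

def monthly_cost(inp: float, out: float, total_m: float = 10.0,
                 input_share: float = 0.75) -> float:
    # The $/mo projections match a 3:1 input:output token split (assumed).
    return total_m * (input_share * inp + (1 - input_share) * out)

# R1 0528: $0.50 in / $2.15 out
print(blended_per_m(0.50, 2.15))  # ≈ 1.325, listed as $1.32/M
print(monthly_cost(0.50, 2.15))   # ≈ 9.125, listed as $9.13/mo

# DeepSeek V3.2 Speciale: $0.40 in / $1.20 out
print(blended_per_m(0.40, 1.20))  # ≈ 0.80, listed as $0.80/M
print(monthly_cost(0.40, 1.20))   # ≈ 6.00, listed as $6.00/mo
```

Under these assumptions the sketch reproduces all four listed figures for both models, which is why the 1:1 and 3:1 splits are a plausible reading.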
People also compared
DeepSeek V3.2 Speciale vs GPT-5.5 Pro
DeepSeek V3.2 Speciale vs GPT-5.5
DeepSeek V3.2 Speciale vs GPT-5 Chat
Claude Mythos Preview vs DeepSeek V3.2 Speciale
DeepSeek V3.2 Speciale vs Qwen3.5 397B A17B
Claude Instant vs DeepSeek V3.2 Speciale
DeepSeek V3.2 Speciale vs Step 3.5 Flash
DeepSeek-V2 (MoE-236B, May 2024) vs DeepSeek V3.2 Speciale