Claude Opus 4.6 vs o4 Mini vs Gemini 3.1 Pro Preview vs DeepSeek V3.2 Speciale
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Gemini 3.1 Pro Preview wins 12 of 21 shared benchmarks, leading the reasoning, knowledge, speed, and agentic categories. Claude Opus 4.6 takes the other 9 benchmarks and the math, coding, and arena categories.
Category leads
reasoning · Gemini 3.1 Pro Preview
knowledge · Gemini 3.1 Pro Preview
math · Claude Opus 4.6
coding · Claude Opus 4.6
speed · Gemini 3.1 Pro Preview
agentic · Gemini 3.1 Pro Preview
arena · Claude Opus 4.6
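The summary above is reproducible from the per-benchmark scores listed further down: take the top scorer per benchmark, record the margin over the runner-up, and count wins. A minimal sketch in Python, using a small sample of this page's scores (the helper names are ours):

```python
# Sketch: reproduce the per-benchmark "X leads by +Y" lines and the win tally
# from raw scores. Sample scores are copied from this page; higher is better.
SCORES = {
    "ARC-AGI": {"Claude Opus 4.6": 94.0, "o4 Mini": 58.7,
                "Gemini 3.1 Pro Preview": 98.0},
    "Chess Puzzles": {"Claude Opus 4.6": 17.0, "o4 Mini": 26.0,
                      "Gemini 3.1 Pro Preview": 55.0},
    "SWE-Bench verified": {"Claude Opus 4.6": 78.7,
                           "Gemini 3.1 Pro Preview": 75.6},
}

def leader(scores: dict[str, float]) -> tuple[str, float]:
    """Return (top model, margin over the runner-up) for one benchmark."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[0][0], ranked[0][1] - ranked[1][1]

wins: dict[str, int] = {}
for bench, scores in SCORES.items():
    model, margin = leader(scores)
    wins[model] = wins.get(model, 0) + 1
    print(f"{bench}: {model} leads by +{margin:.1f}")

for model, n in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{model} wins {n} of {len(SCORES)} shared benchmarks")
```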
Hype vs Reality
Attention vs performance
Claude Opus 4.6
#54 by perf·#4 by attention
o4 Mini
#79 by perf·#13 by attention
Gemini 3.1 Pro Preview
#36 by perf·no attention signal
DeepSeek V3.2 Speciale
#4 by perf·#5 by attention
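The panel above pairs two independent rankings without defining how to read them together. One hedged interpretation, sketched below, is the gap between attention rank and performance rank: a model ranked far higher by attention than by results is over-hyped, and vice versa. This reading and the labels are our assumption, not the page's definition.

```python
# Sketch: one possible "hype gap" reading of the panel above. This metric is
# our assumption, not the page's definition. Ranks are 1 = best.
RANKS = {  # model: (performance rank, attention rank or None for no signal)
    "Claude Opus 4.6": (54, 4),
    "o4 Mini": (79, 13),
    "Gemini 3.1 Pro Preview": (36, None),
    "DeepSeek V3.2 Speciale": (4, 5),
}

for model, (perf, attention) in RANKS.items():
    if attention is None:
        print(f"{model}: no attention signal")
        continue
    gap = attention - perf  # negative: attention outruns measured performance
    label = "under-hyped" if gap > 0 else "over-hyped" if gap < 0 else "aligned"
    print(f"{model}: perf #{perf} vs attention #{attention} -> {label} ({gap:+d})")
```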
Best value · pts per $ at blended price (average of input and output rates)
DeepSeek V3.2 Speciale
5.1x better value than o4 Mini, the runner-up
Claude Opus 4.6
3.8 pts/$
$15.00/M
o4 Mini
19.3 pts/$
$2.75/M
Gemini 3.1 Pro Preview
8.7 pts/$
$7.00/M
DeepSeek V3.2 Speciale
97.8 pts/$
$0.80/M
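The $/M figures above are the simple average of each model's input and output rates from the pricing table at the bottom of this page, e.g. ($5.00 + $25.00) / 2 = $15.00/M for Claude Opus 4.6; that relationship holds for all four models. The "pts" numerator is not disclosed, so the sketch below back-solves it from the displayed pts/$ purely to make the arithmetic reproducible:

```python
# Sketch: reproduce the pts/$ value scores and the "5.1x" headline. Blended
# $/M = average of input and output rates (matches the page exactly). POINTS
# is back-solved from the displayed pts/$; the real numerator is undisclosed.
PRICING = {  # model: (input $/M, output $/M)
    "Claude Opus 4.6": (5.00, 25.00),
    "o4 Mini": (1.10, 4.40),
    "Gemini 3.1 Pro Preview": (2.00, 12.00),
    "DeepSeek V3.2 Speciale": (0.40, 1.20),
}
POINTS = {"Claude Opus 4.6": 57.0, "o4 Mini": 53.1,
          "Gemini 3.1 Pro Preview": 60.9, "DeepSeek V3.2 Speciale": 78.2}

def blended(inp: float, out: float) -> float:
    """Blended $/M: simple average of input and output rates."""
    return (inp + out) / 2

value = {m: POINTS[m] / blended(*p) for m, p in PRICING.items()}
for model, v in sorted(value.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {v:.1f} pts/$ at ${blended(*PRICING[model]):.2f}/M")

best, runner_up = sorted(value.values(), reverse=True)[:2]
print(f"best value = {best / runner_up:.1f}x the runner-up")  # -> 5.1x
```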
Vendor risk
Mixed exposure
One or more vendors flagged
Anthropic
$380.0B·Tier 1
OpenAI
$840.0B·Tier 1
Google DeepMind
$4.0T·Tier 1
DeepSeek
$3.4B·Tier 1
Head to head
21 benchmarks · 4 models
Claude Opus 4.6 · o4 Mini · Gemini 3.1 Pro Preview · DeepSeek V3.2 Speciale
ARC-AGI
Gemini 3.1 Pro Preview leads by +4.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Opus 4.6
94.0
o4 Mini
58.7
Gemini 3.1 Pro Preview
98.0
ARC-AGI-2
Gemini 3.1 Pro Preview leads by +7.9
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Opus 4.6
69.2
o4 Mini
6.1
Gemini 3.1 Pro Preview
77.1
Chess Puzzles
Gemini 3.1 Pro Preview leads by +29.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Claude Opus 4.6
17.0
o4 Mini
26.0
Gemini 3.1 Pro Preview
55.0
FrontierMath-2025-02-28-Private
Claude Opus 4.6 leads by +3.8
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4.6
40.7
o4 Mini
24.8
Gemini 3.1 Pro Preview
36.9
FrontierMath-Tier-4-2025-07-01-Private
Claude Opus 4.6 leads by +6.2
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Claude Opus 4.6
22.9
o4 Mini
6.3
Gemini 3.1 Pro Preview
16.7
GPQA diamond
Gemini 3.1 Pro Preview leads by +4.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4.6
87.4
o4 Mini
72.8
Gemini 3.1 Pro Preview
92.1
OTIS Mock AIME 2024-2025
Gemini 3.1 Pro Preview leads by +1.2
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4.6
94.4
o4 Mini
81.7
Gemini 3.1 Pro Preview
95.6
SimpleBench
Gemini 3.1 Pro Preview leads by +14.4
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Claude Opus 4.6
61.1
o4 Mini
26.4
Gemini 3.1 Pro Preview
75.5
SimpleQA Verified
Gemini 3.1 Pro Preview leads by +30.8
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Opus 4.6
46.5
o4 Mini
23.9
Gemini 3.1 Pro Preview
77.3
WeirdML
Claude Opus 4.6 leads by +5.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4.6
77.9
o4 Mini
52.6
Gemini 3.1 Pro Preview
72.1
Artificial Analysis · Agentic Index
Gemini 3.1 Pro Preview leads by +59.1
Gemini 3.1 Pro Preview
59.1
DeepSeek V3.2 Speciale
0.0
Artificial Analysis · Coding Index
Gemini 3.1 Pro Preview leads by +17.6
Gemini 3.1 Pro Preview
55.5
DeepSeek V3.2 Speciale
37.9
Artificial Analysis · Quality Index
Gemini 3.1 Pro Preview leads by +27.8
Gemini 3.1 Pro Preview
57.2
DeepSeek V3.2 Speciale
29.4
APEX-Agents
Gemini 3.1 Pro Preview leads by +1.8
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Claude Opus 4.6
31.7
Gemini 3.1 Pro Preview
33.5
Chatbot Arena Elo · Coding
Claude Opus 4.6 leads by +87.2
Claude Opus 4.6
1542.9
Gemini 3.1 Pro Preview
1455.7
Chatbot Arena Elo · Overall
Claude Opus 4.6 leads by +4.0
Claude Opus 4.6
1496.6
Gemini 3.1 Pro Preview
1492.6
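Elo gaps have a standard probabilistic reading that makes these two arena results easier to compare: the expected win rate for the higher-rated model is 1 / (1 + 10^(-gap/400)). A sketch applying it to the gaps above (the formula is standard Elo, not something this page states):

```python
# Sketch: convert the Arena Elo gaps above into expected head-to-head win
# rates using the standard Elo expectation formula (not defined by this page).
def elo_win_prob(gap: float) -> float:
    """Expected win probability for the higher-rated model given its Elo lead."""
    return 1.0 / (1.0 + 10.0 ** (-gap / 400.0))

for track, gap in [("Coding", 1542.9 - 1455.7), ("Overall", 1496.6 - 1492.6)]:
    print(f"{track}: +{gap:.1f} Elo -> {elo_win_prob(gap):.1%} expected win rate")
# Coding: +87.2 -> ~62% for Claude Opus 4.6; Overall: +4.0 -> ~51%, a coin flip
```

In other words, the +87.2 coding gap is a real preference signal, while the +4.0 overall gap is statistically a toss-up.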
GSO-Bench
Claude Opus 4.6 leads by +29.7
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
Claude Opus 4.6
33.3
o4 Mini
3.6
HLE
Claude Opus 4.6 leads by +17.2
HLE (Humanity's Last Exam) · crowdsourced expert-level questions designed to be among the hardest possible challenges for AI systems across all domains.
Claude Opus 4.6
31.1
o4 Mini
13.9
PostTrainBench
Claude Opus 4.6 leads by +1.6
Claude Opus 4.6
23.2
Gemini 3.1 Pro Preview
21.6
SWE-Bench verified
Claude Opus 4.6 leads by +3.1
Claude Opus 4.6
78.7
Gemini 3.1 Pro Preview
75.6
Terminal Bench
Gemini 3.1 Pro Preview leads by +3.7
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Claude Opus 4.6
74.7
Gemini 3.1 Pro Preview
78.4
Full benchmark table
| Benchmark | Claude Opus 4.6 | o4 Mini | Gemini 3.1 Pro Preview | DeepSeek V3.2 Speciale |
|---|---|---|---|---|
| ARC-AGI | 94.0 | 58.7 | 98.0 | — |
| ARC-AGI-2 | 69.2 | 6.1 | 77.1 | — |
| Chess Puzzles | 17.0 | 26.0 | 55.0 | — |
| FrontierMath-2025-02-28-Private | 40.7 | 24.8 | 36.9 | — |
| FrontierMath-Tier-4-2025-07-01-Private | 22.9 | 6.3 | 16.7 | — |
| GPQA diamond | 87.4 | 72.8 | 92.1 | — |
| OTIS Mock AIME 2024-2025 | 94.4 | 81.7 | 95.6 | — |
| SimpleBench | 61.1 | 26.4 | 75.5 | — |
| SimpleQA Verified | 46.5 | 23.9 | 77.3 | — |
| WeirdML | 77.9 | 52.6 | 72.1 | — |
| Artificial Analysis · Agentic Index | — | — | 59.1 | 0.0 |
| Artificial Analysis · Coding Index | — | — | 55.5 | 37.9 |
| Artificial Analysis · Quality Index | — | — | 57.2 | 29.4 |
| APEX-Agents | 31.7 | — | 33.5 | — |
| Chatbot Arena Elo · Coding | 1542.9 | — | 1455.7 | — |
| Chatbot Arena Elo · Overall | 1496.6 | — | 1492.6 | — |
| GSO-Bench | 33.3 | 3.6 | — | — |
| HLE | 31.1 | 13.9 | — | — |
| PostTrainBench | 23.2 | — | 21.6 | — |
| SWE-Bench verified | 78.7 | — | 75.6 | — |
| Terminal Bench | 74.7 | — | 78.4 | — |
Pricing · per 1M tokens · projected $/mo at 10M tokens (7.5M input + 2.5M output)
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Opus 4.6 | $5.00 | $25.00 | 1.0M tokens (~500 books) | $100.00 |
| o4 Mini | $1.10 | $4.40 | 200K tokens (~100 books) | $19.25 |
| Gemini 3.1 Pro Preview | $2.00 | $12.00 | 1.0M tokens (~524 books) | $45.00 |
| DeepSeek V3.2 Speciale | $0.40 | $1.20 | 164K tokens (~82 books) | $6.00 |
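The projected column is exactly what a 75% input / 25% output split produces at 10M tokens per month: 7.5M × input rate + 2.5M × output rate reproduces all four figures. The split is inferred from the numbers, not stated on the page; the sketch below makes it explicit.

```python
# Sketch: reproduce the "Projected $/mo at 10M tokens" column. The 75/25
# input/output split is inferred from the figures, not stated by the page.
PRICING = {  # model: (input $/M, output $/M)
    "Claude Opus 4.6": (5.00, 25.00),
    "o4 Mini": (1.10, 4.40),
    "Gemini 3.1 Pro Preview": (2.00, 12.00),
    "DeepSeek V3.2 Speciale": (0.40, 1.20),
}

def monthly_cost(inp: float, out: float, tokens_m: float = 10.0,
                 input_share: float = 0.75) -> float:
    """Projected monthly bill in $ for tokens_m million tokens per month."""
    return tokens_m * (input_share * inp + (1 - input_share) * out)

for model, (inp, out) in PRICING.items():
    print(f"{model}: ${monthly_cost(inp, out):.2f}/mo")
# -> $100.00, $19.25, $45.00, $6.00, matching the table above
```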