Claude Opus 4 vs Grok 3 Mini
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Opus 4 and Grok 3 Mini each lead on 4 of 9 shared benchmarks, with one tie (GPQA Diamond). Claude Opus 4 leads in coding · reasoning; Grok 3 Mini leads in knowledge · math.
Category leads
Coding: Claude Opus 4 · Reasoning: Claude Opus 4 · Knowledge: Grok 3 Mini · Math: Grok 3 Mini
Hype vs Reality
Attention vs performance
Claude Opus 4 · #133 by performance · no signal
Grok 3 Mini · #110 by performance · no signal
Best value
Grok 3 Mini offers roughly 125.7x better value than Claude Opus 4.

| Model | Value (pts/$) | Price ($/M) |
|---|---|---|
| Claude Opus 4 | 0.9 | $45.00 |
| Grok 3 Mini | 116.5 | $0.40 |
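The $/M figures appear to be the simple mean of each model's input and output prices from the pricing table at the bottom of the page. A minimal sketch under that assumption; the aggregate benchmark score behind the pts/$ figures is not published here, so only the price side can be reproduced exactly:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    # Assumption: the quoted $/M is the simple mean of input and output prices.
    return (input_per_m + output_per_m) / 2

def value_points_per_dollar(score: float, price_per_m: float) -> float:
    # Benchmark points earned per dollar of blended price (score not published here).
    return score / price_per_m

# Reproduces the prices quoted above from the pricing table below.
print(blended_price(15.00, 75.00))  # 45.0 -> $45.00/M for Claude Opus 4
print(blended_price(0.30, 0.50))    # 0.4  -> $0.40/M for Grok 3 Mini
```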
Vendor risk
Who is behind the model
Anthropic · $380.0B · Tier 1
xAI · $250.0B · Tier 1
Head to head
9 benchmarks · 2 models
Aider polyglot
Claude Opus 4 leads by +22.7
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Claude Opus 4: 72.0 · Grok 3 Mini: 49.3
ARC-AGI
Claude Opus 4 leads by +19.2
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Opus 4: 35.7 · Grok 3 Mini: 16.5
ARC-AGI-2
Claude Opus 4 leads by +8.2
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Opus 4: 8.6 · Grok 3 Mini: 0.4
Fiction.LiveBench
Grok 3 Mini leads by +5.6
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Claude Opus 4: 61.1 · Grok 3 Mini: 66.7
FrontierMath-2025-02-28-Private
Grok 3 Mini leads by +1.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Claude Opus 4: 4.5 · Grok 3 Mini: 5.9
GPQA diamond
Tied at 68.3
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Opus 4: 68.3 · Grok 3 Mini: 68.3
MATH level 5
Grok 3 Mini leads by +5.9
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Claude Opus 4: 85.0 · Grok 3 Mini: 90.9
OTIS Mock AIME 2024-2025
Grok 3 Mini leads by +13.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Opus 4: 64.4 · Grok 3 Mini: 77.8
WeirdML
Claude Opus 4 leads by +0.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4: 43.4 · Grok 3 Mini: 42.6
Full benchmark table
| Benchmark | Claude Opus 4 | Grok 3 Mini |
|---|---|---|
| Aider polyglot | 72.0 | 49.3 |
| ARC-AGI | 35.7 | 16.5 |
| ARC-AGI-2 | 8.6 | 0.4 |
| Fiction.LiveBench | 61.1 | 66.7 |
| FrontierMath-2025-02-28-Private | 4.5 | 5.9 |
| GPQA diamond | 68.3 | 68.3 |
| MATH level 5 | 85.0 | 90.9 |
| OTIS Mock AIME 2024-2025 | 64.4 | 77.8 |
| WeirdML | 43.4 | 42.6 |
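The winner summary above can be reproduced from this table. A minimal sketch that tallies per-benchmark leads, with scores copied from the table and the GPQA Diamond tie counting for neither model:

```python
# (benchmark, Claude Opus 4 score, Grok 3 Mini score), copied from the table above.
scores = [
    ("Aider polyglot", 72.0, 49.3),
    ("ARC-AGI", 35.7, 16.5),
    ("ARC-AGI-2", 8.6, 0.4),
    ("Fiction.LiveBench", 61.1, 66.7),
    ("FrontierMath-2025-02-28-Private", 4.5, 5.9),
    ("GPQA diamond", 68.3, 68.3),
    ("MATH level 5", 85.0, 90.9),
    ("OTIS Mock AIME 2024-2025", 64.4, 77.8),
    ("WeirdML", 43.4, 42.6),
]

claude_wins = sum(1 for _, c, g in scores if c > g)
grok_wins   = sum(1 for _, c, g in scores if g > c)
ties        = sum(1 for _, c, g in scores if c == g)

print(claude_wins, grok_wins, ties)  # 4 4 1
```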
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Opus 4 | $15.00 | $75.00 | 200K tokens (~100 books) | $300.00 |
| Grok 3 Mini | $0.30 | $0.50 | 131K tokens (~66 books) | $3.50 |
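The projected $/mo figures are consistent with a 75% input / 25% output token split over 10M tokens per month; a minimal sketch under that assumed split:

```python
def monthly_cost(input_per_m: float, output_per_m: float,
                 tokens_m: float = 10.0, input_share: float = 0.75) -> float:
    # Projected monthly cost in dollars for `tokens_m` million tokens,
    # assuming `input_share` of them are input tokens (assumed 75/25 split).
    input_tokens = tokens_m * input_share
    output_tokens = tokens_m * (1 - input_share)
    return input_tokens * input_per_m + output_tokens * output_per_m

print(monthly_cost(15.00, 75.00))  # 300.0 -> matches $300.00 for Claude Opus 4
print(monthly_cost(0.30, 0.50))    # 3.5   -> matches $3.50 for Grok 3 Mini
```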