
Grok 3 Mini Beta vs Grok 4 vs GPT-5.1

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-5.1 wins 13 of 19 shared benchmarks. Leads in knowledge · math · reasoning.

Category leads
knowledge · GPT-5.1
language · Grok 3 Mini Beta
math · GPT-5.1
reasoning · GPT-5.1
coding · GPT-5.1
agentic · GPT-5.1
arena · GPT-5.1
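The headline win count follows a simple rule: for each benchmark, the model with the highest reported score takes the lead, and models with no score on that benchmark are skipped. A minimal sketch of that tally (scores abridged from the full table below; the `benchmark_wins` helper is illustrative, not the site's code):

```python
# Per-benchmark scores from this comparison; a model absent from a dict
# simply has no reported score for that benchmark.
SCORES = {
    "HELM · GPQA":    {"Grok 3 Mini Beta": 67.5, "Grok 4": 72.6, "GPT-5.1": 44.2},
    "HELM · IFEval":  {"Grok 3 Mini Beta": 95.1, "Grok 4": 94.9, "GPT-5.1": 93.5},
    "GPQA diamond":   {"Grok 4": 82.7, "GPT-5.1": 83.5},
    "Terminal Bench": {"Grok 4": 27.2, "GPT-5.1": 47.6},
    # ...the remaining 15 benchmarks from the full table would go here.
}

def benchmark_wins(scores: dict) -> dict:
    """Count, per model, how many benchmarks it leads on reported score."""
    wins: dict = {}
    for by_model in scores.values():
        leader = max(by_model, key=by_model.get)
        wins[leader] = wins.get(leader, 0) + 1
    return wins

print(benchmark_wins(SCORES))
# With all 19 shared benchmarks filled in, GPT-5.1 comes out ahead on 13.
```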
Hype vs Reality
Grok 3 Mini Beta · #30 by perf · no signal · quiet
Grok 4 · #73 by perf · no signal · quiet
GPT-5.1 · #97 by perf · no signal · quiet
Best value
Grok 3 Mini Beta · 18.4x better value than GPT-5.1
Grok 3 Mini Beta · 162.0 pts/$ · $0.40/M blended
Grok 4 · 6.1 pts/$ · $9.00/M blended
GPT-5.1 · 8.8 pts/$ · $5.63/M blended
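The $/M figures here match the simple average of each model's input and output prices from the pricing table at the bottom, and pts/$ reads as an aggregate performance score divided by that blended price. A sketch under those assumptions (both the averaging rule and the aggregate score are inferences, not stated on the page):

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/M tokens, assumed to be the plain average of input and output prices."""
    return (input_per_m + output_per_m) / 2

def points_per_dollar(aggregate_score: float, blended_per_m: float) -> float:
    """Value metric: performance points per blended dollar per 1M tokens (assumed formula)."""
    return aggregate_score / blended_per_m

# These reproduce the listed blended prices: 0.40, 9.00, 5.625 (~5.63).
print(blended_price(0.30, 0.50))
print(blended_price(3.00, 15.00))
print(blended_price(1.25, 10.00))
```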
Vendor risk
xAI (Grok 3 Mini Beta, Grok 4) · $250.0B · Tier 1 · Medium risk
OpenAI (GPT-5.1) · $840.0B · Tier 1 · Medium risk
Head to head
Grok 3 Mini Beta · Grok 4 · GPT-5.1
HELM · GPQA
Grok 4 leads by +5.1
Grok 3 Mini Beta
67.5
Grok 4
72.6
GPT-5.1
44.2
HELM · IFEval
Grok 3 Mini Beta leads by +0.2
Grok 3 Mini Beta
95.1
Grok 4
94.9
GPT-5.1
93.5
HELM · MMLU-Pro
Grok 4 leads by +5.2
Grok 3 Mini Beta
79.9
Grok 4
85.1
GPT-5.1
57.9
HELM · Omni-MATH
Grok 4 leads by +13.9
Grok 3 Mini Beta
31.8
Grok 4
60.3
GPT-5.1
46.4
HELM · WildBench
GPT-5.1 leads by +6.6
Grok 3 Mini Beta
65.1
Grok 4
79.7
GPT-5.1
86.3
Aider polyglot
Grok 4 leads by +30.3
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Grok 3 Mini Beta
49.3
Grok 4
79.6
APEX-Agents
GPT-5.1 leads by +2.3
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Grok 4
15.2
GPT-5.1
17.5
ARC-AGI
GPT-5.1 leads by +6.1
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Grok 4
66.7
GPT-5.1
72.8
ARC-AGI-2
GPT-5.1 leads by +1.7
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Grok 4
16.0
GPT-5.1
17.6
Chatbot Arena Elo · Overall
GPT-5.1 leads by +81.1
Grok 3 Mini Beta
1357.4
GPT-5.1
1438.5
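An 81-point Elo gap maps to roughly a 61% expected preference rate under the standard 400-point logistic Elo formula; a quick sketch, assuming the Arena ratings follow that standard scale:

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Expected probability that A is preferred over B under standard Elo."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# GPT-5.1 (1438.5) vs Grok 3 Mini Beta (1357.4) -> ~0.61
print(round(elo_win_prob(1438.5, 1357.4), 2))
```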
Chess Puzzles
GPT-5.1 leads by +4.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Grok 4
28.0
GPT-5.1
32.0
FrontierMath-2025-02-28-Private
GPT-5.1 leads by +11.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Grok 4
19.7
GPT-5.1
31.0
FrontierMath-Tier-4-2025-07-01-Private
GPT-5.1 leads by +10.4
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Grok 4
2.1
GPT-5.1
12.5
GPQA diamond
GPT-5.1 leads by +0.8
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Grok 4
82.7
GPT-5.1
83.5
OTIS Mock AIME 2024-2025
GPT-5.1 leads by +4.6
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Grok 4
84.0
GPT-5.1
88.6
SimpleBench
Grok 4 leads by +8.8
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Grok 4
52.6
GPT-5.1
43.8
SimpleQA Verified
GPT-5.1 leads by +1.0
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Grok 4
47.9
GPT-5.1
48.9
Terminal Bench
GPT-5.1 leads by +20.4
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Grok 4
27.2
GPT-5.1
47.6
WeirdML
GPT-5.1 leads by +15.0
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Grok 4
45.7
GPT-5.1
60.8
Full benchmark table
Benchmark | Grok 3 Mini Beta | Grok 4 | GPT-5.1
HELM · GPQA | 67.5 | 72.6 | 44.2
HELM · IFEval | 95.1 | 94.9 | 93.5
HELM · MMLU-Pro | 79.9 | 85.1 | 57.9
HELM · Omni-MATH | 31.8 | 60.3 | 46.4
HELM · WildBench | 65.1 | 79.7 | 86.3
Aider polyglot | 49.3 | 79.6 | —
APEX-Agents | — | 15.2 | 17.5
ARC-AGI | — | 66.7 | 72.8
ARC-AGI-2 | — | 16.0 | 17.6
Chatbot Arena Elo · Overall | 1357.4 | — | 1438.5
Chess Puzzles | — | 28.0 | 32.0
FrontierMath-2025-02-28-Private | — | 19.7 | 31.0
FrontierMath-Tier-4-2025-07-01-Private | — | 2.1 | 12.5
GPQA diamond | — | 82.7 | 83.5
OTIS Mock AIME 2024-2025 | — | 84.0 | 88.6
SimpleBench | — | 52.6 | 43.8
SimpleQA Verified | — | 47.9 | 48.9
Terminal Bench | — | 27.2 | 47.6
WeirdML | — | 45.7 | 60.8
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Grok 3 Mini Beta | $0.30 | $0.50 | 131K tokens (~66 books) | $3.50
Grok 4 | $3.00 | $15.00 | 256K tokens (~128 books) | $60.00
GPT-5.1 | $1.25 | $10.00 | 400K tokens (~200 books) | $34.38
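The projected monthly figures are consistent with a 10M-token month split 75% input / 25% output; a minimal sketch under that assumed split (the page does not state its mix):

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Projected monthly spend in dollars for per-1M-token prices.

    The 75/25 input/output split is an inference that reproduces the table's
    numbers; adjust `input_share` for your own workload.
    """
    return (tokens_m * input_share * input_per_m
            + tokens_m * (1 - input_share) * output_per_m)

print(projected_monthly_cost(0.30, 0.50))   # 3.50  (Grok 3 Mini Beta)
print(projected_monthly_cost(3.00, 15.00))  # 60.00 (Grok 4)
print(projected_monthly_cost(1.25, 10.00))  # 34.375 -> $34.38 (GPT-5.1)
```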