
GPT-5.1 vs Kimi K2 0711 vs o3

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-5.1 wins 16 of the 23 shared benchmarks and leads the coding, language, math, and reasoning categories.

Category leads
coding · GPT-5.1
knowledge · o3
language · GPT-5.1
math · GPT-5.1
reasoning · GPT-5.1
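
A quick way to sanity-check the "wins 16 of 23" tally: assume each benchmark's winner is the model with the highest score among those that report one. The page doesn't spell out its counting rule, so the sketch below is an assumption, but it reproduces the summary exactly from the scores listed further down (benchmark names abbreviated, None marking a missing score).

```python
# Hypothetical reconstruction of the win tally: winner = highest score among
# the models that were evaluated on that benchmark. Scores copied from the
# head-to-head section of this page; None = model not evaluated.

SCORES = {  # benchmark: (GPT-5.1, Kimi K2 0711, o3)
    "GSO-Bench":                (13.7,  4.9,  8.8),
    "HELM GPQA":                (44.2, 65.2, 75.3),
    "HELM IFEval":              (93.5, 85.0, 86.9),
    "HELM MMLU-Pro":            (57.9, 81.9, 85.9),
    "HELM Omni-MATH":           (46.4, 65.4, 71.4),
    "HELM WildBench":           (86.3, 86.2, 86.1),
    "SimpleBench":              (43.8, 11.6, 43.7),
    "WeirdML":                  (60.8, 39.4, 52.4),
    "Aider polyglot":           (None, 59.1, 81.3),
    "ARC-AGI":                  (72.8, None, 60.8),
    "ARC-AGI-2":                (17.6, None,  6.5),
    "Fiction.LiveBench":        (None, 61.1, 88.9),
    "FrontierMath (Feb 2025)":  (31.0, None, 18.7),
    "FrontierMath Tier 4":      (12.5, None,  2.1),
    "GPQA diamond":             (83.5, None, 75.8),
    "HLE":                      (19.8, None, 16.3),
    "Lech Mazur Writing":       (None, 86.9, 83.9),
    "OTIS Mock AIME 2024-2025": (88.6, None, 83.9),
    "SimpleQA Verified":        (48.9, None, 53.0),
    "SWE-Bench verified":       (68.0, None, 62.3),
    "SWE-Bench (Bash Only)":    (66.0, None, 58.4),
    "Terminal Bench":           (47.6, 27.8, None),
    "VPCT":                     (38.0, None, 28.0),
}
MODELS = ("GPT-5.1", "Kimi K2 0711", "o3")

wins = dict.fromkeys(MODELS, 0)
for scores in SCORES.values():
    reported = [(s, m) for s, m in zip(scores, MODELS) if s is not None]
    wins[max(reported)[1]] += 1  # highest reported score takes the benchmark

print(wins)  # {'GPT-5.1': 16, 'Kimi K2 0711': 1, 'o3': 6} out of 23
```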
Hype vs Reality
GPT-5.1 · #97 by performance · no signal · QUIET
Kimi K2 0711 · #63 by performance · no signal · QUIET
o3 · #69 by performance · no signal · QUIET
Best value
Kimi K2 0711 · 3.5x better value than o3
GPT-5.1 · 8.8 pts/$ · $5.63/M
Kimi K2 0711 · 39.2 pts/$ · $1.43/M
o3 · 11.0 pts/$ · $5.00/M
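
The value figures above appear to follow a simple recipe: the blended $/M matches a 50/50 average of each model's input and output prices from the pricing table at the bottom of the page, and pts/$ divides an aggregate benchmark score by that blended price. The page doesn't publish the aggregate scores, so the ones in this sketch are back-solved from the displayed pts/$ and should be read as approximations, not official numbers.

```python
# Sketch of the apparent "Best value" math (an inference, not the page's
# documented methodology). Blended price = mean of input and output $/M;
# value = aggregate benchmark score / blended price.

models = {
    # name: (input $/M, output $/M, aggregate score back-solved from pts/$)
    "GPT-5.1":      (1.25, 10.00, 49.5),
    "Kimi K2 0711": (0.57,  2.30, 56.2),
    "o3":           (2.00,  8.00, 55.0),
}

for name, (inp, out, score) in models.items():
    blended = (inp + out) / 2   # $ per 1M tokens, 50/50 input/output blend
    value = score / blended     # benchmark points per blended dollar
    print(f"{name}: ~${blended:.2f}/M blended, ~{value:.1f} pts/$")
```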
Vendor risk
OpenAI (GPT-5.1) · $840.0B · Tier 1 · Medium risk
moonshotai (Kimi K2 0711) · private · undisclosed · Unknown risk
OpenAI (o3) · $840.0B · Tier 1 · Medium risk
Head to head
GPT-5.1 · Kimi K2 0711 · o3
GSO-Bench
GPT-5.1 leads by +4.9
GSO-Bench · evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
GPT-5.1
13.7
Kimi K2 0711
4.9
o3
8.8
HELM · GPQA
o3 leads by +10.1
GPT-5.1
44.2
Kimi K2 0711
65.2
o3
75.3
HELM · IFEval
GPT-5.1 leads by +6.6
GPT-5.1
93.5
Kimi K2 0711
85.0
o3
86.9
HELM · MMLU-Pro
o3 leads by +4.0
GPT-5.1
57.9
Kimi K2 0711
81.9
o3
85.9
HELM · Omni-MATH
o3 leads by +6.0
GPT-5.1
46.4
Kimi K2 0711
65.4
o3
71.4
HELM · WildBench
GPT-5.1 leads by +0.1
GPT-5.1
86.3
Kimi K2 0711
86.2
o3
86.1
SimpleBench
GPT-5.1 leads by +0.1
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-5.1
43.8
Kimi K2 0711
11.6
o3
43.7
WeirdML
GPT-5.1 leads by +8.4
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-5.1
60.8
Kimi K2 0711
39.4
o3
52.4
Aider polyglot
o3 leads by +22.2
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Kimi K2 0711
59.1
o3
81.3
ARC-AGI
GPT-5.1 leads by +12.0
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
GPT-5.1
72.8
o3
60.8
ARC-AGI-2
GPT-5.1 leads by +11.1
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
GPT-5.1
17.6
o3
6.5
Fiction.LiveBench
o3 leads by +27.8
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Kimi K2 0711
61.1
o3
88.9
FrontierMath-2025-02-28-Private
GPT-5.1 leads by +12.3
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-5.1
31.0
o3
18.7
FrontierMath-Tier-4-2025-07-01-Private
GPT-5.1 leads by +10.4
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
GPT-5.1
12.5
o3
2.1
GPQA diamond
GPT-5.1 leads by +7.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-5.1
83.5
o3
75.8
HLE
GPT-5.1 leads by +3.5
HLE (Humanity's Last Exam) · a reasoning benchmark designed to be the hardest public evaluation of AI. Questions span mathematics, physics, philosophy, and logic · curated to be at or beyond the frontier of human expert capability. Tested with and without tool augmentation. Claude Opus 4.7 scores 46.9% without tools and 54.7% with tools · making it one of the few benchmarks where the top score is below 60%.
GPT-5.1
19.8
o3
16.3
Lech Mazur Writing
Kimi K2 0711 leads by +3.0
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
Kimi K2 0711
86.9
o3
83.9
OTIS Mock AIME 2024-2025
GPT-5.1 leads by +4.7
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-5.1
88.6
o3
83.9
SimpleQA Verified
o3 leads by +4.1
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
GPT-5.1
48.9
o3
53.0
SWE-Bench verified
GPT-5.1 leads by +5.7
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
GPT-5.1
68.0
o3
62.3
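
The SWE-bench Verified setup described above reduces to a concrete check: start from the repository at the task's pinned commit, apply the model's git patch, and run the task's test suite; the issue counts as resolved only if the tests pass. Below is a minimal sketch of that check; the repository path, patch text, and test command are placeholders, not the official harness.

```python
# Minimal SWE-bench-style check (illustrative only): apply a model-generated
# git patch to a pinned checkout, then run the task's tests.
import subprocess

def resolves_issue(repo_dir: str, patch: str, test_cmd: list[str]) -> bool:
    # Apply the patch from stdin; a patch that doesn't apply cleanly fails.
    applied = subprocess.run(
        ["git", "apply", "-"], cwd=repo_dir,
        input=patch, text=True, capture_output=True,
    )
    if applied.returncode != 0:
        return False
    # The task is resolved only if the designated tests now pass.
    tests = subprocess.run(test_cmd, cwd=repo_dir, capture_output=True)
    return tests.returncode == 0

# Hypothetical usage:
# resolves_issue("/tmp/django", model_patch, ["python", "-m", "pytest", "tests/i18n"])
```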
SWE-Bench Verified (Bash Only)
GPT-5.1 leads by +7.6
SWE-Bench Verified (Bash Only) · a curated subset of SWE-bench where models fix real Python repository bugs using only bash commands, no agent frameworks.
GPT-5.1
66.0
o3
58.4
Terminal Bench
GPT-5.1 leads by +19.8
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
GPT-5.1
47.6
Kimi K2 0711
27.8
VPCT
GPT-5.1 leads by +10.0
VPCT (Visual Pattern Completion Test) · tests visual reasoning and pattern recognition by having models complete visual sequences and transformations.
GPT-5.1
38.0
o3
28.0
Full benchmark table
Benchmark · GPT-5.1 · Kimi K2 0711 · o3
GSO-Bench · 13.7 · 4.9 · 8.8
HELM · GPQA · 44.2 · 65.2 · 75.3
HELM · IFEval · 93.5 · 85.0 · 86.9
HELM · MMLU-Pro · 57.9 · 81.9 · 85.9
HELM · Omni-MATH · 46.4 · 65.4 · 71.4
HELM · WildBench · 86.3 · 86.2 · 86.1
SimpleBench · 43.8 · 11.6 · 43.7
WeirdML · 60.8 · 39.4 · 52.4
Aider polyglot · n/a · 59.1 · 81.3
ARC-AGI · 72.8 · n/a · 60.8
ARC-AGI-2 · 17.6 · n/a · 6.5
Fiction.LiveBench · n/a · 61.1 · 88.9
FrontierMath-2025-02-28-Private · 31.0 · n/a · 18.7
FrontierMath-Tier-4-2025-07-01-Private · 12.5 · n/a · 2.1
GPQA diamond · 83.5 · n/a · 75.8
HLE · 19.8 · n/a · 16.3
Lech Mazur Writing · n/a · 86.9 · 83.9
OTIS Mock AIME 2024-2025 · 88.6 · n/a · 83.9
SimpleQA Verified · 48.9 · n/a · 53.0
SWE-Bench verified · 68.0 · n/a · 62.3
SWE-Bench Verified (Bash Only) · 66.0 · n/a · 58.4
Terminal Bench · 47.6 · 27.8 · n/a
VPCT · 38.0 · n/a · 28.0
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
GPT-5.1 (OpenAI) · $1.25 · $10.00 · 400K tokens (~200 books) · $34.38
Kimi K2 0711 (moonshotai) · $0.57 · $2.30 · 131K tokens (~66 books) · $10.03
o3 (OpenAI) · $2.00 · $8.00 · 200K tokens (~100 books) · $35.00
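
The projected monthly costs are consistent with a 75% input / 25% output split of the 10M monthly tokens. The page doesn't state the split it assumes, so the share below is an inference that happens to reproduce the listed projections from the per-token rates.

```python
# Sketch of the "Projected $/mo" arithmetic under an assumed 75/25
# input/output token split (the split is inferred, not stated on the page).

MONTHLY_TOKENS_M = 10   # millions of tokens per month
INPUT_SHARE = 0.75      # assumed fraction of tokens that are input

pricing = {
    # model: (input $/M, output $/M)
    "GPT-5.1":      (1.25, 10.00),
    "Kimi K2 0711": (0.57,  2.30),
    "o3":           (2.00,  8.00),
}

for model, (inp, out) in pricing.items():
    monthly = MONTHLY_TOKENS_M * (INPUT_SHARE * inp + (1 - INPUT_SHARE) * out)
    print(f"{model}: ~${monthly:.2f}/mo at {MONTHLY_TOKENS_M}M tokens")
```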