
GPT-5 vs Qwen3 Max vs Kimi K2 0711

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-5 wins 11 of 12 shared benchmarks, leading in knowledge, coding, and math.

Category leads
knowledge: GPT-5 · coding: GPT-5 · math: GPT-5 · reasoning: GPT-5
Hype vs Reality
GPT-5 · #74 by perf · no signal · QUIET
Qwen3 Max · #49 by perf · no signal · QUIET
Kimi K2 0711 · #63 by perf · no signal · QUIET
Best value
Kimi K2 0711 · 1.6x better value than Qwen3 Max
GPT-5 · 9.7 pts/$ · $5.63/M
Qwen3 Max · 24.9 pts/$ · $2.34/M
Kimi K2 0711 · 39.2 pts/$ · $1.43/M
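
The page does not spell out how these value figures are derived. As a rough check, a simple 50/50 blend of the input and output prices from the pricing table below reproduces the $/M figures shown here, and the 1.6x headline matches the ratio of the Kimi K2 0711 and Qwen3 Max pts/$ values; the scoring basis behind pts/$ itself is not disclosed. A minimal sketch under those assumptions:

```python
# Rough check of the "Best value" figures. Assumptions (not stated on the page):
# blended $/M is the simple mean of input and output prices; pts/$ is taken
# directly from the page because its scoring basis is not disclosed.
models = {
    # name: (input $/M, output $/M, pts/$ as shown)
    "GPT-5":        (1.25, 10.00,  9.7),
    "Qwen3 Max":    (0.78,  3.90, 24.9),
    "Kimi K2 0711": (0.57,  2.30, 39.2),
}

for name, (inp, out, pts) in models.items():
    blended = (inp + out) / 2  # 5.625, 2.340, 1.435 -> the $5.63 / $2.34 / $1.43 shown
    print(f"{name}: ~${blended:.3f}/M blended, {pts} pts/$")

# The 1.6x headline matches the ratio of the two best pts/$ values:
ratio = models["Kimi K2 0711"][2] / models["Qwen3 Max"][2]
print(f"Kimi K2 0711 vs Qwen3 Max: {ratio:.1f}x better value")  # ~1.6x
```
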
Vendor risk
OpenAI · $840.0B · Tier 1 · Medium risk
Alibaba (Qwen) · $293.0B · Tier 1 · Low risk
moonshotai · private · undisclosed · Unknown risk
Head to head
GPT-5 · Qwen3 Max · Kimi K2 0711
Fiction.LiveBench · GPT-5 leads by +30.5
A continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
GPT-5 97.2 · Qwen3 Max 66.7 · Kimi K2 0711 61.1

Lech Mazur Writing · GPT-5 leads by +0.1
Evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
GPT-5 87.2 · Qwen3 Max 87.1 · Kimi K2 0711 86.9

Aider Polyglot · GPT-5 leads by +28.9
Measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
GPT-5 88.0 · Kimi K2 0711 59.1

Chess Puzzles · GPT-5 leads by +33.0
Tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
GPT-5 37.0 · Qwen3 Max 4.0

GPQA Diamond · GPT-5 leads by +18.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-5 81.6 · Qwen3 Max 63.5

GSO-Bench · GPT-5 leads by +2.0
Evaluates AI models on real-world open-source software engineering tasks, testing the ability to understand and resolve actual GitHub issues.
GPT-5 6.9 · Kimi K2 0711 4.9

MATH Level 5 · GPT-5 leads by +1.0
The hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-5 98.1 · Qwen3 Max 97.1

OTIS Mock AIME 2024-2025 · GPT-5 leads by +18.1
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-5 91.4 · Qwen3 Max 73.3

SimpleBench · GPT-5 leads by +36.5
Tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-5 48.0 · Kimi K2 0711 11.6

SimpleQA Verified · Qwen3 Max leads by +16.9
Short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
GPT-5 50.6 · Qwen3 Max 67.5

Terminal-Bench 2.0 · GPT-5 leads by +21.8
Evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
GPT-5 49.6 · Kimi K2 0711 27.8

WeirdML · GPT-5 leads by +21.3
Tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-5 60.7 · Kimi K2 0711 39.4
Full benchmark table
Benchmark · GPT-5 · Qwen3 Max · Kimi K2 0711
Fiction.LiveBench · 97.2 · 66.7 · 61.1
Lech Mazur Writing · 87.2 · 87.1 · 86.9
Aider Polyglot · 88.0 · n/a · 59.1
Chess Puzzles · 37.0 · 4.0 · n/a
GPQA Diamond · 81.6 · 63.5 · n/a
GSO-Bench · 6.9 · n/a · 4.9
MATH Level 5 · 98.1 · 97.1 · n/a
OTIS Mock AIME 2024-2025 · 91.4 · 73.3 · n/a
SimpleBench · 48.0 · n/a · 11.6
SimpleQA Verified · 50.6 · 67.5 · n/a
Terminal-Bench 2.0 · 49.6 · n/a · 27.8
WeirdML · 60.7 · n/a · 39.4
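
The winner summary and the per-benchmark leads can be re-derived from this table. A minimal sketch (margins are computed from the rounded scores above, so they can differ from the page's quoted leads by a tenth of a point; missing scores are simply skipped):

```python
# Recompute per-benchmark leaders and the "11 of 12" win count from the table.
# Scores are the rounded values shown above; None marks a missing entry.
scores = {
    #                            GPT-5  Qwen3 Max  Kimi K2 0711
    "Fiction.LiveBench":        (97.2,  66.7,      61.1),
    "Lech Mazur Writing":       (87.2,  87.1,      86.9),
    "Aider Polyglot":           (88.0,  None,      59.1),
    "Chess Puzzles":            (37.0,   4.0,      None),
    "GPQA Diamond":             (81.6,  63.5,      None),
    "GSO-Bench":                ( 6.9,  None,       4.9),
    "MATH Level 5":             (98.1,  97.1,      None),
    "OTIS Mock AIME 2024-2025": (91.4,  73.3,      None),
    "SimpleBench":              (48.0,  None,      11.6),
    "SimpleQA Verified":        (50.6,  67.5,      None),
    "Terminal-Bench 2.0":       (49.6,  None,      27.8),
    "WeirdML":                  (60.7,  None,      39.4),
}
MODELS = ("GPT-5", "Qwen3 Max", "Kimi K2 0711")

gpt5_wins = 0
for bench, row in scores.items():
    # Rank the models that have a score on this benchmark.
    ranked = sorted((s, m) for m, s in zip(MODELS, row) if s is not None)
    (runner_up, _), (best, leader) = ranked[-2], ranked[-1]
    if leader == "GPT-5":
        gpt5_wins += 1
    print(f"{bench}: {leader} leads by +{best - runner_up:.1f}")

print(f"GPT-5 wins {gpt5_wins} of {len(scores)} shared benchmarks")  # 11 of 12
```
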
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
GPT-5 · $1.25 · $10.00 · 400K tokens (~200 books) · $34.38
Qwen3 Max · $0.78 · $3.90 · 262K tokens (~131 books) · $15.60
Kimi K2 0711 · $0.57 · $2.30 · 131K tokens (~66 books) · $10.03
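
The projected $/mo column does not state its assumed traffic mix. A 3:1 input-to-output token split at 10M total tokens reproduces the figures in the table; a minimal sketch under that assumption (the split is a guess, not something the page confirms):

```python
# Projected monthly cost at 10M tokens, assuming a 3:1 input:output token split
# (assumption: the page does not state its mix, but this split matches its figures).
PRICES = {  # $ per 1M tokens: (input, output)
    "GPT-5":        (1.25, 10.00),
    "Qwen3 Max":    (0.78,  3.90),
    "Kimi K2 0711": (0.57,  2.30),
}

def monthly_cost(model: str, total_tokens: int = 10_000_000,
                 input_share: float = 0.75) -> float:
    """Dollar cost for one month of traffic with the given input-token share."""
    inp, out = PRICES[model]
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens - input_tokens
    return (input_tokens * inp + output_tokens * out) / 1_000_000

for model in PRICES:
    print(f"{model}: ${monthly_cost(model):.3f}/mo")
# 34.375, 15.600, 10.025 -> the $34.38 / $15.60 / $10.03 shown in the table
```
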