
Kimi K2.5 vs Qwen3 235B A22B Thinking 2507

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Kimi K2.5 leads on 12 of the 14 shared benchmarks, ties one (Chess Puzzles), and trails on one (SimpleQA Verified). It leads in knowledge, math, language, and coding.

Category leads
knowledge · Kimi K2.5
math · Kimi K2.5
language · Kimi K2.5
coding · Kimi K2.5
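For reference, the headline tally can be recomputed from the per-benchmark scores listed further down this page; a minimal sketch:

```python
# Head-to-head tally from the 14 shared benchmark scores on this page.
# Each tuple is (Kimi K2.5 score, Qwen3 235B A22B Thinking 2507 score).
scores = {
    "Chess Puzzles": (12.0, 12.0),
    "Fiction.LiveBench": (86.1, 75.0),
    "FrontierMath-2025-02-28-Private": (27.9, 8.5),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 0.1),
    "GPQA diamond": (83.5, 73.4),
    "OpenCompass AIME2025": (91.9, 90.9),
    "OpenCompass GPQA-Diamond": (88.1, 79.8),
    "OpenCompass HLE": (28.6, 18.5),
    "OpenCompass IFEval": (93.9, 87.8),
    "OpenCompass LiveCodeBenchV6": (80.6, 70.6),
    "OpenCompass MMLU-Pro": (86.2, 83.5),
    "OTIS Mock AIME 2024-2025": (92.2, 86.7),
    "SimpleQA Verified": (33.9, 50.1),
    "WeirdML": (45.6, 41.0),
}
kimi_wins = sum(k > q for k, q in scores.values())
ties = sum(k == q for k, q in scores.values())
qwen_wins = sum(k < q for k, q in scores.values())
print(kimi_wins, ties, qwen_wins)  # 12 1 1
```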
Hype vs Reality
Kimi K2.5 · #87 by perf · no signal · QUIET
Qwen3 235B A22B Thinking 2507 · #66 by perf · no signal · QUIET
Best value
Qwen3 235B A22B Thinking 2507 offers roughly 1.6x better value than Kimi K2.5.
Kimi K2.5 · 42.6 pts/$ · $1.22/M
Qwen3 235B A22B Thinking 2507 · 68.0 pts/$ · $0.82/M
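The $/M figures appear to be a blend of the input and output prices from the pricing table below; a sketch, assuming the blend is the unweighted mean (an assumption, but one that reproduces the listed $1.22 and $0.82):

```python
# Assumed blend: simple mean of input and output price per 1M tokens.
def blended_price(input_per_m: float, output_per_m: float) -> float:
    return (input_per_m + output_per_m) / 2

kimi_blend = blended_price(0.44, 2.00)   # 1.22, matching the listed $1.22/M
qwen_blend = blended_price(0.15, 1.50)   # 0.825, listed as $0.82/M

# The "1.6x better value" headline follows from the points-per-dollar figures.
value_ratio = 68.0 / 42.6                # ~1.596, shown as "1.6x"
```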
Vendor risk
moonshotai · private · undisclosed · Unknown
Alibaba (Qwen) · $293.0B · Tier 1 · Low risk
Head to head
Kimi K2.5 vs Qwen3 235B A22B Thinking 2507
Chess Puzzles
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Kimi K2.5: 12.0 · Qwen3 235B A22B Thinking 2507: 12.0 (tie)
Fiction.LiveBench
Kimi K2.5 leads by +11.1
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Kimi K2.5: 86.1 · Qwen3 235B A22B Thinking 2507: 75.0
FrontierMath-2025-02-28-Private
Kimi K2.5 leads by +19.4
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Kimi K2.5: 27.9 · Qwen3 235B A22B Thinking 2507: 8.5
FrontierMath-Tier-4-2025-07-01-Private
Kimi K2.5 leads by +4.1
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Kimi K2.5: 4.2 · Qwen3 235B A22B Thinking 2507: 0.1
GPQA diamond
Kimi K2.5 leads by +10.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Kimi K2.5: 83.5 · Qwen3 235B A22B Thinking 2507: 73.4
OpenCompass · AIME2025
Kimi K2.5 leads by +1.0
Kimi K2.5: 91.9 · Qwen3 235B A22B Thinking 2507: 90.9
OpenCompass · GPQA-Diamond
Kimi K2.5 leads by +8.3
Kimi K2.5: 88.1 · Qwen3 235B A22B Thinking 2507: 79.8
OpenCompass · HLE
Kimi K2.5 leads by +10.1
Kimi K2.5: 28.6 · Qwen3 235B A22B Thinking 2507: 18.5
OpenCompass · IFEval
Kimi K2.5 leads by +6.1
Kimi K2.5: 93.9 · Qwen3 235B A22B Thinking 2507: 87.8
OpenCompass · LiveCodeBenchV6
Kimi K2.5 leads by +10.0
Kimi K2.5: 80.6 · Qwen3 235B A22B Thinking 2507: 70.6
OpenCompass · MMLU-Pro
Kimi K2.5 leads by +2.7
Kimi K2.5: 86.2 · Qwen3 235B A22B Thinking 2507: 83.5
OTIS Mock AIME 2024-2025
Kimi K2.5 leads by +5.5
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Kimi K2.5: 92.2 · Qwen3 235B A22B Thinking 2507: 86.7
SimpleQA Verified
Qwen3 235B A22B Thinking 2507 leads by +16.2
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Kimi K2.5: 33.9 · Qwen3 235B A22B Thinking 2507: 50.1
WeirdML
Kimi K2.5 leads by +4.6
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Kimi K2.5: 45.6 · Qwen3 235B A22B Thinking 2507: 41.0
Full benchmark table
Benchmark | Kimi K2.5 | Qwen3 235B A22B Thinking 2507
Chess Puzzles | 12.0 | 12.0
Fiction.LiveBench | 86.1 | 75.0
FrontierMath-2025-02-28-Private | 27.9 | 8.5
FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 0.1
GPQA diamond | 83.5 | 73.4
OpenCompass · AIME2025 | 91.9 | 90.9
OpenCompass · GPQA-Diamond | 88.1 | 79.8
OpenCompass · HLE | 28.6 | 18.5
OpenCompass · IFEval | 93.9 | 87.8
OpenCompass · LiveCodeBenchV6 | 80.6 | 70.6
OpenCompass · MMLU-Pro | 86.2 | 83.5
OTIS Mock AIME 2024-2025 | 92.2 | 86.7
SimpleQA Verified | 33.9 | 50.1
WeirdML | 45.6 | 41.0
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Kimi K2.5 | $0.44 | $2.00 | 262K tokens (~131 books) | $8.30
Qwen3 235B A22B Thinking 2507 | $0.15 | $1.50 | 131K tokens (~66 books) | $4.86
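The projected monthly cost can be reconstructed under an assumed input:output token split; the site's actual split is not disclosed, but a 3:1 split of the 10M tokens reproduces the listed $8.30 for Kimi K2.5 and comes within about two cents of the $4.86 for Qwen3:

```python
# Projected monthly cost sketch, under an ASSUMED 3:1 input:output split
# (7.5M input + 2.5M output tokens of the 10M/month total).
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    in_tokens = total_m * input_share
    out_tokens = total_m - in_tokens
    return in_tokens * input_per_m + out_tokens * output_per_m

kimi_monthly = monthly_cost(0.44, 2.00)  # ~8.30, matching the listed $8.30
qwen_monthly = monthly_cost(0.15, 1.50)  # ~4.875, listed as $4.86
```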