
Kimi K2.5 vs Gemini 3 Flash Preview

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 3 Flash Preview wins 11 of 16 shared benchmarks. Leads in agentic · reasoning · knowledge.

Category leads
speed · Kimi K2.5
agentic · Gemini 3 Flash Preview
reasoning · Gemini 3 Flash Preview
knowledge · Gemini 3 Flash Preview
math · Gemini 3 Flash Preview
coding · Gemini 3 Flash Preview
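For readers who want to recount the headline tally, here is a minimal sketch using only the head-to-head scores listed further down this page; the simple win/tie counting is illustrative and is not the site's own category-scoring logic.

```python
# Recount the "11 of 16 shared benchmarks" claim from the score pairs
# shown in the head-to-head section: (Kimi K2.5, Gemini 3 Flash Preview).
scores = {
    "AA Agentic Index": (58.9, 49.7),
    "AA Coding Index": (39.5, 42.6),
    "AA Quality Index": (46.8, 46.4),
    "APEX-Agents": (14.4, 24.0),
    "ARC-AGI": (65.3, 21.5),
    "ARC-AGI-2": (11.8, 33.6),
    "Chess Puzzles": (12.0, 38.0),
    "FrontierMath (Feb 2025)": (27.9, 35.6),
    "FrontierMath Tier 4": (4.2, 4.2),
    "GPQA Diamond": (83.5, 77.6),
    "OTIS Mock AIME 2024-2025": (92.2, 92.8),
    "SimpleBench": (36.2, 53.3),
    "SimpleQA Verified": (33.9, 67.4),
    "SWE-bench Verified": (73.8, 75.4),
    "Terminal-Bench": (43.2, 64.3),
    "WeirdML": (45.6, 61.6),
}

kimi_wins = sum(k > g for k, g in scores.values())
gemini_wins = sum(g > k for k, g in scores.values())
ties = sum(k == g for k, g in scores.values())

print(f"Kimi K2.5 wins:        {kimi_wins}")    # 4
print(f"Gemini 3 Flash wins:   {gemini_wins}")  # 11
print(f"Ties:                  {ties}")         # 1
```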
Hype vs Reality
Kimi K2.5 · #87 by perf · no signal · QUIET
Gemini 3 Flash Preview · #98 by perf · no signal · QUIET
Best value
Kimi K2.5 · 42.6 pts/$ · $1.22/M · 1.5x better value than Gemini 3 Flash Preview
Gemini 3 Flash Preview · 28.1 pts/$ · $1.75/M
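A rough sketch of how the value numbers above fit together, assuming the $/M figure is an equal-weight blend of the listed input and output prices (that assumption reproduces $1.22/M and $1.75/M exactly, but it is inferred; the quality score behind pts/$ is not specified on this page):

```python
# Reproduce the value figures shown above. The equal-weight blend of
# input and output prices is an assumption, not documented here.
def blended_price(input_per_m: float, output_per_m: float) -> float:
    return (input_per_m + output_per_m) / 2

kimi_blend = blended_price(0.44, 2.00)    # 1.22 $/M
gemini_blend = blended_price(0.50, 3.00)  # 1.75 $/M

# pts/$ as displayed on the page (the score feeding "pts" is not stated,
# so the displayed values are taken as given).
kimi_value, gemini_value = 42.6, 28.1
print(f"value ratio: {kimi_value / gemini_value:.1f}x")  # ~1.5x
```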
Vendor risk
moonshotai · private · undisclosed · Unknown risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Head to head
Kimi K2.5 · Gemini 3 Flash Preview
Artificial Analysis · Agentic Index
Kimi K2.5 leads by +9.2
Artificial Analysis Agentic Index · a composite score measuring how well a model performs in agentic workflows · multi-step tool use, planning, error recovery, and autonomous task completion. Aggregates results from multiple agentic benchmarks including SWE-bench, tool-use tests, and planning evaluations. The canonical single-number metric for "how good is this model as an agent?"
Kimi K2.5
58.9
Gemini 3 Flash Preview
49.7
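The exact components and weights behind the Artificial Analysis indices are not published on this page; the sketch below shows one generic way a composite like this is often assembled (normalize each component benchmark, then average), purely as an illustration with hypothetical components, not the provider's formula.

```python
# Illustrative only: a generic composite built from component benchmarks.
# Component names, scales, and the unweighted mean are hypothetical;
# Artificial Analysis does not publish its aggregation here.
def composite_index(components: dict[str, tuple[float, float, float]]) -> float:
    """components maps name -> (raw score, scale min, scale max)."""
    normalized = [
        100 * (score - lo) / (hi - lo) for score, lo, hi in components.values()
    ]
    return sum(normalized) / len(normalized)  # unweighted mean

example = {
    "swe_bench_verified": (73.8, 0.0, 100.0),  # hypothetical components
    "tool_use_suite":     (60.0, 0.0, 100.0),
    "planning_eval":      (45.0, 0.0, 100.0),
}
print(round(composite_index(example), 1))
```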
Artificial Analysis · Coding Index
Gemini 3 Flash Preview leads by +3.1
Artificial Analysis Coding Index · a composite score that aggregates performance across multiple coding benchmarks into a single index. Tracks code generation quality, debugging ability, multi-language competence, and real-world software engineering tasks. Used by Artificial Analysis to rank model coding capability in a normalized, comparable format. Useful for developers choosing between models for coding-heavy workloads.
Kimi K2.5
39.5
Gemini 3 Flash Preview
42.6
Artificial Analysis · Quality Index
Kimi K2.5 leads by +0.4
Kimi K2.5
46.8
Gemini 3 Flash Preview
46.4
APEX-Agents
Gemini 3 Flash Preview leads by +9.6
APEX-Agents · evaluates AI agents on complex, multi-step tasks requiring planning, tool use, and autonomous decision-making in realistic environments.
Kimi K2.5
14.4
Gemini 3 Flash Preview
24.0
ARC-AGI
Kimi K2.5 leads by +43.8
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Kimi K2.5
65.3
Gemini 3 Flash Preview
21.5
ARC-AGI-2
Gemini 3 Flash Preview leads by +21.8
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Kimi K2.5
11.8
Gemini 3 Flash Preview
33.6
Chess Puzzles
Gemini 3 Flash Preview leads by +26.0
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Kimi K2.5
12.0
Gemini 3 Flash Preview
38.0
FrontierMath-2025-02-28-Private
Gemini 3 Flash Preview leads by +7.7
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Kimi K2.5
27.9
Gemini 3 Flash Preview
35.6
FrontierMath-Tier-4-2025-07-01-Private
Tied at 4.2
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Kimi K2.5
4.2
Gemini 3 Flash Preview
4.2
GPQA diamond
Kimi K2.5 leads by +5.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Kimi K2.5
83.5
Gemini 3 Flash Preview
77.6
OTIS Mock AIME 2024-2025
Gemini 3 Flash Preview leads by +0.6
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Kimi K2.5
92.2
Gemini 3 Flash Preview
92.8
SimpleBench
Gemini 3 Flash Preview leads by +17.1
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Kimi K2.5
36.2
Gemini 3 Flash Preview
53.3
SimpleQA Verified
Gemini 3 Flash Preview leads by +33.5
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Kimi K2.5
33.9
Gemini 3 Flash Preview
67.4
SWE-Bench verified
Gemini 3 Flash Preview leads by +1.6
SWE-bench Verified · 500 human-validated tasks from 12 real Python repositories (Django, Flask, scikit-learn, sympy, and others). Each task requires the model to produce a git patch that resolves a real GitHub issue and passes the test suite. The verified subset eliminates ambiguous tasks from the original SWE-bench. Claude Mythos Preview leads at 93.9%, crossing 90% for the first time in 2026. Opus 4.6 scores 80.8%. The benchmark remains the most-cited evaluation for code-generation capability.
Kimi K2.5
73.8
Gemini 3 Flash Preview
75.4
Terminal Bench
Gemini 3 Flash Preview leads by +21.1
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Kimi K2.5
43.2
Gemini 3 Flash Preview
64.3
WeirdML
Gemini 3 Flash Preview leads by +16.0
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Kimi K2.5
45.6
Gemini 3 Flash Preview
61.6
Full benchmark table
Benchmark · Kimi K2.5 vs Gemini 3 Flash Preview
Artificial Analysis Agentic Index · 58.9 vs 49.7
Artificial Analysis Coding Index · 39.5 vs 42.6
Artificial Analysis Quality Index · 46.8 vs 46.4
APEX-Agents · 14.4 vs 24.0
ARC-AGI · 65.3 vs 21.5
ARC-AGI-2 · 11.8 vs 33.6
Chess Puzzles · 12.0 vs 38.0
FrontierMath-2025-02-28-Private · 27.9 vs 35.6
FrontierMath-Tier-4-2025-07-01-Private · 4.2 vs 4.2
GPQA diamond · 83.5 vs 77.6
OTIS Mock AIME 2024-2025 · 92.2 vs 92.8
SimpleBench · 36.2 vs 53.3
SimpleQA Verified · 33.9 vs 67.4
SWE-bench Verified · 73.8 vs 75.4
Terminal Bench · 43.2 vs 64.3
WeirdML · 45.6 vs 61.6
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Kimi K2.5 · $0.44 · $2.00 · 262K tokens · $8.30
Gemini 3 Flash Preview · $0.50 · $3.00 · 1.0M tokens · $11.25
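The projected $/mo column is consistent with a 10M-token month split roughly 75% input / 25% output; that split is inferred from the published figures rather than stated on the page. A minimal sketch under that assumption:

```python
# Reproduce the "projected $/mo at 10M tokens" column. The 75% input /
# 25% output split is inferred from the published figures, not stated.
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10.0, input_share: float = 0.75) -> float:
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens * (1 - input_share)
    return input_m * input_per_m + output_m * output_per_m

print(f"Kimi K2.5:              ${monthly_cost(0.44, 2.00):.2f}/mo")  # $8.30
print(f"Gemini 3 Flash Preview: ${monthly_cost(0.50, 3.00):.2f}/mo")  # $11.25
```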