Gemini 2.5 Pro vs MiMo-V2-Flash
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
MiMo-V2-Flash wins 6 of 11 shared benchmarks, leading in the speed, arena, and math categories.
Category leads
speed · MiMo-V2-Flash
arena · MiMo-V2-Flash
math · MiMo-V2-Flash
knowledge · Gemini 2.5 Pro
language · Gemini 2.5 Pro
coding · MiMo-V2-Flash
Hype vs Reality
Attention vs performance
Gemini 2.5 Pro: #59 by perf · no attention signal
MiMo-V2-Flash: #9 by perf · #12 by attention
Best value
MiMo-V2-Flash: 38.6x better value than Gemini 2.5 Pro
Gemini 2.5 Pro: 10.0 pts/$ · $5.63/M
MiMo-V2-Flash: 385.8 pts/$ · $0.19/M
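The 38.6x figure is just the ratio of the two points-per-dollar numbers quoted above; a quick arithmetic check:

```python
# Points-per-dollar figures quoted in the Best value section.
gemini_pts_per_dollar = 10.0
mimo_pts_per_dollar = 385.8

# "38.6x better value" is the straight ratio of the two figures.
value_multiple = mimo_pts_per_dollar / gemini_pts_per_dollar
print(f"{value_multiple:.1f}x")  # 38.6x
```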
Vendor risk
Who is behind the model
Gemini 2.5 Pro: Google DeepMind · $4.00T market cap · Tier 1
MiMo-V2-Flash: Xiaomi · private · undisclosed
Head to head
11 benchmarks · 2 models
Scores listed as Gemini 2.5 Pro vs MiMo-V2-Flash.
Artificial Analysis · Agentic Index: 32.7 vs 48.8 · MiMo-V2-Flash leads by +16.1
Artificial Analysis · Coding Index: 31.9 vs 33.5 · MiMo-V2-Flash leads by +1.5
Artificial Analysis · Quality Index: 34.6 vs 41.5 · MiMo-V2-Flash leads by +6.8
Chatbot Arena Elo · Coding: 1202.0 vs 1336.5 · MiMo-V2-Flash leads by +134.6
Chatbot Arena Elo · Overall: 1448.2 vs 1392.1 · Gemini 2.5 Pro leads by +56.1
OpenCompass · AIME2025: 88.7 vs 92.9 · MiMo-V2-Flash leads by +4.2
OpenCompass · GPQA-Diamond: 84.7 vs 82.1 · Gemini 2.5 Pro leads by +2.6
OpenCompass · HLE: 21.1 vs 20.5 · Gemini 2.5 Pro leads by +0.6
OpenCompass · IFEval: 90.0 vs 89.5 · Gemini 2.5 Pro leads by +0.5
OpenCompass · LiveCodeBenchV6: 71.3 vs 71.8 · MiMo-V2-Flash leads by +0.5
OpenCompass · MMLU-Pro: 85.8 vs 83.1 · Gemini 2.5 Pro leads by +2.7
Full benchmark table
| Benchmark | Gemini 2.5 Pro | MiMo-V2-Flash |
|---|---|---|
| Artificial Analysis · Agentic Index | 32.7 | 48.8 |
| Artificial Analysis · Coding Index | 31.9 | 33.5 |
| Artificial Analysis · Quality Index | 34.6 | 41.5 |
| Chatbot Arena Elo · Coding | 1202.0 | 1336.5 |
| Chatbot Arena Elo · Overall | 1448.2 | 1392.1 |
| OpenCompass · AIME2025 | 88.7 | 92.9 |
| OpenCompass · GPQA-Diamond | 84.7 | 82.1 |
| OpenCompass · HLE | 21.1 | 20.5 |
| OpenCompass · IFEval | 90.0 | 89.5 |
| OpenCompass · LiveCodeBenchV6 | 71.3 | 71.8 |
| OpenCompass · MMLU-Pro | 85.8 | 83.1 |
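The 6-of-11 tally in the winner summary can be reproduced from the table above by counting the benchmarks where MiMo-V2-Flash posts the higher score (higher is better for every row, including Elo):

```python
# (benchmark, Gemini 2.5 Pro score, MiMo-V2-Flash score), copied from the table above.
rows = [
    ("Artificial Analysis · Agentic Index", 32.7, 48.8),
    ("Artificial Analysis · Coding Index", 31.9, 33.5),
    ("Artificial Analysis · Quality Index", 34.6, 41.5),
    ("Chatbot Arena Elo · Coding", 1202.0, 1336.5),
    ("Chatbot Arena Elo · Overall", 1448.2, 1392.1),
    ("OpenCompass · AIME2025", 88.7, 92.9),
    ("OpenCompass · GPQA-Diamond", 84.7, 82.1),
    ("OpenCompass · HLE", 21.1, 20.5),
    ("OpenCompass · IFEval", 90.0, 89.5),
    ("OpenCompass · LiveCodeBenchV6", 71.3, 71.8),
    ("OpenCompass · MMLU-Pro", 85.8, 83.1),
]

# Count the rows where MiMo-V2-Flash scores higher than Gemini 2.5 Pro.
mimo_wins = sum(m > g for _, g, m in rows)
print(f"MiMo-V2-Flash wins {mimo_wins} of {len(rows)}")  # MiMo-V2-Flash wins 6 of 11
```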
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 2.5 Pro | $1.25 | $10.00 | 1.0M tokens (~524 books) | $34.38 |
| MiMo-V2-Flash | $0.09 | $0.29 | 262K tokens (~131 books) | $1.40 |
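The quoted dollar figures are internally consistent under two assumptions that are my reading of the page, not stated on it: the blended $/M in the Best value section weights input and output tokens 1:1, and the projected $/mo applies a 3:1 input:output split to 10M tokens. A sketch:

```python
def blended_per_million(input_price, output_price, input_share=0.5):
    """Blended $ per 1M tokens at a given input-token share (assumed 1:1 by default)."""
    return input_price * input_share + output_price * (1 - input_share)

def projected_monthly(input_price, output_price, millions=10, input_share=0.75):
    """Projected monthly cost at `millions` M tokens (assumed 3:1 input:output split)."""
    return millions * blended_per_million(input_price, output_price, input_share)

# Gemini 2.5 Pro: $1.25 input / $10.00 output per 1M tokens
print(blended_per_million(1.25, 10.00))  # 5.625 -> quoted as $5.63/M
print(projected_monthly(1.25, 10.00))    # 34.375 -> quoted as $34.38/mo

# MiMo-V2-Flash: $0.09 input / $0.29 output per 1M tokens
print(round(blended_per_million(0.09, 0.29), 2))  # 0.19 -> $0.19/M
print(round(projected_monthly(0.09, 0.29), 2))    # 1.4 -> $1.40/mo
```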