Gemini 2.5 Pro vs gpt-oss-20b
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Gemini 2.5 Pro wins 5 of 6 shared benchmarks, leading in arena, knowledge, language, and reasoning; gpt-oss-20b takes the math lead.
Category leads
arena · Gemini 2.5 Pro
knowledge · Gemini 2.5 Pro
language · Gemini 2.5 Pro
math · gpt-oss-20b
reasoning · Gemini 2.5 Pro
Hype vs Reality
Attention vs performance
Gemini 2.5 Pro · #59 by perf · no signal
gpt-oss-20b · #22 by perf · no signal
Best value
gpt-oss-20b
79.4x better value than Gemini 2.5 Pro
Gemini 2.5 Pro · 10.0 pts/$ · $5.63/M
gpt-oss-20b · 792.9 pts/$ · $0.09/M
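A minimal sketch of how the value figures above appear to be derived. The $/M prices match a simple 1:1 average of each model's input and output rates; the function name and the 1:1 mix are assumptions, not documented by this page:

```python
def blended_price(input_usd_per_m: float, output_usd_per_m: float) -> float:
    """Blended $/1M tokens, assuming a 1:1 input:output token mix."""
    return (input_usd_per_m + output_usd_per_m) / 2

# Reproduces the listed blended prices.
gemini_price = blended_price(1.25, 10.00)  # 5.625 -> shown as $5.63/M
oss_price    = blended_price(0.03, 0.14)   # 0.085 -> shown as $0.09/M

# The value multiple follows from the listed pts/$ figures; rounding in
# the underlying scores likely explains the page's 79.4x vs this ratio.
value_multiple = 792.9 / 10.0              # ~79.3x
```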
Vendor risk
Who is behind the model
Google DeepMind (Gemini 2.5 Pro) · $4.00T · Tier 1
OpenAI (gpt-oss-20b) · $840.0B · Tier 1
Head to head
6 benchmarks · 2 models
Chatbot Arena Elo · Overall: Gemini 2.5 Pro leads by +130.5 (1448.2 vs 1317.7)
HELM · GPQA: Gemini 2.5 Pro leads by +15.5 (74.9 vs 59.4)
HELM · IFEval: Gemini 2.5 Pro leads by +10.8 (84.0 vs 73.2)
HELM · MMLU-Pro: Gemini 2.5 Pro leads by +12.3 (86.3 vs 74.0)
HELM · Omni-MATH: gpt-oss-20b leads by +14.9 (56.5 vs 41.6)
HELM · WildBench: Gemini 2.5 Pro leads by +12.0 (85.7 vs 73.7)
Full benchmark table
| Benchmark | Gemini 2.5 Pro | gpt-oss-20b |
|---|---|---|
| Chatbot Arena Elo · Overall | 1448.2 | 1317.7 |
| HELM · GPQA | 74.9 | 59.4 |
| HELM · IFEval | 84.0 | 73.2 |
| HELM · MMLU-Pro | 86.3 | 74.0 |
| HELM · Omni-MATH | 41.6 | 56.5 |
| HELM · WildBench | 85.7 | 73.7 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 2.5 Pro | $1.25 | $10.00 | 1.0M tokens (~524 books) | $34.38 |
| gpt-oss-20b | $0.03 | $0.14 | 131K tokens (~66 books) | $0.57 |
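The projected monthly cost column can be reproduced under one assumption: a 3:1 input:output token mix at 10M total tokens per month. The helper name and the 3:1 split are guesses that happen to match the listed figures:

```python
def projected_monthly_cost(input_usd_per_m: float,
                           output_usd_per_m: float,
                           tokens_m: float = 10.0) -> float:
    """Monthly USD cost for tokens_m million tokens, assuming a 3:1
    input:output token mix (an assumption, not stated by the page)."""
    blended = (3 * input_usd_per_m + output_usd_per_m) / 4  # $/1M tokens
    return blended * tokens_m

gemini_mo = projected_monthly_cost(1.25, 10.00)  # 34.375 -> $34.38
oss_mo    = projected_monthly_cost(0.03, 0.14)   # 0.575  -> $0.57
```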