
Gemma 2B vs Phi 2

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemma 2B wins 8 of 14 shared benchmarks and leads in the knowledge and math categories.

Category leads
knowledge · Gemma 2B
reasoning · Phi 2
general · Phi 2
language · Phi 2
math · Gemma 2B
Hype vs Reality
Gemma 2B · #189 by perf · no signal · QUIET
Phi 2 · #185 by perf · no signal · QUIET
Best value
Gemma 2B · no price
Phi 2 · no price
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Microsoft · $3.00T · Big Tech · Low risk
Head to head
ANLI
Gemma 2B leads by +9.3
ANLI (Adversarial NLI) · adversarially constructed natural language inference dataset where each round targets weaknesses found in previous model generations.
Gemma 2B 23.1 · Phi 2 13.8
ARC AI2
Phi 2 leads by +45.1
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
Gemma 2B 22.8 · Phi 2 67.9
BBH
Phi 2 leads by +32.3
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Gemma 2B 13.6 · Phi 2 45.9
HellaSwag
Gemma 2B leads by +23.8
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
Gemma 2B 61.9 · Phi 2 38.1
BBH (HuggingFace)
Phi 2 leads by +6.9
Gemma 2B 21.1 · Phi 2 28.0
GPQA
Gemma 2B leads by +2.0
Gemma 2B 4.9 · Phi 2 2.9
IFEval
Phi 2 leads by +0.8
Gemma 2B 26.6 · Phi 2 27.4
MATH Level 5
Gemma 2B leads by +4.4
Gemma 2B 7.4 · Phi 2 3.0
MMLU-PRO
Gemma 2B leads by +3.5
Gemma 2B 21.6 · Phi 2 18.1
MUSR
Phi 2 leads by +2.8
Gemma 2B 11.0 · Phi 2 13.8
MMLU
Phi 2 leads by +21.4
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Gemma 2B 23.1 · Phi 2 44.5
OpenBookQA
Gemma 2B leads by +6.7
OpenBookQA · science questions that require combining a given core fact with broad common knowledge, mimicking an open-book exam setting.
Gemma 2B 71.5 · Phi 2 64.8
TriviaQA
Gemma 2B leads by +8.0
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
Gemma 2B 53.2 · Phi 2 45.2
Winogrande
Gemma 2B leads by +21.4
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
Gemma 2B 30.8 · Phi 2 9.4
Full benchmark table

Benchmark            Gemma 2B   Phi 2
ANLI                 23.1       13.8
ARC AI2              22.8       67.9
BBH                  13.6       45.9
HellaSwag            61.9       38.1
BBH (HuggingFace)    21.1       28.0
GPQA                 4.9        2.9
IFEval               26.6       27.4
MATH Level 5         7.4        3.0
MMLU-PRO             21.6       18.1
MUSR                 11.0       13.8
MMLU                 23.1       44.5
OpenBookQA           71.5       64.8
TriviaQA             53.2       45.2
Winogrande           30.8       9.4
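The winner summary and the per-benchmark margins above follow from this table by simple arithmetic. Here is a minimal sketch of that tally, assuming the scores transcribed from the table, that higher is better on every benchmark, and that a tie would count for neither model; the dict and variable names are illustrative, not part of the page.

```python
# Scores transcribed from the full benchmark table (assumed: higher is better).
scores = {
    "ANLI": (23.1, 13.8),
    "ARC AI2": (22.8, 67.9),
    "BBH": (13.6, 45.9),
    "HellaSwag": (61.9, 38.1),
    "BBH (HuggingFace)": (21.1, 28.0),
    "GPQA": (4.9, 2.9),
    "IFEval": (26.6, 27.4),
    "MATH Level 5": (7.4, 3.0),
    "MMLU-PRO": (21.6, 18.1),
    "MUSR": (11.0, 13.8),
    "MMLU": (23.1, 44.5),
    "OpenBookQA": (71.5, 64.8),
    "TriviaQA": (53.2, 45.2),
    "Winogrande": (30.8, 9.4),
}

# Win tally for the "wins X of Y shared benchmarks" summary line.
gemma_wins = sum(g > p for g, p in scores.values())
print(f"Gemma 2B wins {gemma_wins} of {len(scores)} shared benchmarks")  # 8 of 14

# Per-benchmark lead margins, as shown in the head-to-head list.
for name, (g, p) in scores.items():
    leader = "Gemma 2B" if g > p else "Phi 2"
    print(f"{name}: {leader} leads by +{abs(g - p):.1f}")
```

Running this reproduces the 8-of-14 tally from the winner summary and the same margins listed in the head-to-head section.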
Pricing · per 1M tokens · projected $/mo at 10M tokens

Model      Input   Output   Context   Projected $/mo
Gemma 2B   n/a     n/a      n/a       n/a
Phi 2      n/a     n/a      n/a       n/a
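The projected $/mo column is empty here because neither model lists a price, but the projection itself is straightforward arithmetic on the per-1M-token rates. A hedged sketch of one plausible reading of "projected $/mo at 10M tokens" follows; the even input/output split, the function name, and the example prices are assumptions, not something the page states.

```python
def projected_monthly_cost(input_per_1m: float, output_per_1m: float,
                           monthly_tokens: int = 10_000_000,
                           input_share: float = 0.5) -> float:
    """Estimate monthly spend from $-per-1M-token rates.

    Assumes (hypothetically) that monthly_tokens split input_share / (1 - input_share)
    between prompt and completion tokens.
    """
    input_tokens = monthly_tokens * input_share
    output_tokens = monthly_tokens - input_tokens
    return (input_tokens / 1e6) * input_per_1m + (output_tokens / 1e6) * output_per_1m


# Hypothetical rates of $0.10 in / $0.20 out per 1M tokens project to $1.50/mo.
print(projected_monthly_cost(0.10, 0.20))  # 1.5
```

With real prices for either model, the same formula would fill in the Projected $/mo column above.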