
Gemini 1.5 Pro (Feb 2024) vs Llama 3 8B Instruct

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 1.5 Pro (Feb 2024) wins 5 of 5 shared benchmarks. Leads in arena · knowledge · math.

Category leads
arena · Gemini 1.5 Pro (Feb 2024)
knowledge · Gemini 1.5 Pro (Feb 2024)
math · Gemini 1.5 Pro (Feb 2024)
Hype vs Reality
Gemini 1.5 Pro (Feb 2024): #138 by perf · no signal · QUIET
Llama 3 8B Instruct: #184 by perf · no signal · QUIET
Best value
Gemini 1.5 Pro (Feb 2024): no price
Llama 3 8B Instruct: 880.0 pts/$ · $0.04/M
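The page doesn't state how the pts/$ figure is derived. A minimal sketch of a generic points-per-dollar metric, assuming it divides a composite performance score by the listed price per 1M tokens; the 35.2 score below is back-solved from the listed figure, not a published number:

```python
def points_per_dollar(performance_score: float, price_per_million: float) -> float:
    """Generic value metric: benchmark points per dollar of inference spend."""
    if price_per_million <= 0:
        raise ValueError("value is undefined without a positive price")
    return performance_score / price_per_million

# Llama 3 8B Instruct is listed at $0.04 per 1M tokens.
# A composite score of 35.2 is an assumption back-solved from 880.0 pts/$.
print(f"{points_per_dollar(35.2, 0.04):.1f} pts/$")  # 880.0 pts/$
```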
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Meta AI · $1.50T · Tier 1 · Low risk
Head to head
Chatbot Arena Elo · Overall
Gemini 1.5 Pro (Feb 2024) leads by +100.3
Gemini 1.5 Pro (Feb 2024): 1322.5
Llama 3 8B Instruct: 1222.2
GPQA Diamond
Gemini 1.5 Pro (Feb 2024) leads by +26.4
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 1.5 Pro (Feb 2024): 27.8
Llama 3 8B Instruct: 1.4
MATH Level 5
Gemini 1.5 Pro (Feb 2024) leads by +34.7
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Gemini 1.5 Pro (Feb 2024): 40.8
Llama 3 8B Instruct: 6.1
MMLU
Gemini 1.5 Pro (Feb 2024) leads by +18.5
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Gemini 1.5 Pro (Feb 2024): 76.9
Llama 3 8B Instruct: 58.4
OTIS Mock AIME 2024-2025
Gemini 1.5 Pro (Feb 2024) leads by +6.0
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 1.5 Pro (Feb 2024): 6.7
Llama 3 8B Instruct: 0.7
Full benchmark table
Benchmark · Gemini 1.5 Pro (Feb 2024) · Llama 3 8B Instruct
Chatbot Arena Elo (Overall) · 1322.5 · 1222.2
GPQA Diamond · 27.8 · 1.4
MATH Level 5 · 40.8 · 6.1
MMLU · 76.9 · 58.4
OTIS Mock AIME 2024-2025 · 6.7 · 0.7
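The per-benchmark leads and the "wins 5 of 5" summary follow directly from this table. A minimal sketch of that arithmetic (the scores below simply restate the table):

```python
# Shared benchmark scores from the table above: (Gemini 1.5 Pro, Llama 3 8B Instruct).
scores = {
    "Chatbot Arena Elo (Overall)": (1322.5, 1222.2),
    "GPQA Diamond": (27.8, 1.4),
    "MATH Level 5": (40.8, 6.1),
    "MMLU": (76.9, 58.4),
    "OTIS Mock AIME 2024-2025": (6.7, 0.7),
}

wins = 0
for name, (gemini, llama) in scores.items():
    wins += gemini > llama
    print(f"{name}: Gemini 1.5 Pro leads by {gemini - llama:+.1f}")

print(f"Gemini 1.5 Pro wins {wins} of {len(scores)} shared benchmarks")
```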
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Gemini 1.5 Pro (Feb 2024) · no price · no price · — · —
Llama 3 8B Instruct · $0.03 · $0.04 · 8K tokens · $0.33
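The projected $/mo column is the monthly spend implied by the per-token prices at 10M tokens. The page doesn't state its input/output mix, but an assumed 2:1 input:output split reproduces the listed $0.33 for Llama 3 8B Instruct; a minimal sketch under that assumption:

```python
def projected_monthly_cost(input_price_per_m: float,
                           output_price_per_m: float,
                           monthly_tokens_m: float = 10.0,
                           input_share: float = 2 / 3) -> float:
    """Blend input and output prices over a monthly token budget (in millions of tokens).

    The 2:1 input:output split is an assumption; the page does not document
    the mix behind its projected $/mo column.
    """
    input_cost = monthly_tokens_m * input_share * input_price_per_m
    output_cost = monthly_tokens_m * (1 - input_share) * output_price_per_m
    return input_cost + output_cost

# Llama 3 8B Instruct: $0.03 input / $0.04 output per 1M tokens.
print(f"${projected_monthly_cost(0.03, 0.04):.2f}/mo")  # $0.33/mo
```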