
GPT-4o-mini (2024-07-18) vs Llama 3.1 8B Instruct

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-4o-mini (2024-07-18) wins 9 of 9 shared benchmarks and leads every tracked category: arena, knowledge, math, and coding.

Category leads
arena · GPT-4o-mini (2024-07-18)
knowledge · GPT-4o-mini (2024-07-18)
math · GPT-4o-mini (2024-07-18)
coding · GPT-4o-mini (2024-07-18)
Hype vs Reality
GPT-4o-mini (2024-07-18) · #125 by performance · no signal · QUIET
Llama 3.1 8B Instruct · #199 by performance · no signal · QUIET
Best value
Llama 3.1 8B Instruct offers 6.8x better value than GPT-4o-mini (2024-07-18).
GPT-4o-mini (2024-07-18) · 115.2 pts/$ · $0.38/M
Llama 3.1 8B Instruct · 782.9 pts/$ · $0.04/M
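
For anyone reproducing these figures, a minimal sketch in Python. Two assumptions, since the page states neither formula: the $/M price is a plain 50/50 average of the input and output rates from the pricing table below, and pts/$ divides an aggregate benchmark score by that blended price.

```python
# Sketch: reproducing the "Best value" figures under two assumptions
# (neither formula is stated on the page):
#   1. $/M is the plain average of input and output prices per 1M tokens.
#   2. pts/$ is an aggregate benchmark score divided by that blended price.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/M assuming a 50/50 input/output token split."""
    return (input_per_m + output_per_m) / 2

print(f"{blended_price(0.15, 0.60):.3f}")  # 0.375 -> shown as $0.38/M (GPT-4o-mini)
print(f"{blended_price(0.02, 0.05):.3f}")  # 0.035 -> shown as $0.04/M (Llama 3.1 8B)

# The headline multiple follows directly from the two published pts/$ figures:
print(f"{782.9 / 115.2:.1f}x")             # 6.8x better value for Llama 3.1 8B
```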
Vendor risk
OpenAI · $840.0B · Tier 1 · Medium risk
Meta AI · $1.50T · Tier 1 · Low risk
Head to head
Chatbot Arena Elo · Overall
GPT-4o-mini (2024-07-18) leads by +106.2
GPT-4o-mini (2024-07-18) · 1317.2
Llama 3.1 8B Instruct · 1211.0

Balrog
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
GPT-4o-mini (2024-07-18) leads by +2.3
GPT-4o-mini (2024-07-18) · 17.4
Llama 3.1 8B Instruct · 15.1

GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o-mini (2024-07-18) leads by +15.7
GPT-4o-mini (2024-07-18) · 17.0
Llama 3.1 8B Instruct · 1.3

GSM8K
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
GPT-4o-mini (2024-07-18) leads by +8.9
GPT-4o-mini (2024-07-18) · 91.3
Llama 3.1 8B Instruct · 82.4

MATH level 5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o-mini (2024-07-18) leads by +29.8
GPT-4o-mini (2024-07-18) · 52.6
Llama 3.1 8B Instruct · 22.9

MMLU
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4o-mini (2024-07-18) leads by +34.3
GPT-4o-mini (2024-07-18) · 75.7
Llama 3.1 8B Instruct · 41.5

OTIS Mock AIME 2024-2025
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o-mini (2024-07-18) leads by +4.4
GPT-4o-mini (2024-07-18) · 6.8
Llama 3.1 8B Instruct · 2.4

PIQA
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
GPT-4o-mini (2024-07-18) leads by +15.0
GPT-4o-mini (2024-07-18) · 77.4
Llama 3.1 8B Instruct · 62.4

WeirdML
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4o-mini (2024-07-18) leads by +10.0
GPT-4o-mini (2024-07-18) · 11.8
Llama 3.1 8B Instruct · 1.7
Full benchmark table
Benchmark · GPT-4o-mini (2024-07-18) / Llama 3.1 8B Instruct
Chatbot Arena Elo · Overall: 1317.2 / 1211.0
Balrog: 17.4 / 15.1
GPQA diamond: 17.0 / 1.3
GSM8K: 91.3 / 82.4
MATH level 5: 52.6 / 22.9
MMLU: 75.7 / 41.5
OTIS Mock AIME 2024-2025: 6.8 / 2.4
PIQA: 77.4 / 62.4
WeirdML: 11.8 / 1.7
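
For readers scripting against these results, a minimal sketch that derives the winner summary from the scores above. One caveat: deltas recomputed from these rounded scores occasionally differ from the page's stated leads by 0.1 (e.g. MATH level 5 gives +29.7 here against the page's +29.8), which is consistent with the page subtracting unrounded scores before display.

```python
# Sketch: derive the "9 of 9 shared benchmarks" summary from the table above.
# Each entry is (GPT-4o-mini 2024-07-18, Llama 3.1 8B Instruct).
scores = {
    "Chatbot Arena Elo · Overall": (1317.2, 1211.0),
    "Balrog": (17.4, 15.1),
    "GPQA diamond": (17.0, 1.3),
    "GSM8K": (91.3, 82.4),
    "MATH level 5": (52.6, 22.9),
    "MMLU": (75.7, 41.5),
    "OTIS Mock AIME 2024-2025": (6.8, 2.4),
    "PIQA": (77.4, 62.4),
    "WeirdML": (11.8, 1.7),
}

wins = sum(gpt > llama for gpt, llama in scores.values())
print(f"GPT-4o-mini wins {wins} of {len(scores)} shared benchmarks")  # 9 of 9

# Per-benchmark leads, recomputed from the rounded scores shown here:
for name, (gpt, llama) in scores.items():
    print(f"{name}: +{gpt - llama:.1f}")
```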
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input / Output / Context / Projected $/mo
GPT-4o-mini (2024-07-18) · $0.15 / $0.60 / 128K tokens (~64 books) / $2.62
Llama 3.1 8B Instruct · $0.02 / $0.05 / 16K tokens (~8 books) / $0.28
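
The projected $/mo column is reproducible under one assumption the page does not state: that the 10M monthly tokens split roughly 3:1 between input and output. A minimal sketch under that assumption (the 0.75 input share is inferred from the published figures, not documented):

```python
# Sketch: the "projected $/mo at 10M tokens" column, assuming a 3:1
# input-to-output token split (inferred, not stated on the page).

def projected_monthly(input_per_m: float, output_per_m: float,
                      total_m_tokens: float = 10.0,
                      input_share: float = 0.75) -> float:
    input_m = total_m_tokens * input_share
    output_m = total_m_tokens - input_m
    return input_m * input_per_m + output_m * output_per_m

print(f"${projected_monthly(0.15, 0.60):.2f}")  # $2.62 -> GPT-4o-mini (2024-07-18)
print(f"${projected_monthly(0.02, 0.05):.2f}")  # $0.28 -> Llama 3.1 8B Instruct
```

For comparison, an even 50/50 split would give $3.75 and $0.35, so the monthly projection appears to weight input tokens more heavily than the blended $/M figure in the Best value section does.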