Llama 3.1 8B Instruct vs GPT-4o-mini (2024-07-18)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4o-mini (2024-07-18) wins 9 of 9 shared benchmarks
It takes every head-to-head comparison below and leads the arena, knowledge, and math categories.
Category leads
GPT-4o-mini (2024-07-18) leads in all four categories: arena, knowledge, math, and coding.
Hype vs Reality
Attention vs performance
Llama 3.1 8B Instruct: #199 by performance · no attention signal
GPT-4o-mini (2024-07-18): #125 by performance · no attention signal
Best value
Llama 3.1 8B Instruct offers 6.8x better value than GPT-4o-mini (2024-07-18)
Llama 3.1 8B Instruct: 782.9 pts/$ at $0.04/M
GPT-4o-mini (2024-07-18): 115.2 pts/$ at $0.38/M
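How the value metric appears to work, as a minimal Python sketch: "pts/$" is read here as an aggregate benchmark score divided by a blended price per million tokens. The exact aggregation is not published on this page, so the sketch only uses the displayed figures.

```python
# Minimal sketch of the value comparison, assuming "pts/$" means aggregate
# benchmark points bought per dollar of blended usage. Only the numbers shown
# above are used; the underlying score aggregation is an assumption.

def value_ratio(pts_per_dollar_a: float, pts_per_dollar_b: float) -> float:
    """How many times more benchmark points per dollar model A delivers than model B."""
    return pts_per_dollar_a / pts_per_dollar_b

llama_value = 782.9  # Llama 3.1 8B Instruct at $0.04/M blended
mini_value = 115.2   # GPT-4o-mini (2024-07-18) at $0.38/M blended

print(f"{value_ratio(llama_value, mini_value):.1f}x")  # -> 6.8x in Llama's favor
```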
Vendor risk
Who is behind the model
Meta AI (Llama 3.1 8B Instruct): $1.50T · Tier 1
OpenAI (GPT-4o-mini 2024-07-18): $840.0B · Tier 1
Head to head
9 benchmarks · 2 models
Chatbot Arena Elo · Overall
GPT-4o-mini (2024-07-18) leads by +106.2
Llama 3.1 8B Instruct: 1211.0 · GPT-4o-mini (2024-07-18): 1317.2
Balrog
GPT-4o-mini (2024-07-18) leads by +2.3
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
Llama 3.1 8B Instruct: 15.1 · GPT-4o-mini (2024-07-18): 17.4
GPQA diamond
GPT-4o-mini (2024-07-18) leads by +15.7
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Llama 3.1 8B Instruct: 1.3 · GPT-4o-mini (2024-07-18): 17.0
GSM8K
GPT-4o-mini (2024-07-18) leads by +8.9
Grade School Math 8K · 8,500 linguistically diverse grade-school math word problems that require multi-step reasoning to solve.
Llama 3.1 8B Instruct: 82.4 · GPT-4o-mini (2024-07-18): 91.3
MATH level 5
GPT-4o-mini (2024-07-18) leads by +29.7
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Llama 3.1 8B Instruct: 22.9 · GPT-4o-mini (2024-07-18): 52.6
MMLU
GPT-4o-mini (2024-07-18) leads by +34.2
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Llama 3.1 8B Instruct: 41.5 · GPT-4o-mini (2024-07-18): 75.7
OTIS Mock AIME 2024-2025
GPT-4o-mini (2024-07-18) leads by +4.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Llama 3.1 8B Instruct: 2.4 · GPT-4o-mini (2024-07-18): 6.8
PIQA
GPT-4o-mini (2024-07-18) leads by +15.0
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
Llama 3.1 8B Instruct: 62.4 · GPT-4o-mini (2024-07-18): 77.4
WeirdML
GPT-4o-mini (2024-07-18) leads by +10.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Llama 3.1 8B Instruct: 1.7 · GPT-4o-mini (2024-07-18): 11.8
Full benchmark table
| Benchmark | Llama 3.1 8B Instruct | GPT-4o-mini (2024-07-18) |
|---|---|---|
| Chatbot Arena Elo · Overall | 1211.0 | 1317.2 |
| Balrog | 15.1 | 17.4 |
| GPQA diamond | 1.3 | 17.0 |
| GSM8K | 82.4 | 91.3 |
| MATH level 5 | 22.9 | 52.6 |
| MMLU | 41.5 | 75.7 |
| OTIS Mock AIME 2024-2025 | 2.4 | 6.8 |
| PIQA | 62.4 | 77.4 |
| WeirdML | 1.7 | 11.8 |
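For reference, a small Python sketch that reproduces the summary figures from the table above (the win count and the per-benchmark leads). Each score is on its benchmark's own scale, so leads are not comparable across rows.

```python
# Recompute the head-to-head summary from the benchmark table above:
# count which model scores higher on each shared benchmark and report the lead.

scores = {  # (Llama 3.1 8B Instruct, GPT-4o-mini 2024-07-18)
    "Chatbot Arena Elo · Overall": (1211.0, 1317.2),
    "Balrog": (15.1, 17.4),
    "GPQA diamond": (1.3, 17.0),
    "GSM8K": (82.4, 91.3),
    "MATH level 5": (22.9, 52.6),
    "MMLU": (41.5, 75.7),
    "OTIS Mock AIME 2024-2025": (2.4, 6.8),
    "PIQA": (62.4, 77.4),
    "WeirdML": (1.7, 11.8),
}

wins = sum(1 for llama, mini in scores.values() if mini > llama)
print(f"GPT-4o-mini wins {wins} of {len(scores)} shared benchmarks")  # -> 9 of 9

for name, (llama, mini) in scores.items():
    leader = "GPT-4o-mini (2024-07-18)" if mini > llama else "Llama 3.1 8B Instruct"
    print(f"{name}: {leader} leads by {abs(mini - llama):+.1f}")
```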
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Llama 3.1 8B Instruct | $0.02 | $0.05 | 16K tokens (~8 books) | $0.28 |
| GPT-4o-mini (2024-07-18) | $0.15 | $0.60 | 128K tokens (~64 books) | $2.62 |
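A rough sketch of how the projected monthly figure can be reproduced. The 3:1 input:output token split is an assumption (the page does not state it), but it matches the $0.28 and $2.62 shown above at 10M tokens per month.

```python
# Projected monthly spend from per-million-token prices, assuming a 3:1
# input:output split and 10M tokens/month. Both assumptions are illustrative.

def monthly_cost(input_per_m: float, output_per_m: float,
                 tokens_per_month_m: float = 10.0, input_share: float = 0.75) -> float:
    """Projected monthly spend in dollars for a given token volume (in millions)."""
    blended = input_share * input_per_m + (1 - input_share) * output_per_m
    return blended * tokens_per_month_m

print(monthly_cost(0.02, 0.05))  # Llama 3.1 8B Instruct   -> ~$0.28
print(monthly_cost(0.15, 0.60))  # GPT-4o-mini (2024-07-18) -> ~$2.62
```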