gpt-oss-120b vs Llama 3.1 405B
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
gpt-oss-120b wins on 3/4 benchmarks
gpt-oss-120b wins 3 of 4 shared benchmarks. Leads in knowledge · math · coding.
Category leads
knowledge · gpt-oss-120b
math · gpt-oss-120b
reasoning · Llama 3.1 405B
coding · gpt-oss-120b
Hype vs Reality
Attention vs performance
gpt-oss-120b
#108 by perf · no signal
Llama 3.1 405B
#153 by perf · no signal
Vendor risk
Who is behind the model
OpenAI
$840.0B · Tier 1
Meta AI
$1.50T · Tier 1
Head to head
4 benchmarks · 2 models
gpt-oss-120b · Llama 3.1 405B
GPQA diamond
gpt-oss-120b leads by +33.2
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
gpt-oss-120b
67.7
Llama 3.1 405B
34.5
OTIS Mock AIME 2024-2025
gpt-oss-120b leads by +79.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
gpt-oss-120b
88.9
Llama 3.1 405B
9.6
SimpleBench
Llama 3.1 405B leads by +1.1
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
gpt-oss-120b
6.5
Llama 3.1 405B
7.6
WeirdML
gpt-oss-120b leads by +26.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
gpt-oss-120b
48.2
Llama 3.1 405B
21.4
Full benchmark table
| Benchmark | gpt-oss-120b | Llama 3.1 405B |
|---|---|---|
| GPQA diamond | 67.7 | 34.5 |
| OTIS Mock AIME 2024-2025 | 88.9 | 9.6 |
| SimpleBench | 6.5 | 7.6 |
| WeirdML | 48.2 | 21.4 |
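The winner summary and per-benchmark leads above follow directly from the score table. A minimal sketch of that derivation (scores copied from the table; the dictionary layout is my own, not the site's data format):

```python
# Scores from the table above: benchmark -> (gpt-oss-120b, Llama 3.1 405B)
scores = {
    "GPQA diamond": (67.7, 34.5),
    "OTIS Mock AIME 2024-2025": (88.9, 9.6),
    "SimpleBench": (6.5, 7.6),
    "WeirdML": (48.2, 21.4),
}

# Count shared benchmarks where gpt-oss-120b scores higher.
wins = sum(a > b for a, b in scores.values())

# Lead margin per benchmark (positive = gpt-oss-120b ahead).
margins = {name: round(a - b, 1) for name, (a, b) in scores.items()}

print(wins)                      # 3
print(margins["GPQA diamond"])   # 33.2
print(margins["SimpleBench"])    # -1.1
```

This reproduces the "wins 3 of 4 shared benchmarks" headline and each "leads by" figure.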
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| gpt-oss-120b | $0.04 | $0.18 | 131K tokens (~66 books) | $0.74 |
| Llama 3.1 405B | — | — | — | — |
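The projected monthly cost is straightforward arithmetic over the per-1M-token prices. A minimal sketch, assuming a fixed input/output token split (the page does not state the split it uses, so the 80/20 default below is an assumption and will not exactly reproduce the $0.74 figure):

```python
def projected_monthly_cost(input_price_per_m: float,
                           output_price_per_m: float,
                           total_tokens_m: float,
                           input_share: float = 0.8) -> float:
    """Project monthly cost given per-1M-token prices.

    input_share is an assumed fraction of traffic that is input
    tokens; the comparison page does not publish the split it uses.
    """
    input_cost = total_tokens_m * input_share * input_price_per_m
    output_cost = total_tokens_m * (1 - input_share) * output_price_per_m
    return input_cost + output_cost

# gpt-oss-120b at $0.04 in / $0.18 out, 10M tokens per month
cost = projected_monthly_cost(0.04, 0.18, 10)
print(round(cost, 2))  # 0.68 under the assumed 80/20 split
```

A heavier output mix pushes the projection up, which is likely why the page's figure is somewhat higher.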