Claude Haiku 4.5 vs gpt-oss-120b
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
gpt-oss-120b wins 4 of 5 shared benchmarks
It leads in knowledge and math; Claude Haiku 4.5 takes coding.
Category leads
knowledge · gpt-oss-120b
math · gpt-oss-120b
coding · Claude Haiku 4.5
Hype vs Reality
Attention vs performance
Claude Haiku 4.5
#159 by perf · no signal
gpt-oss-120b
#106 by perf · no signal
Best value
gpt-oss-120b
33.1x better value than Claude Haiku 4.5
Claude Haiku 4.5
12.4 pts/$
$3.00/M
gpt-oss-120b
409.6 pts/$
$0.11/M
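The "33.1x better value" headline can be roughly reproduced from the displayed pts/$ figures. A minimal sketch (the pts/$ metric itself is the site's; the small gap between 33.0 here and the page's 33.1x presumably comes from the site dividing unrounded values):

```python
# Displayed value figures, in benchmark points per dollar (per 1M tokens).
claude_pts_per_dollar = 12.4    # Claude Haiku 4.5
gpt_oss_pts_per_dollar = 409.6  # gpt-oss-120b

# Value multiple of gpt-oss-120b over Claude Haiku 4.5.
ratio = gpt_oss_pts_per_dollar / claude_pts_per_dollar
print(f"{ratio:.1f}x")  # ~33x from the rounded inputs shown on the page
```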
Vendor risk
Who is behind the model
Anthropic
$380.0B · Tier 1
OpenAI
$840.0B · Tier 1
Head to head
5 benchmarks · 2 models
Claude Haiku 4.5 · gpt-oss-120b
GPQA diamond
gpt-oss-120b leads by +6.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Haiku 4.5
61.6
gpt-oss-120b
67.7
OTIS Mock AIME 2024–2025
gpt-oss-120b leads by +22.3
OTIS Mock AIME 2024–2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Claude Haiku 4.5
66.6
gpt-oss-120b
88.9
SimpleQA Verified
gpt-oss-120b leads by +8.0
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Claude Haiku 4.5
5.9
gpt-oss-120b
13.9
Terminal Bench
Claude Haiku 4.5 leads by +16.8
Terminal Bench · tests the ability to accomplish real-world tasks using terminal commands, evaluating shell scripting and CLI tool proficiency.
Claude Haiku 4.5
35.5
gpt-oss-120b
18.7
WeirdML
gpt-oss-120b leads by +2.8
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Haiku 4.5
45.4
gpt-oss-120b
48.2
Full benchmark table
| Benchmark | Claude Haiku 4.5 | gpt-oss-120b |
|---|---|---|
| GPQA diamond | 61.6 | 67.7 |
| OTIS Mock AIME 2024–2025 | 66.6 | 88.9 |
| SimpleQA Verified | 5.9 | 13.9 |
| Terminal Bench | 35.5 | 18.7 |
| WeirdML | 45.4 | 48.2 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Haiku 4.5 | $1.00 | $5.00 | 200K tokens (~100 books) | $20.00 |
| gpt-oss-120b | $0.04 | $0.19 | 131K tokens (~66 books) | $0.77 |
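The projected $/mo figures are consistent with the 10M monthly tokens splitting roughly 75% input / 25% output; that split is an inference from the numbers, not something the page states. A minimal sketch:

```python
def projected_monthly(input_price, output_price, total_m=10.0, input_share=0.75):
    """Project monthly cost from per-1M-token prices, assuming a fixed
    input/output split of the monthly token volume (assumption, see above)."""
    input_m = total_m * input_share
    output_m = total_m * (1.0 - input_share)
    return input_m * input_price + output_m * output_price

print(projected_monthly(1.00, 5.00))  # 7.5*$1.00 + 2.5*$5.00 = $20.00
print(projected_monthly(0.04, 0.19))  # 7.5*$0.04 + 2.5*$0.19 ≈ $0.775, shown as $0.77
```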