GPT-4 Turbo vs R1 0528
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
R1 0528 wins 5 of 5 benchmarks
R1 0528 leads on every shared benchmark, with category leads in knowledge, math, reasoning, and coding.
Category leads
knowledge · R1 0528
math · R1 0528
reasoning · R1 0528
coding · R1 0528
Hype vs Reality
Attention vs performance
GPT-4 Turbo · #90 by performance · no signal
R1 0528 · #53 by performance · no signal
Best value
R1 0528 · 17.1x better value than GPT-4 Turbo
GPT-4 Turbo · 2.5 pts/$ · $20.00/M
R1 0528 · 43.7 pts/$ · $1.32/M
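The page does not say how the value figures are derived; a minimal sketch of one consistent reading, assuming the $/M figure is a simple 50/50 blend of input and output prices and the value multiple is the ratio of the two listed pts/$ figures (the displayed 17.1x presumably comes from less-rounded internal numbers):

```python
# Hypothetical reconstruction of the "Best value" figures shown above.
# Assumptions: blended $/M = average of input and output price; value
# multiple = ratio of the listed pts/$ figures.

prices = {  # $ per 1M tokens, from the pricing table below
    "GPT-4 Turbo": {"input": 10.00, "output": 30.00},
    "R1 0528": {"input": 0.50, "output": 2.15},
}

for model, p in prices.items():
    blended = (p["input"] + p["output"]) / 2
    print(f"{model}: blended ${blended:.2f}/M")
# -> GPT-4 Turbo: blended $20.00/M
# -> R1 0528: blended $1.32/M

pts_per_dollar = {"GPT-4 Turbo": 2.5, "R1 0528": 43.7}  # as listed above
multiple = pts_per_dollar["R1 0528"] / pts_per_dollar["GPT-4 Turbo"]
print(f"value multiple ~ {multiple:.1f}x")  # ~17.5x vs. the listed 17.1x
```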
Vendor risk
Mixed exposure · one or more vendors flagged
OpenAI · $840.0B · Tier 1
DeepSeek · $3.4B · Tier 1
Head to head
5 benchmarks · 2 models
GPQA diamond
R1 0528 leads by +60.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4 Turbo · 7.5
R1 0528 · 68.4
MATH level 5
R1 0528 leads by +73.6
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4 Turbo · 23.0
R1 0528 · 96.6
OTIS Mock AIME 2024-2025
R1 0528 leads by +65.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4 Turbo · 1.0
R1 0528 · 66.4
SimpleBench
R1 0528 leads by +18.9
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-4 Turbo · 10.1
R1 0528 · 29.0
WeirdML
R1 0528 leads by +29.2
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4 Turbo · 12.4
R1 0528 · 41.6
Full benchmark table
| Benchmark | GPT-4 Turbo | R1 0528 |
|---|---|---|
| GPQA diamond | 7.5 | 68.4 |
| MATH level 5 | 23.0 | 96.6 |
| OTIS Mock AIME 2024-2025 | 1.0 | 66.4 |
| SimpleBench | 10.1 | 29.0 |
| WeirdML | 12.4 | 41.6 |
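For reference, a small sketch that recomputes the head-to-head margins and the 5/5 win count directly from the table above; the per-benchmark "leads by" figures are simply the differences of these rounded scores:

```python
# Recompute per-benchmark margins and the win count from the table above.
scores = {  # benchmark: (GPT-4 Turbo, R1 0528)
    "GPQA diamond": (7.5, 68.4),
    "MATH level 5": (23.0, 96.6),
    "OTIS Mock AIME 2024-2025": (1.0, 66.4),
    "SimpleBench": (10.1, 29.0),
    "WeirdML": (12.4, 41.6),
}

wins = 0
for name, (gpt4, r1) in scores.items():
    margin = round(r1 - gpt4, 1)
    wins += r1 > gpt4
    print(f"{name}: R1 0528 leads by +{margin}")

print(f"R1 0528 wins {wins} of {len(scores)} shared benchmarks")
# -> margins +60.9, +73.6, +65.4, +18.9, +29.2; wins 5 of 5
```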
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4 Turbo | $10.00 | $30.00 | 128K tokens (~64 books) | $150.00 |
| R1 0528 | $0.50 | $2.15 | 164K tokens (~82 books) | $9.13 |
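The projected $/mo column is not broken down on the page; a minimal sketch, assuming the 10M monthly tokens are split roughly 75% input / 25% output, which is one mix that reproduces the listed projections:

```python
# Projected monthly cost at 10M tokens, assuming a 75/25 input/output split.
# The split is an assumption, not stated on the page.
prices = {  # $ per 1M tokens, from the table above
    "GPT-4 Turbo": {"input": 10.00, "output": 30.00},
    "R1 0528": {"input": 0.50, "output": 2.15},
}

for model, p in prices.items():
    cost = 7.5 * p["input"] + 2.5 * p["output"]  # 7.5M input + 2.5M output tokens
    print(f"{model}: ${cost:.2f}/mo")
# -> GPT-4 Turbo: $150.00/mo
# -> R1 0528: $9.12/mo (exact value $9.125, shown on the page as $9.13)
```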