R1 0528 vs GPT-4 Turbo
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
R1 0528 wins 5 of 5 shared benchmarks. Leads in knowledge · math · reasoning · coding.
Category leads
knowledge · R1 0528
math · R1 0528
reasoning · R1 0528
coding · R1 0528
Hype vs Reality
Attention vs performance
R1 0528 · #53 by perf · no signal
GPT-4 Turbo · #90 by perf · no signal
Best value
R1 0528 offers 17.1x better value than GPT-4 Turbo.
R1 0528 · 43.7 pts/$ · $1.32/M
GPT-4 Turbo · 2.5 pts/$ · $20.00/M
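The blended $/M figures above can be reproduced from the per-token prices in the pricing table further down; a minimal Python sketch, assuming a 1:1 input:output blend (an inference from the numbers, not a formula stated on the page; function names are mine):

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Average of input and output price per 1M tokens (assumed 1:1 blend)."""
    return (input_per_m + output_per_m) / 2

# Per-1M-token prices from the pricing table below.
r1_blended = blended_price(0.50, 2.15)       # 1.325, shown as $1.32/M
gpt4t_blended = blended_price(10.00, 30.00)  # 20.0, shown as $20.00/M

# Dividing the rounded pts/$ figures gives ~17.5x; the 17.1x headline
# presumably comes from unrounded underlying scores.
value_ratio = 43.7 / 2.5
```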
Vendor risk
Mixed exposure · one or more vendors flagged.
DeepSeek · $3.4B · Tier 1
OpenAI · $840.0B · Tier 1
Head to head
5 benchmarks · 2 models
GPQA diamond
R1 0528 leads by +60.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
R1 0528
68.4
GPT-4 Turbo
7.5
MATH level 5
R1 0528 leads by +73.6
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
R1 0528
96.6
GPT-4 Turbo
23.0
OTIS Mock AIME 2024-2025
R1 0528 leads by +65.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
R1 0528
66.4
GPT-4 Turbo
1.0
SimpleBench
R1 0528 leads by +18.9
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
R1 0528
29.0
GPT-4 Turbo
10.1
WeirdML
R1 0528 leads by +29.2
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
R1 0528
41.6
GPT-4 Turbo
12.4
Full benchmark table
| Benchmark | R1 0528 | GPT-4 Turbo |
|---|---|---|
| GPQA diamond | 68.4 | 7.5 |
| MATH level 5 | 96.6 | 23.0 |
| OTIS Mock AIME 2024-2025 | 66.4 | 1.0 |
| SimpleBench | 29.0 | 10.1 |
| WeirdML | 41.6 | 12.4 |
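The head-to-head verdict follows mechanically from the shared scores; a small Python sketch that tallies wins and per-benchmark margins (scores copied from the table above, variable names are mine):

```python
# R1 0528 vs GPT-4 Turbo scores from the benchmark table.
scores = {
    "GPQA diamond":             (68.4, 7.5),
    "MATH level 5":             (96.6, 23.0),
    "OTIS Mock AIME 2024-2025": (66.4, 1.0),
    "SimpleBench":              (29.0, 10.1),
    "WeirdML":                  (41.6, 12.4),
}

# Count benchmarks where R1 0528 scores higher, and compute each lead.
r1_wins = sum(1 for r1, gpt in scores.values() if r1 > gpt)
margins = {name: round(r1 - gpt, 1) for name, (r1, gpt) in scores.items()}

print(f"R1 0528 wins {r1_wins} of {len(scores)} shared benchmarks")
```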
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| R1 0528 | $0.50 | $2.15 | 164K tokens (~82 books) | $9.13 |
| GPT-4 Turbo | $10.00 | $30.00 | 128K tokens (~64 books) | $150.00 |
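The projected $/mo column is reproduced by applying a 3:1 input:output token split to the per-million prices (an assumption inferred from the figures, not stated on the page); a sketch:

```python
def projected_monthly(input_per_m: float, output_per_m: float,
                      tokens_m: float = 10.0,
                      input_share: float = 0.75) -> float:
    """Monthly cost for tokens_m million tokens, assuming input_share of
    tokens are input and the rest output (3:1 split by default)."""
    blended = input_share * input_per_m + (1 - input_share) * output_per_m
    return blended * tokens_m

projected_monthly(0.50, 2.15)    # 9.125, displayed as $9.13
projected_monthly(10.00, 30.00)  # 150.0, displayed as $150.00
```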