
GPT-4 Turbo vs R1 0528

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

R1 0528 wins all 5 shared benchmarks and leads every category: knowledge, math, reasoning, and coding.

Category leads
knowledge · R1 0528
math · R1 0528
reasoning · R1 0528
coding · R1 0528
Hype vs Reality
GPT-4 Turbo · #90 by perf · no signal · QUIET
R1 0528 · #53 by perf · no signal · QUIET
Best value
R1 0528 · 17.1x better value than GPT-4 Turbo
GPT-4 Turbo · 2.5 pts/$ · $20.00/M
R1 0528 · 43.7 pts/$ · $1.32/M
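The value figures above can be sketched in Python. Two assumptions, since the page does not state its methodology: the $/M figure is a simple mean of the input and output prices from the pricing table below, and pts/$ divides an average benchmark score by that blended price.

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens: a simple mean of input and output prices
    (an assumption; the page does not state its blending weights)."""
    return (input_per_m + output_per_m) / 2

def value_score(avg_points: float, price_per_m: float) -> float:
    """Benchmark points per blended dollar (pts/$)."""
    return avg_points / price_per_m

print(blended_price(10.00, 30.00))  # 20.0   -> matches the $20.00/M shown
print(blended_price(0.50, 2.15))    # ≈1.325 -> matches the $1.32/M shown (rounded)
```

Under these assumptions both displayed $/M figures are reproduced; the 17.1x multiple is then the ratio of the two pts/$ scores.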
Vendor risk
One or more vendors flagged
OpenAI · $840.0B · Tier 1 · Medium risk
DeepSeek · $3.4B · Tier 1 · Higher risk
Head to head
GPQA diamond · R1 0528 leads by +60.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4 Turbo 7.5 · R1 0528 68.4
MATH level 5 · R1 0528 leads by +73.7
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4 Turbo 23.0 · R1 0528 96.6
OTIS Mock AIME 2024-2025 · R1 0528 leads by +65.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4 Turbo 1.0 · R1 0528 66.4
SimpleBench · R1 0528 leads by +18.8
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-4 Turbo 10.1 · R1 0528 29.0
WeirdML · R1 0528 leads by +29.2
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
GPT-4 Turbo 12.4 · R1 0528 41.6
Full benchmark table
Benchmark · GPT-4 Turbo · R1 0528
GPQA diamond · 7.5 · 68.4
MATH level 5 · 23.0 · 96.6
OTIS Mock AIME 2024-2025 · 1.0 · 66.4
SimpleBench · 10.1 · 29.0
WeirdML · 12.4 · 41.6
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
GPT-4 Turbo (OpenAI) · $10.00 · $30.00 · 128K tokens (~64 books) · $150.00
R1 0528 (DeepSeek) · $0.50 · $2.15 · 164K tokens (~82 books) · $9.13
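The projected monthly figures can be reproduced with a short sketch. The page does not state its input/output split, but a 75% input / 25% output mix over 10M tokens matches both numbers exactly, so that split is assumed here.

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_tokens_m: float = 10.0,
                           input_share: float = 0.75) -> float:
    """Projected monthly cost in dollars for a given token mix.

    input_share=0.75 is an inference: a 75% input / 25% output split
    reproduces the page's projections; the page does not state its mix.
    """
    input_tokens_m = total_tokens_m * input_share
    output_tokens_m = total_tokens_m * (1 - input_share)
    return input_tokens_m * input_per_m + output_tokens_m * output_per_m

print(projected_monthly_cost(10.00, 30.00))  # 150.0  -> $150.00/mo shown
print(projected_monthly_cost(0.50, 2.15))    # ≈9.125 -> $9.13/mo shown
```

A different mix changes the ranking's magnitude but not its direction: R1 0528 is cheaper at any input/output split, since both of its per-token rates are lower.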