
R1 0528 vs GPT-4 Turbo

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

R1 0528 wins 5 of 5 shared benchmarks and leads in every category: knowledge, math, reasoning, and coding.

Category leads
knowledge · R1 0528
math · R1 0528
reasoning · R1 0528
coding · R1 0528
Hype vs Reality
R1 0528 · #53 by perf · no signal · QUIET
GPT-4 Turbo · #90 by perf · no signal · QUIET
Best value
R1 0528 offers 17.1x better value than GPT-4 Turbo.
R1 0528 · 43.7 pts/$ · $1.32/M
GPT-4 Turbo · 2.5 pts/$ · $20.00/M
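How these value figures fit together is not spelled out on the page; a minimal sketch, assuming the blended $/M is the simple mean of input and output prices and that pts/$ divides an overall performance score by that blended price:

```python
# Sketch of the "Best value" arithmetic. Assumptions (not stated on the page):
# blended $/M = mean of input and output per-1M-token prices.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $ per 1M tokens, assuming an even input/output mix."""
    return (input_per_m + output_per_m) / 2

print(round(blended_price(0.50, 2.15), 2))    # 1.32 $/M for R1 0528
print(round(blended_price(10.00, 30.00), 2))  # 20.0 $/M for GPT-4 Turbo

# Ratio of the reported pts/$ figures; the page shows 17.1x,
# presumably computed from unrounded underlying scores.
print(round(43.7 / 2.5, 1))  # 17.5
```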
Vendor risk
One or more vendors flagged
DeepSeek · $3.4B · Tier 1 · Higher risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
GPQA diamond
R1 0528 leads by +60.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
R1 0528
68.4
GPT-4 Turbo
7.5
MATH level 5
R1 0528 leads by +73.7
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
R1 0528
96.6
GPT-4 Turbo
23.0
OTIS Mock AIME 2024-2025
R1 0528 leads by +65.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
R1 0528
66.4
GPT-4 Turbo
1.0
SimpleBench
R1 0528 leads by +18.8
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
R1 0528
29.0
GPT-4 Turbo
10.1
WeirdML
R1 0528 leads by +29.2
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
R1 0528
41.6
GPT-4 Turbo
12.4
Full benchmark table
Benchmark · R1 0528 · GPT-4 Turbo
GPQA diamond · 68.4 · 7.5
MATH level 5 · 96.6 · 23.0
OTIS Mock AIME 2024-2025 · 66.4 · 1.0
SimpleBench · 29.0 · 10.1
WeirdML · 41.6 · 12.4
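The winner summary at the top can be reproduced directly from these scores. A minimal sketch:

```python
# Recomputing "wins 5 of 5 shared benchmarks" from the table above.
scores = {  # benchmark: (R1 0528, GPT-4 Turbo)
    "GPQA diamond": (68.4, 7.5),
    "MATH level 5": (96.6, 23.0),
    "OTIS Mock AIME 2024-2025": (66.4, 1.0),
    "SimpleBench": (29.0, 10.1),
    "WeirdML": (41.6, 12.4),
}
wins = sum(r1 > gpt4 for r1, gpt4 in scores.values())
print(f"R1 0528 wins {wins} of {len(scores)} shared benchmarks")  # 5 of 5
```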
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
R1 0528 (DeepSeek) · $0.50 · $2.15 · 164K tokens (~82 books) · $9.13
GPT-4 Turbo (OpenAI) · $10.00 · $30.00 · 128K tokens (~64 books) · $150.00
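The projected monthly figures can be reproduced under one assumption about traffic mix: a 75% input / 25% output split over the 10M tokens matches both rows exactly, though the page does not state the split it uses. A minimal sketch:

```python
# Sketch of the "Projected $/mo at 10M tokens" column.
# Assumption (not stated on the page): 75% input / 25% output token split.

def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m_tokens: float = 10.0, input_share: float = 0.75) -> float:
    """Projected monthly bill in $ for a given token volume and input/output split."""
    in_tokens = total_m_tokens * input_share
    out_tokens = total_m_tokens - in_tokens
    return in_tokens * input_per_m + out_tokens * output_per_m

print(monthly_cost(0.50, 2.15))    # 9.125 → $9.13 for R1 0528
print(monthly_cost(10.00, 30.00))  # 150.0 → $150.00 for GPT-4 Turbo
```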