
R1 vs o3

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

o3 wins 12 of 12 shared benchmarks. Leads in coding · reasoning · knowledge · math.

Category leads
coding · o3
reasoning · o3
knowledge · o3
math · o3
Hype vs Reality
R1 · #114 by performance · no hype signal (quiet)
o3 · #67 by performance · no hype signal (quiet)
Best value
R1 · 2.6x better value than o3
R1 · 28.2 pts/$ · $1.60/M
o3 · 11.0 pts/$ · $5.00/M
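
The per-million prices here are the simple average of each model's input and output rates from the pricing table at the bottom of the page ($0.70/$2.50 for R1, $2.00/$8.00 for o3). The page does not say how pts/$ is weighted; the sketch below assumes pts is the mean score across the 12 shared benchmarks divided by that blended price. It lands near the displayed figures (roughly 28.7 and 12.9 versus 28.2 and 11.0), so the page's exact weighting likely differs.

    # Hypothetical reconstruction of the pts/$ value metric.
    # Assumptions: "blended" price = mean of input/output $ per 1M tokens,
    # pts = mean score across the 12 shared benchmarks. Not the page's code.
    scores = {
        "R1": [56.9, 15.8, 1.3, 35.1, 69.4, 62.3, 83.0, 93.0, 53.3, 17.1, 27.4, 36.5],
        "o3": [81.3, 60.8, 6.5, 46.6, 88.9, 75.8, 83.9, 97.8, 83.9, 43.7, 53.0, 52.4],
    }
    prices = {"R1": (0.70, 2.50), "o3": (2.00, 8.00)}  # (input, output) $ per 1M tokens

    for model, s in scores.items():
        blended = sum(prices[model]) / 2   # R1: $1.60/M, o3: $5.00/M
        pts = sum(s) / len(s)              # mean shared-benchmark score
        print(f"{model}: {pts:.1f} pts at ${blended:.2f}/M -> {pts / blended:.1f} pts/$")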
Vendor risk
One or more vendors flagged.
DeepSeek · $3.4B · Tier 1 · Higher risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Aider Polyglot
o3 leads by +24.4 · R1 56.9 · o3 81.3
Measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
ARC-AGI
o3 leads by +45.0 · R1 15.8 · o3 60.8
The original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
ARC-AGI-2
o3 leads by +5.2 · R1 1.3 · o3 6.5
The second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
DeepResearch Bench
o3 leads by +11.5 · R1 35.1 · o3 46.6
Evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
Fiction.LiveBench
o3 leads by +19.5 · R1 69.4 · o3 88.9
A continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
GPQA Diamond
o3 leads by +13.5 · R1 62.3 · o3 75.8
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Lech Mazur Writing
o3 leads by +0.9 · R1 83.0 · o3 83.9
Evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
MATH Level 5
o3 leads by +4.8 · R1 93.0 · o3 97.8
The hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
OTIS Mock AIME 2024–2025
o3 leads by +30.6 · R1 53.3 · o3 83.9
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
SimpleBench
o3 leads by +26.6 · R1 17.1 · o3 43.7
Tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
SimpleQA Verified
o3 leads by +25.6 · R1 27.4 · o3 53.0
Short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
WeirdML
o3 leads by +15.9 · R1 36.5 · o3 52.4
Tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Full benchmark table

Benchmark · R1 · o3
Aider Polyglot · 56.9 · 81.3
ARC-AGI · 15.8 · 60.8
ARC-AGI-2 · 1.3 · 6.5
DeepResearch Bench · 35.1 · 46.6
Fiction.LiveBench · 69.4 · 88.9
GPQA Diamond · 62.3 · 75.8
Lech Mazur Writing · 83.0 · 83.9
MATH Level 5 · 93.0 · 97.8
OTIS Mock AIME 2024–2025 · 53.3 · 83.9
SimpleBench · 17.1 · 43.7
SimpleQA Verified · 27.4 · 53.0
WeirdML · 36.5 · 52.4
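
A standalone sketch (not the page's own tooling) that recomputes each head-to-head delta and the winner-summary count from the table above:

    # Recompute "o3 leads by" deltas and the 12-of-12 win count
    # from the (R1, o3) score pairs in the full benchmark table.
    table = {
        "Aider Polyglot": (56.9, 81.3),
        "ARC-AGI": (15.8, 60.8),
        "ARC-AGI-2": (1.3, 6.5),
        "DeepResearch Bench": (35.1, 46.6),
        "Fiction.LiveBench": (69.4, 88.9),
        "GPQA Diamond": (62.3, 75.8),
        "Lech Mazur Writing": (83.0, 83.9),
        "MATH Level 5": (93.0, 97.8),
        "OTIS Mock AIME 2024-2025": (53.3, 83.9),
        "SimpleBench": (17.1, 43.7),
        "SimpleQA Verified": (27.4, 53.0),
        "WeirdML": (36.5, 52.4),
    }

    o3_wins = 0
    for name, (r1, o3) in table.items():
        delta = o3 - r1
        o3_wins += delta > 0
        print(f"{name}: o3 leads by {delta:+.1f}")
    print(f"o3 wins {o3_wins} of {len(table)} shared benchmarks")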
Pricing · per 1M tokens · projected $/mo at 10M tokens

Model · Input · Output · Context · Projected $/mo
R1 (DeepSeek) · $0.70 · $2.50 · 64K tokens (~32 books) · $11.50
o3 (OpenAI) · $2.00 · $8.00 · 200K tokens (~100 books) · $35.00
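
The projected monthly figures are consistent with a 10M-token month split 75% input / 25% output; that split is inferred from the numbers, not stated by the page. A minimal sketch (the monthly_cost helper and input_share parameter are hypothetical names, not the page's API):

    # Reconstruct the projected monthly cost at 10M tokens/month.
    # Assumption (inferred, not stated by the page): 75% input, 25% output.
    def monthly_cost(input_price, output_price, tokens_m=10.0, input_share=0.75):
        """Prices are $ per 1M tokens; tokens_m is monthly volume in millions."""
        return (tokens_m * input_share * input_price
                + tokens_m * (1 - input_share) * output_price)

    print(f"R1: ${monthly_cost(0.70, 2.50):.2f}/mo")  # matches the table's $11.50
    print(f"o3: ${monthly_cost(2.00, 8.00):.2f}/mo")  # matches the table's $35.00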