R1 vs o3
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
o3 wins 12 of 12 shared benchmarks, leading in coding, reasoning, and knowledge.
Category leads
coding: o3 · reasoning: o3 · knowledge: o3 · math: o3
Hype vs Reality · attention vs performance
R1 · #114 by performance · no signal
o3 · #67 by performance · no signal
Vendor risk · mixed exposure · one or more vendors flagged
DeepSeek · $3.4B · Tier 1
OpenAI · $840.0B · Tier 1
Head to head
12 benchmarks · 2 models
Aider Polyglot · o3 leads by +24.4
Measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
R1 56.9 · o3 81.3
ARC-AGI · o3 leads by +45.0
The original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
R1 15.8 · o3 60.8
ARC-AGI-2 · o3 leads by +5.2
The second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
R1 1.3 · o3 6.5
DeepResearch Bench · o3 leads by +11.5
Evaluates AI on complex multi-step research tasks requiring information gathering, synthesis, and producing comprehensive analyses.
R1 35.1 · o3 46.6
Fiction.LiveBench · o3 leads by +19.5
A continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
R1 69.4 · o3 88.9
GPQA Diamond · o3 leads by +13.5
Graduate-Level Google-Proof QA (Diamond set): expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
R1 62.3 · o3 75.8
Lech Mazur Writing · o3 leads by +0.9
Evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
R1 83.0 · o3 83.9
MATH Level 5 · o3 leads by +4.7
The hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
R1 93.0 · o3 97.8
OTIS Mock AIME 2024–2025 · o3 leads by +30.6
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
R1 53.3 · o3 83.9
SimpleBench · o3 leads by +26.6
Tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
R1 17.1 · o3 43.7
SimpleQA Verified · o3 leads by +25.6
Short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
R1 27.4 · o3 53.0
WeirdML · o3 leads by +15.9
Tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
R1 36.5 · o3 52.4
Full benchmark table
| Benchmark | R1 | o3 |
|---|---|---|
| Aider Polyglot | 56.9 | 81.3 |
| ARC-AGI | 15.8 | 60.8 |
| ARC-AGI-2 | 1.3 | 6.5 |
| DeepResearch Bench | 35.1 | 46.6 |
| Fiction.LiveBench | 69.4 | 88.9 |
| GPQA Diamond | 62.3 | 75.8 |
| Lech Mazur Writing | 83.0 | 83.9 |
| MATH Level 5 | 93.0 | 97.8 |
| OTIS Mock AIME 2024–2025 | 53.3 | 83.9 |
| SimpleBench | 17.1 | 43.7 |
| SimpleQA Verified | 27.4 | 53.0 |
| WeirdML | 36.5 | 52.4 |
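The summary figures above follow directly from this table. A minimal sketch in Python, with the scores transcribed from the table (the win count and per-benchmark leads are recomputed, not pulled from the page):

```python
# Scores transcribed from the full benchmark table above: (R1, o3), higher is better.
scores = {
    "Aider Polyglot": (56.9, 81.3),
    "ARC-AGI": (15.8, 60.8),
    "ARC-AGI-2": (1.3, 6.5),
    "DeepResearch Bench": (35.1, 46.6),
    "Fiction.LiveBench": (69.4, 88.9),
    "GPQA Diamond": (62.3, 75.8),
    "Lech Mazur Writing": (83.0, 83.9),
    "MATH Level 5": (93.0, 97.8),
    "OTIS Mock AIME 2024-2025": (53.3, 83.9),
    "SimpleBench": (17.1, 43.7),
    "SimpleQA Verified": (27.4, 53.0),
    "WeirdML": (36.5, 52.4),
}

# Count shared benchmarks where o3 outscores R1.
wins = sum(o3 > r1 for r1, o3 in scores.values())
print(f"o3 wins {wins} of {len(scores)} shared benchmarks")  # -> 12 of 12

# Per-benchmark lead, as in the "o3 leads by +X" callouts. Note these deltas
# come from the rounded scores, so one can differ from the page by 0.1
# (MATH Level 5 computes to +4.8 here vs +4.7 above, likely unrounded source data).
for name, (r1, o3) in scores.items():
    print(f"{name}: o3 leads by +{o3 - r1:.1f}")
```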
Pricing · per 1M tokens · projected $/mo at 10M tokens
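The per-1M-token rates are not shown in this excerpt, so the sketch below uses a placeholder price; it only illustrates the projection arithmetic, assuming the monthly figure scales linearly with token volume.

```python
# Placeholder rate: substitute each model's real per-1M-token price.
PRICE_PER_1M_USD = 2.00  # hypothetical blended input/output rate

# Assuming a linear projection, cost at 10M tokens/month is the
# per-1M rate times 10 (an assumption, not the page's stated methodology).
tokens_per_month_m = 10  # 10M tokens, as in the header above
projected_monthly = PRICE_PER_1M_USD * tokens_per_month_m
print(f"projected ${projected_monthly:.2f}/mo at {tokens_per_month_m}M tokens")
```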