R1 vs o1
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
o1 wins 8 of 9 shared benchmarks, leading in coding, reasoning, knowledge, and math; R1 takes the remaining benchmark, creative writing (Lech Mazur Writing).
Category leads
coding: o1 · reasoning: o1 · knowledge: o1 · math: o1
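The 8-of-9 tally follows directly from the scores in the head-to-head section below. A minimal sketch that reproduces it, with the scores transcribed by hand from this page:

```python
# Minimal sketch: reproduce the "o1 wins 8 of 9" tally from the shared
# benchmark scores listed on this page (hand-transcribed).
SCORES = {  # benchmark: (R1 score, o1 score)
    "Aider polyglot": (56.9, 61.7),
    "ARC-AGI": (15.8, 30.7),
    "Fiction.LiveBench": (69.4, 83.3),
    "GPQA diamond": (62.3, 69.0),
    "Lech Mazur Writing": (83.0, 70.2),
    "MATH level 5": (93.0, 94.7),
    "OTIS Mock AIME 2024-2025": (53.3, 73.3),
    "SimpleBench": (17.1, 28.1),
    "WeirdML": (36.5, 43.8),
}

# Count the benchmarks where o1's score exceeds R1's.
o1_wins = sum(o1 > r1 for r1, o1 in SCORES.values())
print(f"o1 wins {o1_wins} of {len(SCORES)} shared benchmarks")  # -> 8 of 9
```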
Hype vs Reality
Attention vs performance
R1: #114 by performance · no signal
o1: #57 by performance · no signal
Vendor risk
Mixed exposure · one or more vendors flagged
DeepSeek: $3.4B · Tier 1
OpenAI: $840.0B · Tier 1
Head to head
9 benchmarks · 2 models
Aider polyglot · o1 leads by +4.8
Measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
R1 56.9 · o1 61.7
ARC-AGI · o1 leads by +14.9
The original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
R1 15.8 · o1 30.7
Fiction.LiveBench · o1 leads by +13.9
A continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
R1 69.4 · o1 83.3
GPQA diamond · o1 leads by +6.7
Graduate-Level Google-Proof QA (Diamond set): expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
R1 62.3 · o1 69.0
Lech Mazur Writing · R1 leads by +12.8
Evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
R1 83.0 · o1 70.2
MATH level 5 · o1 leads by +1.7
The hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
R1 93.0 · o1 94.7
OTIS Mock AIME 2024–2025 · o1 leads by +20.0
Simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
R1 53.3 · o1 73.3
SimpleBench · o1 leads by +11.0
Tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
R1 17.1 · o1 28.1
WeirdML · o1 leads by +7.3
Tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
R1 36.5 · o1 43.8
Full benchmark table
| Benchmark | R1 | o1 |
|---|---|---|
| Aider polyglot | 56.9 | 61.7 |
| ARC-AGI | 15.8 | 30.7 |
| Fiction.LiveBench | 69.4 | 83.3 |
| GPQA diamond | 62.3 | 69.0 |
| Lech Mazur Writing | 83.0 | 70.2 |
| MATH level 5 | 93.0 | 94.7 |
| OTIS Mock AIME 2024–2025 | 53.3 | 73.3 |
| SimpleBench | 17.1 | 28.1 |
| WeirdML | 36.5 | 43.8 |
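The "leads by" margins in the head-to-head cards are simply the score differences from this table. A minimal sketch that recomputes them, using the same hand-transcribed scores as the earlier tally:

```python
# Minimal sketch: recompute each "leads by" margin from the table above.
ROWS = [  # (benchmark, R1 score, o1 score)
    ("Aider polyglot", 56.9, 61.7),
    ("ARC-AGI", 15.8, 30.7),
    ("Fiction.LiveBench", 69.4, 83.3),
    ("GPQA diamond", 62.3, 69.0),
    ("Lech Mazur Writing", 83.0, 70.2),
    ("MATH level 5", 93.0, 94.7),
    ("OTIS Mock AIME 2024-2025", 53.3, 73.3),
    ("SimpleBench", 17.1, 28.1),
    ("WeirdML", 36.5, 43.8),
]

for name, r1, o1 in ROWS:
    # Pick the higher-scoring model and the absolute gap between the two.
    leader, margin = ("o1", o1 - r1) if o1 >= r1 else ("R1", r1 - o1)
    print(f"{name}: {leader} leads by +{margin:.1f}")  # e.g. "ARC-AGI: o1 leads by +14.9"
```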
Pricing · per 1M tokens · projected $/mo at 10M tokens
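The projected monthly figure appears to be the listed per-1M-token rate scaled to 10M tokens. A minimal sketch of that arithmetic, where the single blended rate and the example $15 price are assumptions (real vendors price input and output tokens separately):

```python
# Minimal sketch of the "projected $/mo at 10M tokens" arithmetic.
# Assumption: one blended per-1M-token rate; an actual bill depends on the
# vendor's separate input/output prices and the real token mix.
def projected_monthly_usd(price_per_1m: float, monthly_tokens: int = 10_000_000) -> float:
    """Scale a per-1M-token price to a monthly token volume."""
    return price_per_1m * monthly_tokens / 1_000_000

# Hypothetical $15.00 per 1M tokens -> $150.00/mo at 10M tokens.
print(f"${projected_monthly_usd(15.00):,.2f}/mo")
```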