
o1

by OpenAI · Released Dec 2024

Multimodal
59.3 avg score · Rank #70 of 233 models
Better than 70% of all models
Context: 200K tokens (~100 books)
Input $/1M: $15.00
Output $/1M: $60.00
Type: multimodal
License: Proprietary
Benchmarks: 14 tested
About

The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 model series is trained with large-scale reinforcement learning to reason...

Tested on 14 benchmarks with 56.4% average. Top scores: MATH level 5 (94.7%), Aider — Code Editing (84.2%), Fiction.LiveBench (83.3%).
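
Since o1 is served through the standard OpenAI API, the quickest way to try it is the official openai Python SDK. A minimal sketch, assuming an OPENAI_API_KEY is set in your environment; note that o1-series models take max_completion_tokens rather than max_tokens, because hidden reasoning tokens count against the completion budget:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    messages=[
        {"role": "user", "content": "How many primes are there between 100 and 150?"}
    ],
    # o1 models use max_completion_tokens (not max_tokens); the budget
    # covers both the hidden reasoning tokens and the visible answer.
    max_completion_tokens=4096,
)

print(response.choices[0].message.content)
# Token usage is what you are billed for: $15.00/1M input, $60.00/1M output.
print(response.usage)
```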

Looking for similar performance at lower cost?
Qwen3 235B A22B Thinking 2507 scores 59.4 (100% as good) at $0.15/1M input · 99% cheaper
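
The headline figures in that suggestion are simple ratios of the numbers listed on this page; a quick sketch to reproduce them:

```python
# Sanity-check the comparison figures from the listed data.
o1_score, qwen_score = 59.3, 59.4
o1_input_price, qwen_input_price = 15.00, 0.15  # $ per 1M input tokens

relative_quality = qwen_score / o1_score          # ~1.002 -> "100% as good"
savings = 1 - qwen_input_price / o1_input_price   # 0.99   -> "99% cheaper"

print(f"{relative_quality:.0%} as good, {savings:.0%} cheaper on input tokens")
# -> 100% as good, 99% cheaper on input tokens
```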
Capabilities
coding: 61.4 (#35 globally)
reasoning: 29.4 (#83 globally)
math: 59.1 (#50 globally)
knowledge: 61.6 (#40 globally)
Benchmark Scores
Tested on 14 benchmarks · Ranked across 4 categories
[Chart: Score Distribution across all 233 models, 0-100 scale, with o1's position marked]
Aider — Code Editing · 84.2
Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.

Aider polyglot · 61.7
Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

CadEval · 56.0
Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.

ARC-AGI · 30.7
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

SimpleBench · 28.1
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

MATH level 5 · 94.7
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

OTIS Mock AIME 2024-2025 · 73.3
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

FrontierMath-2025-02-28-Private · 9.3
Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
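
The BenchGecko API is not documented on this page, so the endpoint URL and response field names below are purely hypothetical placeholders; the sketch only illustrates the general shape of fetching a model card programmatically with requests:

```python
import requests

# Hypothetical base URL -- the real BenchGecko API may differ.
BASE_URL = "https://api.benchgecko.example/v1"

resp = requests.get(f"{BASE_URL}/models/o1", timeout=10)
resp.raise_for_status()
model = resp.json()

# Fields mirror this page's card; the actual key names are assumptions.
print(model.get("avg_score"))       # e.g. 59.3
print(model.get("context_tokens"))  # e.g. 200_000
```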
o1
Specifications
  • Type: multimodal
  • Context: 200K tokens (~100 books)
  • Released: Dec 2024
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.090 (see the cost sketch below)
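
The ~$0.090 per-message figure is not derived on the page, but it is consistent with the listed prices under one plausible assumption: roughly 2K input and 1K output tokens per message. The token counts below are assumptions, not published numbers:

```python
# Reproduce the ~$0.090 cost-per-message estimate from the listed prices.
INPUT_PRICE = 15.00   # $ per 1M input tokens
OUTPUT_PRICE = 60.00  # $ per 1M output tokens

# Assumed message size -- not published on this page. For o1, output
# includes hidden reasoning tokens, so output-heavy mixes are typical.
input_tokens, output_tokens = 2_000, 1_000

cost = input_tokens / 1e6 * INPUT_PRICE + output_tokens / 1e6 * OUTPUT_PRICE
print(f"~${cost:.3f} per message")  # -> ~$0.090 per message
```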
Available On
OpenAI — $15.00 / 1M input tokens
o1 is a proprietary multimodal AI model by OpenAI, released in December 2024. It has an average benchmark score of 59.3. Context window: 200K tokens.