
o1-mini

by OpenAI · Released Sep 2024

32.4
avg score
Rank #171
Better than 27% of all models
Context
N/A
Input $/1M
TBD
Output $/1M
TBD
Type
text
License
Proprietary
Benchmarks
13 tested
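The "better than 27% of all models" figure follows directly from the rank: with this model at rank #171 of 233, the share of models ranked below it is (233 − 171) / 233 ≈ 27%. A minimal sketch of that arithmetic (the function name is illustrative, not part of any BenchGecko API):

```python
def percentile_better_than(rank: int, total: int) -> int:
    """Share of models (in %) that this model outranks, assuming rank 1 is best."""
    return round((total - rank) / total * 100)

print(percentile_better_than(171, 233))  # 27
```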
About

Tested on 13 benchmarks with a 34.9% average score. Top scores: Chatbot Arena Elo — Overall (1336.6), MATH level 5 (89.2%), Aider — Code Editing (70.7%).
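For context on the Chatbot Arena figure: Elo is a relative rating, not a percentage, and the standard Elo formula converts a rating gap between two models into an expected head-to-head win probability. A quick sketch using the standard formula (not anything site-specific):

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    # Standard Elo expected score: probability that A beats B.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# Equal ratings give a 50% expected win rate; a +100 gap gives roughly 64%.
print(round(elo_win_prob(1336.6, 1336.6), 2))  # 0.5
```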

Capabilities
coding
37.5
#103 globally
reasoning
5.5
#162 globally
math
45.9
#91 globally
knowledge
57.4
#63 globally
Benchmark Scores
Tested on 13 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models)
Aider — Code Editing

Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.

70.7
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

36.3
Aider polyglot

Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

32.9
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

14.0
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

1.7
ARC-AGI-2

Harder sequel to ARC-AGI, with more complex abstract reasoning patterns that test generalization ability beyond training data.

0.8
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

89.2
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

46.9
FrontierMath-2025-02-28-Private

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

1.7
Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Links
  • Documentation
  • Community
  • BenchGecko API
o1-mini
Specifications
  • Type: text
  • Context: N/A
  • Released: Sep 2024
  • License: Proprietary
  • Status: benchmark-only
Available On
OpenAI · TBD
o1-mini is a proprietary text AI model by OpenAI, released in September 2024. It has an average benchmark score of 32.4.