
GPT-4o-mini

by OpenAI · Released Jul 2024

Multimodal · 37.5 avg score · Rank #157 · Better than 33% of all models
Context: 128K tokens (~96,000 words)
Input $/1M: $0.15
Output $/1M: $0.60
Type: multimodal
License: Proprietary
Benchmarks: 15 tested
About

GPT-4o mini is OpenAI's newest model, following [GPT-4 Omni](/models/openai/gpt-4o). It supports both text and image inputs with text outputs. As OpenAI's most advanced small model, it is many times more affordable...

Tested on 15 benchmarks with 39.6% average. Top scores: GSM8K (91.3%), PIQA (77.4%), MMLU (75.7%).
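As a minimal sketch of how the text-plus-image input described above is typically called through the OpenAI Python SDK (the prompt and image URL are illustrative placeholders; assumes OPENAI_API_KEY is set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One chat turn combining text and an image URL; the model replies with text.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)
```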

Looking for similar performance at lower cost?
Mistral Nemo scores 37.4 (100% as good) at $0.02/1M input · 87% cheaper
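For reference, a quick sketch of how those comparison figures follow from the scores and prices listed on this page:

```python
# Reproduce the "100% as good" and "87% cheaper" figures from the listed data.
gpt_4o_mini = {"avg_score": 37.5, "input_usd_per_1m": 0.15}
mistral_nemo = {"avg_score": 37.4, "input_usd_per_1m": 0.02}

relative_quality = mistral_nemo["avg_score"] / gpt_4o_mini["avg_score"]
input_savings = 1 - mistral_nemo["input_usd_per_1m"] / gpt_4o_mini["input_usd_per_1m"]

print(f"{relative_quality:.0%} as good")  # 100% (99.7% unrounded)
print(f"{input_savings:.0%} cheaper")     # 87% cheaper on input
```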
Capabilities
  • coding: 23.7 (#123 globally)
  • reasoning: 0.1 (#185 globally)
  • math: 50.3 (#75 globally)
  • knowledge: 45.7 (#124 globally)
  • multimodal: 53.1 (#6 globally)
Benchmark Scores
Tested on 15 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models) — chart with this model's position marked
Aider — Code Editing: 55.6
Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.

WeirdML: 11.8
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

Aider polyglot: 3.6
Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

ARC-AGI-2: 0.1
Harder sequel to ARC. More complex abstract reasoning patterns that test generalization ability beyond training data.

GSM8K: 91.3
Grade school math word problems. 8,500 problems testing multi-step arithmetic reasoning. A foundational math benchmark.

MATH level 5: 52.6
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

OTIS Mock AIME 2024-2025: 6.8
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Score legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API: gpt-4o-mini
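The slug above is the identifier to use against the BenchGecko API. The endpoint below is purely illustrative — this page does not document the actual route — so treat it as a placeholder sketch rather than a real API call:

```python
import requests

# Hypothetical endpoint: substitute the real BenchGecko base URL and route
# from the Documentation link above.
BASE_URL = "https://api.benchgecko.example"

resp = requests.get(f"{BASE_URL}/models/gpt-4o-mini", timeout=10)
resp.raise_for_status()
model = resp.json()

print(model.get("name"), model.get("avg_score"))
```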
Specifications
  • Type: multimodal
  • Context: 128K tokens (~96,000 words)
  • Released: Jul 2024
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.001 (see the sketch below)
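The ~$0.001 per message figure is consistent with the listed per-token prices for a typical chat-sized request. A small sketch — the token counts are illustrative assumptions, not BenchGecko's exact methodology:

```python
# Rough per-message cost estimate from the listed per-token prices.
INPUT_PRICE_PER_1M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_1M = 0.60  # USD per 1M output tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed prices."""
    return (input_tokens * INPUT_PRICE_PER_1M + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# e.g. a chat turn with ~2,500 prompt tokens and ~800 completion tokens
print(f"${message_cost(2500, 800):.4f}")  # ~$0.0009, i.e. on the order of $0.001/message
```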
Available On
  • OpenAI — $0.15/1M input
Share & Export
GPT-4o-mini is a proprietary multimodal AI model by OpenAI, released in July 2024. It has an average benchmark score of 37.5. Context window: 128K tokens.