
GPT-4o (2024-11-20)

by OpenAI · Released Nov 2024

Multimodal · 36.3 avg score · Rank #160 · Better than 31% of all models
Context: 128K tokens (~64 books; token-count sketch after this list)
Input $/1M: $2.50
Output $/1M: $10.00
Type: multimodal
License: Proprietary
Benchmarks: 28 tested
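The context window listed above is measured in tokens, not characters or words. As a rough local check of whether a prompt fits, token counts can be estimated with the tiktoken library and the o200k_base encoding used by GPT-4o. This is a minimal sketch; the headroom reserved for the model's reply is an assumption, not a value from this page.

    import tiktoken

    # GPT-4o models tokenize with the o200k_base encoding
    enc = tiktoken.get_encoding("o200k_base")

    CONTEXT_WINDOW = 128_000       # tokens, per the spec above
    RESERVED_FOR_OUTPUT = 4_000    # assumed headroom for the model's reply

    def fits_in_context(text: str) -> bool:
        """Return True if the prompt plus reserved reply headroom fits in 128K tokens."""
        n_tokens = len(enc.encode(text))
        return n_tokens + RESERVED_FOR_OUTPUT <= CONTEXT_WINDOW

    print(fits_in_context("Hello, world!"))  # True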
About

The 2024-11-20 version of GPT-4o offers improved creative writing: more natural, engaging, and tailored prose that improves relevance and readability. It's also better at working with uploaded...

Tested on 28 benchmarks with 37.7% average. Top scores: ScienceQA (84.7%), HELM — WildBench (82.8%), Lech Mazur Writing (81.8%).

Looking for similar performance at lower cost?
Mistral Nemo scores 37.4 (103% as good) at $0.02/1M input · 99% cheaper
Capabilities
  • coding: 26.4 (#121 globally)
  • reasoning: 22.2 (#99 globally)
  • math: 22.3 (#151 globally)
  • knowledge: 57.2 (#66 globally)
  • agentic: 8.6 (#28 globally)
  • multimodal: 62.5 (#4 globally)
  • language: 81.7 (#46 globally)
Benchmark Scores
Tested on 28 benchmarks · Ranked across 7 categories
[Score distribution chart across all 233 models, with this model's position marked]
Aider — Code Editing: 71.4
Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.

SWE-bench Verified: 31.0
Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

CadEval: 26.0
Computer-aided design evaluation. Tests understanding of CAD concepts, 3D modeling, and engineering design principles.

HELM — WildBench: 82.8
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

ARC-AGI: 4.5
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

SimpleBench: 1.4
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

MATH level 5: 53.3
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

HELM — Omni-MATH: 29.3
Stanford HELM evaluation of mathematical reasoning across diverse problem types.

OTIS Mock AIME 2024-2025: 6.3
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
  • Documentation
  • Community
  • BenchGecko API
Model ID: gpt-4o-2024-11-20
Specifications
  • Type: multimodal
  • Context: 128K tokens (~64 books)
  • Released: Nov 2024
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.015 (worked example below)
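The ~$0.015 per-message figure follows from the listed token prices once a typical message size is assumed. The page does not state the token mix it uses; the sketch below assumes roughly 2,000 input and 1,000 output tokens, which reproduces the listed figure.

    # Listed prices (USD per 1M tokens)
    INPUT_PRICE_PER_M = 2.50
    OUTPUT_PRICE_PER_M = 10.00

    # Assumed per-message token counts (not stated on this page)
    input_tokens = 2_000
    output_tokens = 1_000

    cost = (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
    print(f"${cost:.3f} per message")  # $0.015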
Available On
  • OpenAI: $2.50 / 1M input tokens
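To run the model directly, the identifier listed under Links (gpt-4o-2024-11-20) is the value passed as the model name to the OpenAI API. A minimal sketch using the official openai Python SDK; the prompt is illustrative, and the API key is assumed to be set in the environment.

    from openai import OpenAI

    # Reads the API key from the OPENAI_API_KEY environment variable
    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-2024-11-20",
        messages=[{"role": "user",
                   "content": "Summarize the SWE-bench Verified benchmark in one sentence."}],
    )
    print(response.choices[0].message.content)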
GPT-4o (2024-11-20) is a proprietary multimodal AI model by OpenAI, released in November 2024. It has an average benchmark score of 36.3. Context window: 128K tokens.