GPT-4o (2024-08-06) vs GPT-4o (2024-11-20)
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
GPT-4o (2024-08-06) and GPT-4o (2024-11-20) tie on all 10 shared benchmarks
The two snapshots post identical scores on every shared benchmark, spanning coding, math, knowledge, reasoning, and multimodal tasks, so neither model leads.
Category leads
None · coding, math, knowledge, reasoning, and multimodal are all dead heats.
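Since the two snapshots are interchangeable on these benchmarks, the practical difference comes down to which dated model ID you pin in the API. A minimal sketch using the OpenAI Python SDK (assumes the `openai` package is installed and `OPENAI_API_KEY` is set; the prompt is purely illustrative):

```python
# Minimal sketch: pinning each dated GPT-4o snapshot explicitly by model ID.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for model_id in ("gpt-4o-2024-08-06", "gpt-4o-2024-11-20"):
    response = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": "Summarize the Aider benchmark in one sentence."}],
    )
    print(model_id, "->", response.choices[0].message.content)
```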
Hype vs Reality
Attention vs performance
GPT-4o (2024-08-06): #167 by performance · no attention signal
GPT-4o (2024-11-20): #156 by performance · no attention signal
Best value
GPT-4o (2024-11-20) · 1.1x better value than GPT-4o (2024-08-06)

| Model | Value (pts/$) | Blended $/M |
|---|---|---|
| GPT-4o (2024-08-06) | 5.7 | $6.25/M |
| GPT-4o (2024-11-20) | 6.0 | $6.25/M |
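For context on how these value figures can be reproduced: the sketch below assumes the $6.25/M figure is an unweighted mean of the input and output prices, and that pts/$ divides an overall performance score (not shown on this page) by that blended price. Both formulas are assumptions backed out of the displayed numbers, not documented by the site.

```python
# Sketch of the value arithmetic, under stated assumptions.
input_price = 2.50    # $ per 1M input tokens (both snapshots)
output_price = 10.00  # $ per 1M output tokens (both snapshots)

blended = (input_price + output_price) / 2   # assumed 50/50 blend -> 6.25 $/M

pts_per_dollar = {"GPT-4o (2024-08-06)": 5.7, "GPT-4o (2024-11-20)": 6.0}
for model, value in pts_per_dollar.items():
    implied_score = value * blended          # overall perf points implied by the card
    print(f"{model}: ~{implied_score:.1f} pts at ${blended:.2f}/M blended")

ratio = pts_per_dollar["GPT-4o (2024-11-20)"] / pts_per_dollar["GPT-4o (2024-08-06)"]
print(f"value ratio: {ratio:.1f}x")          # 6.0 / 5.7 ~= 1.05, shown as "1.1x"
```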
Vendor risk
Who is behind the model
OpenAI (both models) · $840.0B · Tier 1
Head to head
10 benchmarks · 2 models
Aider · Code Editing
GPT-4o (2024-08-06): 71.4 · GPT-4o (2024-11-20): 71.4

Aider polyglot
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
GPT-4o (2024-08-06): 23.1 · GPT-4o (2024-11-20): 23.1

CadEval
CadEval · evaluates the ability to generate and reason about Computer-Aided Design code, testing spatial reasoning and engineering knowledge.
GPT-4o (2024-08-06): 26.0 · GPT-4o (2024-11-20): 26.0

FrontierMath-2025-02-28-Private
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4o (2024-08-06): 0.3 · GPT-4o (2024-11-20): 0.3

GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o (2024-08-06): 32.3 · GPT-4o (2024-11-20): 32.3

MATH level 5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o (2024-08-06): 53.3 · GPT-4o (2024-11-20): 53.3

MMLU
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4o (2024-08-06): 79.1 · GPT-4o (2024-11-20): 79.1

OTIS Mock AIME 2024-2025
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o (2024-08-06): 6.3 · GPT-4o (2024-11-20): 6.3

SimpleBench
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-4o (2024-08-06): 1.4 · GPT-4o (2024-11-20): 1.4

VideoMME
VideoMME · multimodal benchmark testing video understanding across diverse domains, requiring temporal reasoning and cross-frame comprehension.
GPT-4o (2024-08-06): 62.5 · GPT-4o (2024-11-20): 62.5
Full benchmark table
| Benchmark | GPT-4o (2024-08-06) | GPT-4o (2024-11-20) |
|---|---|---|
| Aider · Code Editing | 71.4 | 71.4 |
| Aider polyglot | 23.1 | 23.1 |
| CadEval | 26.0 | 26.0 |
| FrontierMath-2025-02-28-Private | 0.3 | 0.3 |
| GPQA diamond | 32.3 | 32.3 |
| MATH level 5 | 53.3 | 53.3 |
| MMLU | 79.1 | 79.1 |
| OTIS Mock AIME 2024-2025 | 6.3 | 6.3 |
| SimpleBench | 1.4 | 1.4 |
| VideoMME | 62.5 | 62.5 |
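As a sanity check on the summary above, the sketch below tallies wins and ties over the ten shared scores from the table, assuming a win requires a strictly higher score. It confirms that every shared benchmark is a tie.

```python
# Sketch: win/tie tally over the 10 shared benchmark scores listed above.
scores = {  # (GPT-4o 2024-08-06, GPT-4o 2024-11-20)
    "Aider · Code Editing": (71.4, 71.4),
    "Aider polyglot": (23.1, 23.1),
    "CadEval": (26.0, 26.0),
    "FrontierMath-2025-02-28-Private": (0.3, 0.3),
    "GPQA diamond": (32.3, 32.3),
    "MATH level 5": (53.3, 53.3),
    "MMLU": (79.1, 79.1),
    "OTIS Mock AIME 2024-2025": (6.3, 6.3),
    "SimpleBench": (1.4, 1.4),
    "VideoMME": (62.5, 62.5),
}

wins_aug = sum(a > b for a, b in scores.values())
wins_nov = sum(b > a for a, b in scores.values())
ties = sum(a == b for a, b in scores.values())
print(f"2024-08-06 wins: {wins_aug}, 2024-11-20 wins: {wins_nov}, ties: {ties}")
# -> 2024-08-06 wins: 0, 2024-11-20 wins: 0, ties: 10
```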
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| GPT-4o (2024-08-06) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75 |
| GPT-4o (2024-11-20) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75 |
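The projected $43.75/mo figure is consistent with a 3:1 input-to-output token split at 10M total tokens per month. That split is an inference that reproduces the displayed number (and differs from the 50/50 blend behind the $6.25/M value card above); the page itself does not state the assumed mix.

```python
# Sketch of the projected monthly cost, assuming a 3:1 input:output split
# over 10M total tokens (the split is inferred, not stated on the page).
input_tokens_m = 7.5    # millions of input tokens per month
output_tokens_m = 2.5   # millions of output tokens per month

monthly = input_tokens_m * 2.50 + output_tokens_m * 10.00
print(f"${monthly:.2f}/mo")   # -> $43.75/mo for either snapshot
```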