
GPT-4o (2024-08-06) vs GPT-4o (2024-11-20)

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

GPT-4o (2024-08-06) is credited with the lead on all 10 shared benchmarks, spanning coding, math, and knowledge, although the two models post identical scores on every one of them at the precision shown below; a small tally sketch follows the category list.

Category leads
coding · GPT-4o (2024-08-06)
math · GPT-4o (2024-08-06)
knowledge · GPT-4o (2024-08-06)
reasoning · GPT-4o (2024-08-06)
multimodal · GPT-4o (2024-08-06)
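A minimal sketch of how a head-to-head tally like the one above could be computed from the shared-benchmark scores listed further down the page. The score data is copied from this page; the tie-handling is an assumption for illustration, not the site's documented method.

```python
# Hypothetical sketch: tally wins/ties/losses across the 10 shared benchmarks.
# Scores are (GPT-4o 2024-08-06, GPT-4o 2024-11-20), copied from the head-to-head section.
SCORES = {
    "Aider · Code Editing": (71.4, 71.4),
    "Aider polyglot": (23.1, 23.1),
    "CadEval": (26.0, 26.0),
    "FrontierMath-2025-02-28-Private": (0.3, 0.3),
    "GPQA diamond": (32.3, 32.3),
    "MATH level 5": (53.3, 53.3),
    "MMLU": (79.1, 79.1),
    "OTIS Mock AIME 2024-2025": (6.3, 6.3),
    "SimpleBench": (1.4, 1.4),
    "VideoMME": (62.5, 62.5),
}

wins_aug = sum(a > b for a, b in SCORES.values())
wins_nov = sum(b > a for a, b in SCORES.values())
ties = sum(a == b for a, b in SCORES.values())
print(f"2024-08-06 wins: {wins_aug} · 2024-11-20 wins: {wins_nov} · ties: {ties}")
# At the precision shown on this page, all 10 shared benchmarks are ties.
```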
Hype vs Reality
GPT-4o (2024-08-06) · #167 by perf · no signal · Quiet
GPT-4o (2024-11-20) · #156 by perf · no signal · Quiet
Best value
GPT-4o (2024-11-20) offers ~1.1x better value than GPT-4o (2024-08-06).
GPT-4o (2024-08-06) · 5.7 pts/$ · $6.25/M
GPT-4o (2024-11-20) · 6.0 pts/$ · $6.25/M
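A rough sketch of how the value figures above can be reproduced. The 50/50 input/output price blend and the score-per-dollar ratio below are assumptions that happen to match the listed numbers, not the site's stated methodology.

```python
# Hypothetical reconstruction of the "Best value" figures above.
# Assumption: the $6.25/M figure is a 50/50 blend of input and output prices,
# and the value ratio is simply the ratio of the two pts/$ figures.
input_price, output_price = 2.50, 10.00      # $ per 1M tokens, from the pricing table
blended = (input_price + output_price) / 2   # = 6.25, matches the listed $/M

pts_per_dollar = {"GPT-4o (2024-08-06)": 5.7, "GPT-4o (2024-11-20)": 6.0}
ratio = pts_per_dollar["GPT-4o (2024-11-20)"] / pts_per_dollar["GPT-4o (2024-08-06)"]
print(f"blended price: ${blended:.2f}/M · value ratio: {ratio:.2f}x")
# ~1.05x, which the page displays as 1.1x after rounding.
```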
Vendor risk
OpenAI (vendor of both models) · $840.0B · Tier 1 · Medium risk
Head to head
Aider · Code Editing
GPT-4o (2024-08-06): 71.4 · GPT-4o (2024-11-20): 71.4

Aider polyglot
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
GPT-4o (2024-08-06): 23.1 · GPT-4o (2024-11-20): 23.1

CadEval
CadEval · evaluates the ability to generate and reason about Computer-Aided Design code, testing spatial reasoning and engineering knowledge.
GPT-4o (2024-08-06): 26.0 · GPT-4o (2024-11-20): 26.0

FrontierMath-2025-02-28-Private
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
GPT-4o (2024-08-06): 0.3 · GPT-4o (2024-11-20): 0.3

GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
GPT-4o (2024-08-06): 32.3 · GPT-4o (2024-11-20): 32.3

MATH level 5
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
GPT-4o (2024-08-06): 53.3 · GPT-4o (2024-11-20): 53.3

MMLU
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
GPT-4o (2024-08-06): 79.1 · GPT-4o (2024-11-20): 79.1

OTIS Mock AIME 2024-2025
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
GPT-4o (2024-08-06): 6.3 · GPT-4o (2024-11-20): 6.3

SimpleBench
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
GPT-4o (2024-08-06): 1.4 · GPT-4o (2024-11-20): 1.4

VideoMME
VideoMME · multimodal benchmark testing video understanding across diverse domains, requiring temporal reasoning and cross-frame comprehension.
GPT-4o (2024-08-06): 62.5 · GPT-4o (2024-11-20): 62.5
Full benchmark table
Benchmark | GPT-4o (2024-08-06) | GPT-4o (2024-11-20)
Aider · Code Editing | 71.4 | 71.4
Aider polyglot | 23.1 | 23.1
CadEval | 26.0 | 26.0
FrontierMath-2025-02-28-Private | 0.3 | 0.3
GPQA diamond | 32.3 | 32.3
MATH level 5 | 53.3 | 53.3
MMLU | 79.1 | 79.1
OTIS Mock AIME 2024-2025 | 6.3 | 6.3
SimpleBench | 1.4 | 1.4
VideoMME | 62.5 | 62.5
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
GPT-4o (2024-08-06) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75
GPT-4o (2024-11-20) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75
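The projected monthly figure can be reproduced under one plausible assumption about traffic mix. The 75% input / 25% output split below is a guess that happens to match the listed $43.75, not a stated methodology.

```python
# Hypothetical reconstruction of the projected $/mo figure above.
# Assumption: 10M tokens per month, split 75% input / 25% output,
# which reproduces the listed $43.75 for both GPT-4o versions.
input_price, output_price = 2.50, 10.00   # $ per 1M tokens
monthly_tokens_m = 10                     # 10M tokens per month
input_share = 0.75                        # assumed share of input tokens

cost = (monthly_tokens_m * input_share * input_price
        + monthly_tokens_m * (1 - input_share) * output_price)
print(f"projected monthly cost: ${cost:.2f}")  # -> $43.75
```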