
Gemini 1.5 Pro (Feb 2024) vs Qwen2.5 72B Instruct vs GPT-4o (2024-08-06)

Side-by-side benchmarks, pricing, and signals you can act on.

Winner summary

Gemini 1.5 Pro (Feb 2024) wins 5 of the 12 shared benchmarks, leading in the multimodal and reasoning categories.

Category leads
coding · GPT-4o (2024-08-06)
arena · GPT-4o (2024-08-06)
knowledge · GPT-4o (2024-08-06)
math · Qwen2.5 72B Instruct
multimodal · Gemini 1.5 Pro (Feb 2024)
reasoning · Gemini 1.5 Pro (Feb 2024)
agentic · Qwen2.5 72B Instruct
Hype vs Reality
Gemini 1.5 Pro (Feb 2024) · #138 by perf · no signal · QUIET
Qwen2.5 72B Instruct · #82 by perf · no signal · QUIET
GPT-4o (2024-08-06) · #167 by perf · no signal · QUIET
Best value
Qwen2.5 72B Instruct offers 24.6x better value than GPT-4o (2024-08-06).
Gemini 1.5 Pro (Feb 2024) · no price
Qwen2.5 72B Instruct · 140.0 pts/$ · $0.38/M
GPT-4o (2024-08-06) · 5.7 pts/$ · $6.25/M
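A minimal sketch of how the value multiple appears to be derived, assuming pts/$ is a composite benchmark score divided by the blended per-million-token price and that the headline multiple is simply the ratio of the two pts/$ figures; the underlying composite scores are not shown on this page.

```python
# Assumed formula: value multiple = (Qwen pts/$) / (GPT-4o pts/$).
# The pts/$ figures are taken from the card above; the composite scores
# behind them are not published here, so this only reproduces the ratio.
qwen_pts_per_dollar = 140.0   # at $0.38 per 1M tokens (blended)
gpt4o_pts_per_dollar = 5.7    # at $6.25 per 1M tokens (blended)

value_multiple = qwen_pts_per_dollar / gpt4o_pts_per_dollar
print(f"{value_multiple:.1f}x better value")  # -> 24.6x better value
```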
Vendor risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Alibaba (Qwen) · $293.0B · Tier 1 · Low risk
OpenAI · $840.0B · Tier 1 · Medium risk
Head to head
Aider · Code Editing
GPT-4o (2024-08-06) leads by +6.0
Gemini 1.5 Pro (Feb 2024): 57.1 · Qwen2.5 72B Instruct: 65.4 · GPT-4o (2024-08-06): 71.4
Chatbot Arena Elo · Overall
GPT-4o (2024-08-06) leads by +11.8
Gemini 1.5 Pro (Feb 2024): 1322.5 · Qwen2.5 72B Instruct: 1302.3 · GPT-4o (2024-08-06): 1334.3
GPQA diamond
GPT-4o (2024-08-06) leads by +0.1
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 1.5 Pro (Feb 2024): 27.8 · Qwen2.5 72B Instruct: 32.2 · GPT-4o (2024-08-06): 32.3
MATH level 5
Qwen2.5 72B Instruct leads by +9.9
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
Gemini 1.5 Pro (Feb 2024): 40.8 · Qwen2.5 72B Instruct: 63.2 · GPT-4o (2024-08-06): 53.3
MMLU
Qwen2.5 72B Instruct leads by +1.3
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
Gemini 1.5 Pro (Feb 2024): 76.9 · Qwen2.5 72B Instruct: 80.4 · GPT-4o (2024-08-06): 79.1
OTIS Mock AIME 2024-2025
Qwen2.5 72B Instruct leads by +1.3
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 1.5 Pro (Feb 2024): 6.7 · Qwen2.5 72B Instruct: 8.0 · GPT-4o (2024-08-06): 6.3
VideoMME
Gemini 1.5 Pro (Feb 2024) leads by +2.0
VideoMME · multimodal benchmark testing video understanding across diverse domains, requiring temporal reasoning and cross-frame comprehension.
Gemini 1.5 Pro (Feb 2024): 66.7 · Qwen2.5 72B Instruct: 64.7 · GPT-4o (2024-08-06): 62.5
Balrog
Gemini 1.5 Pro (Feb 2024) leads by +4.8
Balrog · benchmarks AI agents on text-based adventure games, testing language understanding, strategic planning, and long-horizon reasoning.
Gemini 1.5 Pro (Feb 2024): 21.0 · Qwen2.5 72B Instruct: 16.2
BBH
Gemini 1.5 Pro (Feb 2024) leads by +5.6
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
Gemini 1.5 Pro (Feb 2024): 78.7 · Qwen2.5 72B Instruct: 73.1
CadEval
Gemini 1.5 Pro (Feb 2024) leads by +8.0
CadEval · evaluates the ability to generate and reason about Computer-Aided Design code, testing spatial reasoning and engineering knowledge.
Gemini 1.5 Pro (Feb 2024): 34.0 · GPT-4o (2024-08-06): 26.0
SimpleBench
Gemini 1.5 Pro (Feb 2024) leads by +11.2
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
Gemini 1.5 Pro (Feb 2024): 12.5 · GPT-4o (2024-08-06): 1.4
The Agent Company
Qwen2.5 72B Instruct leads by +2.3
The Agent Company · tests AI agents on realistic corporate tasks like email management, code review, data analysis, and cross-tool workflows.
Gemini 1.5 Pro (Feb 2024): 3.4 · Qwen2.5 72B Instruct: 5.7
Full benchmark table
Benchmark | Gemini 1.5 Pro (Feb 2024) | Qwen2.5 72B Instruct | GPT-4o (2024-08-06)
Aider · Code Editing | 57.1 | 65.4 | 71.4
Chatbot Arena Elo · Overall | 1322.5 | 1302.3 | 1334.3
GPQA diamond | 27.8 | 32.2 | 32.3
MATH level 5 | 40.8 | 63.2 | 53.3
MMLU | 76.9 | 80.4 | 79.1
OTIS Mock AIME 2024-2025 | 6.7 | 8.0 | 6.3
VideoMME | 66.7 | 64.7 | 62.5
Balrog | 21.0 | 16.2 | n/a
BBH | 78.7 | 73.1 | n/a
CadEval | 34.0 | n/a | 26.0
SimpleBench | 12.5 | n/a | 1.4
The Agent Company | 3.4 | 5.7 | n/a
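A minimal sketch of how the per-benchmark leads and the "wins 5 of the 12 shared benchmarks" tally can be reproduced from the table above, assuming the leader on each row is simply the model with the highest reported score and the margin is the gap to the runner-up. Only a few rows are spelled out; the rest follow the same shape.

```python
# Scores copied from the full benchmark table; models without a reported
# score on a row are omitted from that row.
scores = {
    "Aider · Code Editing": {"Gemini 1.5 Pro": 57.1, "Qwen2.5 72B": 65.4, "GPT-4o": 71.4},
    "GPQA diamond":         {"Gemini 1.5 Pro": 27.8, "Qwen2.5 72B": 32.2, "GPT-4o": 32.3},
    "MATH level 5":         {"Gemini 1.5 Pro": 40.8, "Qwen2.5 72B": 63.2, "GPT-4o": 53.3},
    "VideoMME":             {"Gemini 1.5 Pro": 66.7, "Qwen2.5 72B": 64.7, "GPT-4o": 62.5},
    # ... remaining rows of the table follow the same pattern
}

wins: dict[str, int] = {}
for bench, results in scores.items():
    ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    (leader, top), (_, second) = ranked[0], ranked[1]
    wins[leader] = wins.get(leader, 0) + 1
    print(f"{bench}: {leader} leads by +{top - second:.1f}")

print(wins)  # across all 12 rows this tally gives Gemini 5, Qwen 4, GPT-4o 3
```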
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model | Input | Output | Context | Projected $/mo
Gemini 1.5 Pro (Feb 2024) | no price | no price | n/a | n/a
Qwen2.5 72B Instruct | $0.36 | $0.40 | 33K tokens (~16 books) | $3.70
GPT-4o (2024-08-06) | $2.50 | $10.00 | 128K tokens (~64 books) | $43.75
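A minimal sketch of the projected-monthly-cost column, assuming the 10M monthly tokens are split 3:1 between input and output (7.5M in, 2.5M out); the page does not state its split, but this assumption reproduces both listed figures.

```python
# Assumption: 10M tokens per month, 75% input / 25% output.
def monthly_cost(input_per_m: float, output_per_m: float,
                 total_m: float = 10.0, input_share: float = 0.75) -> float:
    # Cost = input tokens * input price + output tokens * output price,
    # with prices quoted per 1M tokens.
    return total_m * input_share * input_per_m + total_m * (1 - input_share) * output_per_m

print(f"${monthly_cost(0.36, 0.40):.2f}")    # $3.70  -> Qwen2.5 72B Instruct
print(f"${monthly_cost(2.50, 10.00):.2f}")   # $43.75 -> GPT-4o (2024-08-06)
```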