
DeepSeek V3 vs Gemini 1.5 Pro (Feb 2024) vs Llama 3.1 405B

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

DeepSeek V3 wins 12 of 20 shared benchmarks. Leads in reasoning · knowledge · math.

Category leads
reasoning · DeepSeek V3
knowledge · DeepSeek V3
math · DeepSeek V3
coding · DeepSeek V3
arena · DeepSeek V3
language · Gemini 1.5 Pro (Feb 2024)
agentic · Llama 3.1 405B
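
For readers who want to reproduce tallies like the 12-of-20 figure, a minimal sketch in Python follows. The rows are a hand-copied subset of the full benchmark table below; skipping ties and missing scores is our assumption, not necessarily how this page counts.

```python
from collections import Counter

# A subset of the shared-benchmark scores from the table below.
SCORES = {
    "BBH":          {"DeepSeek V3": 83.3, "Gemini 1.5 Pro": 78.7, "Llama 3.1 405B": 77.2},
    "GPQA diamond": {"DeepSeek V3": 42.0, "Gemini 1.5 Pro": 27.8, "Llama 3.1 405B": 34.5},
    "MATH level 5": {"DeepSeek V3": 64.8, "Gemini 1.5 Pro": 40.8, "Llama 3.1 405B": 49.8},
    "SimpleBench":  {"DeepSeek V3": 2.7,  "Gemini 1.5 Pro": 12.5, "Llama 3.1 405B": 7.6},
}

wins = Counter()
for bench, scores in SCORES.items():
    best = max(scores.values())
    leaders = [m for m, s in scores.items() if s == best]
    if len(leaders) == 1:  # assumption: ties (e.g. ARC AI2) count for no one
        wins[leaders[0]] += 1

print(wins.most_common())  # [('DeepSeek V3', 3), ('Gemini 1.5 Pro', 1)]
```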
Hype vs Reality
DeepSeek V3 · #45 by perf · no signal · QUIET
Gemini 1.5 Pro (Feb 2024) · #138 by perf · no signal · QUIET
Llama 3.1 405B · #153 by perf · no signal · QUIET
Best value
DeepSeek V3 · 97.5 pts/$ · $0.60/M
Gemini 1.5 Pro (Feb 2024) · no price
Llama 3.1 405B · no price
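
The pts/$ figure reads as an aggregate benchmark score divided by a blended per-million-token price. A minimal sketch, with two labeled assumptions: the 50/50 input/output token split (which reproduces the quoted $0.60/M from the $0.32/$0.89 rates in the pricing table) and a composite score back-solved from the quoted 97.5 pts/$.

```python
# Sketch of the "pts/$" value metric, read as aggregate score / blended $ per
# 1M tokens. Both the 50/50 token split and the composite score are
# assumptions, not figures from this page.
input_price, output_price = 0.32, 0.89            # DeepSeek V3, $ per 1M tokens
blended = 0.5 * input_price + 0.5 * output_price  # assumed 50/50 split
print(f"blended ${blended:.3f}/M")                # $0.605/M, shown as $0.60/M

aggregate = 59.0                                  # hypothetical composite score
print(f"{aggregate / blended:.1f} pts/$")         # 97.5 pts/$
```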
Vendor risk
One or more vendors flagged.
DeepSeek · $3.4B · Tier 1 · Higher risk
Google DeepMind · $4.00T · Tier 1 · Low risk
Meta AI · $1.50T · Tier 1 · Low risk
Head to head
BBH · DeepSeek V3 leads by +4.7
BIG-Bench Hard · a curated subset of 23 challenging tasks from BIG-Bench where language models previously failed to outperform average humans.
DeepSeek V3 83.3 · Gemini 1.5 Pro (Feb 2024) 78.7 · Llama 3.1 405B 77.2

GPQA diamond · DeepSeek V3 leads by +7.5
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
DeepSeek V3 42.0 · Gemini 1.5 Pro (Feb 2024) 27.8 · Llama 3.1 405B 34.5

MATH level 5 · DeepSeek V3 leads by +15.1
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
DeepSeek V3 64.8 · Gemini 1.5 Pro (Feb 2024) 40.8 · Llama 3.1 405B 49.8

MMLU · DeepSeek V3 leads by +3.6
Massive Multitask Language Understanding · 57 subjects spanning STEM, humanities, social sciences, and more. The standard benchmark for broad knowledge.
DeepSeek V3 82.9 · Gemini 1.5 Pro (Feb 2024) 76.9 · Llama 3.1 405B 79.3

OTIS Mock AIME 2024-2025 · DeepSeek V3 leads by +6.1
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
DeepSeek V3 15.8 · Gemini 1.5 Pro (Feb 2024) 6.7 · Llama 3.1 405B 9.6

SimpleBench · Gemini 1.5 Pro (Feb 2024) leads by +4.9
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
DeepSeek V3 2.7 · Gemini 1.5 Pro (Feb 2024) 12.5 · Llama 3.1 405B 7.6

WeirdML · DeepSeek V3 leads by +13.9
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
DeepSeek V3 36.1 · Gemini 1.5 Pro (Feb 2024) 22.2 · Llama 3.1 405B 21.4

ARC AI2 · DeepSeek V3 and Llama 3.1 405B tie
AI2 Reasoning Challenge · tests grade-school level science knowledge with multiple-choice questions requiring reasoning beyond simple retrieval.
DeepSeek V3 93.7 · Llama 3.1 405B 93.7

Chatbot Arena Elo · Overall · DeepSeek V3 leads by +35.6
DeepSeek V3 1358.2 · Gemini 1.5 Pro (Feb 2024) 1322.5
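
An aside on reading that Elo gap: under the standard Elo expectation formula, a roughly 36-point gap implies about a 55% expected win rate for DeepSeek V3. A minimal sketch; Chatbot Arena actually fits a Bradley-Terry model to votes, so treat this as an approximation for interpreting the gap, not the leaderboard's own computation.

```python
# Expected win rate implied by an Elo rating gap (standard Elo formula).
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# DeepSeek V3 vs Gemini 1.5 Pro, ratings from the Arena row above.
print(f"{expected_score(1358.2, 1322.5):.1%}")  # ~55.1%
```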
Cybench · Gemini 1.5 Pro (Feb 2024) and Llama 3.1 405B tie
Cybench · evaluates AI on real Capture-The-Flag cybersecurity challenges, testing vulnerability analysis, exploitation, and security reasoning.
Gemini 1.5 Pro (Feb 2024) 7.5 · Llama 3.1 405B 7.5

HellaSwag · Llama 3.1 405B leads by +0.4
HellaSwag · tests commonsense reasoning by asking models to predict the most plausible continuation of everyday scenarios.
DeepSeek V3 85.2 · Llama 3.1 405B 85.6

HELM · GPQA · DeepSeek V3 leads by +0.4
DeepSeek V3 53.8 · Gemini 1.5 Pro (Feb 2024) 53.4

HELM · IFEval · Gemini 1.5 Pro (Feb 2024) leads by +0.5
DeepSeek V3 83.2 · Gemini 1.5 Pro (Feb 2024) 83.7

HELM · MMLU-Pro · Gemini 1.5 Pro (Feb 2024) leads by +1.4
DeepSeek V3 72.3 · Gemini 1.5 Pro (Feb 2024) 73.7

HELM · Omni-MATH · DeepSeek V3 leads by +3.9
DeepSeek V3 40.3 · Gemini 1.5 Pro (Feb 2024) 36.4

HELM · WildBench · DeepSeek V3 leads by +1.8
DeepSeek V3 83.1 · Gemini 1.5 Pro (Feb 2024) 81.3

PIQA · Llama 3.1 405B leads by +2.4
PIQA (Physical Interaction QA) · tests intuitive physical reasoning by asking models to select the correct approach for everyday physical tasks.
DeepSeek V3 69.4 · Llama 3.1 405B 71.8

The Agent Company · Llama 3.1 405B leads by +4.0
The Agent Company · tests AI agents on realistic corporate tasks like email management, code review, data analysis, and cross-tool workflows.
Gemini 1.5 Pro (Feb 2024) 3.4 · Llama 3.1 405B 7.4

TriviaQA · DeepSeek V3 leads by +0.2
TriviaQA · reading comprehension benchmark with trivia questions, requiring models to find and reason over evidence from provided documents.
DeepSeek V3 82.9 · Llama 3.1 405B 82.7

Winogrande · Llama 3.1 405B leads by +8.0
WinoGrande · large-scale commonsense reasoning benchmark where models must resolve ambiguous pronouns in carefully constructed sentence pairs.
DeepSeek V3 70.4 · Llama 3.1 405B 78.4
Full benchmark table
Benchmark                     DeepSeek V3   Gemini 1.5 Pro (Feb 2024)   Llama 3.1 405B
BBH                           83.3          78.7                        77.2
GPQA diamond                  42.0          27.8                        34.5
MATH level 5                  64.8          40.8                        49.8
MMLU                          82.9          76.9                        79.3
OTIS Mock AIME 2024-2025      15.8          6.7                         9.6
SimpleBench                   2.7           12.5                        7.6
WeirdML                       36.1          22.2                        21.4
ARC AI2                       93.7          —                           93.7
Chatbot Arena Elo (Overall)   1358.2        1322.5                      —
Cybench                       —             7.5                         7.5
HellaSwag                     85.2          —                           85.6
HELM · GPQA                   53.8          53.4                        —
HELM · IFEval                 83.2          83.7                        —
HELM · MMLU-Pro               72.3          73.7                        —
HELM · Omni-MATH              40.3          36.4                        —
HELM · WildBench              83.1          81.3                        —
PIQA                          69.4          —                           71.8
The Agent Company             —             3.4                         7.4
TriviaQA                      82.9          —                           82.7
Winogrande                    70.4          —                           78.4
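
The table mixes percent scores with ~1300-point Elo ratings, so any composite or "#N by perf" rank needs per-benchmark normalization first. A sketch of one common approach on a three-row subset: min-max scaling per benchmark, then a mean over each model's available scores. This is an illustration only, not the method behind the ranks above.

```python
# Min-max normalize each benchmark row, then average per model over the
# benchmarks it actually has scores for. None marks a missing score.
MODELS = ["DeepSeek V3", "Gemini 1.5 Pro (Feb 2024)", "Llama 3.1 405B"]
ROWS = {
    "MMLU":       [82.9, 76.9, 79.3],
    "Arena Elo":  [1358.2, 1322.5, None],
    "Winogrande": [70.4, None, 78.4],
}

totals: dict[str, list[float]] = {m: [] for m in MODELS}
for scores in ROWS.values():
    present = [s for s in scores if s is not None]
    lo, hi = min(present), max(present)
    for model, s in zip(MODELS, scores):
        if s is not None:
            totals[model].append((s - lo) / (hi - lo) if hi > lo else 1.0)

for model, vals in totals.items():
    print(f"{model}: {sum(vals) / len(vals):.2f}")
```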
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model                       Input    Output   Context       Projected $/mo
DeepSeek V3                 $0.32    $0.89    164K tokens   $4.63
Gemini 1.5 Pro (Feb 2024)   —        —        —             —
Llama 3.1 405B              —        —        —             —
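
The projection column follows mechanically from the per-token rates once a token mix is fixed. A minimal sketch, assuming a 75/25 input/output split, which lands on the quoted DeepSeek V3 figure; a 50/50 split would give $6.05/mo instead.

```python
# Projected monthly cost at 10M tokens, DeepSeek V3 rates from the table.
# The 75/25 input/output split is an assumption chosen to match the page.
input_price, output_price = 0.32, 0.89   # $ per 1M tokens
tokens_m, input_share = 10, 0.75         # 10M tokens/month, assumed split
monthly = tokens_m * (input_share * input_price + (1 - input_share) * output_price)
print(f"${monthly:.3f}/mo")              # $4.625/mo, rounded on the page to $4.63
```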