
Mistral Large 2411

by Mistral AI · Released Nov 2024

Open Source
35.7
avg score
Rank #163
Better than 30% of all models
Context
131K tokens (~66 books)
Input $/1M
$2.00
Output $/1M
$6.00
Type
text
License
Open Source
Benchmarks
11 tested
About

Mistral Large 2411 is an update of [Mistral Large 2](/mistralai/mistral-large), released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411). It provides a significant upgrade over the previous [Mistral Large 2407](/mistralai/mistral-large-2407), with notable...

Tested on 11 benchmarks with a 45.8% average. Top scores: Chatbot Arena Elo — Overall (1304.7), HELM — IFEval (87.6%), HELM — WildBench (80.1%).

Looking for similar performance at lower cost?
Gemma 3 27B (free) scores 35.0 (98% as good) at $0.00/1M input · 100% cheaper
Capabilities
coding
65.4
#26 globally
reasoning
80.1
#9 globally
math
21.6
#154 globally
knowledge
46.2
#120 globally
language
87.6
#22 globally
Benchmark Scores
Compare All
Tested on 11 benchmarks · Ranked across 6 categories
Score Distribution (all 233 models)
Aider — Code Editing

Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.

65.4
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

80.1
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

50.3
HELM — Omni-MATH

Stanford HELM evaluation of mathematical reasoning across diverse problem types.

28.1
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

7.7
Legend: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Links
Documentation
Community
BenchGecko API
mistral-large-2411
Specifications
  • Type: text
  • Context: 131K tokens (~66 books)
  • Released: Nov 2024
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.010
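The per-message figure above follows directly from the listed per-million-token rates. A minimal sketch of that arithmetic, assuming a hypothetical "typical" message of ~2,000 input and ~1,000 output tokens (the page does not state what its ~$0.010 estimate assumes):

```python
# Per-million-token pricing for Mistral Large 2411, from the spec table above (USD).
INPUT_PER_M = 2.00
OUTPUT_PER_M = 6.00

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the listed per-1M-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Assumed message size: ~2,000 input + ~1,000 output tokens (illustrative only).
print(f"${message_cost(2000, 1000):.3f}")  # → $0.010
```

At these rates, output tokens cost 3x input tokens, so cost estimates are sensitive to how long the model's responses are.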
Available On
Mistral AI · $2.00/1M input
Share & Export
Mistral Large 2411 is an open-source text AI model by Mistral AI, released in November 2024. It has an average benchmark score of 35.7. Context window: 131K tokens.