
MiniMax M2

by minimax · Released Oct 2025

Open Source
72.4
avg score
Rank #34
Better than 85% of all models
Context
197K tokens (~98 books)
Input $/1M
$0.26
Output $/1M
$1.00
Type
text
License
Open Source
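The listed prices can be turned into a per-request cost estimate. A minimal sketch, assuming the card's rates of $0.26 per 1M input tokens and $1.00 per 1M output tokens; the token counts in the example are illustrative, not measured:

```python
# Estimate request cost from per-million-token prices (from the card above).
INPUT_PRICE_PER_M = 0.26   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Illustrative chat turn: ~1,000 input tokens, ~1,500 output tokens
print(round(request_cost(1_000, 1_500), 4))  # 0.0018
```

A turn of roughly this size lands near the ~$0.002 cost-per-message figure shown in the Specifications section.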
Benchmarks
8 tested
Data updated today
About

MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier intelligence across general reasoning,...

Tested on 8 benchmarks with 69.5% average. Top scores: Chatbot Arena Elo — Overall (1346.6), Chatbot Arena Elo — Coding (1303.3), OpenCompass — IFEval (90.2%).

Capabilities
coding
74.0
#11 globally
math
79.1
#21 globally
knowledge
57.9
#60 globally
language
90.2
#12 globally
Benchmark Scores
Compare All
Tested on 8 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models)
OpenCompass — LiveCodeBenchV6

OpenCompass Live Code Bench v6. Fresh competitive programming problems to evaluate code generation without memorization.

74.0
OpenCompass — AIME2025

OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.

79.1
OpenCompass — MMLU-Pro

OpenCompass MMLU-Pro evaluation. Harder knowledge test with more answer choices.

81.6
OpenCompass — GPQA-Diamond

OpenCompass evaluation of GPQA Diamond. PhD-level science questions from the hardest subset.

78.7
OpenCompass — HLE

OpenCompass evaluation of Humanity's Last Exam. Expert-level cross-discipline knowledge test.

13.4
Excellent (85+) Good (70-85) Average (50-70) Below (<50)
Links
Documentation
Community
BenchGecko API
minimax-m2
Specifications
  • Type: text
  • Context: 197K tokens (~98 books)
  • Released: Oct 2025
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.002
Available On
minimax · $0.26 / 1M input tokens
Share & Export
MiniMax M2 is an open-source text AI model by minimax, released in October 2025. It has an average benchmark score of 72.4. Context window: 197K tokens.