
Magnum v4 72B

by anthracite-org · Released Oct 2024

Open Source
51.2
avg score
Rank #103
Better than 56% of all models
Context
16K tokens (~8 books)
Input $/1M
$3.00
Output $/1M
$5.00
Type
text
License
Open Source
Benchmarks
6 tested
About

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically [Sonnet](https://openrouter.ai/anthropic/claude-3.5-sonnet) and [Opus](https://openrouter.ai/anthropic/claude-3-opus). The model is fine-tuned on top of [Qwen2.5 72B](https://openrouter.ai/qwen/qwen-

Tested on 6 benchmarks with a 27.9% average. Top scores: IFEval (56.3%), BBH (HuggingFace, 35.5%), MMLU-PRO (31.4%).

Looking for similar performance at lower cost?
Qwen3 32B scores 51.7 (101% as good) at $0.08/1M input · 97% cheaper
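The comparison above is straightforward arithmetic; a quick sketch using the figures quoted on this page (score 51.7 vs 51.2, input price $0.08 vs $3.00 per 1M tokens):

```python
# Relative quality and input-price savings for Qwen3 32B vs Magnum v4 72B,
# using the figures quoted above. Percentages are rounded as on this page.
magnum_score, qwen_score = 51.2, 51.7
magnum_price, qwen_price = 3.00, 0.08

relative_quality = qwen_score / magnum_score   # ~1.01, i.e. "101% as good"
savings = 1 - qwen_price / magnum_price        # ~0.973, i.e. "97% cheaper"
print(f"{relative_quality:.0%} as good, {savings:.0%} cheaper")
# → 101% as good, 97% cheaper
```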
Capabilities
reasoning
13.4
#118 globally
math
20.0
#160 globally
knowledge
20.9
#197 globally
language
56.3
#96 globally
general
35.5
#26 globally
Benchmark Scores
Compare All
Tested on 6 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models)
MUSR

HuggingFace MuSR (Multi-Step Reasoning). Tests multi-hop reasoning requiring chaining multiple facts together.

13.4
MATH Level 5

HuggingFace evaluation of MATH Level 5 problems. Competition math requiring advanced reasoning and proof construction.

20.0
MMLU-PRO

HuggingFace MMLU-Pro. Harder version of MMLU with 10 answer choices instead of 4 and more challenging questions.

31.4
GPQA

HuggingFace evaluation of GPQA (Graduate-Level Google-Proof Q&A). PhD-level science questions that cannot be easily searched.

10.4
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
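The 27.9% benchmark average quoted in the About section can be approximately reproduced as a straight mean of the six scores listed above (any small gap is rounding or weighting on the site's side):

```python
# Straight mean of the six benchmark scores listed on this page.
scores = {
    "MUSR": 13.4,
    "MATH Level 5": 20.0,
    "MMLU-PRO": 31.4,
    "GPQA": 10.4,
    "IFEval": 56.3,
    "BBH": 35.5,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # → 27.8%, close to the reported 27.9% average
```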
Links
Documentation
BenchGecko API
magnum-v4-72b
Specifications
  • Type: text
  • Context: 16K tokens (~8 books)
  • Released: Oct 2024
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.011
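The ~$0.011 per-message figure follows from the listed per-token prices ($3.00/1M input, $5.00/1M output). A minimal sketch, assuming a typical message of roughly 2,000 input and 1,000 output tokens (an assumption; the page does not state the message size it uses):

```python
# Per-message cost from the listed prices. The 2000/1000 token split is
# an assumed "typical message", not a figure taken from the page.
INPUT_PRICE_PER_TOKEN = 3.00 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 5.00 / 1_000_000

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one message given its token counts."""
    return (input_tokens * INPUT_PRICE_PER_TOKEN
            + output_tokens * OUTPUT_PRICE_PER_TOKEN)

print(f"${message_cost(2000, 1000):.3f}")  # → $0.011
```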
Available On
anthracite-org · $3.00/1M input tokens
Magnum v4 72B is an open-source text AI model by anthracite-org, released in October 2024. It has an average benchmark score of 51.2. Context window: 16K tokens.