anthracite-org
All anthracite-org Models (1 total)
| # | Model | Avg | hf bbh | GPQA | IFEval | MATH Level 5 | MMLU-PRO | MUSR | $/1M in | Context | Released |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Magnum v4 72B | 51.2 | 35.5 | 10.4 | 56.3 | 20.0 | 31.4 | 13.4 | $3.00 | 16K | Oct 2024 |

All other benchmarks we track have no reported score for this model.
About anthracite-org
Quick answers · sourced from our data
How many models does anthracite-org have?
BenchGecko tracks 1 model from anthracite-org, of which 1 (100%) are open source. Every entry is updated daily from live provider feeds.
What is the best model from anthracite-org?
Magnum v4 72B is currently the highest-scoring anthracite-org model we track, with an average benchmark score of 27.9. Scores are computed across every public benchmark we have data for.
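The averaging rule described above (a plain mean over whichever benchmarks have a reported score, skipping the rest) can be sketched as follows. The function name, the dictionary layout, and the one-decimal rounding are illustrative assumptions, not BenchGecko's actual code; note the six values in the table above average to about 27.8, so the site's 27.9 likely reflects a slightly different benchmark set or rounding.

```python
def average_score(scores):
    """Mean over reported benchmark results, ignoring missing ones.

    `scores` maps benchmark name -> score, with None standing in for
    the "-" cells shown on the page. Returns None if nothing is reported.
    """
    reported = [v for v in scores.values() if v is not None]
    if not reported:
        return None
    return round(sum(reported) / len(reported), 1)

# The six scores reported for Magnum v4 72B in the table above;
# None marks a benchmark with no result ("-" on the page).
magnum_v4_72b = {
    "hf bbh": 35.5,
    "GPQA": 10.4,
    "IFEval": 56.3,
    "MATH Level 5": 20.0,
    "MMLU-PRO": 31.4,
    "MUSR": 13.4,
    "SWE-bench": None,
}

print(average_score(magnum_v4_72b))  # -> 27.8
```

Dropping missing values (rather than counting them as zero) is what keeps a model with only a handful of reported benchmarks comparable to one with full coverage.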
What is the cheapest anthracite-org model?
The cheapest anthracite-org model on BenchGecko starts at $3.00 per 1M input tokens. Pricing is pulled from OpenRouter and cross-checked against official provider rate cards.
How does anthracite-org compare on benchmarks?
anthracite-org models average 27.9 across the benchmarks we track. See the All Providers page for the full ranking by model count, open source ratio, and average score.
Is anthracite-org open source?
Every anthracite-org model we track is open source (1 of 1).