Cohere
All Cohere Models · 4 total
About Cohere
Quick answers · sourced from our data
How many models does Cohere have?
BenchGecko tracks 4 models from Cohere, of which 1 (25%) is open source. Every entry is updated daily from live provider feeds.
What is the best model from Cohere?
Command R+ (08-2024) is currently the highest scoring Cohere model we track, with an average benchmark score of 38.3. Scores are computed across every public benchmark we have data for.
What is the cheapest Cohere model?
The cheapest Cohere model on BenchGecko starts at $0.04 per 1M input tokens. Pricing is pulled from OpenRouter and cross-checked against official provider rate cards.
How does Cohere compare on benchmarks?
Cohere models average 12.8 across the benchmarks we track. See the All Providers page for the full ranking by model count, open source ratio, and average score.
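The provider ranking described above combines three per-provider figures. A hedged sketch of such a listing, assuming providers are sorted by average score (all names and numbers besides Cohere's are invented placeholders):

```python
# Hypothetical provider ranking: sort by average benchmark score and
# report model count and open-source ratio alongside. Figures for
# "ExampleAI" are illustrative placeholders.

providers = [
    {"name": "Cohere", "models": 4, "open_source": 1, "avg_score": 12.8},
    {"name": "ExampleAI", "models": 10, "open_source": 10, "avg_score": 30.1},
]

for p in sorted(providers, key=lambda p: p["avg_score"], reverse=True):
    ratio = p["open_source"] / p["models"]
    print(f'{p["name"]}: {p["models"]} models, {ratio:.0%} open source, '
          f'avg {p["avg_score"]}')
```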
Where is Cohere based?
Cohere is headquartered in Canada. BenchGecko groups providers by region to make it easy to compare US, EU, China, and Rest of World markets.
Is Cohere open source?
1 of 4 Cohere models (25%) is open source. The rest are proprietary: closed weights served via API.