Better than 35% of all models
Context: 128K tokens (~64 books)
Input $/1M: $2.50
Output $/1M: $10.00
Type: text
License: Proprietary
Benchmarks: 2 tested
Data updated today
About
command-r-plus-08-2024 is an update of [Command R+](/models/cohere/command-r-plus), with roughly 50% higher throughput and 25% lower latency than the previous Command R+ version, while keeping the hardware footprint...
Tested on 2 benchmarks with a 38.3% average. Top scores: Chatbot Arena Elo — Overall (1275.5), Aider — Code Editing (38.3%).
Looking for similar performance at lower cost?
GLM 4 32B scores 37.8 (99% as good) at $0.10/1M input · 96% cheaper
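The "96% cheaper" figure follows directly from the two input rates; a quick check:

```python
# Relative input-price savings of GLM 4 32B ($0.10/1M tokens)
# versus this model ($2.50/1M tokens).
savings = 1 - 0.10 / 2.50
print(f"{savings:.0%}")  # → 96%
```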
Capabilities
coding
38.3
#102 globally
Benchmark Scores
Tested on 2 benchmarks · Ranked across 2 categories
Score Distribution (all 233 models)
Coding
Aider — Code Editing
38.3 · Code editing benchmark from the Aider project. Measures the ability to apply targeted code changes while maintaining correctness and style.
Arena
Chatbot Arena Elo — Overall
1275 · Chatbot Arena overall Elo rating. Crowdsourced human preference ranking from blind head-to-head comparisons across all topics.
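An Elo rating only has meaning relative to other ratings: it maps a rating gap to an expected head-to-head preference rate via the standard logistic formula. A minimal sketch (the 1175-rated opponent is a hypothetical example, not a model on this page):

```python
# Standard Elo expected-score formula:
# P(A preferred over B) = 1 / (1 + 10^((Rb - Ra) / 400))
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A model rated 1275 vs. a hypothetical opponent rated 1175
# (a 100-point gap):
p = expected_score(1275, 1175)
print(f"{p:.2f}")  # → 0.64
```

So a 100-point Arena gap corresponds to being preferred in roughly 64% of blind comparisons.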
Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
command-r-plus-08-2024
Specifications
- Type: text
- Context: 128K tokens (~64 books)
- Released: Aug 2024
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.015
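The ~$0.015 per-message figure can be reproduced from the listed per-million-token rates under an assumed token split (the 1,000-input / 1,250-output example below is an illustration, not the site's actual averaging method):

```python
# Sketch: per-message cost from per-million-token rates.
INPUT_RATE = 2.50 / 1_000_000    # $ per input token
OUTPUT_RATE = 10.00 / 1_000_000  # $ per output token

def message_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 1,000-token prompt with a 1,250-token reply
# (assumed split for illustration):
cost = message_cost(1_000, 1_250)
print(f"${cost:.4f}")  # → $0.0150
```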
Frequently Asked Questions
Command R+ (08-2024) is a proprietary text AI model by Cohere, released in August 2024. It has an average benchmark score of 38.3. Context window: 128K tokens.