
Claude 3 Opus

by Anthropic · Released Jan 2024

38.4
avg score
Rank #150
Better than 36% of all models
Context
N/A
Input $/1M
TBD
Output $/1M
TBD
Type
text
License
Proprietary
Benchmarks
8 tested
About

Tested on 8 benchmarks with a 33.7% average. Top scores: MMLU (79.5%), Winogrande (77.0%), MATH level 5 (37.5%).
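The 33.7% figure is an average over all 8 tested benchmarks, only five of which have individual scores listed on this page. A minimal sketch of how such an unweighted average works, using just the five listed scores (equal weighting is an assumption; the site does not state its aggregation method):

```python
# Benchmark scores listed on this page (5 of the 8 tested;
# MMLU, Winogrande, and one other are not itemized below).
scores = {
    "WeirdML": 23.2,
    "Cybench": 10.0,
    "SimpleBench": 8.2,
    "MATH level 5": 37.5,
    "OTIS Mock AIME 2024-2025": 4.6,
}

# Unweighted mean over the listed subset only — this differs from
# the site's 33.7% because three benchmarks are missing here.
avg = sum(scores.values()) / len(scores)
print(f"{avg:.1f}")  # → 16.7
```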

Capabilities
coding
16.6
#130 globally
reasoning
8.2
#148 globally
math
21.1
#156 globally
knowledge
62.0
#36 globally
Benchmark Scores
Tested on 8 benchmarks · Ranked across 4 categories
Score Distribution (all 233 models): histogram of average scores (0–100) marking this model's position.
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

23.2
Cybench

Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.

10.0
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

8.2
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

37.5
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

4.6
Score bands: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Links
Documentation
Community
BenchGecko API
claude-3-opus
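The slug above identifies this model in the BenchGecko API. A hypothetical sketch of building a model-detail request URL from that slug — the host, path, and response shape are all assumptions, since the API is not documented on this page:

```python
# Placeholder host — the real BenchGecko API base URL is not given here.
BASE_URL = "https://api.benchgecko.example"

def model_endpoint(slug: str) -> str:
    """Build an (assumed) model-detail endpoint URL for a model slug."""
    return f"{BASE_URL}/v1/models/{slug}"

print(model_endpoint("claude-3-opus"))
```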
Specifications
  • Type: text
  • Context: N/A
  • Released: Jan 2024
  • License: Proprietary
  • Status: benchmark-only
Available On
Anthropic (pricing TBD)
Claude 3 Opus is a proprietary text AI model by Anthropic, released in January 2024. It has an average benchmark score of 38.4.