
Claude Opus 4.5

by Anthropic · Released Nov 2025

Multimodal · 69.2 avg score · Rank #40 of 233 · Better than 83% of all models
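
The percentile claim follows directly from the rank: 193 of the 233 ranked models (see the score distribution below) score lower than #40. A one-line check:

```python
# Percentile from rank: share of the 233 ranked models that score lower.
rank, total = 40, 233
print(f"better than {(total - rank) / total:.0%} of models")  # -> 83%
```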
  • Context: 200K tokens (~150,000 words)
  • Input $/1M: $5.00
  • Output $/1M: $25.00
  • Type: multimodal
  • License: Proprietary
  • Benchmarks tested: 28

Data updated today.
About

Claude Opus 4.5 is Anthropic’s frontier reasoning model optimized for complex software engineering, agentic workflows, and long-horizon computer use. It offers strong multimodal capabilities, competitive performance across real-world coding and...

Tested on 28 benchmarks with a 45.4% average score. Top scores: Chatbot Arena Elo — Overall (1467.7), Chatbot Arena Elo — Coding (1465.2), OTIS Mock AIME 2024-2025 (86.1%).
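
Note that the Chatbot Arena numbers are Elo ratings, not percentages. Under the standard Elo model, a rating gap maps to an expected head-to-head win rate; a minimal sketch (the 1400-rated opponent is an illustrative assumption, not a model from this page):

```python
# Expected head-to-head win probability under the standard Elo model.
def elo_win_prob(r_model: float, r_opponent: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_opponent - r_model) / 400))

# Overall Elo 1467.7 vs. a hypothetical 1400-rated model (assumed figure):
print(f"{elo_win_prob(1467.7, 1400):.1%}")  # ~59.6% expected win rate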

Looking for similar performance at lower cost?
Gemma 4 31B scores 68.2 (99% as good) at $0.13/1M input · 97% cheaper
Capabilities

  Capability   Score   Global rank
  coding        64.4   #31
  reasoning     57.3   #36
  math          37.0   #114
  knowledge     35.2   #162
  agentic       43.3   #4
  safety        13.6   #4
Benchmark Scores
Tested on 28 benchmarks · Ranked across 7 categories
Score Distribution (all 233 models): chart of average scores from 0 to 100, with this model's position marked.
Cybench

Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.

82.0
SWE-bench Verified

Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

76.7
SWE-bench Verified (Bash Only)

SWE-bench Verified solved using only bash commands, no specialized frameworks. Tests raw terminal-based problem solving.

74.4
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

80.0
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

54.4
ARC-AGI-2

The harder sequel to ARC-AGI, with more complex abstract reasoning patterns that test generalization beyond the training data.

37.6
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

86.1
FrontierMath-2025-02-28-Private

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

20.7
FrontierMath-Tier-4-2025-07-01-Private

Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability, many unsolved by most mathematicians.

4.2
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
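
A minimal sketch of the banding the legend implies; how exact boundary scores (70, 85) are assigned is an assumption, since the legend leaves it unspecified:

```python
def band(score: float) -> str:
    """Map a 0-100 benchmark score to the page's quality band.

    Treating the boundary scores 70 and 85 as belonging to the higher
    band is an assumption; the legend does not specify it.
    """
    if score >= 85:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Average"
    return "Below"

# e.g. SWE-bench Verified at 76.7 falls in the "Good" band.
print(band(76.7))
```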
Recent Updates
Claude Opus 4.5 input price dropped to $5.00 per 1M tokens
Mar 30, 2026
Links
  • Documentation
  • Community
  • BenchGecko API model id: claude-opus-4-5
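
To pull the same record programmatically, something like the sketch below could query the BenchGecko API. The endpoint URL and response fields are assumptions; only the model id claude-opus-4-5 appears on this page.

```python
# Hypothetical BenchGecko API lookup. The URL and field names are
# assumptions -- only the model id comes from this page.
import json
import urllib.request

MODEL_ID = "claude-opus-4-5"
URL = f"https://api.benchgecko.example/v1/models/{MODEL_ID}"  # assumed endpoint

with urllib.request.urlopen(URL) as resp:
    model = json.load(resp)

print(model.get("avg_score"))  # field name is illustrative
```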
Specifications
  • Type: multimodal
  • Context: 200K tokens (~150,000 words)
  • Released: Nov 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.035
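
The ~$0.035 cost-per-message figure is consistent with the listed token prices. A minimal sketch, assuming a typical message of about 3,000 input and 800 output tokens (assumed figures; the page does not state the token counts behind its estimate):

```python
# Per-message cost at the listed rates. The token counts are assumptions
# used to illustrate the arithmetic, not figures from this page.
INPUT_PRICE = 5.00    # $ per 1M input tokens
OUTPUT_PRICE = 25.00  # $ per 1M output tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-token rates."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# ~3,000 tokens in, ~800 tokens out reproduces the listed ~$0.035.
print(f"${message_cost(3_000, 800):.3f}")
```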
Available On
  • Anthropic: $5.00 / 1M input tokens
Claude Opus 4.5 is a proprietary multimodal AI model by Anthropic, released in November 2025. It has an average benchmark score of 69.2. Context window: 200K tokens.