
Claude Opus 4.6

by Anthropic · Released Feb 2026

Multimodal · 1M Context
Avg score: 81.0 · Rank #22
Better than 91% of all models
Context: 1.0M tokens (~500 books)
Input $/1M: $5.00
Output $/1M: $25.00
Type: multimodal
License: Proprietary
Benchmarks: 19 tested
Data updated today
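
For readers who want to sanity-check the headline figures, here is a minimal sketch of the arithmetic behind the "better than 91%" claim and the ~$0.035 cost per message listed under Specifications below. The per-message token counts are assumptions chosen for illustration, not values published on this page.

```python
# Back-of-the-envelope math for Claude Opus 4.6's headline stats.
INPUT_PRICE = 5.00 / 1_000_000    # $ per input token
OUTPUT_PRICE = 25.00 / 1_000_000  # $ per output token

# Hypothetical "typical" message: ~2,000 input / ~1,000 output tokens.
input_tokens, output_tokens = 2_000, 1_000
cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"~${cost:.3f} per message")  # ~$0.035, matching the spec sheet

# Rank #22 of 233 models implies the "better than 91%" figure.
rank, total = 22, 233
print(f"Better than {100 * (total - rank) / total:.0f}% of models")  # 91%
```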
About

Opus 4.6 is Anthropic’s strongest model for coding and long-running professional tasks. It is built for agents that operate across entire workflows rather than single prompts, making it especially effective...

Tested on 19 benchmarks with a 57.5% average. Top scores: Chatbot Arena Elo — Coding (1542.9), Chatbot Arena Elo — Overall (1496.6), OTIS Mock AIME 2024-2025 (94.4%).

Looking for similar performance at lower cost?
MiMo-V2-Flash scores 81.7 (101% as good) at $0.09/1M input · 98% cheaper
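
Both comparison figures follow directly from numbers shown on this page; a quick check:

```python
# Relative quality and cost of MiMo-V2-Flash vs. Claude Opus 4.6,
# using the average scores and input prices listed above.
opus_score, mimo_score = 81.0, 81.7
opus_input, mimo_input = 5.00, 0.09  # $ per 1M input tokens

print(f"{100 * mimo_score / opus_score:.0f}% as good")       # 101%
print(f"{100 * (1 - mimo_input / opus_input):.0f}% cheaper")  # 98%
```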
Capabilities
  • coding: 71.5 (#16 globally)
  • reasoning: 74.8 (#17 globally)
  • math: 52.7 (#68 globally)
  • knowledge: 41.0 (#148 globally)
  • agentic: 31.7 (#16 globally)
Benchmark Scores
Tested on 19 benchmarks · Ranked across 6 categories
[Score distribution chart of all 233 models, with Claude Opus 4.6's position marked]
Cybench · 93.0
Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.

SWE-bench Verified · 78.7
Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

WeirdML · 77.9
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

ARC-AGI · 94.0
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern-recognition puzzles. Core measure of general intelligence.

ARC-AGI-2 · 69.2
Harder sequel to ARC-AGI. More complex abstract reasoning patterns that test generalization ability beyond training data.

SimpleBench · 61.1
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

OTIS Mock AIME 2024-2025 · 94.4
Mock AIME (American Invitational Mathematics Examination) problems from OTIS. Tests mathematical competition performance.

FrontierMath-2025-02-28-Private · 40.7
Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

FrontierMath-Tier-4-2025-07-01-Private · 22.9
Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability, many unsolved by most mathematicians.
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
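
The legend bands map directly to score thresholds; as a quick reference, the banding rule is:

```python
def score_band(score: float) -> str:
    """Map a benchmark score to the legend bands used on this page."""
    if score >= 85:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Average"
    return "Below"

print(score_band(93.0))   # Cybench -> "Excellent"
print(score_band(22.9))   # FrontierMath Tier 4 -> "Below"
```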
Recent News
Claude Opus 4.6 released by Anthropic
Mar 27, 2026
Links
Documentation
Community
BenchGecko API
Model ID: claude-opus-4-6
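
The BenchGecko API link above suggests this model's data can be fetched programmatically. Below is a minimal sketch of what such a lookup might look like; the endpoint URL and response field names are assumptions for illustration, not documented API behavior.

```python
import requests

# Hypothetical endpoint: the real BenchGecko API routes are not documented
# on this page, so treat the URL and JSON fields below as placeholders.
BASE_URL = "https://api.benchgecko.example/v1/models"

resp = requests.get(f"{BASE_URL}/claude-opus-4-6", timeout=10)
resp.raise_for_status()
model = resp.json()

# Field names are guesses based on the stats shown on this page.
print(model.get("avg_score"))       # e.g. 81.0
print(model.get("context_tokens"))  # e.g. 1_000_000
```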
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~500 books)
  • Released: Feb 2026
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.035
Available On
Anthropic · $5.00/1M input
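
Since the model is served through the Anthropic API, a call would look roughly like the following, using the official anthropic Python SDK. The model ID is taken from this page; verify it against Anthropic's published model list before relying on it.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# "claude-opus-4-6" is the model ID listed on this page; confirm it against
# Anthropic's own model names before use.
message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this model's strengths."}],
)
print(message.content[0].text)
```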
Claude Opus 4.6 is a proprietary multimodal AI model by Anthropic, released in February 2026. It has an average benchmark score of 81.0. Context window: 1M tokens.