
Claude Opus 4.6 (Fast)

by Anthropic · Released Apr 2026

Multimodal · 1M Context
83.3 avg score · Rank #17
Better than 93% of all models
Context: 1.0M tokens (~500 books)
Input $/1M: $30.00
Output $/1M: $150.00
Type: multimodal
License: Proprietary
Benchmarks: 12 tested
Data updated today
About

Fast-mode variant of [Opus 4.6](/anthropic/claude-opus-4.6): identical capabilities with higher output speed at a 6x price premium. Learn more in Anthropic's docs: https://platform.claude.com/docs/en/build-with-claude/fast-mode
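
A minimal sketch of calling the fast variant with the Anthropic Python SDK is below. It assumes the model can be requested by the ID listed on this page under BenchGecko API (claude-opus-4-6-fast); Anthropic's actual fast-mode model name or selection parameter may differ, so confirm against the docs linked above.

```python
# Sketch: requesting the fast variant via the Anthropic Python SDK.
# Assumption: the model ID matches the slug shown on this page
# ("claude-opus-4-6-fast"); the real fast-mode ID or parameter may differ.
# See https://platform.claude.com/docs/en/build-with-claude/fast-mode
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-6-fast",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this design doc in five bullets."}],
)
print(response.content[0].text)
```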

Tested on 12 benchmarks with a 43.4% average. Top scores: Chatbot Arena Elo — Coding (1546.2), Chatbot Arena Elo — Overall (1502.8), MASK (96.3%).

Looking for similar performance at lower cost?
MiMo-V2-Flash scores 81.7 (98% as good) at $0.09/1M input · 99.7% cheaper
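Both percentages in this callout are simple ratios of figures shown on this page; the sketch below just makes the arithmetic explicit.

```python
# Check of the comparison callout, using only figures shown on this page.
opus_fast_score, opus_fast_input_price = 83.3, 30.00  # avg score, $/1M input tokens
mimo_score, mimo_input_price = 81.7, 0.09

relative_quality = mimo_score / opus_fast_score         # ~0.981 -> "98% as good"
savings = 1 - mimo_input_price / opus_fast_input_price  # ~0.997 -> "99.7% cheaper"
print(f"{relative_quality:.1%} as good, {savings:.1%} cheaper on input price")
```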
Capabilities
  • knowledge: 44.4 (#126 globally)
  • agentic: 24.7 (#18 globally)
  • safety: 96.3 (#1 globally)
  • speed: 93.7 (#4 globally)
Benchmark Scores
Tested on 12 benchmarks · Ranked across 5 categories
Score Distribution (all 231 models)
Professional Reasoning — Finance: 53.3
SEAL Pro Reasoning Finance. Tests financial analysis and reasoning with real-world data.

Professional Reasoning — Legal: 52.3
SEAL Pro Reasoning Legal. Tests legal reasoning and case analysis ability.

VisualToolBench (VTB): 27.5
SEAL Visual Tool Bench. Tests ability to use visual tools for image analysis and manipulation.

SWE Atlas — Test Writing: 36.7
SEAL SWE Atlas Test Writing. Tests ability to write effective test cases for existing code.

SWE Atlas — Codebase QnA: 33.3
SEAL SWE Atlas Codebase Q&A. Tests understanding of large codebases through question answering.

Remote Labor Index (RLI): 4.2
SEAL Remote Labor Index. Evaluates AI performance on tasks typically done by remote knowledge workers.

Chatbot Arena Elo — Coding: 1546
Chatbot Arena coding Elo. Human preference ranking specifically for coding tasks and technical questions.

Chatbot Arena Elo — Overall: 1503
Chatbot Arena overall Elo rating. Crowdsourced human preference ranking from blind head-to-head comparisons across all topics.
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
claude-opus-4-6-fast
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~500 books)
  • Released: Apr 2026
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.210
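
The ~$0.210 cost per message is consistent with the listed prices under one plausible token split (about 1,000 input and 1,200 output tokens per message). Those token counts are an assumption, not a figure from this page; the sketch below only illustrates how the per-message cost falls out of the per-token prices.

```python
# Illustrative reconstruction of the "~$0.210 / message" figure from the
# listed prices ($30.00 per 1M input tokens, $150.00 per 1M output tokens).
# The assumed token counts are NOT stated on this page; they are one
# plausible split that reproduces the stated figure.
INPUT_PRICE_PER_TOKEN = 30.00 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 150.00 / 1_000_000

assumed_input_tokens = 1_000   # assumption
assumed_output_tokens = 1_200  # assumption

cost = (assumed_input_tokens * INPUT_PRICE_PER_TOKEN
        + assumed_output_tokens * OUTPUT_PRICE_PER_TOKEN)
print(f"${cost:.3f} per message")  # -> $0.210
```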
Available On
Anthropic · $30.00/1M input
Claude Opus 4.6 (Fast) is a proprietary multimodal AI model by Anthropic, released in April 2026. It has an average benchmark score of 83.3. Context window: 1M tokens.