
Claude Opus 4.1

by Anthropic · Released Aug 2025

Multimodal · 47.4 avg score · Rank #118 · Better than 49% of all models
Context: 200K tokens (~100 books)
Input: $15.00 / 1M tokens
Output: $75.00 / 1M tokens
Type: multimodal
License: Proprietary
Benchmarks: 14 tested
About

Claude Opus 4.1 is an updated version of Anthropic’s flagship model, offering improved performance in coding, reasoning, and agentic tasks. It achieves 74.5% on SWE-bench Verified and shows notable gains...

Tested on 14 benchmarks with a 41.3% average. Top scores: Lech Mazur Writing (85.4%), SWE-bench Verified (73.3%), GPQA Diamond (69.7%).

Looking for similar performance at lower cost?
Grok 3 Mini scores 46.0 (97% as good) at $0.30/1M input · 98% cheaper
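The derived percentages on this page are simple ratios. A minimal sketch of the arithmetic, assuming BenchGecko computes "percent as good" from average scores, "cheaper" from input price, and the rank percentile from position among the 233 listed models (the formulas are my inference, not documented by the site):

```python
# Back-of-the-envelope arithmetic for the page's derived figures.
# Assumed formulas; BenchGecko does not document its exact methodology.

opus_score, grok_score = 47.4, 46.0
opus_input_price, grok_input_price = 15.00, 0.30  # $ per 1M input tokens
rank, total_models = 118, 233

relative_score = grok_score / opus_score                # ~0.970 -> "97% as good"
price_saving = 1 - grok_input_price / opus_input_price  # 0.98   -> "98% cheaper"
percentile = (total_models - rank) / total_models       # ~0.494 -> "better than 49%"

print(f"{relative_score:.0%} as good, {price_saving:.0%} cheaper, "
      f"better than {percentile:.0%} of all models")
```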
Capabilities
  • coding: 49.0 (#70 globally)
  • reasoning: 52.0 (#45 globally)
  • math: 26.8 (#141 globally)
  • knowledge: 41.5 (#144 globally)
Benchmark Scores
Tested on 14 benchmarks · Ranked across 4 categories
[Score distribution chart across all 233 models omitted]
SWE-bench Verified · 73.3
Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

WeirdML · 42.8
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

Cybench · 42.0
Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.

SimpleBench · 52.0
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

OTIS Mock AIME 2024-2025 · 68.9
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

FrontierMath-2025-02-28-Private · 7.2
Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

FrontierMath-Tier-4-2025-07-01-Private · 4.2
Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability, many unsolved by most mathematicians.
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Recent Changes
Claude Opus 4.1 pricing increased · $15/$75 per 1M tokens
Mar 26, 2026
Links
Documentation · Community · BenchGecko API
Model ID: claude-opus-4-1
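The identifier above matches the model string accepted by Anthropic's Messages API. A minimal sketch using the official anthropic Python SDK; the prompt and max_tokens value are placeholder choices:

```python
import anthropic

# Reads the API key from the ANTHROPIC_API_KEY environment variable.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-1",  # model ID from this page
    max_tokens=1024,          # arbitrary cap on the reply length
    messages=[
        {"role": "user", "content": "Summarize SWE-bench Verified in one sentence."}
    ],
)
print(response.content[0].text)
```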
Specifications
  • Type: multimodal
  • Context: 200K tokens (~100 books)
  • Released: Aug 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.105
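The ~$0.105 per-message figure follows from the listed token prices. A back-of-the-envelope sketch: the site's per-message token assumptions aren't stated, but roughly 2,000 input and 1,000 output tokens reproduces the estimate:

```python
# Estimated cost of one message at $15 / 1M input and $75 / 1M output tokens.
# The 2,000 / 1,000 token split is an assumption that reproduces ~$0.105.
INPUT_PRICE = 15.00 / 1_000_000   # $ per input token
OUTPUT_PRICE = 75.00 / 1_000_000  # $ per output token

input_tokens, output_tokens = 2_000, 1_000
cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"~${cost:.3f} per message")  # ~$0.105
```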
Available On
Anthropic · $15.00 / 1M input tokens
Claude Opus 4.1 is a proprietary multimodal AI model by Anthropic, released in August 2025. It has an average benchmark score of 47.4. Context window: 200K tokens.