
Claude Sonnet 4.5

by Anthropic · Released Sep 2025

Multimodal · 1M Context
50.7 avg score · Rank #105 · Better than 55% of all models
Context: 1.0M tokens (~500 books)
Input $/1M: $3.00
Output $/1M: $15.00
Type: multimodal
License: Proprietary
Benchmarks: 21 tested
Data updated today
About

Claude Sonnet 4.5 is Anthropic’s most advanced Sonnet model to date, optimized for real-world agents and coding workflows. It delivers state-of-the-art performance on coding benchmarks such as SWE-bench Verified, with...

Tested on 21 benchmarks with 42.1% average. Top scores: MATH level 5 (97.7%), OTIS Mock AIME 2024-2025 (77.8%), GPQA diamond (76.4%).

Looking for similar performance at lower cost?
Gemma 2 9B scores 50.3 (99% as good) at $0.03/1M input · 99% cheaper
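
The "99% as good · 99% cheaper" figures follow directly from the numbers above; a minimal sketch of that arithmetic (scores and input prices copied from this page):

    # Relative quality and cost of the suggested cheaper alternative,
    # using the average scores and input prices shown on this page.
    sonnet_score, sonnet_input_price = 50.7, 3.00   # avg score, $ per 1M input tokens
    gemma_score, gemma_input_price = 50.3, 0.03     # avg score, $ per 1M input tokens

    quality_ratio = gemma_score / sonnet_score            # ~0.992 -> "99% as good"
    savings = 1 - gemma_input_price / sonnet_input_price  # 0.99   -> "99% cheaper"
    print(f"{quality_ratio:.0%} as good · {savings:.0%} cheaper")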
Capabilities
  • coding: 51.8 (#62 globally)
  • reasoning: 40.8 (#65 globally)
  • math: 48.7 (#78 globally)
  • knowledge: 27.7 (#182 globally)
  • agentic: 62.9 (#3 globally)
Benchmark Scores
Tested on 21 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models): scores plotted on a 0–100 scale, with a marker at this model's position.
SWE-bench Verified

Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

71.3
SWE-bench Verified (Bash Only)

SWE-bench Verified solved using only bash commands, no specialized frameworks. Tests raw terminal-based problem solving.

70.6
Cybench

Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.

60.0
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

63.7
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

45.2
ARC-AGI-2

ARC-AGI-2, the harder sequel to ARC-AGI. More complex abstract reasoning patterns that test generalization beyond the training data.

13.6
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

97.7
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

77.8
FrontierMath-2025-02-28-Private

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

15.2
Legend: Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
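
A minimal sketch of the bucketing implied by this legend; how exact boundary values (e.g. 70.0) are assigned is an assumption, since the page only lists the ranges:

    def score_band(score: float) -> str:
        """Map a benchmark score to the qualitative band used in the legend."""
        if score >= 85:
            return "Excellent"
        if score >= 70:
            return "Good"
        if score >= 50:
            return "Average"
        return "Below"

    print(score_band(71.3))  # SWE-bench Verified score above -> "Good"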
Recent News
Claude Sonnet 4.5 scores 91.7% on MMLU (Mar 16, 2026)
Links
  • Documentation
  • Community
  • BenchGecko API · claude-sonnet-4-5
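
A hedged sketch of pulling this record over the BenchGecko API linked above; the host, endpoint path, and response fields are assumptions, and only the claude-sonnet-4-5 slug comes from this page:

    import json
    import urllib.request

    BASE_URL = "https://api.benchgecko.example"  # placeholder host, not a documented endpoint
    MODEL_SLUG = "claude-sonnet-4-5"             # identifier listed above

    def fetch_model(slug: str) -> dict:
        # The /v1/models/{slug} path is assumed for illustration.
        with urllib.request.urlopen(f"{BASE_URL}/v1/models/{slug}") as resp:
            return json.load(resp)

    record = fetch_model(MODEL_SLUG)
    # Field names are illustrative guesses, not a documented schema.
    print(record.get("avg_score"), record.get("rank"))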
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~500 books)
  • Released: Sep 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.021 (see the sketch below)
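
The per-message estimate follows from the $3.00 / $15.00 per-million pricing; a minimal sketch, assuming roughly 2,000 input and 1,000 output tokens per message (the token counts are an assumption, not stated on this page):

    # Per-message cost from the listed per-million-token prices.
    INPUT_PRICE_PER_M = 3.00    # $ per 1M input tokens
    OUTPUT_PRICE_PER_M = 15.00  # $ per 1M output tokens

    def message_cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens * INPUT_PRICE_PER_M
                + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

    # An assumed 2,000-in / 1,000-out message reproduces the ~$0.021 figure.
    print(f"${message_cost(2_000, 1_000):.3f}")  # $0.021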
Available On
Anthropic · $3.00 / 1M input tokens
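
As a usage sketch, the model can be called through Anthropic's Python SDK with the identifier listed above; the prompt and token limit here are arbitrary examples:

    import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

    client = anthropic.Anthropic()

    # Minimal request using the model identifier shown on this page.
    message = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Summarize SWE-bench Verified in one sentence."}],
    )
    print(message.content[0].text)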
Share & Export
Claude Sonnet 4.5 is a proprietary multimodal AI model by Anthropic, released in September 2025. It has an average benchmark score of 50.7. Context window: 1M tokens.