Claude Opus 4.5 is Anthropic’s frontier reasoning model optimized for complex software engineering, agentic workflows, and long-horizon computer use. It offers strong multimodal capabilities and competitive performance across real-world coding and reasoning benchmarks.
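A minimal sketch of calling the model through Anthropic's official Python SDK is shown below; the exact model ID string is an assumption and should be checked against Anthropic's documentation.

```python
# Minimal sketch: one Messages API call via the official anthropic SDK.
# ASSUMPTION: the model ID "claude-opus-4-5" may differ from the exact
# identifier Anthropic publishes.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain what a segmentation fault is in one paragraph."}],
)
print(response.content[0].text)
```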
Tested on 28 benchmarks with a 45.4% average score. Top scores: Chatbot Arena Elo — Overall (1467.7), Chatbot Arena Elo — Coding (1465.2), OTIS Mock AIME 2024-2025 (86.1%).
Gemma 4 31B scores 68.2 (99% as good) at $0.13/1M input · 97% cheaper
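As a rough check on the "97% cheaper" figure, here is a back-of-the-envelope calculation; the Claude Opus 4.5 input price of ~$5.00 per 1M tokens is an assumption not stated on this page.

```python
# Back-of-the-envelope input-token cost comparison.
# ASSUMPTION: Claude Opus 4.5 input tokens cost ~$5.00 per 1M (not stated here).
opus_input_per_mtok = 5.00
gemma_input_per_mtok = 0.13  # from the comparison card above

savings = 1 - gemma_input_per_mtok / opus_input_per_mtok
print(f"~{savings:.0%} cheaper on input tokens")  # -> ~97% cheaper
```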
Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.
Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.
SWE-bench Verified solved using only bash commands, with no specialized frameworks. Tests raw terminal-based problem solving; a minimal harness sketch appears after the benchmark descriptions below.
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.
ARC-AGI-2, the harder sequel to ARC. More complex abstract reasoning puzzles that test generalization beyond training data.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.
Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability, many of which most professional mathematicians cannot solve.
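For the bash-only SWE-bench Verified setting mentioned above, the harness can be as simple as a tool-use loop around a shell. The sketch below is illustrative only, not the harness behind the reported score; the model ID, token limits, and prompt are assumptions.

```python
# Illustrative bash-only agent loop (NOT the official evaluation harness).
# ASSUMPTIONS: model ID "claude-opus-4-5"; a single "bash" tool suffices.
import subprocess
import anthropic

client = anthropic.Anthropic()
BASH_TOOL = {
    "name": "bash",
    "description": "Run a bash command in the repository and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

messages = [{"role": "user", "content": "Diagnose and fix the failing test, then rerun the test suite."}]
while True:
    resp = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=4096,
        tools=[BASH_TOOL],
        messages=messages,
    )
    messages.append({"role": "assistant", "content": resp.content})
    if resp.stop_reason != "tool_use":
        break  # model returned a final answer instead of another command

    results = []
    for block in resp.content:
        if block.type == "tool_use":
            out = subprocess.run(
                block.input["command"], shell=True, capture_output=True, text=True
            )
            results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": (out.stdout + out.stderr)[-8000:],  # truncate long output
            })
    messages.append({"role": "user", "content": results})
```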
- Type: Multimodal
- Context: 200K tokens (~100 books)
- Released: Nov 2025
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.035