Claude Opus 4.5 vs GPT-5.2
Side by side. Every metric. Every benchmark.
| Metric | Claude Opus 4.5 | GPT-5.2 |
|---|---|---|
| Provider | Anthropic | OpenAI |
| Average score | 45.4 | 54.0 |
| Input price (per 1M tokens) | $5.00 | $1.75 |
| Output price (per 1M tokens) | $25.00 | $14.00 |
| Context window | 200K tokens (~100 books) | 400K tokens (~200 books) |
| Release date | 2025-11-24 | 2025-12-10 |
| Open source | Proprietary | Proprietary |
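Assuming the listed prices are USD per million tokens (the convention most API providers use; the table itself does not state the unit), the cost of a single request can be sketched as:

```python
# Prices copied from the table above: (input $/1M tokens, output $/1M tokens).
# The per-1M-token unit is an assumption, not stated in the table.
PRICES = {
    "Claude Opus 4.5": (5.00, 25.00),
    "GPT-5.2": (1.75, 14.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Example: a 10K-token prompt that produces a 2K-token response.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
```

At this request size GPT-5.2 comes out cheaper ($0.0455 vs $0.1000), but the ratio shifts with the input/output mix because the two models' output prices differ more than their input prices.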
Benchmark scores
20 benchmarks · wins: Claude Opus 4.5: 6, GPT-5.2: 14
| Benchmark | Category | Claude Opus 4.5 | GPT-5.2 |
|---|---|---|---|
| APEX-Agents | agentic | 18.4 | 34.3 |
| ARC-AGI | reasoning | 80.0 | 86.2 |
| ARC-AGI-2 | reasoning | 37.6 | 52.9 |
| Chatbot Arena Elo — Coding | arena | 1465.2 | 1403.1 |
| Chatbot Arena Elo — Overall | arena | 1467.7 | 1439.5 |
| Chess Puzzles | knowledge | 12.0 | 49.0 |
| FrontierMath-2025-02-28-Private | math | 20.7 | 40.7 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 4.2 | 18.8 |
| GPQA diamond | knowledge | 81.4 | 88.5 |
| GSO-Bench | coding | 26.5 | 27.4 |
| HLE | knowledge | 21.4 | 24.2 |
| OTIS Mock AIME 2024-2025 | math | 86.1 | 96.1 |
| PostTrainBench | knowledge | 17.3 | 21.4 |
| SimpleBench | reasoning | 54.4 | 35.0 |
| SimpleQA Verified | knowledge | 41.8 | 38.9 |
| SWE-Bench verified | coding | 76.7 | 73.8 |
| SWE-Bench Verified (Bash Only) | coding | 74.4 | 71.8 |
| Terminal Bench | coding | 63.1 | 64.9 |
| VPCT | knowledge | 10.0 | 76.0 |
| WeirdML | coding | 63.7 | 72.2 |
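The win tally quoted above the table can be reproduced directly from the rows; a minimal sketch, with the scores hard-coded from the table:

```python
# Benchmark scores copied from the table above: (Claude Opus 4.5, GPT-5.2).
SCORES = {
    "APEX-Agents": (18.4, 34.3),
    "ARC-AGI": (80.0, 86.2),
    "ARC-AGI-2": (37.6, 52.9),
    "Chatbot Arena Elo — Coding": (1465.2, 1403.1),
    "Chatbot Arena Elo — Overall": (1467.7, 1439.5),
    "Chess Puzzles": (12.0, 49.0),
    "FrontierMath-2025-02-28-Private": (20.7, 40.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 18.8),
    "GPQA diamond": (81.4, 88.5),
    "GSO-Bench": (26.5, 27.4),
    "HLE": (21.4, 24.2),
    "OTIS Mock AIME 2024-2025": (86.1, 96.1),
    "PostTrainBench": (17.3, 21.4),
    "SimpleBench": (54.4, 35.0),
    "SimpleQA Verified": (41.8, 38.9),
    "SWE-Bench verified": (76.7, 73.8),
    "SWE-Bench Verified (Bash Only)": (74.4, 71.8),
    "Terminal Bench": (63.1, 64.9),
    "VPCT": (10.0, 76.0),
    "WeirdML": (63.7, 72.2),
}

# Count head-to-head wins (higher score wins on every benchmark here).
claude_wins = sum(1 for c, g in SCORES.values() if c > g)
gpt_wins = sum(1 for c, g in SCORES.values() if g > c)
print(f"{len(SCORES)} benchmarks · Claude Opus 4.5: {claude_wins}, GPT-5.2: {gpt_wins}")
# → 20 benchmarks · Claude Opus 4.5: 6, GPT-5.2: 14
```

Note that Claude Opus 4.5's six wins cluster in the arena Elo rows, SimpleBench, SimpleQA Verified, and the two SWE-Bench variants, while GPT-5.2 leads everywhere else.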