GPT-5.4 vs Claude Opus 4.6
Side by side. Every metric. Every benchmark.
| Metric | GPT-5.4 | Claude Opus 4.6 |
|---|---|---|
| Provider | OpenAI | Anthropic |
| Average score | 59.0 | 57.5 |
| Input price | $2.50 | $5.00 |
| Output price | $15.00 | $25.00 |
| Context window | 1.1M tokens (~525 books) | 1.0M tokens (~500 books) |
| Released | 2026-03-05 | 2026-02-04 |
| Open source | Proprietary | Proprietary |
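The input/output prices above can be turned into a concrete workload comparison. A minimal sketch, assuming the table's prices are USD per million tokens (a common convention, not stated in the table) and using a hypothetical monthly workload:

```python
# Token prices from the comparison table above.
# Assumption: prices are USD per 1M tokens (not stated in the table).
PRICES = {
    "GPT-5.4": {"input": 2.50, "output": 15.00},
    "Claude Opus 4.6": {"input": 5.00, "output": 25.00},
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD of a workload for the given model."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Hypothetical example: 10M input tokens and 2M output tokens per month.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 10_000_000, 2_000_000):.2f}")
# GPT-5.4: $55.00
# Claude Opus 4.6: $100.00
```

Under these assumptions the listed prices make Claude Opus 4.6 roughly twice as expensive as GPT-5.4 for the same token volume.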
Benchmark scores
13 benchmarks · benchmark wins: GPT-5.4 7, Claude Opus 4.6 6
| Benchmark | Categoría | GPT-5.4 | Claude Opus 4.6 |
|---|---|---|---|
| APEX-Agents | agentic | 35.9 | 31.7 |
| ARC-AGI | reasoning | 93.7 | 94.0 |
| ARC-AGI-2 | reasoning | 74.0 | 69.2 |
| Chatbot Arena Elo — Overall | arena | 1465.8 | 1496.6 |
| Chess Puzzles | knowledge | 44.0 | 17.0 |
| FrontierMath-2025-02-28-Private | math | 47.6 | 40.7 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 27.1 | 22.9 |
| GPQA Diamond | knowledge | 91.1 | 87.4 |
| OTIS Mock AIME 2024-2025 | math | 95.3 | 94.4 |
| PostTrainBench | knowledge | 20.2 | 23.2 |
| SimpleQA Verified | knowledge | 44.8 | 46.5 |
| SWE-Bench Verified | coding | 76.9 | 78.7 |
| WeirdML | coding | 57.4 | 77.9 |
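The win counts in the summary line can be reproduced directly from the table. A minimal sketch, assuming higher is better on every listed benchmark (which holds for accuracy-style scores and Arena Elo alike):

```python
# Scores copied from the benchmark table above; each pair is
# (GPT-5.4, Claude Opus 4.6). Assumption: higher is better on every row.
SCORES = {
    "APEX-Agents": (35.9, 31.7),
    "ARC-AGI": (93.7, 94.0),
    "ARC-AGI-2": (74.0, 69.2),
    "Chatbot Arena Elo — Overall": (1465.8, 1496.6),
    "Chess Puzzles": (44.0, 17.0),
    "FrontierMath-2025-02-28-Private": (47.6, 40.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (27.1, 22.9),
    "GPQA Diamond": (91.1, 87.4),
    "OTIS Mock AIME 2024-2025": (95.3, 94.4),
    "PostTrainBench": (20.2, 23.2),
    "SimpleQA Verified": (44.8, 46.5),
    "SWE-Bench Verified": (76.9, 78.7),
    "WeirdML": (57.4, 77.9),
}

# Tally which model scores higher on each benchmark (no ties occur here).
gpt_wins = sum(g > c for g, c in SCORES.values())
claude_wins = sum(c > g for g, c in SCORES.values())
print(f"GPT-5.4 wins: {gpt_wins}, Claude Opus 4.6 wins: {claude_wins}")
# GPT-5.4 wins: 7, Claude Opus 4.6 wins: 6
```

This matches the 7–6 split in the summary line: GPT-5.4 leads on agentic, math, and most knowledge benchmarks, while Claude Opus 4.6 leads on Arena Elo and both coding benchmarks.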