Claude Opus 4 vs GPT-5
Side by side. Every metric. Every benchmark.
| Metric | Claude Opus 4 | GPT-5 |
|---|---|---|
| Provider | Anthropic | OpenAI |
| Average benchmark score | 41.7 | 54.4 |
| Input price | $15.00 / 1M tokens | $1.25 / 1M tokens |
| Output price | $75.00 / 1M tokens | $10.00 / 1M tokens |
| Context window | 200K tokens (~100 books) | 400K tokens (~200 books) |
| Release date | 2025-05-22 | 2025-08-07 |
| Open source | Proprietary | Proprietary |
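The pricing gap above compounds on every request. A minimal sketch of the per-request arithmetic, assuming the listed prices are USD per million tokens (the standard API billing unit); the `request_cost` helper and the 10K-in / 1K-out example are illustrative, not an official API:

```python
# Prices assumed to be USD per 1M tokens (standard API billing unit).
PRICES = {
    "Claude Opus 4": {"input": 15.00, "output": 75.00},
    "GPT-5": {"input": 1.25, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request with the given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10K-token prompt producing a 1K-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
# Claude Opus 4 comes out 10x more expensive for this shape of request.
```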
Benchmark scores
18 benchmarks · wins: Claude Opus 4: 2, GPT-5: 15, ties: 1
| Benchmark | Category | Claude Opus 4 | GPT-5 |
|---|---|---|---|
| Aider polyglot | coding | 72.0 | 88.0 |
| ARC-AGI | reasoning | 35.7 | 65.7 |
| ARC-AGI-2 | reasoning | 8.6 | 9.9 |
| DeepResearch Bench | knowledge | 49.0 | 55.1 |
| Fiction.LiveBench | knowledge | 61.1 | 97.2 |
| FrontierMath-2025-02-28-Private | math | 4.5 | 32.4 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 4.2 | 12.5 |
| GeoBench | knowledge | 49.0 | 81.0 |
| GPQA diamond | knowledge | 68.3 | 81.6 |
| GSO-Bench | coding | 6.9 | 6.9 |
| HLE | knowledge | 6.2 | 21.6 |
| MATH level 5 | math | 85.0 | 98.1 |
| OTIS Mock AIME 2024-2025 | math | 64.4 | 91.4 |
| SimpleBench | reasoning | 50.6 | 48.0 |
| SWE-Bench verified | coding | 70.7 | 73.5 |
| SWE-Bench Verified (Bash Only) | coding | 67.6 | 65.0 |
| VPCT | knowledge | 7.0 | 49.0 |
| WeirdML | coding | 43.4 | 60.7 |
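The head-to-head win count quoted above can be recomputed directly from the table. A small sketch that tallies it (the `SCORES` dict simply transcribes the rows; the tallying logic is illustrative):

```python
# Benchmark scores transcribed from the table: name -> (Claude Opus 4, GPT-5).
SCORES = {
    "Aider polyglot": (72.0, 88.0),
    "ARC-AGI": (35.7, 65.7),
    "ARC-AGI-2": (8.6, 9.9),
    "DeepResearch Bench": (49.0, 55.1),
    "Fiction.LiveBench": (61.1, 97.2),
    "FrontierMath-2025-02-28-Private": (4.5, 32.4),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 12.5),
    "GeoBench": (49.0, 81.0),
    "GPQA diamond": (68.3, 81.6),
    "GSO-Bench": (6.9, 6.9),
    "HLE": (6.2, 21.6),
    "MATH level 5": (85.0, 98.1),
    "OTIS Mock AIME 2024-2025": (64.4, 91.4),
    "SimpleBench": (50.6, 48.0),
    "SWE-Bench verified": (70.7, 73.5),
    "SWE-Bench Verified (Bash Only)": (67.6, 65.0),
    "VPCT": (7.0, 49.0),
    "WeirdML": (43.4, 60.7),
}

# Tally head-to-head wins per model; equal scores count as a tie.
wins = {"Claude Opus 4": 0, "GPT-5": 0, "tie": 0}
for claude, gpt5 in SCORES.values():
    if claude > gpt5:
        wins["Claude Opus 4"] += 1
    elif gpt5 > claude:
        wins["GPT-5"] += 1
    else:
        wins["tie"] += 1

print(wins)  # {'Claude Opus 4': 2, 'GPT-5': 15, 'tie': 1}
```

Claude Opus 4's two wins are SimpleBench and SWE-Bench Verified (Bash Only); GSO-Bench is the lone tie.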