
Claude Opus 4 vs GPT-5

Side by side. Every metric. Every benchmark.

Claude Opus 4 (Anthropic): average score 41.7 · benchmarks won: 2/18
GPT-5 (OpenAI), winner: average score 54.4 · benchmarks won: 15/18
                  Claude Opus 4             GPT-5
Provider          Anthropic                 OpenAI
Average score     41.7                      54.4
Input price       $15.00                    $1.25
Output price      $75.00                    $10.00
Context window    200K tokens (~100 books)  400K tokens (~200 books)
Released          2025-05-22                2025-08-07
Open source       Proprietary               Proprietary

18 benchmarks · wins: Claude Opus 4: 2, GPT-5: 15, ties: 1

Benchmark                                Category   Claude Opus 4  GPT-5
Aider polyglot                           coding     72.0           88.0
ARC-AGI                                  reasoning  35.7           65.7
ARC-AGI-2                                reasoning  8.6            9.9
DeepResearch Bench                       knowledge  49.0           55.1
Fiction.LiveBench                        knowledge  61.1           97.2
FrontierMath-2025-02-28-Private          math       4.5            32.4
FrontierMath-Tier-4-2025-07-01-Private   math       4.2            12.5
GeoBench                                 knowledge  49.0           81.0
GPQA diamond                             knowledge  68.3           81.6
GSO-Bench                                coding     6.9            6.9
HLE                                      knowledge  6.2            21.6
MATH level 5                             math       85.0           98.1
OTIS Mock AIME 2024-2025                 math       64.4           91.4
SimpleBench                              reasoning  50.6           48.0
SWE-Bench verified                       coding     70.7           73.5
SWE-Bench Verified (Bash Only)           coding     67.6           65.0
VPCT                                     knowledge  7.0            49.0
WeirdML                                  coding     43.4           60.7
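The win tallies above can be recomputed directly from the per-benchmark scores. The sketch below does exactly that; note that the page does not specify how its "average score" is aggregated (it is not necessarily a simple mean of these 18 numbers), so only the head-to-head win counts are checked here.

```python
# Scores copied verbatim from the benchmark table: (Claude Opus 4, GPT-5).
scores = {
    "Aider polyglot": (72.0, 88.0),
    "ARC-AGI": (35.7, 65.7),
    "ARC-AGI-2": (8.6, 9.9),
    "DeepResearch Bench": (49.0, 55.1),
    "Fiction.LiveBench": (61.1, 97.2),
    "FrontierMath-2025-02-28-Private": (4.5, 32.4),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 12.5),
    "GeoBench": (49.0, 81.0),
    "GPQA diamond": (68.3, 81.6),
    "GSO-Bench": (6.9, 6.9),
    "HLE": (6.2, 21.6),
    "MATH level 5": (85.0, 98.1),
    "OTIS Mock AIME 2024-2025": (64.4, 91.4),
    "SimpleBench": (50.6, 48.0),
    "SWE-Bench verified": (70.7, 73.5),
    "SWE-Bench Verified (Bash Only)": (67.6, 65.0),
    "VPCT": (7.0, 49.0),
    "WeirdML": (43.4, 60.7),
}

# Tally which model scores higher on each benchmark.
opus_wins = sum(1 for opus, gpt5 in scores.values() if opus > gpt5)
gpt5_wins = sum(1 for opus, gpt5 in scores.values() if gpt5 > opus)
ties = sum(1 for opus, gpt5 in scores.values() if opus == gpt5)

print(f"Claude Opus 4 wins: {opus_wins}")  # 2 (SimpleBench, SWE-Bench Verified Bash Only)
print(f"GPT-5 wins: {gpt5_wins}")          # 15
print(f"Ties: {ties}")                     # 1 (GSO-Bench, both 6.9)
```

Running this reproduces the 2-vs-15 split shown at the top of the page; the remaining benchmark, GSO-Bench, is an exact tie.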