
Claude Opus 4.6 vs GPT-5.4

Side by side. Every metric. Every benchmark.

Claude Opus 4.6 (Anthropic): average score 57.5, 6/13 benchmarks won
GPT-5.4 (OpenAI): average score 59.0, 7/13 benchmarks won · Winner
| | Claude Opus 4.6 | GPT-5.4 |
| --- | --- | --- |
| Provider | Anthropic | OpenAI |
| Average score | 57.5 | 59.0 |
| Input price | $5.00 | $2.50 |
| Output price | $25.00 | $15.00 |
| Context window | 1.0M tokens (~500 books) | 1.1M tokens (~525 books) |
| Released | 2026-02-04 | 2026-03-05 |
| Open source | Proprietary | Proprietary |
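The price rows can be turned into a per-request cost estimate. A minimal sketch, assuming the listed prices are quoted per million tokens (the page does not state the unit) and using a hypothetical `request_cost` helper:

```python
# Assumption: input/output prices are in dollars per 1M tokens.
def request_cost(tokens_in: int, tokens_out: int,
                 price_in: float, price_out: float) -> float:
    """Dollar cost of one request at the given per-1M-token prices."""
    return tokens_in / 1e6 * price_in + tokens_out / 1e6 * price_out

# Example: 100k input tokens + 10k output tokens.
claude = request_cost(100_000, 10_000, 5.00, 25.00)  # $0.75
gpt = request_cost(100_000, 10_000, 2.50, 15.00)     # $0.40
```

Under that assumption, GPT-5.4 is roughly half the cost of Claude Opus 4.6 at this input/output mix.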

13 benchmarks · head-to-head wins: Claude Opus 4.6: 6, GPT-5.4: 7

| Benchmark | Category | Claude Opus 4.6 | GPT-5.4 |
| --- | --- | --- | --- |
| APEX-Agents | agentic | 31.7 | 35.9 |
| ARC-AGI | reasoning | 94.0 | 93.7 |
| ARC-AGI-2 | reasoning | 69.2 | 74.0 |
| Chatbot Arena Elo — Overall | arena | 1496.6 | 1465.8 |
| Chess Puzzles | knowledge | 17.0 | 44.0 |
| FrontierMath-2025-02-28-Private | math | 40.7 | 47.6 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 22.9 | 27.1 |
| GPQA diamond | knowledge | 87.4 | 91.1 |
| OTIS Mock AIME 2024-2025 | math | 94.4 | 95.3 |
| PostTrainBench | knowledge | 23.2 | 20.2 |
| SimpleQA Verified | knowledge | 46.5 | 44.8 |
| SWE-Bench verified | coding | 78.7 | 76.9 |
| WeirdML | coding | 77.9 | 57.4 |
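The headline win counts can be re-derived from the table. A minimal tally, assuming higher is better on every listed benchmark (which holds for these scores as presented, including Arena Elo):

```python
# Scores copied from the table: (Claude Opus 4.6, GPT-5.4).
SCORES = {
    "APEX-Agents": (31.7, 35.9),
    "ARC-AGI": (94.0, 93.7),
    "ARC-AGI-2": (69.2, 74.0),
    "Chatbot Arena Elo — Overall": (1496.6, 1465.8),
    "Chess Puzzles": (17.0, 44.0),
    "FrontierMath-2025-02-28-Private": (40.7, 47.6),
    "FrontierMath-Tier-4-2025-07-01-Private": (22.9, 27.1),
    "GPQA diamond": (87.4, 91.1),
    "OTIS Mock AIME 2024-2025": (94.4, 95.3),
    "PostTrainBench": (23.2, 20.2),
    "SimpleQA Verified": (46.5, 44.8),
    "SWE-Bench verified": (78.7, 76.9),
    "WeirdML": (77.9, 57.4),
}

# Count head-to-head wins for each model.
claude_wins = sum(c > g for c, g in SCORES.values())
gpt_wins = sum(g > c for c, g in SCORES.values())
print(claude_wins, gpt_wins)  # 6 7
```

This reproduces the 6/13 vs 7/13 split shown in the header.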