
GPT-5.4 vs Claude Opus 4.5

Side by side. Every metric. Every benchmark.

GPT-5.4 (OpenAI), winner
- Average score: 59.0
- Benchmarks won: 11/13

Claude Opus 4.5 (Anthropic)
- Average score: 45.4
- Benchmarks won: 2/13
| Spec | GPT-5.4 | Claude Opus 4.5 |
| --- | --- | --- |
| Provider | OpenAI | Anthropic |
| Average score | 59.0 | 45.4 |
| Input price | $2.50 | $5.00 |
| Output price | $15.00 | $25.00 |
| Context window | 1.1M tokens (~525 books) | 200K tokens (~100 books) |
| Released | 2026-03-05 | 2025-11-24 |
| Open source | Proprietary | Proprietary |
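To put the pricing rows in concrete terms, here is a minimal cost sketch. One assumption not stated in the table: the listed prices are taken to be per 1M tokens, a common API pricing convention; the example request sizes are hypothetical.

```python
# Prices from the comparison table above.
# Assumption (not stated there): prices are USD per 1M tokens.
GPT54 = {"input": 2.50, "output": 15.00}
OPUS45 = {"input": 5.00, "output": 25.00}

def request_cost(prices, input_tokens, output_tokens):
    """Dollar cost of one request under per-1M-token pricing."""
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

# Hypothetical request: 10K input tokens, 1K output tokens.
print(request_cost(GPT54, 10_000, 1_000))   # 0.04
print(request_cost(OPUS45, 10_000, 1_000))  # 0.075
```

Under this assumption, Claude Opus 4.5 costs roughly twice as much per request at equal token counts, matching its 2x input and ~1.7x output price.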

13 benchmarks · benchmarks won: GPT-5.4 11, Claude Opus 4.5 2

| Benchmark | Category | GPT-5.4 | Claude Opus 4.5 |
| --- | --- | --- | --- |
| APEX-Agents | agentic | 35.9 | 18.4 |
| ARC-AGI | reasoning | 93.7 | 80.0 |
| ARC-AGI-2 | reasoning | 74.0 | 37.6 |
| Chatbot Arena Elo — Overall | arena | 1465.8 | 1467.7 |
| Chess Puzzles | knowledge | 44.0 | 12.0 |
| FrontierMath-2025-02-28-Private | math | 47.6 | 20.7 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 27.1 | 4.2 |
| GPQA diamond | knowledge | 91.1 | 81.4 |
| OTIS Mock AIME 2024-2025 | math | 95.3 | 86.1 |
| PostTrainBench | knowledge | 20.2 | 17.3 |
| SimpleQA Verified | knowledge | 44.8 | 41.8 |
| SWE-Bench verified | coding | 76.9 | 76.7 |
| WeirdML | coding | 57.4 | 63.7 |
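The 11-to-2 tally can be verified directly from the per-benchmark scores. This is an illustrative sketch, not the site's scoring code; it simply counts which model posts the higher number on each row (higher is better for every benchmark listed, including Arena Elo).

```python
# Scores copied from the benchmark table: (name, GPT-5.4, Claude Opus 4.5).
benchmarks = [
    ("APEX-Agents", 35.9, 18.4),
    ("ARC-AGI", 93.7, 80.0),
    ("ARC-AGI-2", 74.0, 37.6),
    ("Chatbot Arena Elo — Overall", 1465.8, 1467.7),
    ("Chess Puzzles", 44.0, 12.0),
    ("FrontierMath-2025-02-28-Private", 47.6, 20.7),
    ("FrontierMath-Tier-4-2025-07-01-Private", 27.1, 4.2),
    ("GPQA diamond", 91.1, 81.4),
    ("OTIS Mock AIME 2024-2025", 95.3, 86.1),
    ("PostTrainBench", 20.2, 17.3),
    ("SimpleQA Verified", 44.8, 41.8),
    ("SWE-Bench verified", 76.9, 76.7),
    ("WeirdML", 57.4, 63.7),
]

# Count the benchmarks each model wins outright.
gpt_wins = sum(1 for _, g, c in benchmarks if g > c)
claude_wins = sum(1 for _, g, c in benchmarks if c > g)
print(gpt_wins, claude_wins)  # 11 2
```

Claude Opus 4.5's two wins are Chatbot Arena Elo (1467.7 vs 1465.8) and WeirdML (63.7 vs 57.4); GPT-5.4 takes the other eleven, including a near tie on SWE-Bench verified (76.9 vs 76.7).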