
GPT-5.4 vs Claude Opus 4.6

Side by side. Every metric. Every benchmark.

GPT-5.4 (OpenAI) · Winner
Average score: 59.0 · Benchmarks won: 7/13

Claude Opus 4.6 (Anthropic)
Average score: 57.5 · Benchmarks won: 6/13
| Type | GPT-5.4 | Claude Opus 4.6 |
|---|---|---|
| Provider | OpenAI | Anthropic |
| Average score | 59.0 | 57.5 |
| Input price | $2.50 | $5.00 |
| Output price | $15.00 | $25.00 |
| Context window | 1.1M tokens (~525 books) | 1.0M tokens (~500 books) |
| Released | 2026-03-05 | 2026-02-04 |
| Open source | Proprietary | Proprietary |

13 benchmarks · GPT-5.4: 7, Claude Opus 4.6: 6

| Benchmark | Category | GPT-5.4 | Claude Opus 4.6 |
|---|---|---|---|
| APEX-Agents | agentic | 35.9 | 31.7 |
| ARC-AGI | reasoning | 93.7 | 94.0 |
| ARC-AGI-2 | reasoning | 74.0 | 69.2 |
| Chatbot Arena Elo — Overall | arena | 1465.8 | 1496.6 |
| Chess Puzzles | knowledge | 44.0 | 17.0 |
| FrontierMath-2025-02-28-Private | math | 47.6 | 40.7 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 27.1 | 22.9 |
| GPQA diamond | knowledge | 91.1 | 87.4 |
| OTIS Mock AIME 2024-2025 | math | 95.3 | 94.4 |
| PostTrainBench | knowledge | 20.2 | 23.2 |
| SimpleQA Verified | knowledge | 44.8 | 46.5 |
| SWE-Bench verified | coding | 76.9 | 78.7 |
| WeirdML | coding | 57.4 | 77.9 |
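The 7/13 vs 6/13 tallies in the summary follow directly from this table: a model "wins" a benchmark by posting the strictly higher score. A minimal sketch that recomputes the counts from the scores above (the page's "average score" is not reproduced here, since a plain mean over these rows would be dominated by the Arena Elo values, which the site presumably normalizes by an undisclosed method):

```python
# Scores transcribed from the benchmark table above: (GPT-5.4, Claude Opus 4.6).
scores = {
    "APEX-Agents": (35.9, 31.7),
    "ARC-AGI": (93.7, 94.0),
    "ARC-AGI-2": (74.0, 69.2),
    "Chatbot Arena Elo — Overall": (1465.8, 1496.6),
    "Chess Puzzles": (44.0, 17.0),
    "FrontierMath-2025-02-28-Private": (47.6, 40.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (27.1, 22.9),
    "GPQA diamond": (91.1, 87.4),
    "OTIS Mock AIME 2024-2025": (95.3, 94.4),
    "PostTrainBench": (20.2, 23.2),
    "SimpleQA Verified": (44.8, 46.5),
    "SWE-Bench verified": (76.9, 78.7),
    "WeirdML": (57.4, 77.9),
}

# A benchmark is "won" by whichever model posts the strictly higher score.
gpt_wins = sum(1 for g, c in scores.values() if g > c)
claude_wins = sum(1 for g, c in scores.values() if c > g)

print(f"GPT-5.4: {gpt_wins}/{len(scores)}")             # GPT-5.4: 7/13
print(f"Claude Opus 4.6: {claude_wins}/{len(scores)}")  # Claude Opus 4.6: 6/13
```

With no ties in the data, the two counts sum to all 13 benchmarks, matching the header line above.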