
Claude Opus 4.6 vs GPT-5.4

Side by side. Every metric. Every benchmark.

Claude Opus 4.6 (Anthropic)
Average score: 57.5
Benchmarks won: 6/13

GPT-5.4 (OpenAI) [Winner]
Average score: 59.0
Benchmarks won: 7/13
Attribute          Claude Opus 4.6             GPT-5.4
Provider           Anthropic                   OpenAI
Average score      57.5                        59.0
Input price        $5.00                       $2.50
Output price       $25.00                      $15.00
Context window     1.0M tokens (~500 books)    1.1M tokens (~525 books)
Release date       2026-02-04                  2026-03-05
Open source        Proprietary                 Proprietary

13 benchmarks · benchmarks won: Claude Opus 4.6: 6, GPT-5.4: 7

Benchmark                                 Category    Claude Opus 4.6    GPT-5.4
APEX-Agents                               agentic     31.7               35.9
ARC-AGI                                   reasoning   94.0               93.7
ARC-AGI-2                                 reasoning   69.2               74.0
Chatbot Arena Elo — Overall               arena       1496.6             1465.8
Chess Puzzles                             knowledge   17.0               44.0
FrontierMath-2025-02-28-Private           math        40.7               47.6
FrontierMath-Tier-4-2025-07-01-Private    math        22.9               27.1
GPQA diamond                              knowledge   87.4               91.1
OTIS Mock AIME 2024-2025                  math        94.4               95.3
PostTrainBench                            knowledge   23.2               20.2
SimpleQA Verified                         knowledge   46.5               44.8
SWE-Bench verified                        coding      78.7               76.9
WeirdML                                   coding      77.9               57.4