
Claude Opus 4.6 vs GPT-5.4

Side by side. Every metric. Every benchmark.

Claude Opus 4.6 (Anthropic): average score 57.5, 6/13 benchmarks won
GPT-5.4 (OpenAI): average score 59.0, 7/13 benchmarks won (winner)
| Type | Claude Opus 4.6 | GPT-5.4 |
|---|---|---|
| Provider | Anthropic | OpenAI |
| Average score | 57.5 | 59.0 |
| Input price | $5.00 | $2.50 |
| Output price | $25.00 | $15.00 |
| Context window | 1.0M tokens (~500 books) | 1.1M tokens (~525 books) |
| Released | 2026-02-04 | 2026-03-05 |
| Open source | Proprietary | Proprietary |
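The pricing gap can be made concrete with a small cost comparison. A minimal Python sketch, assuming the listed prices are per million tokens (the page does not state the unit), for a hypothetical request with 10k input and 1k output tokens:

```python
# Prices transcribed from the table above; the per-million-token unit
# is an assumption, not stated on the page.
PRICES = {
    "Claude Opus 4.6": {"input": 5.00, "output": 25.00},
    "GPT-5.4": {"input": 2.50, "output": 15.00},
}

def request_cost(model, input_tokens, output_tokens):
    """Dollar cost of one request under the per-million-token assumption."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: 10k input tokens, 1k output tokens.
for model in PRICES:
    print(model, round(request_cost(model, 10_000, 1_000), 4))
# → Claude Opus 4.6 0.075
# → GPT-5.4 0.04
```

Under that assumption, GPT-5.4 is roughly half the price per request at this input/output ratio.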

13 benchmarks · Claude Opus 4.6: 6, GPT-5.4: 7

| Benchmark | Category | Claude Opus 4.6 | GPT-5.4 |
|---|---|---|---|
| APEX-Agents | agentic | 31.7 | 35.9 |
| ARC-AGI | reasoning | 94.0 | 93.7 |
| ARC-AGI-2 | reasoning | 69.2 | 74.0 |
| Chatbot Arena Elo — Overall | arena | 1496.6 | 1465.8 |
| Chess Puzzles | knowledge | 17.0 | 44.0 |
| FrontierMath-2025-02-28-Private | math | 40.7 | 47.6 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 22.9 | 27.1 |
| GPQA diamond | knowledge | 87.4 | 91.1 |
| OTIS Mock AIME 2024-2025 | math | 94.4 | 95.3 |
| PostTrainBench | knowledge | 23.2 | 20.2 |
| SimpleQA Verified | knowledge | 46.5 | 44.8 |
| SWE-Bench verified | coding | 78.7 | 76.9 |
| WeirdML | coding | 77.9 | 57.4 |
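The 6/13 vs 7/13 tally is just a per-row comparison of the table: for each benchmark, the model with the higher score takes the win. A minimal Python sketch, with the scores transcribed from the table above:

```python
# (benchmark, Claude Opus 4.6 score, GPT-5.4 score), from the table above.
SCORES = [
    ("APEX-Agents", 31.7, 35.9),
    ("ARC-AGI", 94.0, 93.7),
    ("ARC-AGI-2", 69.2, 74.0),
    ("Chatbot Arena Elo — Overall", 1496.6, 1465.8),
    ("Chess Puzzles", 17.0, 44.0),
    ("FrontierMath-2025-02-28-Private", 40.7, 47.6),
    ("FrontierMath-Tier-4-2025-07-01-Private", 22.9, 27.1),
    ("GPQA diamond", 87.4, 91.1),
    ("OTIS Mock AIME 2024-2025", 94.4, 95.3),
    ("PostTrainBench", 23.2, 20.2),
    ("SimpleQA Verified", 46.5, 44.8),
    ("SWE-Bench verified", 78.7, 76.9),
    ("WeirdML", 77.9, 57.4),
]

# Count which model scores higher on each benchmark.
claude_wins = sum(1 for _, c, g in SCORES if c > g)
gpt_wins = sum(1 for _, c, g in SCORES if g > c)
print(f"Claude Opus 4.6: {claude_wins}/13, GPT-5.4: {gpt_wins}/13")
# → Claude Opus 4.6: 6/13, GPT-5.4: 7/13
```

Note that the headline "average score" (57.5 vs 59.0) is not a plain mean of these columns; the Arena Elo row is on a different scale, so the site presumably normalizes before averaging, and its exact method is not specified on the page.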