
GPT-5.4 vs Claude Opus 4.5

Side by side. Every metric. Every benchmark.

GPT-5.4 (OpenAI): Winner
59.0 average score · 11/13 benchmarks won

Claude Opus 4.5 (Anthropic)
45.4 average score · 2/13 benchmarks won
|                 | GPT-5.4                  | Claude Opus 4.5          |
|-----------------|--------------------------|--------------------------|
| Provider        | OpenAI                   | Anthropic                |
| Average score   | 59.0                     | 45.4                     |
| Input price     | $2.50                    | $5.00                    |
| Output price    | $15.00                   | $25.00                   |
| Context window  | 1.1M tokens (~525 books) | 200K tokens (~100 books) |
| Released        | 2026-03-05               | 2025-11-24               |
| Open source     | Proprietary              | Proprietary              |

13 benchmarks · wins: GPT-5.4 11, Claude Opus 4.5 2

| Benchmark                              | Category  | GPT-5.4 | Claude Opus 4.5 |
|----------------------------------------|-----------|---------|-----------------|
| APEX-Agents                            | agentic   | 35.9    | 18.4            |
| ARC-AGI                                | reasoning | 93.7    | 80.0            |
| ARC-AGI-2                              | reasoning | 74.0    | 37.6            |
| Chatbot Arena Elo — Overall            | arena     | 1465.8  | 1467.7          |
| Chess Puzzles                          | knowledge | 44.0    | 12.0            |
| FrontierMath-2025-02-28-Private        | math      | 47.6    | 20.7            |
| FrontierMath-Tier-4-2025-07-01-Private | math      | 27.1    | 4.2             |
| GPQA diamond                           | knowledge | 91.1    | 81.4            |
| OTIS Mock AIME 2024-2025               | math      | 95.3    | 86.1            |
| PostTrainBench                         | knowledge | 20.2    | 17.3            |
| SimpleQA Verified                      | knowledge | 44.8    | 41.8            |
| SWE-Bench verified                     | coding    | 76.9    | 76.7            |
| WeirdML                                | coding    | 57.4    | 63.7            |
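The headline win counts (11/13 vs 2/13) follow directly from the per-benchmark scores in the table above. A minimal sketch of how that tally could be computed, using only the values shown here (the page's exact aggregation method is not documented, so this is an assumption):

```python
# Per-benchmark scores copied from the comparison table:
# benchmark name -> (GPT-5.4 score, Claude Opus 4.5 score)
scores = {
    "APEX-Agents": (35.9, 18.4),
    "ARC-AGI": (93.7, 80.0),
    "ARC-AGI-2": (74.0, 37.6),
    "Chatbot Arena Elo": (1465.8, 1467.7),
    "Chess Puzzles": (44.0, 12.0),
    "FrontierMath-2025-02-28-Private": (47.6, 20.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (27.1, 4.2),
    "GPQA diamond": (91.1, 81.4),
    "OTIS Mock AIME 2024-2025": (95.3, 86.1),
    "PostTrainBench": (20.2, 17.3),
    "SimpleQA Verified": (44.8, 41.8),
    "SWE-Bench verified": (76.9, 76.7),
    "WeirdML": (57.4, 63.7),
}

# A model "wins" a benchmark when its score is strictly higher.
gpt_wins = sum(1 for gpt, claude in scores.values() if gpt > claude)
claude_wins = sum(1 for gpt, claude in scores.values() if claude > gpt)
print(gpt_wins, claude_wins)  # 11 2
```

Note that Claude Opus 4.5's two wins (Chatbot Arena Elo and WeirdML) and the near-tie on SWE-Bench verified (76.9 vs 76.7) are visible in the table; the "average score" figures appear to exclude the Arena Elo values, which are on a different scale.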