
GPT-5.4 vs Claude Opus 4.6

Side by side. Every metric. Every benchmark.

Winner: GPT-5.4 (OpenAI)
  Average score: 59.0
  Benchmarks won: 7/13

Claude Opus 4.6 (Anthropic)
  Average score: 57.5
  Benchmarks won: 6/13
                  GPT-5.4                    Claude Opus 4.6
Provider          OpenAI                     Anthropic
Average score     59.0                       57.5
Input price       $2.50                      $5.00
Output price      $15.00                     $25.00
Context window    1.1M tokens (~525 books)   1.0M tokens (~500 books)
Released          2026-03-05                 2026-02-04
Open source       Proprietary                Proprietary

13 benchmarks · benchmarks won: GPT-5.4: 7, Claude Opus 4.6: 6

Benchmark                                Category    GPT-5.4    Claude Opus 4.6
APEX-Agents                              agentic        35.9       31.7
ARC-AGI                                  reasoning      93.7       94.0
ARC-AGI-2                                reasoning      74.0       69.2
Chatbot Arena Elo — Overall              arena        1465.8     1496.6
Chess Puzzles                            knowledge      44.0       17.0
FrontierMath-2025-02-28-Private          math           47.6       40.7
FrontierMath-Tier-4-2025-07-01-Private   math           27.1       22.9
GPQA diamond                             knowledge      91.1       87.4
OTIS Mock AIME 2024-2025                 math           95.3       94.4
PostTrainBench                           knowledge      20.2       23.2
SimpleQA Verified                        knowledge      44.8       46.5
SWE-Bench verified                       coding         76.9       78.7
WeirdML                                  coding         57.4       77.9
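The 7/13 and 6/13 tallies above can be reproduced directly from the benchmark table: for each of the 13 benchmarks, count which model posted the higher score. A minimal sketch in Python, assuming higher is better on every benchmark (including Arena Elo); the score pairs are copied verbatim from the table:

```python
# (GPT-5.4 score, Claude Opus 4.6 score) per benchmark, from the table above.
scores = {
    "APEX-Agents": (35.9, 31.7),
    "ARC-AGI": (93.7, 94.0),
    "ARC-AGI-2": (74.0, 69.2),
    "Chatbot Arena Elo — Overall": (1465.8, 1496.6),
    "Chess Puzzles": (44.0, 17.0),
    "FrontierMath-2025-02-28-Private": (47.6, 40.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (27.1, 22.9),
    "GPQA diamond": (91.1, 87.4),
    "OTIS Mock AIME 2024-2025": (95.3, 94.4),
    "PostTrainBench": (20.2, 23.2),
    "SimpleQA Verified": (44.8, 46.5),
    "SWE-Bench verified": (76.9, 78.7),
    "WeirdML": (57.4, 77.9),
}

# Head-to-head: a model "wins" a benchmark when its score is strictly higher.
gpt_wins = sum(gpt > claude for gpt, claude in scores.values())
claude_wins = sum(claude > gpt for gpt, claude in scores.values())

print(f"GPT-5.4: {gpt_wins}/{len(scores)}")        # 7/13
print(f"Claude Opus 4.6: {claude_wins}/{len(scores)}")  # 6/13
```

Note that the page's "average score" figures (59.0 and 57.5) are not the raw mean of these 13 numbers; the Elo rating is on a different scale, so some normalization is presumably applied before averaging.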