
Claude Opus 4.6 vs GPT-5.4

Side by side. Every metric. Every benchmark.

Claude Opus 4.6 (Anthropic)
Avg. score: 57.5
Benchmarks won: 6/13

GPT-5.4 (OpenAI) · Winner
Avg. score: 59.0
Benchmarks won: 7/13
Metric            Claude Opus 4.6             GPT-5.4
Provider          Anthropic                   OpenAI
Avg. score        57.5                        59.0
Input price       $5.00                       $2.50
Output price      $25.00                      $15.00
Context window    1.0M tokens (~500 books)    1.1M tokens (~525 books)
Released          2026-02-04                  2026-03-05
Open source       Proprietary                 Proprietary
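The pricing rows can be turned into a rough per-request cost estimate. A minimal sketch, assuming the listed prices are USD per 1M tokens (the page does not state the unit; that convention is an assumption here):

```python
def request_cost(input_price_per_m: float, output_price_per_m: float,
                 input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request, given per-1M-token prices."""
    return (input_tokens / 1_000_000 * input_price_per_m
            + output_tokens / 1_000_000 * output_price_per_m)

# Example: a request with 10k input tokens and 1k output tokens.
opus_cost = request_cost(5.00, 25.00, 10_000, 1_000)   # 0.05 + 0.025 = 0.075
gpt_cost = request_cost(2.50, 15.00, 10_000, 1_000)    # 0.025 + 0.015 = 0.040
```

Under that assumption, GPT-5.4 would be roughly half the price per request at this input/output mix, though the ratio shifts with longer outputs.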

Of 13 benchmarks, Claude Opus 4.6 leads on 6 and GPT-5.4 on 7.

Benchmark                                Category    Claude Opus 4.6   GPT-5.4
APEX-Agents                              agentic     31.7              35.9
ARC-AGI                                  reasoning   94.0              93.7
ARC-AGI-2                                reasoning   69.2              74.0
Chatbot Arena Elo — Overall              arena       1496.6            1465.8
Chess Puzzles                            knowledge   17.0              44.0
FrontierMath-2025-02-28-Private          math        40.7              47.6
FrontierMath-Tier-4-2025-07-01-Private   math        22.9              27.1
GPQA Diamond                             knowledge   87.4              91.1
OTIS Mock AIME 2024-2025                 math        94.4              95.3
PostTrainBench                           knowledge   23.2              20.2
SimpleQA Verified                        knowledge   46.5              44.8
SWE-Bench Verified                       coding      78.7              76.9
WeirdML                                  coding      77.9              57.4