
Claude Opus 4 vs GPT-5

Side by side. Every metric. Every benchmark.

| | Claude Opus 4 (Anthropic) | GPT-5 (OpenAI) — Winner |
|---|---|---|
| Avg. score | 41.7 | 54.4 |
| Benchmarks won | 2/18 | 15/18 |
| | Claude Opus 4 | GPT-5 |
|---|---|---|
| Provider | Anthropic | OpenAI |
| Avg. score | 41.7 | 54.4 |
| Input price | $15.00 | $1.25 |
| Output price | $75.00 | $10.00 |
| Context window | 200K tokens (~100 books) | 400K tokens (~200 books) |
| Released | 2025-05-22 | 2025-08-07 |
| License | Proprietary | Proprietary |

18 benchmarks · wins: Claude Opus 4: 2, GPT-5: 15, 1 tie

| Benchmark | Category | Claude Opus 4 | GPT-5 |
|---|---|---|---|
| Aider polyglot | coding | 72.0 | 88.0 |
| ARC-AGI | reasoning | 35.7 | 65.7 |
| ARC-AGI-2 | reasoning | 8.6 | 9.9 |
| DeepResearch Bench | knowledge | 49.0 | 55.1 |
| Fiction.LiveBench | knowledge | 61.1 | 97.2 |
| FrontierMath-2025-02-28-Private | math | 4.5 | 32.4 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 4.2 | 12.5 |
| GeoBench | knowledge | 49.0 | 81.0 |
| GPQA diamond | knowledge | 68.3 | 81.6 |
| GSO-Bench | coding | 6.9 | 6.9 |
| HLE | knowledge | 6.2 | 21.6 |
| MATH level 5 | math | 85.0 | 98.1 |
| OTIS Mock AIME 2024-2025 | math | 64.4 | 91.4 |
| SimpleBench | reasoning | 50.6 | 48.0 |
| SWE-Bench verified | coding | 70.7 | 73.5 |
| SWE-Bench Verified (Bash Only) | coding | 67.6 | 65.0 |
| VPCT | knowledge | 7.0 | 49.0 |
| WeirdML | coding | 43.4 | 60.7 |
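The headline win counts can be reproduced directly from the benchmark table. A minimal Python sketch (scores transcribed verbatim from the rows above; the one benchmark neither model wins, GSO-Bench, is a tie at 6.9):

```python
# Per-benchmark scores as (Claude Opus 4, GPT-5), copied from the table above.
scores = {
    "Aider polyglot": (72.0, 88.0),
    "ARC-AGI": (35.7, 65.7),
    "ARC-AGI-2": (8.6, 9.9),
    "DeepResearch Bench": (49.0, 55.1),
    "Fiction.LiveBench": (61.1, 97.2),
    "FrontierMath-2025-02-28-Private": (4.5, 32.4),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 12.5),
    "GeoBench": (49.0, 81.0),
    "GPQA diamond": (68.3, 81.6),
    "GSO-Bench": (6.9, 6.9),
    "HLE": (6.2, 21.6),
    "MATH level 5": (85.0, 98.1),
    "OTIS Mock AIME 2024-2025": (64.4, 91.4),
    "SimpleBench": (50.6, 48.0),
    "SWE-Bench verified": (70.7, 73.5),
    "SWE-Bench Verified (Bash Only)": (67.6, 65.0),
    "VPCT": (7.0, 49.0),
    "WeirdML": (43.4, 60.7),
}

# Tally wins; an exact tie counts for neither model.
opus_wins = sum(1 for opus, gpt5 in scores.values() if opus > gpt5)
gpt5_wins = sum(1 for opus, gpt5 in scores.values() if gpt5 > opus)
ties = sum(1 for opus, gpt5 in scores.values() if opus == gpt5)

print(opus_wins, gpt5_wins, ties)  # → 2 15 1
```

Note that the site's aggregate "Avg. score" (41.7 vs 54.4) does not equal the unweighted mean of these 18 values, so it presumably applies a weighting that the page does not document.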