
Claude Opus 4.5 vs GPT-5.2

Side by side. Every metric. Every benchmark.

Claude Opus 4.5 (Anthropic): average score 45.4, benchmarks won 6/20
GPT-5.2 (OpenAI, winner): average score 54.0, benchmarks won 14/20
| Type | Claude Opus 4.5 | GPT-5.2 |
|---|---|---|
| Provider | Anthropic | OpenAI |
| Average score | 45.4 | 54.0 |
| Input price | $5.00 | $1.75 |
| Output price | $25.00 | $14.00 |
| Context window | 200K tokens (~100 books) | 400K tokens (~200 books) |
| Released | 2025-11-24 | 2025-12-10 |
| Open source | Proprietary | Proprietary |
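The input and output prices above make per-request costs easy to compare; a minimal sketch, assuming the listed prices are USD per million tokens (a common convention on comparison sites, but not stated here):

```python
# Listed prices as (input, output), assumed to be USD per 1M tokens.
PRICES = {
    "Claude Opus 4.5": (5.00, 25.00),
    "GPT-5.2": (1.75, 14.00),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request, under the per-1M-token assumption."""
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# Example: a request with 10K input tokens and 2K output tokens.
for model in PRICES:
    print(f"{model}: ${cost(model, 10_000, 2_000):.4f}")
# → Claude Opus 4.5: $0.1000
# → GPT-5.2: $0.0455
```

At these rates, GPT-5.2 is roughly 2x cheaper for this input/output mix.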

20 benchmarks · benchmarks won: Claude Opus 4.5: 6, GPT-5.2: 14

| Benchmark | Category | Claude Opus 4.5 | GPT-5.2 |
|---|---|---|---|
| APEX-Agents | agentic | 18.4 | 34.3 |
| ARC-AGI | reasoning | 80.0 | 86.2 |
| ARC-AGI-2 | reasoning | 37.6 | 52.9 |
| Chatbot Arena Elo — Coding | arena | 1465.2 | 1403.1 |
| Chatbot Arena Elo — Overall | arena | 1467.7 | 1439.5 |
| Chess Puzzles | knowledge | 12.0 | 49.0 |
| FrontierMath-2025-02-28-Private | math | 20.7 | 40.7 |
| FrontierMath-Tier-4-2025-07-01-Private | math | 4.2 | 18.8 |
| GPQA diamond | knowledge | 81.4 | 88.5 |
| GSO-Bench | coding | 26.5 | 27.4 |
| HLE | knowledge | 21.4 | 24.2 |
| OTIS Mock AIME 2024-2025 | math | 86.1 | 96.1 |
| PostTrainBench | knowledge | 17.3 | 21.4 |
| SimpleBench | reasoning | 54.4 | 35.0 |
| SimpleQA Verified | knowledge | 41.8 | 38.9 |
| SWE-Bench verified | coding | 76.7 | 73.8 |
| SWE-Bench Verified (Bash Only) | coding | 74.4 | 71.8 |
| Terminal Bench | coding | 63.1 | 64.9 |
| VPCT | knowledge | 10.0 | 76.0 |
| WeirdML | coding | 63.7 | 72.2 |
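The 6-vs-14 win tally reported in the summary can be reproduced directly from the per-benchmark scores; a minimal sketch in Python, with scores copied from the table above (higher is better on every benchmark listed, and the headline "average score" is presumably a normalized aggregate, since a raw mean would be dominated by the four-digit Elo entries):

```python
# Per-benchmark scores as (Claude Opus 4.5, GPT-5.2), from the table above.
scores = {
    "APEX-Agents": (18.4, 34.3),
    "ARC-AGI": (80.0, 86.2),
    "ARC-AGI-2": (37.6, 52.9),
    "Chatbot Arena Elo — Coding": (1465.2, 1403.1),
    "Chatbot Arena Elo — Overall": (1467.7, 1439.5),
    "Chess Puzzles": (12.0, 49.0),
    "FrontierMath-2025-02-28-Private": (20.7, 40.7),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 18.8),
    "GPQA diamond": (81.4, 88.5),
    "GSO-Bench": (26.5, 27.4),
    "HLE": (21.4, 24.2),
    "OTIS Mock AIME 2024-2025": (86.1, 96.1),
    "PostTrainBench": (17.3, 21.4),
    "SimpleBench": (54.4, 35.0),
    "SimpleQA Verified": (41.8, 38.9),
    "SWE-Bench verified": (76.7, 73.8),
    "SWE-Bench Verified (Bash Only)": (74.4, 71.8),
    "Terminal Bench": (63.1, 64.9),
    "VPCT": (10.0, 76.0),
    "WeirdML": (63.7, 72.2),
}

# Count head-to-head wins (no ties occur in this data).
claude_wins = sum(c > g for c, g in scores.values())
gpt_wins = sum(g > c for c, g in scores.values())
print(f"Claude Opus 4.5: {claude_wins}/{len(scores)} · GPT-5.2: {gpt_wins}/{len(scores)}")
# → Claude Opus 4.5: 6/20 · GPT-5.2: 14/20
```

Claude Opus 4.5's six wins are the two Arena Elo rows, SimpleBench, SimpleQA Verified, and the two SWE-Bench variants.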