
Claude Mythos Preview vs GPT-5.5

Side by side · benchmarks, pricing, and signals you can act on.

Winner summary

Claude Mythos Preview wins 2 of 3 shared benchmarks. Leads in knowledge · agentic.

Category leads
knowledge · Claude Mythos Preview
agentic · Claude Mythos Preview
coding · GPT-5.5
Hype vs Reality
Claude Mythos Preview
#4 by perf · #2 by attention
DESERVED
GPT-5.5
#2 by perf · no signal
QUIET
Best value
Claude Mythos Preview
no price
GPT-5.5
4.9 pts/$
$17.50/M
Vendor risk
Anthropic
$380.0B · Tier 1
Medium risk
OpenAI
$840.0B · Tier 1
Medium risk
Head to head
Claude Mythos Preview vs GPT-5.5
GPQA diamond
Claude Mythos Preview leads by +0.9
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Claude Mythos Preview
94.5
GPT-5.5
93.6
OSWorld
Claude Mythos Preview leads by +0.9
OSWorld · tests AI agents on real-world computer tasks across operating systems, including web browsing, file management, and application use.
Claude Mythos Preview
79.6
GPT-5.5
78.7
Terminal Bench
GPT-5.5 leads by +0.7
Terminal-Bench 2.0 · evaluates AI agents on real terminal-based coding tasks · writing scripts, debugging, running tests, and managing projects entirely through command-line interaction. Tests both code quality and terminal fluency. Claude Opus 4.7 scores 69.4%, demonstrating significant agentic terminal competence.
Claude Mythos Preview
82.0
GPT-5.5
82.7
Full benchmark table
Benchmark · Claude Mythos Preview · GPT-5.5
GPQA diamond · 94.5 · 93.6
OSWorld · 79.6 · 78.7
Terminal Bench · 82.0 · 82.7
Pricing · per 1M tokens · projected $/mo at 10M tokens
Model · Input · Output · Context · Projected $/mo
Claude Mythos Preview · no price · no price · 1.0M tokens (~500 books) · n/a
GPT-5.5 · $5.00 · $30.00 · 400K tokens (~200 books) · $112.50
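The "Best value" and pricing figures above can be reproduced with simple arithmetic. A minimal sketch follows; the 50/50 input/output blend behind "$17.50/M", the mean-of-three-benchmarks score behind "4.9 pts/$", and the 75% input / 25% output usage split behind "$112.50/mo" are assumptions that happen to match the displayed numbers, not documented methodology for this page.

```python
# Sketch reproducing GPT-5.5's value figures (blend/split are assumptions).
INPUT_PER_M = 5.00    # GPT-5.5 input price per 1M tokens (from the table)
OUTPUT_PER_M = 30.00  # GPT-5.5 output price per 1M tokens

# "$17.50/M" matches a simple 50/50 input/output blend (assumption).
blended = (INPUT_PER_M + OUTPUT_PER_M) / 2  # 17.50

# "4.9 pts/$" matches the mean of the three shared benchmark scores
# divided by the blended price (assumption).
scores = [93.6, 78.7, 82.7]  # GPQA diamond, OSWorld, Terminal Bench
pts_per_dollar = (sum(scores) / len(scores)) / blended  # ~4.86, shown as 4.9

# "Projected $/mo at 10M tokens": a 7.5M input / 2.5M output split
# reproduces the listed figure (assumption).
monthly = 7.5 * INPUT_PER_M + 2.5 * OUTPUT_PER_M  # 112.50

print(f"${blended:.2f}/M blended · {pts_per_dollar:.1f} pts/$ · ${monthly:.2f}/mo")
```

No projection is shown for Claude Mythos Preview because, with no published price, both the blended rate and the monthly estimate are undefined.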