o3 vs Qwen3 235B A22B
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
o3 wins 7 of 7 shared benchmarks · leads in coding, knowledge, and math.
Category leads
coding · o3 | knowledge · o3 | math · o3 | reasoning · o3
Hype vs Reality
Attention vs performance
o3 · #69 by perf · no signal
Qwen3 235B A22B · #60 by perf · no signal
Best value
Qwen3 235B A22B
4.5x better value than o3
| Model | Value (pts/$) | Price per 1M tokens |
|---|---|---|
| o3 | 11.0 | $5.00 |
| Qwen3 235B A22B | 49.6 | $1.14 |
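A quick sanity check of the value math, under an assumption the page does not state: the $/M figure appears to be a blended price, i.e. the simple mean of each model's input and output rates (which reproduces both $5.00 and $1.14), and "4.5x better value" is the ratio of the two pts/$ scores.

```python
# Sanity-check the "Best value" figures shown above.
# Assumption (not stated on the page): blended $/M = mean of input
# and output prices, and "4.5x" = ratio of the two pts/$ scores.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended price per 1M tokens, assuming a 1:1 input/output mix."""
    return (input_per_m + output_per_m) / 2

o3_blended = blended_price(2.00, 8.00)    # $5.00/M, matches the page
qwen_blended = blended_price(0.46, 1.82)  # $1.14/M, matches the page

value_ratio = 49.6 / 11.0                 # Qwen3 pts/$ over o3 pts/$
print(o3_blended, qwen_blended, round(value_ratio, 1))  # 5.0 1.14 4.5
```

Both blended prices match the page exactly, which supports the 1:1 mix assumption for this metric.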
Vendor risk
Who is behind the model
OpenAI · $840.0B · Tier 1
Alibaba (Qwen) · $293.0B · Tier 1
Head to head
7 benchmarks · 2 models
Aider polyglot
o3 leads by +21.7
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
o3 81.3 · Qwen3 235B A22B 59.6
Fiction.LiveBench
o3 leads by +21.2
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
o3 88.9 · Qwen3 235B A22B 67.7
GPQA diamond
o3 leads by +14.8
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
o3 75.8 · Qwen3 235B A22B 60.9
Lech Mazur Writing
o3 leads by +0.9
Lech Mazur Writing · evaluates creative writing ability, assessing prose quality, narrative coherence, and stylistic sophistication.
o3 83.9 · Qwen3 235B A22B 83.0
MATH level 5
o3 leads by +28.9
MATH Level 5 · the hardest tier of the MATH benchmark, featuring competition-level problems from AMC, AIME, and Olympiad-style mathematics.
o3 97.8 · Qwen3 235B A22B 68.9
SimpleBench
o3 leads by +26.5
SimpleBench · tests fundamental reasoning capabilities with straightforward problems designed to expose gaps in basic logical and spatial thinking.
o3 43.7 · Qwen3 235B A22B 17.2
WeirdML
o3 leads by +15.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
o3 52.4 · Qwen3 235B A22B 37.3
Full benchmark table
| Benchmark | o3 | Qwen3 235B A22B |
|---|---|---|
| Aider polyglot | 81.3 | 59.6 |
| Fiction.LiveBench | 88.9 | 67.7 |
| GPQA diamond | 75.8 | 60.9 |
| Lech Mazur Writing | 83.9 | 83.0 |
| MATH level 5 | 97.8 | 68.9 |
| SimpleBench | 43.7 | 17.2 |
| WeirdML | 52.4 | 37.3 |
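The "7 of 7" headline can be checked directly from the table. A minimal tally over the scores above (the averages printed here are derived from the table, not figures the page reports):

```python
# Verify the 7-of-7 headline from the full benchmark table.
scores = {  # benchmark: (o3, Qwen3 235B A22B)
    "Aider polyglot": (81.3, 59.6),
    "Fiction.LiveBench": (88.9, 67.7),
    "GPQA diamond": (75.8, 60.9),
    "Lech Mazur Writing": (83.9, 83.0),
    "MATH level 5": (97.8, 68.9),
    "SimpleBench": (43.7, 17.2),
    "WeirdML": (52.4, 37.3),
}
wins = sum(o3 > qwen for o3, qwen in scores.values())
o3_avg = sum(o3 for o3, _ in scores.values()) / len(scores)
qwen_avg = sum(qwen for _, qwen in scores.values()) / len(scores)
print(wins)                                   # 7
print(round(o3_avg, 1), round(qwen_avg, 1))   # 74.8 56.4
```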
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| o3 | $2.00 | $8.00 | 200K tokens (~100 books) | $35.00 |
| Qwen3 235B A22B | $0.46 | $1.82 | 131K tokens (~66 books) | $7.96 |
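The projected monthly column can be reconstructed under an assumption the page does not state: a 3:1 input:output token mix (7.5M input + 2.5M output tokens per month). That reproduces the o3 figure exactly; Qwen3 comes out to about $8.00 versus the listed $7.96, so the site's actual mix is likely close to, but not exactly, 3:1.

```python
# Reconstruct the "Projected $/mo at 10M tokens" column.
# Assumption (not stated on the page): a 3:1 input:output token mix.

def projected_monthly(input_per_m: float, output_per_m: float,
                      total_m: float = 10.0,
                      input_share: float = 0.75) -> float:
    """Monthly cost in dollars for total_m million tokens."""
    input_m = total_m * input_share
    output_m = total_m - input_m
    return input_m * input_per_m + output_m * output_per_m

print(projected_monthly(2.00, 8.00))  # 35.0 — matches the o3 row
print(projected_monthly(0.46, 1.82))  # ~8.0 vs the listed $7.96
```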