Claude Opus 4 vs Qwen3 235B A22B Instruct 2507
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Claude Opus 4 wins 5 of 5 shared benchmarks, leading in coding, reasoning, and knowledge.
Category leads
coding · Claude Opus 4
reasoning · Claude Opus 4
knowledge · Claude Opus 4
Hype vs Reality
Attention vs performance
Claude Opus 4
#133 by performance · no signal
Qwen3 235B A22B Instruct 2507
#99 by performance · no signal
Best value
Qwen3 235B A22B Instruct 2507 offers 612.1x better value than Claude Opus 4.

| Model | Value | Blended price |
|---|---|---|
| Claude Opus 4 | 0.9 pts/$ | $45.00/M |
| Qwen3 235B A22B Instruct 2507 | 567.3 pts/$ | $0.09/M |
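As a rough illustration, the value figures divide benchmark points by blended price per 1M tokens. A minimal sketch of the ratio (the site's exact scoring basis is unstated, so the pts/$ inputs are simply the figures displayed on the page):

```python
def value_ratio(pts_per_dollar_a: float, pts_per_dollar_b: float) -> float:
    """How many times better value model A is than model B,
    given each model's benchmark points per blended dollar."""
    return pts_per_dollar_a / pts_per_dollar_b

# With the rounded on-page figures (567.3 vs 0.9 pts/$) this gives ~630x;
# the page reports 612.1x, implying an unrounded ~0.927 pts/$ for Claude.
print(round(value_ratio(567.3, 0.9), 1))
```

The gap between ~630x and the reported 612.1x is consistent with the page truncating Claude's value to one decimal place.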
Vendor risk
Who is behind the model
| Vendor | Valuation | Tier |
|---|---|---|
| Anthropic | $380.0B | Tier 1 |
| Alibaba (Qwen) | $293.0B | Tier 1 |
Head to head
5 benchmarks · 2 models
Aider Polyglot
Claude Opus 4 leads by +12.4
Aider Polyglot · measures how well AI models can edit code across multiple programming languages using the Aider coding assistant framework.
Claude Opus 4 · 72.0
Qwen3 235B A22B Instruct 2507 · 59.6
ARC-AGI
Claude Opus 4 leads by +24.7
ARC-AGI · the original Abstraction and Reasoning Corpus, testing whether AI can solve novel visual pattern recognition tasks without memorization.
Claude Opus 4 · 35.7
Qwen3 235B A22B Instruct 2507 · 11.0
ARC-AGI-2
Claude Opus 4 leads by +7.3
ARC-AGI-2 · the second iteration of the Abstraction and Reasoning Corpus, testing novel pattern recognition and abstract reasoning without prior training data.
Claude Opus 4 · 8.6
Qwen3 235B A22B Instruct 2507 · 1.3
Fiction.LiveBench
Claude Opus 4 leads by +8.2
Fiction.LiveBench · a continuously updated benchmark using recently published fiction to test reading comprehension and reasoning, preventing data contamination.
Claude Opus 4 · 61.1
Qwen3 235B A22B Instruct 2507 · 52.9
WeirdML
Claude Opus 4 leads by +4.7
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Claude Opus 4 · 43.4
Qwen3 235B A22B Instruct 2507 · 38.7
Full benchmark table
| Benchmark | Claude Opus 4 | Qwen3 235B A22B Instruct 2507 |
|---|---|---|
| Aider Polyglot | 72.0 | 59.6 |
| ARC-AGI | 35.7 | 11.0 |
| ARC-AGI-2 | 8.6 | 1.3 |
| Fiction.LiveBench | 61.1 | 52.9 |
| WeirdML | 43.4 | 38.7 |
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Claude Opus 4 | $15.00 | $75.00 | 200K tokens (~100 books) | $300.00 |
| Qwen3 235B A22B Instruct 2507 | $0.07 | $0.10 | 262K tokens (~131 books) | $0.78 |
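The projected $/mo column can be reproduced with a simple blend. A sketch assuming a 75% input / 25% output token split (the split is not stated on the page, but this assumption matches the listed figures):

```python
def projected_monthly_cost(input_per_m: float, output_per_m: float,
                           total_m: float = 10, input_share: float = 0.75) -> float:
    """Blend input/output prices (per 1M tokens) over a monthly
    token budget of `total_m` million tokens."""
    input_cost = input_per_m * total_m * input_share
    output_cost = output_per_m * total_m * (1 - input_share)
    return input_cost + output_cost

print(projected_monthly_cost(15.00, 75.00))  # Claude Opus 4 -> 300.0
print(projected_monthly_cost(0.07, 0.10))    # Qwen3 -> ~0.775, shown as $0.78
```

Note that the blended $/M in the Best value section ($45.00 and $0.09) instead corresponds to a simple average of input and output prices.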