Gemini 3 Flash Preview vs Qwen3 235B A22B Thinking 2507
Side by side · benchmarks, pricing, and signals you can act on.
Winner summary
Gemini 3 Flash Preview wins all 8 shared benchmarks, with leads in arena, knowledge, and math.
Category leads
- arena · Gemini 3 Flash Preview
- knowledge · Gemini 3 Flash Preview
- math · Gemini 3 Flash Preview
- coding · Gemini 3 Flash Preview
Hype vs Reality
Attention vs performance
- Gemini 3 Flash Preview · #98 by perf · no signal
- Qwen3 235B A22B Thinking 2507 · #66 by perf · no signal
Best value
Qwen3 235B A22B Thinking 2507 · 2.4x better value than Gemini 3 Flash Preview

| Model | Value (pts/$) | Blended price |
|---|---|---|
| Gemini 3 Flash Preview | 28.1 | $1.75/M |
| Qwen3 235B A22B Thinking 2507 | 68.0 | $0.82/M |
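The 2.4x figure above follows directly from the two pts/$ numbers. A minimal sketch of that arithmetic, using the values from the table:

```python
# Derive the "2.4x better value" claim from the listed pts/$ figures.
gemini_pts_per_dollar = 28.1  # Gemini 3 Flash Preview
qwen_pts_per_dollar = 68.0    # Qwen3 235B A22B Thinking 2507

ratio = qwen_pts_per_dollar / gemini_pts_per_dollar
print(f"Qwen3 235B offers {ratio:.1f}x better value")  # → 2.4x
```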
Vendor risk
Who is behind the model
| Model | Vendor | Market cap | Risk tier |
|---|---|---|---|
| Gemini 3 Flash Preview | Google DeepMind | $4.00T | Tier 1 |
| Qwen3 235B A22B Thinking 2507 | Alibaba (Qwen) | $293.0B | Tier 1 |
Head to head
8 benchmarks · 2 models
Chatbot Arena Elo · Overall
Gemini 3 Flash Preview leads by +74.1 · 1473.9 vs 1399.8
Chess Puzzles · tests strategic and tactical reasoning by having models solve chess puzzle positions, evaluating lookahead and pattern recognition abilities.
Gemini 3 Flash Preview leads by +26.0 · 38.0 vs 12.0
FrontierMath-2025-02-28-Private
FrontierMath (Feb 2025) · original research-level math problems created by mathematicians, testing capabilities at the boundary of current AI mathematical reasoning.
Gemini 3 Flash Preview leads by +27.2 · 35.6 vs 8.5
FrontierMath-Tier-4-2025-07-01-Private
FrontierMath Tier 4 (Jul 2025) · the most challenging tier of frontier mathematics, containing problems that push the absolute limits of AI mathematical reasoning.
Gemini 3 Flash Preview leads by +4.1 · 4.2 vs 0.1
GPQA diamond
Graduate-Level Google-Proof QA (Diamond set) · expert-crafted questions in physics, biology, and chemistry that are difficult even for domain PhDs.
Gemini 3 Flash Preview leads by +4.2 · 77.6 vs 73.4
OTIS Mock AIME 2024-2025 · simulated American Invitational Mathematics Examination problems testing advanced problem-solving skills.
Gemini 3 Flash Preview leads by +6.1 · 92.8 vs 86.7
SimpleQA Verified · short factual questions with verified answers, measuring factual accuracy and the tendency to hallucinate or provide incorrect information.
Gemini 3 Flash Preview leads by +17.3 · 67.4 vs 50.1
WeirdML · tests models on unusual and adversarial machine learning tasks that require creative problem-solving beyond standard patterns.
Gemini 3 Flash Preview leads by +20.6 · 61.6 vs 41.0
Full benchmark table
| Benchmark | Gemini 3 Flash Preview | Qwen3 235B A22B Thinking 2507 |
|---|---|---|
| Chatbot Arena Elo · Overall | 1473.9 | 1399.8 |
| Chess Puzzles | 38.0 | 12.0 |
| FrontierMath-2025-02-28-Private | 35.6 | 8.5 |
| FrontierMath-Tier-4-2025-07-01-Private | 4.2 | 0.1 |
| GPQA diamond | 77.6 | 73.4 |
| OTIS Mock AIME 2024-2025 | 92.8 | 86.7 |
| SimpleQA Verified | 67.4 | 50.1 |
| WeirdML | 61.6 | 41.0 |
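The "wins 8 of 8" headline can be checked mechanically against the table. A small sketch with the scores copied from above (recomputed margins may differ from the displayed ones by ~0.1 because the source rounds to one decimal):

```python
# Verify the head-to-head win count from the full benchmark table.
# Each entry: (Gemini 3 Flash Preview score, Qwen3 235B A22B Thinking 2507 score)
scores = {
    "Chatbot Arena Elo · Overall": (1473.9, 1399.8),
    "Chess Puzzles": (38.0, 12.0),
    "FrontierMath-2025-02-28-Private": (35.6, 8.5),
    "FrontierMath-Tier-4-2025-07-01-Private": (4.2, 0.1),
    "GPQA diamond": (77.6, 73.4),
    "OTIS Mock AIME 2024-2025": (92.8, 86.7),
    "SimpleQA Verified": (67.4, 50.1),
    "WeirdML": (61.6, 41.0),
}

gemini_wins = sum(g > q for g, q in scores.values())
print(f"Gemini 3 Flash Preview wins {gemini_wins} of {len(scores)} benchmarks")
# → wins 8 of 8
```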
Pricing · per 1M tokens · projected $/mo at 10M tokens
| Model | Input | Output | Context | Projected $/mo |
|---|---|---|---|---|
| Gemini 3 Flash Preview | $0.50 | $3.00 | 1.0M tokens (~524 books) | $11.25 |
| Qwen3 235B A22B Thinking 2507 | $0.15 | $1.50 | 131K tokens (~66 books) | $4.86 |
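The blended $/M figures and monthly projections appear to be derived from the input/output prices. The blend ratios below are inferred from the numbers, not documented by the source: the headline $/M looks like a 50/50 input/output average, and the $/mo projection looks like 10M tokens split 3:1 input:output. A sketch under those assumptions:

```python
# Reproduce the blended price and projected monthly cost from the pricing
# table. Blend ratios are inferred (assumption): 50/50 for blended $/M,
# 3:1 input:output for the 10M-token monthly projection.
def blended_per_million(inp: float, out: float) -> float:
    """Simple 50/50 average of input and output price per 1M tokens."""
    return (inp + out) / 2

def monthly_cost(inp: float, out: float, total_m: float = 10.0) -> float:
    """Projected monthly cost for total_m million tokens, 75% input / 25% output."""
    return total_m * 0.75 * inp + total_m * 0.25 * out

print(blended_per_million(0.50, 3.00))  # → 1.75  ($/M, matches Gemini's $1.75/M)
print(monthly_cost(0.50, 3.00))         # → 11.25 (matches the $11.25/mo figure)
print(monthly_cost(0.15, 1.50))         # → 4.875 (close to the table's $4.86)
```

The small gap on the last line (4.875 vs 4.86) suggests the source computes from unrounded underlying prices.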