
Gemini 3 Flash Preview

by Google DeepMind · Released Dec 2025

Multimodal · 1M Context · Preview
59.1 avg score · Rank #71 · Better than 70% of all models
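The "better than 70%" claim is consistent with the rank: with 233 models on the leaderboard (the model count stated elsewhere on this page), rank #71 leaves 162 models below. A quick check:

```python
# Percentile implied by a leaderboard rank: the share of all models
# ranked strictly below rank #71 out of 233 total.
total_models = 233
rank = 71

models_below = total_models - rank              # 162
pct_better_than = round(models_below / total_models * 100)
print(pct_better_than)  # 70 -> "better than 70% of all models"
```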
  • Context: 1.0M tokens (~524 books)
  • Input $/1M: $0.50
  • Output $/1M: $3.00
  • Type: multimodal
  • License: Proprietary
  • Benchmarks: 24 tested
  • Data updated today
About

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool...

Tested on 24 benchmarks with a 59.1 average. Top scores: Chatbot Arena Elo — Overall (1473.9), Chatbot Arena Elo — Coding (1436.4), OTIS Mock AIME 2024-2025 (92.8%).

Looking for similar performance at lower cost?
Qwen3 235B A22B Thinking 2507 scores 59.4 (101% as good) at $0.15/1M input · 70% cheaper
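The "70% cheaper" figure follows directly from the two listed input prices; a minimal check (the prices are from this page, the rounding convention is an assumption):

```python
# Relative input-price saving of Qwen3 235B A22B Thinking 2507
# versus Gemini 3 Flash Preview, using the per-1M-token prices above.
gemini_input = 0.50   # $ per 1M input tokens
qwen_input = 0.15     # $ per 1M input tokens

saving_pct = round((gemini_input - qwen_input) / gemini_input * 100)
print(saving_pct)  # 70 -> "70% cheaper"
```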
Capabilities
  • coding: 52.8 (#58 globally)
  • reasoning: 36.1 (#75 globally)
  • math: 44.2 (#96 globally)
  • knowledge: 57.2 (#65 globally)
  • agentic: 40.7 (#8 globally)
  • speed: 77.1 (#16 globally)
Benchmark Scores
Tested on 24 benchmarks · Ranked across 7 categories
Score Distribution (all 233 models)
SWE-Bench Verified · 75.4

Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.

Terminal Bench · 64.3

Complex terminal-based engineering tasks. Models must use command-line tools, navigate filesystems, and debug systems through shell interaction.

WeirdML · 61.6

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

SimpleBench · 53.3

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

ARC-AGI-2 · 33.6

Harder sequel to ARC-AGI. More complex abstract reasoning patterns that test generalization ability beyond training data.

ARC-AGI · 21.5

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

OTIS Mock AIME 2024-2025 · 92.8

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

FrontierMath-2025-02-28-Private · 35.6

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

FrontierMath-Tier-4-2025-07-01-Private · 4.2

Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability, many unsolved by most mathematicians.
Legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
  • Documentation
  • BenchGecko API: gemini-3-flash-preview
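As a sketch of how the model id above might be used against the BenchGecko API: the base URL, endpoint path, and response fields below are assumptions for illustration, not documented API; only the id `gemini-3-flash-preview` and the numbers come from this page.

```python
import json

# Hypothetical BenchGecko API request URL built from the model id.
# BASE_URL and the response schema are assumed, not documented.
BASE_URL = "https://api.benchgecko.example/v1/models"
model_id = "gemini-3-flash-preview"
url = f"{BASE_URL}/{model_id}"

# Stand-in for a fetched response body, shaped like this page's summary.
sample_response = json.dumps({
    "id": "gemini-3-flash-preview",
    "avg_score": 59.1,
    "rank": 71,
    "context_tokens": 1_000_000,
    "price_per_1m": {"input": 0.50, "output": 3.00},
})

record = json.loads(sample_response)
print(url)
print(f"{record['id']}: avg {record['avg_score']}, rank #{record['rank']}")
```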
Specifications
  • Type: multimodal
  • Context: 1.0M tokens (~524 books)
  • Released: Dec 2025
  • License: Proprietary
  • Status: preview
  • Cost / Message: ~$0.004
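The ~$0.004/message figure is consistent with the listed token prices under some assumed per-message token counts; a sketch (the 2,000-input / 1,000-output split is an assumption, not from this page):

```python
# Approximate cost of one chat message at the listed prices:
# $0.50 per 1M input tokens, $3.00 per 1M output tokens.
input_price_per_token = 0.50 / 1_000_000
output_price_per_token = 3.00 / 1_000_000

# Assumed message size: 2,000 input tokens + 1,000 output tokens.
input_tokens, output_tokens = 2_000, 1_000
cost = input_tokens * input_price_per_token + output_tokens * output_price_per_token
print(f"${cost:.3f}")  # $0.004
```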
Available On
  • Google DeepMind: $0.50/1M input
Gemini 3 Flash Preview is a proprietary multimodal AI model by Google DeepMind, released in December 2025. It has an average benchmark score of 59.1. Context window: 1M tokens.