
GPT-5 Nano

by OpenAI · Released Aug 2025

Multimodal · 44.2 avg score · Rank #127
Better than 45% of all models
Context: 400K tokens (~200 books)
Input $/1M: $0.05
Output $/1M: $0.40
Type: multimodal
License: Proprietary
Benchmarks: 26 tested
Data updated today
About

GPT-5-Nano is the smallest and fastest variant in the GPT-5 system, optimized for developer tools, rapid interactions, and ultra-low latency environments. While limited in reasoning depth compared to its larger...

Tested on 26 benchmarks with 45.3% average. Top scores: MATH level 5 (95.2%), HELM — IFEval (93.2%), OTIS Mock AIME 2024-2025 (81.1%).

Looking for similar performance at lower cost?
gpt-oss-20b scores 44.4 (100% as good) at $0.03/1M input · 40% cheaper
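The listed per-1M-token prices make per-request cost a one-line calculation. A minimal sketch, assuming the prices shown on this page; the token counts are illustrative only:

```python
# Estimate request cost from the per-1M-token prices listed on this page.
GPT5_NANO_INPUT = 0.05    # $ per 1M input tokens
GPT5_NANO_OUTPUT = 0.40   # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Dollar cost of a single request at the given per-1M-token prices."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Illustrative request: 10,000 input tokens, 1,000 output tokens.
cost = request_cost(10_000, 1_000, GPT5_NANO_INPUT, GPT5_NANO_OUTPUT)
print(f"${cost:.4f}")  # $0.0009

# The "40% cheaper" figure above compares input prices only:
saving = (0.05 - 0.03) / 0.05
print(f"{saving:.0%}")  # 40%
```

Note that output tokens dominate the bill at these prices: each output token costs 8x an input token.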
Capabilities
coding: 36.0 (#106 globally)
reasoning: 36.7 (#72 globally)
math: 51.0 (#73 globally)
knowledge: 45.1 (#126 globally)
language: 64.3 (#80 globally)
Benchmark Scores
Tested on 26 benchmarks · Ranked across 5 categories
Score distribution across all 233 models: chart omitted (0-100 scale, GPT-5 Nano's position marked).
LiveBench — Coding: 67.4
Regularly refreshed coding problems that avoid data contamination. New problems added monthly to prevent memorization.

WeirdML: 38.1
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

SWE-Bench Verified (Bash Only): 34.8
SWE-bench Verified solved using only bash commands, no specialized frameworks. Tests raw terminal-based problem solving.

HELM — WildBench: 80.6
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

LiveBench — Data Analysis: 44.3
Fresh data analysis tasks testing ability to interpret tables, charts, and statistical data.

LiveBench — Reasoning: 35.5
Regularly refreshed reasoning problems testing logical deduction, spatial reasoning, and analytical thinking.

MATH level 5: 95.2
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

OTIS Mock AIME 2024-2025: 81.1
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

LiveBench — Mathematics: 64.7
Regularly updated math problems that test numerical reasoning, algebra, calculus, and combinatorics.
Score legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
  • Documentation
  • Community
  • BenchGecko API (model slug: gpt-5-nano)
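If the BenchGecko API exposes per-model data under the slug shown above, fetching it might start from a URL like the sketch below. The base URL and endpoint path are assumptions for illustration; only the slug `gpt-5-nano` comes from this page.

```python
from urllib.parse import urljoin

# Hypothetical BenchGecko API base URL (check the real API docs for the
# actual scheme; this domain is a placeholder).
BASE_URL = "https://api.benchgecko.example/"

def model_endpoint(slug: str) -> str:
    """Build the (assumed) URL for one model's benchmark data."""
    return urljoin(BASE_URL, f"v1/models/{slug}")

print(model_endpoint("gpt-5-nano"))
# https://api.benchgecko.example/v1/models/gpt-5-nano
```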
Specifications
  • Type: multimodal
  • Context: 400K tokens (~200 books)
  • Released: Aug 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.001
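The ~$0.001 cost-per-message figure is consistent with the listed token prices for a moderately sized exchange. A sketch, assuming a hypothetical message of 4,000 input and 2,000 output tokens (the page does not publish the message size it assumes):

```python
# Back out a message size consistent with ~$0.001/message at the listed
# prices ($0.05/1M input, $0.40/1M output).
in_tokens, out_tokens = 4_000, 2_000   # assumed sizes, not published
cost = (in_tokens * 0.05 + out_tokens * 0.40) / 1_000_000
print(f"${cost:.3f}")  # $0.001
```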
Available On
  • OpenAI: $0.05/1M input
GPT-5 Nano is a proprietary multimodal AI model by OpenAI, released in August 2025. It has an average benchmark score of 44.2. Context window: 400K tokens.