Better than 78% of all models
Context: 128K tokens (~64 books)
Input $/1M: $10.00
Output $/1M: $30.00
Type: text
License: Proprietary
Benchmarks: 2 tested
About
The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to April 2023.
Tested on 2 benchmarks with a 65.4% average. Top scores: Chatbot Arena Elo — Overall (1312 Elo), Aider — Code Editing (65.4%).
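The About note mentions that vision requests can use JSON mode. A minimal sketch of what such a request body might look like against the OpenAI Chat Completions endpoint; the prompt and image URL are placeholder assumptions, and only the request is constructed here (nothing is sent):

```python
# Sketch of a JSON-mode vision request body for the OpenAI Chat Completions
# endpoint. The prompt text and image URL are placeholders; the model id is
# the one listed on this page.
payload = {
    "model": "gpt-4-1106-preview",
    # JSON mode: constrains the model to emit valid JSON
    "response_format": {"type": "json_object"},
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image as JSON with keys 'objects' and 'scene'."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.png"}},  # placeholder
            ],
        }
    ],
}
```

Sending it would be an ordinary authenticated POST to `/v1/chat/completions`; with JSON mode enabled the returned message content parses directly with `json.loads`.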
Looking for similar performance at lower cost?
Qwen2.5 72B Instruct scores 65.8 (101% as good) at $0.36/1M input · 96% cheaper
Capabilities
coding
65.4
#25 globally
Benchmark Scores
Tested on 2 benchmarks · Ranked across 2 categories
Score Distribution (all 233 models), 0–100 scale
coding
Aider — Code Editing
65.4 — Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.
arena
Chatbot Arena Elo — Overall
1312 — Chatbot Arena overall Elo rating. Crowdsourced human preference ranking from blind head-to-head comparisons across all topics.
Excellent (85+) · Good (70–85) · Average (50–70) · Below (<50)
Links
Research
Documentation
Community
BenchGecko API: gpt-4-1106-preview
Specifications
- Type: text
- Context: 128K tokens (~64 books)
- Released: Nov 2023
- License: Proprietary
- Status: preview
- Cost / Message: ~$0.050
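The ~$0.050 per-message figure is consistent with the listed rates ($10.00/1M input, $30.00/1M output) under an assumed token mix. A quick sketch; the 2,000-input / 1,000-output token counts are assumptions, not from this page:

```python
# Back-of-envelope per-message cost from the listed rates:
# $10.00 per 1M input tokens, $30.00 per 1M output tokens.
INPUT_RATE = 10.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 30.00 / 1_000_000  # dollars per output token

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at this model's listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Assumed mix: a ~2,000-token prompt with a ~1,000-token reply.
print(f"${message_cost(2_000, 1_000):.3f}")  # $0.050
```

Shorter prompts or replies scale the figure linearly, so actual per-message cost varies widely with usage.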
Frequently Asked Questions
GPT-4 Turbo (older v1106) is a proprietary text AI model by OpenAI, released in November 2023. It has an average benchmark score of 65.4. Context window: 128K tokens.