Better than 82% of all models
Context
8K tokens (~6,000 words)
Input $/1M
$30.00
Output $/1M
$60.00
Type
text
License
Proprietary
Benchmarks
1 tested
About
OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and advanced reasoning...
Tested on 1 benchmark with a 68.7% average. Top score: C-Eval (68.7%).
Looking for similar performance at lower cost?
Gemma 4 31B scores 68.2 (99% as good) at $0.13/1M input · ~99.6% cheaper
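The savings claim above can be checked with a quick calculation (a sketch; prices and scores are taken from this page):

```python
# Verify the cost/quality comparison between GPT-4 and Gemma 4 31B
# using the per-1M-token input prices and C-Eval scores listed on this page.
GPT4_INPUT_PRICE = 30.00    # $ per 1M input tokens
GEMMA_INPUT_PRICE = 0.13    # $ per 1M input tokens
GPT4_SCORE = 68.7           # C-Eval
GEMMA_SCORE = 68.2          # C-Eval

pct_cheaper = (GPT4_INPUT_PRICE - GEMMA_INPUT_PRICE) / GPT4_INPUT_PRICE * 100
score_ratio = GEMMA_SCORE / GPT4_SCORE * 100

print(f"{pct_cheaper:.1f}% cheaper")  # ~99.6%, not literally 100%
print(f"{score_ratio:.0f}% as good")  # ~99%
```

The savings round to 99.6%, not 100% — a price of $0.13/1M is very cheap relative to $30.00/1M, but not free.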
Capabilities
knowledge
68.7
#21 globally
Benchmark Scores
Tested on 1 benchmark · Ranked across 1 category
Score Distribution (all 233 models)
knowledge
C-Eval
68.7 · Chinese evaluation benchmark. Tests knowledge across 52 disciplines in the Chinese education system.
Excellent (85+) Good (70-85) Average (50-70) Below (<50)
Links
Research
Documentation
Community
BenchGecko API
gpt-4
Specifications
- Type: text
- Context: 8K tokens (~6,000 words)
- Released: May 2023
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.120
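The ~$0.120 per-message figure is consistent with the listed per-1M-token rates under an assumed message size (a sketch; the 2,000-input / 1,000-output token split is an assumption, not stated on this page):

```python
# Estimate per-message cost from the listed rates:
# $30.00 per 1M input tokens, $60.00 per 1M output tokens.
INPUT_RATE = 30.00 / 1_000_000    # $ per input token
OUTPUT_RATE = 60.00 / 1_000_000   # $ per output token

# Assumed message size (hypothetical; other splits give other totals).
input_tokens = 2_000
output_tokens = 1_000

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"~${cost:.3f} per message")  # ~$0.120
```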
Available On
Categories
Learn More
Frequently Asked Questions
GPT-4 is a proprietary text AI model by OpenAI, released in May 2023. It has an average benchmark score of 68.7. Context window: 8K tokens.