C-Eval
Distribution · where models cluster [chart omitted]
Correlated benchmarks · Pearson r, original research [table omitted]
Full rankings · 2 models tested, sorted by score [table omitted]
Frequently asked
Pulled from the C-Eval dataset · updated daily
What does C-Eval measure?
C-Eval is a knowledge benchmark in the BenchGecko catalog. 2 AI models have been tested on it. Scores range from 38.8 to 68.7 out of 100.
Which model leads on C-Eval?
GPT-4 from OpenAI leads C-Eval with a score of 68.7. The median score across 2 tested models is 53.8.
Is C-Eval saturated?
No · the top score is 68.7 out of 100 (69%). There is still meaningful room for improvement on C-Eval.
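The summary figures above follow from simple arithmetic on the two published scores. A minimal sketch, assuming only the numbers stated on this page (the second model's name is not given here, so it is labeled generically):

```python
# Recompute the page's summary stats from the two published C-Eval scores.
# Score values (68.7, 38.8) come from this page; everything else is illustrative.
from statistics import median

scores = [68.7, 38.8]  # GPT-4 and the other tested model

top = max(scores)                 # best score: 68.7
med = round(median(scores), 1)    # (68.7 + 38.8) / 2 = 53.75, shown as 53.8
saturation_pct = round(top)       # scores are out of 100, so 68.7 ≈ 69%

print(med, saturation_pct)
```

With only two models, the median is just the mean of the pair, which is why it sits exactly halfway between the top and bottom scores.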
What makes C-Eval distinctive?
C-Eval is a knowledge benchmark with limited overlap with the rest of the catalog · it measures capabilities that are not well covered by the other benchmarks we track.
How often is C-Eval data refreshed?
BenchGecko pulls updates daily. New model scores on C-Eval appear as soon as they are published by Epoch AI or the model provider.