EnigmaEval
Score distribution · where models cluster
Correlated benchmarks · Pearson r (original research)
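The correlation panel reports Pearson r values between benchmarks. As a rough sketch of what that statistic computes, assuming two benchmarks' scores paired by model (the function and data here are illustrative, not BenchGecko's code):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: perfectly linearly related scores give r = 1.0.
benchmark_a = [10.0, 20.0, 30.0]
benchmark_b = [15.0, 25.0, 35.0]
r = pearson_r(benchmark_a, benchmark_b)
```

A value near 1 means two benchmarks rank models almost identically; values near 0 indicate the benchmark measures something the other does not.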
Full rankings
2 models tested · sorted by score
| # | Model | Score |
|---|---|---|
| 1 | Gemini 3.1 Pro Preview | 19.8 |
| 2 | | 13.1 |
Frequently asked
Pulled from the EnigmaEval dataset · updated daily
What does EnigmaEval measure?
EnigmaEval is a knowledge benchmark in the BenchGecko catalog. Two AI models have been tested on it, with scores ranging from 13.1 to 19.8 out of 100.
Which model leads on EnigmaEval?
Gemini 3.1 Pro Preview from Google DeepMind leads EnigmaEval with a score of 19.8. The median score across 2 tested models is 16.4.
Is EnigmaEval saturated?
No · the top score is 19.8 out of 100 (20%). There is still meaningful room for improvement on EnigmaEval.
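The summary figures quoted in these answers are simple arithmetic over the published scores. A minimal sketch, assuming only the two scores listed in the rankings table above:

```python
from statistics import median

# The two published EnigmaEval scores (out of 100).
scores = [19.8, 13.1]

top = max(scores)            # leading score
mid = median(scores)         # with two models, the midpoint of the pair
headroom = 100 - top         # room left before the benchmark saturates
```

With two models the median is just the average of the pair, which is why the leader's score and the median sit close together here.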
What makes EnigmaEval distinctive?
EnigmaEval is a knowledge benchmark with limited overlap to the rest of the catalog · it measures capabilities that are not well-covered by other benchmarks we track.
How often is EnigmaEval data refreshed?
BenchGecko pulls updates daily. New model scores on EnigmaEval appear as soon as they are published by Epoch AI or the model provider.
More knowledge benchmarks
Same category · related evaluations