
EnigmaEval

Updated 2026-02-19
Models tested: 2
Top score: 19.8 · Gemini 3.1 Pro Preview
Median: 16.4 · min 13.1
Top-5 spread: σ 3.3 · competitive
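
The summary stats above follow directly from the two scores on the board. A minimal sketch using Python's standard library (the site's exact rounding rule is an assumption; 16.45 and 3.35 appear to be shown truncated):

```python
import statistics

# The two EnigmaEval scores currently on the leaderboard (see table below).
scores = [19.8, 13.1]  # Gemini 3.1 Pro Preview, o3

print(max(scores))                # 19.8  -> "Top score"
print(min(scores))                # 13.1  -> "min"
print(statistics.median(scores))  # 16.45 -> displayed as "Median 16.4"
print(statistics.pstdev(scores))  # 3.35  -> displayed as "σ 3.3" (population std dev)
```

With only two models tested, the "top-5 spread" is simply the spread over both scores.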

Where models cluster

[Score distribution chart] Buckets of 10 across 0–100; both tested models fall in the 10–20 bucket · median 16.4

Pearson r · original research

Not enough overlapping models yet.
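
Once enough models overlap, Pearson r would be computed over the set of models scored on both EnigmaEval and another catalog benchmark. A hypothetical sketch (illustrative numbers only, not real BenchGecko data) using statistics.correlation from Python 3.10+, which returns Pearson's r by default:

```python
import statistics

# Hypothetical: four models with scores on both EnigmaEval and some other
# benchmark. These values are invented purely for illustration.
enigmaeval  = [19.8, 13.1, 25.0, 9.5]
other_bench = [62.0, 48.0, 71.0, 40.0]

r = statistics.correlation(enigmaeval, other_bench)
print(f"Pearson r = {r:.2f}")  # r near 1.0 means the benchmarks rank models similarly
```

With only two models on the board, r would not be meaningful (two points always fit a line exactly), hence the note above.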

2 models tested · sorted by score

#  Model                                      Score
1  Gemini 3.1 Pro Preview · Google DeepMind   19.8
2  o3 · OpenAI                                13.1

Pulled from the EnigmaEval dataset · updated daily

What does EnigmaEval measure?

EnigmaEval is a knowledge benchmark in the BenchGecko catalog. Two AI models have been tested on it so far, with scores ranging from 13.1 to 19.8 out of 100.

Which model leads on EnigmaEval?

Gemini 3.1 Pro Preview from Google DeepMind leads EnigmaEval with a score of 19.8. The median score across the 2 tested models is 16.4.

Is EnigmaEval saturated?

No · the top score is 19.8 out of 100 (about 20%). There is still meaningful room for improvement on EnigmaEval.

What makes EnigmaEval distinctive?

EnigmaEval is a knowledge benchmark with limited overlap with the rest of the catalog · it measures capabilities that are not well covered by the other benchmarks we track.

How often is EnigmaEval data refreshed?

BenchGecko pulls updates daily. New model scores on EnigmaEval appear as soon as they are published by Epoch AI or the model provider.
