
SWE Atlas · Codebase QnA

Updated 2026-04-07
Models tested: 3
Top score: 33.3 · Claude Opus 4.6 (Fast)
Median: 32.6 · min 31.2
Top-5 spread: σ 0.9 · settled

Where models cluster

[Score distribution chart · buckets 0–100 in steps of 10 · all 3 models fall in the 30–40 bucket · median 32.6]

Pearson r · original research

Not enough overlapping models yet.

3 models tested · sorted by score

Pulled from the SWE Atlas · Codebase QnA dataset · updated daily

What does SWE Atlas · Codebase QnA measure?

SWE Atlas · Codebase QnA is a knowledge benchmark in the BenchGecko catalog. 3 AI models have been tested on it. Scores range from 31.2 to 33.3 out of 100.

Which model leads on SWE Atlas · Codebase QnA?

Claude Opus 4.6 (Fast) from Anthropic leads SWE Atlas · Codebase QnA with a score of 33.3. The median score across 3 tested models is 32.6.

Is SWE Atlas · Codebase QnA saturated?

No · the top score is 33.3 out of 100. There is still substantial room for improvement on SWE Atlas · Codebase QnA.
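With exactly 3 models, the median is the middle score, so the full score set (31.2, 32.6, 33.3) is recoverable from the figures above. A minimal sketch reproducing the headline numbers, assuming the σ 0.9 spread is a population standard deviation (how BenchGecko actually computes it is not stated):

```python
import statistics

# The three scores for SWE Atlas · Codebase QnA, recovered from the
# published top (33.3), median (32.6), and min (31.2) values.
scores = [31.2, 32.6, 33.3]

top = max(scores)                    # 33.3
median = statistics.median(scores)   # 32.6
# Population std dev; an assumption about how the σ 0.9 spread is derived.
spread = statistics.pstdev(scores)

print(top, median, round(spread, 1))
```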

What makes SWE Atlas · Codebase QnA distinctive?

SWE Atlas · Codebase QnA is a knowledge benchmark with limited overlap with the rest of the catalog · it measures capabilities that are not well covered by the other benchmarks we track.

How often is SWE Atlas · Codebase QnA data refreshed?

BenchGecko pulls updates daily. New model scores on SWE Atlas · Codebase QnA appear as soon as they are published by Epoch AI or the model provider.
