CadEval
CadEval is a coding benchmark tracked by BenchGecko across every frontier and open-weight model.
Basic
CadEval is a coding benchmark tracked by BenchGecko across every frontier and open-weight model.
Deep
CadEval is a coding benchmark tracked by BenchGecko across every eligible frontier and open-weight model. Scores are percentages with a maximum of 100. Category · coding. Source · public leaderboard. See the live leaderboard for the current top 10.
Expert
CadEval is a coding benchmark tracked by BenchGecko across every frontier and open-weight model. Technical details: scores are percentages measured against a maximum of 100, and the primary category is coding. Source ingestion runs through the public leaderboard, with the update cadence documented on the methodology page. Correlations with downstream capability have been studied in the public literature and in the benchmark authors' release notes.
Depending on why you're here
- CadEval measures coding capability with % scoring
- Source: public leaderboard · tracked on BenchGecko's /benchmark page
- Used to compare frontier models on coding-specific tasks
- Pick models with high CadEval if your workload matches coding
- Benchmark scores correlate with real-world quality only for matched task types
- Check the live leaderboard before locking in a model · rankings shift weekly
- CadEval is one of the citations labs use in launch announcements
- Saturation at the top of the leaderboard signals the benchmark is aging
- Watch for new benchmarks when all frontier models cluster within 2 points
- CadEval is a test that scores how good an AI is at coding
- Higher score = better model on that specific kind of task
- Not every score matters for every use · match the test to your goal
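The saturation rule of thumb above (all frontier models within 2 points of each other) is easy to check mechanically. A minimal sketch, assuming hypothetical model names and scores — these are illustrative values, not real leaderboard data:

```python
# Hypothetical scores; names and numbers are illustrative only.
frontier_scores = {"model-a": 91.2, "model-b": 90.4, "model-c": 89.7}

def is_saturated(scores: dict[str, float], threshold: float = 2.0) -> bool:
    """Flag a benchmark as aging when all tracked models cluster
    within `threshold` points of each other."""
    values = list(scores.values())
    return max(values) - min(values) <= threshold

print(is_saturated(frontier_scores))  # → True (spread ≈ 1.5 points)
```

A spread check like this is deliberately crude; it ignores headroom to the 100-point ceiling, which also matters when judging whether a benchmark is aging.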
CadEval correlates with real-world coding value · watch the delta between a model's base score and its agent-scaffold score.
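Computing that base-vs-scaffold delta is a one-liner per model. A minimal sketch, assuming hypothetical model names and scores (the structure of the results dict is an assumption, not a documented BenchGecko format):

```python
# Hypothetical base vs. agent-scaffold scores; numbers are illustrative only.
results = {
    "model-a": {"base": 72.0, "scaffold": 85.5},
    "model-b": {"base": 78.0, "scaffold": 81.0},
}

def scaffold_deltas(results: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Return (model, scaffold - base) pairs, largest uplift first."""
    deltas = {name: r["scaffold"] - r["base"] for name, r in results.items()}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

print(scaffold_deltas(results))  # → [('model-a', 13.5), ('model-b', 3.0)]
```

A large delta suggests the model benefits heavily from tooling and orchestration, which matters if your deployment uses an agent scaffold rather than raw completions.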