WeirdML
WeirdML is a coding benchmark tracked by BenchGecko across every frontier and open-weight model.
Basic
WeirdML is a coding benchmark tracked by BenchGecko across every frontier and open-weight model.
Deep
WeirdML is a coding benchmark tracked by BenchGecko across every eligible frontier and open-weight model. Scores are percentages, with a maximum of 100. Category · coding. Source · public leaderboard. See the live leaderboard for the current top 10.
Expert
WeirdML is a coding benchmark tracked by BenchGecko across every frontier and open-weight model. Technical details: scores are percentages with a maximum of 100, and the primary category is coding. Source ingestion runs through the public leaderboard, with the update cadence documented on the methodology page. Correlations with downstream capability have been studied in the public literature and in the benchmark authors' release notes.
Depending on why you're here
- WeirdML measures coding capability, scored as a percentage
- Source: public leaderboard · tracked on BenchGecko's /benchmark page
- Used to compare frontier models on coding-specific tasks
- Pick models with high WeirdML scores if your workload is coding-heavy
- Benchmark scores correlate with real-world quality only for matched task types
- Check the live leaderboard before locking in a model · rankings shift weekly
- WeirdML is one of the citations labs use in launch announcements
- Saturation at the top of the leaderboard signals the benchmark is aging
- Watch for new benchmarks when all frontier models cluster within 2 points
- WeirdML is a test that scores how good an AI is at coding
- Higher score = better model on that specific kind of task
- Not every score matters for every use · match the test to your goal
WeirdML correlates with real-world coding results only for matched workloads · watch the delta between base and agent-scaffold scores.
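The base-vs-scaffold comparison reduces to a percentage-point delta. A minimal sketch, assuming both runs are scored on the same 0-100 scale; the function name and the numbers are illustrative, not from BenchGecko.

```python
def scaffold_delta(base: float, scaffold: float) -> float:
    """Percentage-point gain (or loss) from running the same model
    inside an agent scaffold versus a plain base run."""
    return round(scaffold - base, 1)

# Hypothetical scores for one model: base run vs. agent-scaffold run
print(scaffold_delta(72.5, 81.0))  # → 8.5
```

A large positive delta suggests the model benefits heavily from tooling and iteration; a near-zero delta suggests the base model already captures most of the benchmark's headroom.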