MMLU
MMLU is a knowledge benchmark tracked by BenchGecko across every frontier and open-weight model.
Basic
MMLU is a knowledge benchmark tracked by BenchGecko across every frontier and open-weight model.
Deep
MMLU is a knowledge benchmark tracked by BenchGecko across every eligible frontier and open-weight model. Scores are percentages with a maximum of 100. Category · knowledge. Source · public leaderboard. See the live leaderboard for the current top 10.
Expert
MMLU is a knowledge benchmark tracked by BenchGecko across every frontier and open-weight model. Technical details: scores are reported as percentages out of a maximum of 100. Primary category is knowledge. Source ingestion runs through the public leaderboard, with the update cadence documented on the methodology page. Correlations with downstream capability have been studied in the public literature and in the benchmark authors' release notes.
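The ranking described above (percentage scores out of 100, sorted into a leaderboard) can be sketched in a few lines. This is a hypothetical illustration: the model names and scores below are made up, and this is not BenchGecko's actual data or API.

```python
# Hypothetical MMLU scores (percent, max 100) — illustrative only,
# not real BenchGecko data.
mmlu_scores = {
    "model-a": 90.1,
    "model-b": 88.7,
    "model-c": 91.4,
}

# Sort descending by score to reproduce a leaderboard view.
leaderboard = sorted(mmlu_scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (model, pct) in enumerate(leaderboard, start=1):
    print(f"{rank}. {model}: {pct:.1f}%")
```

The same sort applies to any tracked benchmark whose metric is a single percentage with a fixed maximum.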
Depending on why you're here
- MMLU measures knowledge capability, scored as a percentage
- Source: public leaderboard · tracked on BenchGecko's /benchmark page
- Used to compare frontier models on knowledge-specific tasks
- Pick models with high MMLU if your workload is knowledge-heavy
- Benchmark scores correlate with real-world quality only when the task types match
- Check the live leaderboard before locking in a model · rankings shift weekly
- MMLU is one of the benchmarks labs cite in launch announcements
- Saturation at the top of the leaderboard signals the benchmark is aging
- Watch for new benchmarks when all frontier models cluster within 2 points
- MMLU is a test that scores how good an AI is at knowledge
- Higher score = better model on that specific kind of task
- Not every score matters for every use · match the test to your goal
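The saturation rule of thumb in the list above (all frontier models clustering within 2 points) can be checked mechanically. A minimal sketch, assuming scores are percentages on a 0–100 scale; the function name and the 2-point window default are illustrative choices, not a BenchGecko definition:

```python
def is_saturating(scores, window=2.0):
    """Return True when the leading scores cluster within `window` points.

    `scores` is a list of benchmark percentages (0-100). A tight cluster
    at the top suggests the benchmark no longer differentiates models.
    """
    if len(scores) < 2:
        return False
    top = sorted(scores, reverse=True)[:5]  # look at the leaders only
    return max(top) - min(top) <= window

# Illustrative numbers, not real leaderboard data:
print(is_saturating([90.4, 91.2, 89.8, 90.9]))  # tight cluster → True
print(is_saturating([95.0, 80.0, 70.0]))        # wide spread  → False
```

Running a check like this on each refresh is one way to notice a benchmark aging before the leaderboard stops being informative.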
On MMLU, all frontier models now score 90%+ · its value as a differentiator is fading.