HLE
HLE is a knowledge benchmark tracked by BenchGecko across every frontier and open-weight model.
Basic
HLE scores models on knowledge tasks. Results are reported as a percentage, with a maximum of 100, and BenchGecko tracks them for every frontier and open-weight model.
Deep
HLE is a knowledge benchmark tracked by BenchGecko across every eligible frontier and open-weight model. The metric is %, with a maximum of 100. Category · knowledge. Source · public leaderboard. See the live leaderboard for the current top 10.
Expert
Technical details: HLE reports scores as percentages against a maximum of 100, and its primary category is knowledge. Source ingestion runs through the public leaderboard, with the update cadence documented on the methodology page. Correlations with downstream capability have been studied in the public literature and in the benchmark authors' release notes.
Depending on why you're here
- HLE measures knowledge capability, scored as a percentage
- Source: public leaderboard · tracked on BenchGecko's /benchmark page
- Used to compare frontier models on knowledge-specific tasks
- Pick models with high HLE scores if your workload is knowledge-heavy
- Benchmark scores correlate with real-world quality only when the task types match
- Check the live leaderboard before locking in a model · rankings shift weekly
- HLE is one of the citations labs use in launch announcements
- Saturation at the top of the leaderboard signals the benchmark is aging
- Watch for new benchmarks when all frontier models cluster within 2 points
- HLE is a test that scores how good an AI is at knowledge
- A higher score means a better model on that specific kind of task
- Not every score matters for every use · match the test to your goal
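The 2-point saturation rule of thumb above can be made concrete. Here is a minimal sketch (the function name, the top-5 cutoff, and the sample percentages are all illustrative assumptions, not BenchGecko's actual methodology):

```python
# Hypothetical helper: flag benchmark saturation when the top models'
# scores cluster within a small band (the 2-point rule of thumb).
def is_saturated(scores, top_n=5, band=2.0):
    """Return True if the top_n scores span fewer than `band` points."""
    top = sorted(scores, reverse=True)[:top_n]
    return len(top) >= 2 and (top[0] - top[-1]) < band

# Illustrative (made-up) HLE-style percentages:
print(is_saturated([92.1, 91.8, 91.5, 90.9, 90.4]))  # True  → benchmark aging
print(is_saturated([88.0, 81.5, 74.2, 66.0, 59.3]))  # False → still differentiating
```

A band of 2 points matches the rule stated in the list; tighten or widen it to taste, since any threshold here is a judgment call rather than a standard.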
HLE status: all frontier models score 90%+ · its value as a differentiator is fading.