OpenBookQA
OpenBookQA is a knowledge benchmark tracked by BenchGecko across every frontier and open-weight model.
Basic
OpenBookQA is a knowledge benchmark tracked by BenchGecko across every frontier and open-weight model.
Deep
OpenBookQA is a knowledge benchmark tracked by BenchGecko for every eligible frontier and open-weight model. The metric is a percentage with a maximum of 100. Category · knowledge. Source · public leaderboard. See the live leaderboard for the current top 10.
Expert
OpenBookQA is a knowledge benchmark tracked by BenchGecko across frontier and open-weight models. Technical details: scores are percentages with a maximum of 100, and the primary category is knowledge. Source ingestion runs through the public leaderboard, with the update cadence documented on the methodology page. Correlations with downstream capability have been studied in the public literature and in the benchmark authors' release notes.
Depending on why you're here
- OpenBookQA measures knowledge capability with % scoring
- Source: public leaderboard · tracked on BenchGecko's /benchmark page
- Used to compare frontier models on knowledge-specific tasks
- Pick models with high OpenBookQA if your workload matches knowledge
- Benchmark scores correlate with real-world quality only for matched task types
- Check the live leaderboard before locking in a model · rankings shift weekly
- OpenBookQA is one of the citations labs use in launch announcements
- Saturation at the top of the leaderboard signals the benchmark is aging
- Watch for new benchmarks when all frontier models cluster within 2 points
- OpenBookQA is a test that scores how good an AI is at knowledge
- Higher score = better model on that specific kind of task
- Not every score matters for every use · match the test to your goal
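The saturation heuristic above (all frontier models clustering within 2 points) can be sketched as a small check. This is a minimal illustration, not BenchGecko's actual methodology; the model names and scores are hypothetical placeholders.

```python
# Sketch of a benchmark-saturation check: a benchmark is considered
# "saturated" when the top models' scores all fall within a small spread.
# Model names and scores below are illustrative, not real leaderboard data.

def is_saturated(scores, threshold=2.0):
    """Return True if all scores cluster within `threshold` points."""
    scores = list(scores)
    return max(scores) - min(scores) <= threshold

frontier_scores = {
    "model-a": 94.1,  # hypothetical placeholder scores
    "model-b": 93.2,
    "model-c": 92.8,
}

spread = max(frontier_scores.values()) - min(frontier_scores.values())
print(f"spread: {spread:.1f} points, "
      f"saturated: {is_saturated(frontier_scores.values())}")
```

If the spread stays under the threshold across updates, the benchmark has stopped differentiating the top of the field and newer benchmarks deserve more weight.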
On OpenBookQA, all frontier models now score 90%+ · its value as a differentiator is fading.