JCommonsenseQA
The Frontier
Best score over time · one chart, every benchmark
Distribution
Where models cluster
Correlated benchmarks
Pearson r · original research
Benchmarks that track with JCommonsenseQA
Pearson correlation across models scored on both benchmarks. The closer to 1, the more strongly predictive.
Full rankings
11 models tested · sorted by score
| # | Model | Score |
|---|---|---|
| 1 | DeepSeek R1 Distill Qwen 14B | 93.7 |
| 2 | | 89.1 |
| 3 | | 87.8 |
| 4 | | 87.7 |
| 5 | | 82.9 |
| 6 | | 78.2 |
| 7 | | 62.4 |
| 8 | | 59.8 |
| 9 | | 52.6 |
| 10 | | 25.5 |
| 11 | HF SmolLM2 135M Instruct | 17.0 |
Frequently asked
Pulled from the JCommonsenseQA dataset · updated daily
What does JCommonsenseQA measure?
JCommonsenseQA is a knowledge benchmark in the BenchGecko catalog. 11 AI models have been tested on it. Scores range from 17.0 to 93.7 out of 100.
Which model leads on JCommonsenseQA?
DeepSeek R1 Distill Qwen 14B from DeepSeek leads JCommonsenseQA with a score of 93.7. The median score across 11 tested models is 78.2.
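The median figure can be checked directly from the full-rankings table above. A minimal sketch using Python's standard library, with the 11 scores listed on this page:

```python
# Reproduce the median claim from the JCommonsenseQA rankings table.
from statistics import median

# The 11 model scores from the full-rankings table, highest to lowest.
scores = [93.7, 89.1, 87.8, 87.7, 82.9, 78.2, 62.4, 59.8, 52.6, 25.5, 17.0]

# With 11 values, the median is the 6th score in sorted order.
print(median(scores))  # → 78.2
```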
Is JCommonsenseQA saturated?
No · the top score is 93.7 out of 100. There is still meaningful room for improvement on JCommonsenseQA.
Does JCommonsenseQA predict performance on other benchmarks?
Yes · JCommonsenseQA scores correlate at 0.90 with LLM-JP · Overall across 11 shared models. Models that do well on JCommonsenseQA tend to do well on LLM-JP · Overall.
How often is JCommonsenseQA data refreshed?
BenchGecko pulls updates daily. New model scores on JCommonsenseQA appear as soon as they are published by Epoch AI or the model provider.
More knowledge benchmarks
Same category · related evaluations