INTELLECT-1

by Unknown · Released Jan 2024

Average score: 19.8
Rank: #206 (better than 12% of all models)
Context: N/A
Input $/1M: TBD
Output $/1M: TBD
Type: text
License: Proprietary
Benchmarks: 12 tested
About

Tested on 12 benchmarks with 20.2% average. Top scores: HellaSwag (61.9%), ARC AI2 (39.4%), GSM8K (38.6%).

Capabilities
  • reasoning: 8.6 (#144 globally)
  • math: 19.3 (#166 globally)
  • knowledge: 27.9 (#181 globally)
  • language: 17.6 (#148 globally)
  • general: 1.0 (#73 globally)
Benchmark Scores

Tested on 12 benchmarks · Ranked across 5 categories
[Chart: Score Distribution (all 233 models)]
BBH: 13.1
BIG-Bench Hard. 23 challenging tasks from BIG-Bench where prior language models fell below average human performance.

MUSR: 4.1
MuSR (Multi-Step Reasoning), via HuggingFace. Tests multi-hop reasoning that requires chaining multiple facts together.

GSM8K: 38.6
Grade-school math word problems. 8,500 problems testing multi-step arithmetic reasoning. A foundational math benchmark.

MATH Level 5: 0.0
HuggingFace evaluation of MATH Level 5 problems. Competition math requiring advanced reasoning and proof construction.

HellaSwag: 61.9
Sentence completion requiring commonsense reasoning about physical and social situations. Tests real-world understanding.

ARC AI2: 39.4
AI2 Reasoning Challenge. Grade-school science questions requiring multi-step reasoning. The Easy and Challenge sets test different difficulty levels.

MMLU: 33.2
Massive Multitask Language Understanding. 57 subjects across STEM, the humanities, and social sciences. The most widely cited knowledge benchmark.
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
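Only seven of the twelve tested benchmarks are listed above, so a mean taken over these scores will not match the site-reported 19.8 average, which covers all twelve. As a minimal sketch of how an unweighted average score is computed (the dictionary below simply restates the scores from this page):

```python
from statistics import mean

# The seven benchmark scores shown on this page (7 of the 12 tested;
# the five unlisted scores are what pull the full average down to 19.8).
scores = {
    "BBH": 13.1,
    "MUSR": 4.1,
    "GSM8K": 38.6,
    "MATH Level 5": 0.0,
    "HellaSwag": 61.9,
    "ARC AI2": 39.4,
    "MMLU": 33.2,
}

# Unweighted arithmetic mean of the listed scores only.
listed_avg = mean(scores.values())
print(f"Mean of the {len(scores)} listed scores: {listed_avg:.1f}")  # 27.2
```

Note that leaderboard averages often weight or normalize individual benchmarks differently; a plain arithmetic mean is only one possible aggregation.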
Links
Documentation
Community
BenchGecko API (model ID: intellect-1)
Specifications
  • Type: text
  • Context: N/A
  • Released: Jan 2024
  • License: Proprietary
  • Status: benchmark-only
Available On
Unknown: TBD
INTELLECT-1 is a proprietary text AI model by Unknown, released in January 2024. It has an average benchmark score of 19.8.