
Llama 3.2 90B

by Meta · Released Sep 2024

Open Source
37.8
avg score
Rank #156
Better than 33% of all models
Context
N/A
Input $/1M
TBD
Output $/1M
TBD
Type
text
License
Open Source
Benchmarks
6 tested
About

Tested on 6 benchmarks with 36.1% average. Top scores: MMLU (73.7%), GeoBench (52.0%), MATH level 5 (39.4%).
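The rank and percentile figures above are consistent with each other; a quick sketch of the arithmetic, assuming the percentile is computed as the share of the 233 ranked models placed below this one:

```python
# Check that rank #156 of 233 models implies "better than 33% of all models".
# The formula is an assumption about how the site derives the percentile.
total_models = 233
rank = 156
share_beaten = (total_models - rank) / total_models
print(f"{share_beaten:.0%}")  # → 33%
```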

Capabilities
math
21.0
#158 globally
knowledge
43.6
#131 globally
Benchmark Scores
Tested on 6 benchmarks · Ranked across 2 categories
Score Distribution (all 233 models)
MATH level 5

Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

39.4
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

2.5
MMLU

Massive Multitask Language Understanding. 57 subjects from STEM, humanities, and social sciences. The most widely-cited knowledge benchmark.

73.7
GeoBench

Geography benchmark testing knowledge of world geography, landmarks, borders, and geopolitical facts.

52.0
Balrog

Broad Assessment of Language and Reasoning Over Games. Tests strategic and logical reasoning through game scenarios.

27.3
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
llama-3-2-90b
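The slug above identifies this model in the BenchGecko API. A hypothetical sketch of how a per-model record might be requested; the base URL, path layout, and response shape are assumptions, since only the slug "llama-3-2-90b" appears on this page:

```python
# Hypothetical BenchGecko API client sketch. Only the slug comes from this
# page; the endpoint URL and JSON response are assumed, not documented here.
import json
import urllib.request

BASE_URL = "https://api.benchgecko.example/v1/models"  # assumed endpoint
SLUG = "llama-3-2-90b"

def model_url(slug: str) -> str:
    """Build the per-model endpoint URL for a given slug."""
    return f"{BASE_URL}/{slug}"

def fetch_model(slug: str) -> dict:
    """Fetch and decode a model record (requires network access)."""
    with urllib.request.urlopen(model_url(slug)) as resp:
        return json.load(resp)

print(model_url(SLUG))
```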
Specifications
  • Type: text
  • Context: N/A
  • Released: Sep 2024
  • License: Open Source
  • Status: benchmark-only
Available On
Meta · TBD
Categories
math · knowledge
Llama 3.2 90B is an open-source text AI model by Meta, released in September 2024. It has an average benchmark score of 37.8.