
Llama 3.1 70B Instruct

by Meta · Released Jul 2024

Open Source · 53.8 avg score · Rank #93 · better than 60% of all models (arithmetic check below)

  • Context: 131K tokens (~66 books)
  • Input $/1M: $0.40
  • Output $/1M: $0.40
  • Type: text
  • License: Open Source
  • Benchmarks: 16 tested

Data updated today
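The rank-to-percentile figure can be checked directly against the 233 ranked models mentioned in the score distribution further down. A minimal sketch, assuming rank #93 is counted from the top and the percentile is the share of models ranked lower:

```python
# Quick consistency check for "Rank #93" vs "better than 60% of all models".
total_models = 233   # models in the score distribution below
rank = 93            # this model's rank, counted from the top

share_outranked = (total_models - rank) / total_models
print(f"Better than {share_outranked:.0%} of all models")  # -> Better than 60% of all models
```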
About

Meta's latest model family, Llama 3.1, launched in a variety of sizes and flavors. This 70B instruct-tuned version is optimized for high-quality dialogue use cases. It has demonstrated strong...

Tested on 16 benchmarks with a 37.8% average score. Top scores: Chatbot Arena Elo — Overall (1292.8), IFEval (86.7%), MMLU (73.5%).

Looking for similar performance at lower cost?
Phi 4 scores 54.2 (101% as good) at $0.07/1M input · 84% cheaper
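A minimal sketch of the arithmetic behind that comparison, using only the scores and input prices quoted on this page. Note that input price alone gives about 82.5% savings, so the quoted 84% presumably blends input and output pricing (an assumption):

```python
# Relative quality and input-price savings for the Phi 4 comparison above.
llama = {"avg_score": 53.8, "input_per_1m_usd": 0.40}
phi4 = {"avg_score": 54.2, "input_per_1m_usd": 0.07}

relative_quality = phi4["avg_score"] / llama["avg_score"]                  # ~1.007 -> "101% as good"
input_savings = 1 - phi4["input_per_1m_usd"] / llama["input_per_1m_usd"]   # 0.825

print(f"{relative_quality:.0%} as good, {input_savings:.1%} cheaper on input price")
```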
Capabilities
  • coding: 33.8 (#111 globally)
  • reasoning: 17.7 (#104 globally)
  • math: 26.1 (#143 globally)
  • knowledge: 42.2 (#138 globally)
  • agentic: 6.9 (#30 globally)
  • general: 55.9 (#5 globally)
  • language: 86.7 (#26 globally)
Benchmark Scores
Tested on 16 benchmarks · Ranked across 8 categories
Score Distribution (all 233 models): chart of average scores on a 0-100 scale, with this model's position marked.
Aider — Code Editing: 58.6
Code editing benchmark from the Aider project. Measures ability to apply targeted code changes while maintaining correctness and style.

WeirdML: 9.0
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

MUSR: 17.7
HuggingFace MuSR (Multi-Step Reasoning). Tests multi-hop reasoning requiring chaining multiple facts together.

MATH Level 5 (HuggingFace): 38.1
HuggingFace evaluation of MATH Level 5 problems. Competition math requiring advanced reasoning and proof construction.

MATH Level 5 (competition set): 36.7
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.

OTIS Mock AIME 2024-2025: 3.5
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Score legend: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
  • Documentation
  • Community
  • BenchGecko API model ID: llama-3-1-70b-instruct (usage sketch below)
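If the BenchGecko API or a listed provider exposes an OpenAI-compatible endpoint, the slug above would typically be passed as the `model` parameter. A minimal sketch, assuming such an endpoint exists; the base URL and the environment variable name are placeholders, not documented values:

```python
import os

from openai import OpenAI  # standard OpenAI Python SDK, pointed at a compatible endpoint

# Placeholder endpoint and credential name; substitute the real provider values.
client = OpenAI(
    base_url="https://api.example-provider.com/v1",
    api_key=os.environ["PROVIDER_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3-1-70b-instruct",  # model ID as listed on this page
    messages=[{"role": "user", "content": "Give a one-sentence overview of Llama 3.1 70B Instruct."}],
)
print(response.choices[0].message.content)
```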
Specifications
  • Type: text
  • Context: 131K tokens (~66 books)
  • Released: Jul 2024
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.001 (back-of-envelope check below)
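The ~$0.001 per-message estimate is consistent with the listed token prices under a typical message size; the token counts below are illustrative assumptions, not values published on this page:

```python
# Back-of-envelope check of the "Cost / Message: ~$0.001" figure.
input_price_per_token = 0.40 / 1_000_000    # $0.40 per 1M input tokens
output_price_per_token = 0.40 / 1_000_000   # $0.40 per 1M output tokens

prompt_tokens = 1_500        # assumed typical prompt length
completion_tokens = 1_000    # assumed typical completion length

cost = prompt_tokens * input_price_per_token + completion_tokens * output_price_per_token
print(f"~${cost:.4f} per message")  # -> ~$0.0010
```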
Available On
  • Meta: $0.40 / 1M tokens
Llama 3.1 70B Instruct is an open-source text AI model by Meta, released in July 2024. It has an average benchmark score of 53.8. Context window: 131K tokens.