
Mistral 7B Instruct v0.1

by Mistral AI · Released Sep 2023

Open Source
18.4
avg score
Rank #211
Better than 9% of all models
Context
3K tokens (~1 book)
Input $/1M
$0.11
Output $/1M
$0.19
Type
text
License
Open Source
Benchmarks
6 tested
Data updated today
About

A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.

Tested on 6 benchmarks with a 12.8% average. Top scores: IFEval (44.9%), MMLU-PRO (15.7%), BBH (HuggingFace, 7.7%).

Capabilities
reasoning
6.1
#159 globally
math
2.3
#203 globally
knowledge
7.9
#214 globally
language
44.9
#107 globally
general
7.6
#56 globally
Benchmark Scores
Tested on 6 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models)
MUSR

HuggingFace MuSR (Multistep Soft Reasoning). Tests multi-hop reasoning requiring chaining multiple facts together.

6.1
MATH Level 5

HuggingFace evaluation of MATH Level 5 problems. Competition math requiring advanced reasoning and proof construction.

2.3
MMLU-PRO

HuggingFace MMLU-Pro. Harder version of MMLU with 10 answer choices instead of 4 and more challenging questions.

15.7
GPQA

HuggingFace evaluation of GPQA (Graduate-Level Google-Proof Q&A). PhD-level science questions that cannot be easily searched.

0.0
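The 12.8% average quoted above is consistent with a plain mean of the six benchmark scores reported on this page (IFEval 44.9, MMLU-PRO 15.7, BBH 7.7, MUSR 6.1, MATH Level 5 2.3, GPQA 0.0). A quick check:

```python
# Plain mean of the six benchmark scores listed on this page.
scores = {
    "IFEval": 44.9,
    "MMLU-PRO": 15.7,
    "BBH": 7.7,
    "MUSR": 6.1,
    "MATH Level 5": 2.3,
    "GPQA": 0.0,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}")  # 12.8
```

The headline "18.4 avg score" at the top of the page presumably uses a different weighting or normalization; that method is not described here.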
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Documentation
Community
BenchGecko API
mistral-7b-instruct-v0-1
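The slug above identifies this model in the BenchGecko API. The endpoint shape below is purely an assumption for illustration (the API itself is not documented on this page); only the slug comes from the page:

```python
# Hypothetical request-URL construction. BASE_URL and the /models/<slug>
# path are assumptions -- only MODEL_SLUG is taken from this page.
MODEL_SLUG = "mistral-7b-instruct-v0-1"
BASE_URL = "https://api.benchgecko.example/v1"  # hypothetical host

url = f"{BASE_URL}/models/{MODEL_SLUG}"
print(url)
```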
Specifications
  • Type: text
  • Context: 3K tokens (~1 book)
  • Released: Sep 2023
  • License: Open Source
  • Status: Active
  • Cost / Message: ~$0.000
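The ~$0.000 cost-per-message figure follows directly from the listed prices ($0.11 per 1M input tokens, $0.19 per 1M output tokens). A sketch of the arithmetic, using illustrative message sizes that are assumptions rather than figures from this page:

```python
# Per-message cost from the per-1M-token prices listed on this page.
INPUT_PRICE_PER_M = 0.11   # $ per 1M input tokens (from this page)
OUTPUT_PRICE_PER_M = 0.19  # $ per 1M output tokens (from this page)

# Illustrative message sizes (assumptions, not from the page):
input_tokens, output_tokens = 500, 300

cost = (input_tokens * INPUT_PRICE_PER_M
        + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
print(f"${cost:.6f}")  # $0.000112 -- rounds to ~$0.000 at three decimals
```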
Available On
Mistral AI: $0.11 / 1M input tokens
Mistral 7B Instruct v0.1 is an open-source text AI model by Mistral AI, released in September 2023. It has an average benchmark score of 18.4. Context window: 3K tokens.