Mistral 7B Instruct v0.1
Avg score: 18.4 (Rank #211, better than 9% of all 233 models)
Context: 3K tokens (~1 book)
Input $/1M: $0.11
Output $/1M: $0.19
Type: text
License: Open Source
Benchmarks: 6 tested
Data updated today
About
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
Tested on 6 benchmarks with 12.8% average. Top scores: IFEval (44.9%), MMLU-PRO (15.7%), BBH (HuggingFace) (7.7%).
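The 12.8% figure is the plain mean of the six individual benchmark scores listed on this page (IFEval, MMLU-PRO, BBH, MUSR, MATH Level 5, GPQA). A quick sketch to verify the arithmetic:

```python
# Recompute the benchmark average from the six per-benchmark scores on this page.
scores = {
    "IFEval": 44.9,
    "MMLU-PRO": 15.7,
    "BBH": 7.7,
    "MUSR": 6.1,
    "MATH Level 5": 2.3,
    "GPQA": 0.0,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 1))  # 12.8
```

Note this unweighted 12.8% benchmark mean is distinct from the 18.4 "avg score" shown at the top of the card, which the site appears to compute differently.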
Capabilities
reasoning: 6.1 (#159 globally)
math: 2.3 (#203 globally)
knowledge: 7.9 (#214 globally)
language: 44.9 (#107 globally)
general: 7.6 (#56 globally)
Benchmark Scores
Tested on 6 benchmarks · Ranked across 5 categories
Score Distribution (all 233 models) [chart omitted; this model's position is marked at its 18.4 average]
reasoning
MUSR: 6.1 · HuggingFace MuSR (Multi-Step Reasoning). Tests multi-hop reasoning requiring chaining multiple facts together.
math
MATH Level 5: 2.3 · HuggingFace evaluation of MATH Level 5 problems. Competition math requiring advanced reasoning and proof construction.
knowledge
MMLU-PRO: 15.7 · HuggingFace MMLU-Pro. Harder version of MMLU with 10 answer choices instead of 4 and more challenging questions.
GPQA: 0.0 · HuggingFace evaluation of GPQA (Graduate-Level Google-Proof Q&A). PhD-level science questions that cannot be easily searched.
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Links
Research
Documentation
Community
Source Code
BenchGecko API
mistral-7b-instruct-v0-1
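The slug above identifies this model in the BenchGecko API. The endpoint URL and response fields below are assumptions for illustration (a `.example` placeholder host), not documented API behavior; only the slug itself comes from this page.

```python
# Hypothetical BenchGecko API lookup for the slug shown above.
# BASE is a placeholder host; the real endpoint and response shape are not documented here.
import json
import urllib.request

BASE = "https://api.benchgecko.example/v1/models"
slug = "mistral-7b-instruct-v0-1"

def fetch_model(slug: str) -> dict:
    """Fetch a model record as JSON from the (assumed) models endpoint."""
    with urllib.request.urlopen(f"{BASE}/{slug}") as resp:
        return json.load(resp)

# Uncomment to query the live API, if the endpoint exists:
# data = fetch_model(slug)
# print(data["avg_score"])
```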
Specifications
- Type: text
- Context: 3K tokens (~1 book)
- Released: Sep 2023
- License: Open Source
- Status: Active
- Cost / Message: ~$0.000
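The "~$0.000" per-message figure follows from the listed rates ($0.11 input, $0.19 output per 1M tokens). A sketch of the arithmetic, using illustrative token counts (the 500/300 split is an assumption, not a measured value):

```python
# Per-message cost from the listed per-million-token rates.
INPUT_PER_M = 0.11   # $ per 1M input tokens
OUTPUT_PER_M = 0.19  # $ per 1M output tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one exchange at the listed rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a 500-token prompt with a 300-token reply:
print(f"${message_cost(500, 300):.6f}")  # $0.000112
```

At these rates even long exchanges cost a small fraction of a cent, which is why the card rounds the figure to ~$0.000.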
Frequently Asked Questions
Mistral 7B Instruct v0.1 is an open-source text AI model by Mistral AI, released in September 2023. It has an average benchmark score of 18.4. Context window: 3K tokens.