DeepSeek R1 Distill Llama 8B vs Meta Llama 3 8B Instruct
Side by side. Every metric. Every benchmark.
| Metric | DeepSeek R1 Distill Llama 8B | Meta Llama 3 8B Instruct |
|---|---|---|
| Provider | - | - |
| Average score | 33.6 | 45.2 |
| Input price | - | - |
| Output price | - | - |
| Context window | - | - |
| Released | 2025-01-20 | 2024-04-17 |
| Open source | Open Source | Open Source |
Benchmark scores
11 benchmarks · benchmarks won — DeepSeek R1 Distill Llama 8B: 2, Meta Llama 3 8B Instruct: 9
| Benchmark | Category | DeepSeek R1 Distill Llama 8B | Meta Llama 3 8B Instruct |
|---|---|---|---|
| BBH (HuggingFace) | general | 5.3 | 28.2 |
| GPQA | knowledge | 0.7 | 1.2 |
| IFEval | language | 37.8 | 74.1 |
| JCommonsenseQA | language | 62.4 | 87.7 |
| JMMLU | language | 37.8 | 46.7 |
| JNLI | language | 69.4 | 61.1 |
| JSQuAD | language | 80.2 | 89.5 |
| LLM-JP — Overall | language | 41.4 | 49.6 |
| MATH Level 5 | math | 22.0 | 8.7 |
| MMLU-PRO | knowledge | 12.1 | 29.6 |
| MUSR | reasoning | 0.5 | 1.6 |
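The head-to-head tally above (2 wins vs. 9) follows directly from the per-benchmark scores. A minimal sketch of that count, with the scores transcribed from the table (the site's aggregate "average score" may use a different weighting, so only the win counts are reproduced here):

```python
# Scores transcribed from the benchmark table:
# (DeepSeek R1 Distill Llama 8B, Meta Llama 3 8B Instruct)
scores = {
    "BBH (HuggingFace)": (5.3, 28.2),
    "GPQA": (0.7, 1.2),
    "IFEval": (37.8, 74.1),
    "JCommonsenseQA": (62.4, 87.7),
    "JMMLU": (37.8, 46.7),
    "JNLI": (69.4, 61.1),
    "JSQuAD": (80.2, 89.5),
    "LLM-JP Overall": (41.4, 49.6),
    "MATH Level 5": (22.0, 8.7),
    "MMLU-PRO": (12.1, 29.6),
    "MUSR": (0.5, 1.6),
}

# Count how many benchmarks each model wins outright.
deepseek_wins = sum(1 for d, m in scores.values() if d > m)
llama_wins = sum(1 for d, m in scores.values() if m > d)

print(f"DeepSeek wins: {deepseek_wins}, Llama wins: {llama_wins}")
# → DeepSeek wins: 2, Llama wins: 9
```

DeepSeek's two wins are JNLI (69.4 vs 61.1) and MATH Level 5 (22.0 vs 8.7); Meta Llama 3 8B Instruct leads on the remaining nine.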