DeepSeek V3
Open Source by DeepSeek · Released 2024-12-26
- Average score: 59.0
- Input price: $0.32/1M tokens
- Output price: $0.89/1M tokens
- Context window: 164K tokens (~82 books)
- Type: text
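Given the per-1M-token prices above, the cost of a single request is straightforward to estimate. A minimal sketch (the function name and defaults are illustrative, not part of any official API):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price: float = 0.32,   # $ per 1M input tokens
                      output_price: float = 0.89)  -> float:  # $ per 1M output tokens
    """Estimate the cost in USD of one request from per-1M-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion
cost = estimate_cost_usd(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0050
```

Output tokens dominate the bill at these rates: each output token costs roughly 2.8× an input token.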
Tested on 22 benchmarks with a 59.0% average. Top scores: Chatbot Arena Elo — Overall (1358.2), ARC AI2 (93.7%), HellaSwag (85.2%).
Benchmark Scores
| Benchmark | Category | Score |
|---|---|---|
| Chatbot Arena Elo — Overall | arena | 1358.2 |
| ARC AI2 | knowledge | 93.7 |
| HellaSwag | knowledge | 85.2 |
| BBH | reasoning | 83.3 |
| HELM — IFEval | language | 83.2 |
| HELM — WildBench | reasoning | 83.1 |
| MMLU | knowledge | 82.9 |
| TriviaQA | knowledge | 82.9 |
| Lech Mazur Writing | knowledge | 77.0 |
| HELM — MMLU-Pro | knowledge | 72.3 |
| Winogrande | knowledge | 70.4 |
| PIQA | knowledge | 69.4 |
| MATH level 5 | math | 64.8 |
| HELM — GPQA | knowledge | 53.8 |
| Fiction.LiveBench | knowledge | 50.0 |
| Aider polyglot | coding | 48.4 |
| GPQA diamond | knowledge | 42.0 |
| HELM — Omni-MATH | math | 40.3 |
| WeirdML | coding | 36.1 |
| OTIS Mock AIME 2024-2025 | math | 15.8 |
| SimpleBench | reasoning | 2.7 |
| FrontierMath-2025-02-28-Private | math | 1.7 |
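The 59.0 headline average appears to be the mean of the 21 percentage-scale scores in the table, excluding the Chatbot Arena Elo (a rating, not a percentage). A quick check of that assumption:

```python
# Percentage-scale benchmark scores from the table above;
# the Chatbot Arena Elo of 1358.2 is excluded (it is not on a 0-100 scale).
scores = [93.7, 85.2, 83.3, 83.2, 83.1, 82.9, 82.9, 77.0, 72.3, 70.4,
          69.4, 64.8, 53.8, 50.0, 48.4, 42.0, 40.3, 36.1, 15.8, 2.7, 1.7]

average = sum(scores) / len(scores)  # unweighted mean over 21 benchmarks
print(f"{average:.1f}")  # → 59.0
```

The unweighted mean over the 21 remaining benchmarks reproduces the listed 59.0 exactly.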
Similar Models

- OpenAI — 59.0
- Muse Spark (Unknown) — 59.0
- Google DeepMind — 59.1
- Microsoft — 58.6