May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but the model is open-sourced and ships fully open reasoning tokens. It's 671B parameters in size, with 37B active per forward pass.
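The 671B-total / 37B-active split comes from a mixture-of-experts design: a learned router sends each token to a small subset of expert networks, so only a fraction of the weights run on any forward pass. Below is a minimal, illustrative top-k routing sketch in PyTorch; `ToyMoE`, its sizes, and the routing scheme are toy assumptions for exposition, not DeepSeek R1's actual architecture.

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Toy top-k mixture-of-experts layer (illustrative, not DeepSeek's design)."""
    def __init__(self, d_model=64, n_experts=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique():  # run only the selected experts
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

x = torch.randn(8, 64)
print(ToyMoE()(x).shape)  # torch.Size([8, 64]); only 2 of 16 experts ran per token
```

With 16 experts and top_k=2, only about 1/8 of the expert weights participate per token; the same mechanism is what lets a 671B-parameter model activate only 37B.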
Tested on 25 benchmarks with a 57.9% average. Top scores: Chatbot Arena Elo, Overall (1421.7), MATH Level 5 (96.6%), OpenCompass AIME 2025 (89.0%).
Qwen2.5 Coder 7B Instruct scores 56.0 (97% as good) at $0.03/1M input tokens, 94% cheaper.
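The comparison figures fall out of simple ratios. A minimal check, assuming the percentages are ratios of the averages and prices quoted here (the ~$0.50/1M input price for DeepSeek R1 is an assumption; the page states only Qwen's $0.03/1M figure):

```python
# Relative quality: ratio of benchmark averages (from the page above).
r1_avg, qwen_avg = 57.9, 56.0
print(f"{qwen_avg / r1_avg:.0%} as good")           # 97% as good

# Relative cost: assumes DeepSeek R1 input pricing of ~$0.50/1M tokens.
r1_price, qwen_price = 0.50, 0.03
print(f"{1 - qwen_price / r1_price:.0%} cheaper")   # 94% cheaper
```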
Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.
OpenCompass LiveCodeBench v6. Fresh competitive programming problems that evaluate code generation without relying on memorization.
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern-recognition puzzles, widely treated as a core measure of general intelligence.
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
OpenCompass evaluation on AIME 2025 problems. Tests mathematical reasoning on fresh competition problems.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
- Type: text
- Context: 164K tokens (~82 books)
- Released: May 2025
- License: Open Source
- Status: Active
- Cost / Message: ~$0.003
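A per-message figure like ~$0.003 is usually an estimate built from per-token prices and an assumed average message size. The sketch below shows that arithmetic; every price and token count in it is an illustrative assumption, not a figure from this page:

```python
# Hypothetical per-token prices and message sizes, for illustration only.
input_price = 0.50 / 1_000_000    # $ per input token (assumed)
output_price = 2.15 / 1_000_000   # $ per output token (assumed)
avg_in, avg_out = 750, 1_200      # assumed tokens per average message

cost = avg_in * input_price + avg_out * output_price
print(f"~${cost:.4f} per message")  # ~$0.0030
```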