A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible.
Tested on 11 benchmarks with a 46.6% average. Top scores: MATH level 5 (90.9%), OTIS Mock AIME 2024-2025 (77.8%), Lech Mazur Writing (73.5%).
Qwen3 235B A22B Instruct 2507 scores 45.7 (98% as good) at $0.07/1M input · 76% cheaper
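A quick sanity check of the relative-score arithmetic, assuming "as good" means the challenger's 45.7 average divided by this model's 46.6 average (an assumption; the listing may use a different basis or rounding):

```python
# Relative-performance check for the comparison line.
# Assumption: the percentage is challenger_avg / this_model_avg;
# the listing's own rounding convention is unknown.
this_model_avg = 46.6   # 11-benchmark average from the listing
challenger_avg = 45.7   # Qwen3 235B A22B Instruct 2507

relative = challenger_avg / this_model_avg * 100
print(f"{relative:.1f}% as good")  # ~98.1% under this assumption
```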
Multi-language code-editing benchmark from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.
ARC-AGI-2, the harder sequel to ARC. More complex abstract reasoning patterns that test generalization ability beyond training data.
Competition-level math from AMC, AIME, and olympiad problems. Level 5 is the hardest tier, requiring creative problem-solving.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.
- Type: text
- Context: 131K tokens (~66 books)
- Released: Jun 2025
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.001