Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not...
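As a sketch of how the tool-calling surface looks in practice, the snippet below uses the OpenAI-compatible Python client; the `https://api.x.ai/v1` base URL and `grok-4` model id follow xAI's API conventions, and the `get_weather` tool definition is purely hypothetical.

```python
# Minimal sketch of parallel tool calling via an OpenAI-compatible client.
# The base URL and model id follow xAI's API conventions; the get_weather
# tool is a hypothetical example, not part of the API.
import os

from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key=os.environ["XAI_API_KEY"])

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="grok-4",
    messages=[{"role": "user", "content": "What's the weather in Tokyo and in Paris?"}],
    tools=tools,
)

# With parallel tool calling, a single assistant turn may request both lookups at once.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```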
Tested on 24 benchmarks with a 54.8% average score. Top scores: HELM IFEval (94.9%), Fiction.LiveBench (94.4%), HELM MMLU-Pro (85.1%).
Cheaper alternative: MiniMax M2.5 scores 61.7 (99% as good) at $0.15/1M input tokens, about 95% cheaper.
Multi-language code editing benchmark from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.
Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern-recognition puzzles, often cited as a core measure of general intelligence.
Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Stanford HELM evaluation of mathematical reasoning across diverse problem types.
Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.
- Type: Multimodal
- Context: 256K tokens (~128 books)
- Released: Jul 2025
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.021 (see the sketch below)
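The per-message figure can be sanity-checked from per-token prices. The sketch below assumes list prices of $3/1M input and $15/1M output tokens and a typical ~1k-in / ~1k-out message; both the prices and the token counts are assumptions, not values taken from this page.

```python
# Back-of-the-envelope cost per message. Prices and token counts are
# assumptions, not figures published on this page.
INPUT_PRICE = 3.00 / 1_000_000    # $/input token (assumed)
OUTPUT_PRICE = 15.00 / 1_000_000  # $/output token (assumed)

input_tokens, output_tokens = 1_000, 1_000
cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"~${cost:.3f} per message")  # ~$0.018, in line with the ~$0.021 figure

# The MiniMax comparison above also checks out on input price alone:
# 0.15 / 3.00 = 0.05, i.e. about 95% cheaper per input token.
```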