
Grok 4

by xAI · Released Jul 2025

Multimodal
62.2
avg score
Rank #59
Better than 75% of all models
Context
256K tokens (~128 books)
Input $/1M
$3.00
Output $/1M
$15.00
Type
multimodal
License
Proprietary
Benchmarks
24 tested
Data updated today
About

Grok 4 is xAI's latest reasoning model with a 256K context window. It supports parallel tool calling, structured outputs, and both image and text inputs.

Tested on 24 benchmarks with 54.8% average. Top scores: HELM — IFEval (94.9%), Fiction.LiveBench (94.4%), HELM — MMLU-Pro (85.1%).

Looking for similar performance at lower cost?
MiniMax M2.5 scores 61.7 (99% as good) at $0.15/1M input · 95% cheaper
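The callout's percentages follow directly from the figures listed on this page; a quick sanity check (prices and scores are the page's, the arithmetic is ours):

```python
# Verify the MiniMax M2.5 comparison from the listed figures.
grok4_score, grok4_input_price = 62.2, 3.00      # avg score, $ per 1M input tokens
minimax_score, minimax_input_price = 61.7, 0.15  # avg score, $ per 1M input tokens

relative_score = minimax_score / grok4_score           # ~0.992 -> "99% as good"
savings = 1 - minimax_input_price / grok4_input_price  # 0.95  -> "95% cheaper"

print(f"{relative_score:.0%} as good, {savings:.0%} cheaper")
# → 99% as good, 95% cheaper
```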
Capabilities
coding
48.9
#71 globally
reasoning
53.7
#43 globally
math
41.5
#103 globally
knowledge
62.8
#35 globally
agentic
15.2
#25 globally
language
94.9
#2 globally
Benchmark Scores
Tested on 24 benchmarks · Ranked across 6 categories
Score Distribution (all 233 models)
Aider polyglot

Multi-language code editing from Aider. Tests editing ability across Python, JavaScript, TypeScript, Java, C++, Go, Rust, and more.

79.6
WeirdML

Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.

45.7
Cybench

Capture-the-flag cybersecurity challenges. Tests vulnerability analysis, reverse engineering, cryptography, and exploitation skills.

43.0
HELM — WildBench

Stanford HELM WildBench evaluation. Tests reasoning on challenging real-world tasks.

79.7
ARC-AGI

Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.

66.7
SimpleBench

Deceptively simple questions that humans find easy but AI models often get wrong. Tests common sense and reasoning gaps.

52.6
OTIS Mock AIME 2024-2025

Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.

84.0
HELM — Omni-MATH

Stanford HELM evaluation of mathematical reasoning across diverse problem types.

60.3
FrontierMath-2025-02-28-Private

Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.

19.7
Score bands: Excellent (85+) · Good (70-85) · Average (50-70) · Below (<50)
Recent Results
Grok 4 posted 89.4% on GPQA Diamond
Mar 8, 2026
Links
Documentation
Community
BenchGecko API
grok-4
Specifications
  • Type: multimodal
  • Context: 256K tokens (~128 books)
  • Released: Jul 2025
  • License: Proprietary
  • Status: Active
  • Cost / Message: ~$0.021
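The ~$0.021 per-message figure is consistent with the listed token prices under an assumed message size of 2,000 input and 1,000 output tokens (the token counts are our assumption, not stated on this page):

```python
# Per-message cost from per-token prices; token counts are illustrative assumptions.
INPUT_PRICE = 3.00 / 1_000_000    # $ per input token ($3.00 / 1M)
OUTPUT_PRICE = 15.00 / 1_000_000  # $ per output token ($15.00 / 1M)

input_tokens, output_tokens = 2_000, 1_000  # assumed typical message
cost = input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"${cost:.3f}")
# → $0.021
```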
Available On
xAI · $3.00/1M input
Share & Export
Grok 4 is a proprietary multimodal AI model by xAI, released in July 2025. It has an average benchmark score of 62.2. Context window: 256K tokens.