GPT-5.4 is OpenAI’s latest frontier model, unifying the Codex and GPT lines into a single system. It features a 1M+ token context window (922K input, 128K output) with support for...
Tested on 16 benchmarks with a 59.0% average score. Top scores: Chatbot Arena Elo, Overall (1465.8), OTIS Mock AIME 2024-2025 (95.3%), ARC-AGI (93.7%).
MiMo-V2-Flash scores 81.7 (98% as good) at $0.09/1M input tokens (96% cheaper).
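The "98% as good" and "96% cheaper" figures above can be reproduced with simple ratio arithmetic. A minimal sketch follows; note that the baseline score (83.4) and baseline input price ($2.25 per 1M tokens) used here are back-solved assumptions for illustration, not figures stated on this page.

```python
# Sketch of the price/quality comparison arithmetic.
# Baseline score (83.4) and baseline price ($2.25/1M input tokens)
# are assumed values, back-solved to match the stated percentages.

def relative_score(alt_score: float, baseline_score: float) -> int:
    """Alternative model's score as a rounded percentage of the baseline's."""
    return round(100 * alt_score / baseline_score)

def percent_cheaper(alt_price: float, baseline_price: float) -> int:
    """Rounded percentage saved by switching to the cheaper model."""
    return round(100 * (1 - alt_price / baseline_price))

print(relative_score(81.7, 83.4))   # -> 98  ("98% as good")
print(percent_cheaper(0.09, 2.25))  # -> 96  ("96% cheaper")
```

The same two ratios generalize to any model pair on the page, given a shared benchmark score and a per-token input price for each.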
Real-world software engineering tasks from GitHub issues. Models must diagnose bugs and write patches that pass test suites. Human-verified subset of SWE-bench.
Unusual and adversarial machine learning challenges. Tests robustness of reasoning about edge cases in ML systems.
Abstraction and Reasoning Corpus. Tests fluid intelligence through novel visual pattern recognition puzzles. Core measure of general intelligence.
ARC-AGI 2, the harder sequel to ARC. More complex abstract reasoning patterns that test generalization beyond training data.
Mock AIME (American Invitational Mathematics Exam) problems from OTIS. Tests mathematical competition performance.
Original research-level math problems created by professional mathematicians. Problems are unpublished and cannot be memorized.
Hardest tier of FrontierMath. Problems at the frontier of human mathematical ability; most professional mathematicians cannot solve many of them.
- Type: multimodal
- Context: 1.1M tokens (~525 books)
- Released: Mar 2026
- License: Proprietary
- Status: Active
- Cost / Message: ~$0.020