
o1-mini

by OpenAI · Released 2024-01-01

Average score: 34.9
Input price: N/A
Output price: N/A
Context window: N/A
Type: text

Tested on 13 benchmarks with a 34.9% average score. Top scores: Chatbot Arena Elo — Overall (1336.6), MATH level 5 (89.2%), Aider — Code Editing (70.7%).

Benchmark                          Category    Score
Chatbot Arena Elo — Overall        arena       1336.6
MATH level 5                       math        89.2
Aider — Code Editing               coding      70.7
Lech Mazur Writing                 knowledge   64.9
GPQA diamond                       knowledge   49.8
OTIS Mock AIME 2024-2025           math        46.9
WeirdML                            coding      36.3
Aider polyglot                     coding      32.9
ARC-AGI                            reasoning   14.0
Cybench                            coding      10.0
SimpleBench                        reasoning   1.7
FrontierMath-2025-02-28-Private    math        1.7
ARC-AGI-2                          reasoning   0.8
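The 34.9% average appears to be the unweighted mean of the twelve percentage-scored benchmarks above, with the Chatbot Arena Elo rating excluded (1336.6 is a rating, not a percentage). A minimal sketch of that assumed calculation:

```python
# Percentage scores from the table above, in row order.
# Chatbot Arena Elo (1336.6) is left out under the assumption
# that only percentage-scaled benchmarks enter the average.
scores = [89.2, 70.7, 64.9, 49.8, 46.9, 36.3, 32.9,
          14.0, 10.0, 1.7, 1.7, 0.8]

average = round(sum(scores) / len(scores), 1)
print(average)  # 34.9
```

The twelve values sum to 418.9, and 418.9 / 12 ≈ 34.91, which rounds to the 34.9 shown on the card.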