Apr 7 · Claude Mythos Preview · Anthropic's most capable model arrives
Mar 31 · GPT-5.4 Nano launched on OpenAI
Mar 31 · GPT-5.4 Mini joins the OpenAI lineup
Mar 30 · Claude Opus 4.5 input price dropped · $5.00 per 1M tokens
Mar 30 · Mistral Small 4 available via Mistral AI
Mar 29 · Gemini 2.5 Pro scores 94.1% on MMLU
Mar 29 · Grok 4.20 Multi-Agent Beta enters agent rankings
Mar 28 · DeepSeek V3.2 output price dropped · $0.38 per 1M tokens
Mar 28 · 7 new MCP servers added in dev-tools category
Mar 27 · Claude Sonnet 4.6 released by Anthropic
Mar 27 · Claude Opus 4.6 released by Anthropic
Mar 26 · OTIS Mock AIME 2024-2025 benchmark added
Mar 26 · Claude Opus 4.1 pricing increased · $15/$75 per 1M tokens
Mar 25 · Grok 4.20 Beta launched by xAI
Mar 25 · Inception added as a tracked provider
Mar 24 · DeepSeek R1 0528 posted 87.2% on GPQA Diamond
Mar 24 · 3 new MCP servers in AI/ML category
Mar 23 · GPT-4o Audio Preview marked as deprecated
Mar 23 · Mistral Medium 3.1 input price cut to $0.40 per 1M tokens
Mar 22 · DeepSeek V3.2 Speciale released
Mar 22 · WeirdML benchmark now tracked on BenchGecko
Mar 20 · Nemotron 3 Super (120B) launched by NVIDIA
Mar 20 · Gemini 2.5 Flash Lite priced at $0.10/$0.40 per 1M tokens
Mar 18 · Mistral Large 3 2512 released by Mistral AI
Mar 18 · Grok Code Fast 1 added to agent rankings
Mar 16 · Claude Sonnet 4.5 scores 91.7% on MMLU
Mar 16 · 12 new MCP servers added across 5 categories
Mar 14 · GPT-5.4 Pro launched · OpenAI's new flagship
Mar 14 · GPT-5.4 standard tier released by OpenAI
Mar 12 · Grok 3 Mini marked as deprecated by xAI
Mar 12 · Llama 3.3 Nemotron Super 49B pricing dropped
Mar 10 · Liquid added as a tracked provider
Mar 10 · MiniMax M2.7 released by MiniMax
Mar 8 · Grok 4 posted 89.4% on GPQA Diamond
Mar 8 · LAMBADA benchmark scores now tracked
Mar 5 · Gemini 2.5 Flash output price reduced to $2.50 per 1M tokens
Mar 5 · Mercury 2 launched by Inception
Mar 3 · Qwen3.5-Flash released by Alibaba Qwen
Mar 3 · 5 new MCP servers added · finance and auth categories

The AI economy, kept in check.

Pulse 20 · healthy
Bubble 278% · agitated
Claude Mythos Preview +4.1
Open Source 16.2%

Leaderboard

Rank · Model · Provider · Score · Price
#1 · Qwen3.5 397B A17B · Alibaba Qwen · 96.3 · $0.39/M
#2 · DeepSeek V3.2 Speciale · DeepSeek · 95.2 · $0.40/M
#3 · GPT-5.4 Pro · OpenAI · 93.0 · $30.00/M
#4 · GPT-5.1-Codex-Max · OpenAI · 91.2 · $1.25/M
#5 · Gemini 3.1 Pro Preview · Google DeepMind · 90.0 · $2.00/M
#6 · Step 3.5 Flash · stepfun · 89.5 · $0.10/M
#7 · GPT-5 Chat · OpenAI · 89.0 · $1.25/M
#8 · Qwen3.6 Plus · Alibaba Qwen · 88.7 · $0.33/M
#9 · GLM 5.1 · z-ai · 87.0 · $1.05/M
#10 · GPT-5.2-Codex · OpenAI · 85.4 · $1.75/M
#11 · GPT-5.4 · OpenAI · 83.4 · $2.50/M
#12 · Claude Opus 4.6 (Fast) · Anthropic · 83.3 · $30.00/M
#13 · GPT-5.1-Codex · OpenAI · 82.8 · $1.25/M
Bar width · average benchmark score · Color · category

The Pulse

healthy
7d · +3 points
Bubble Index · components
Valuation Premium +2.1
Funding Acceleration +1.5
Concentration Risk 0
Revenue Quality +1.4
Capex Gap +0.3
Largest change · Valuation Premium, up 2.1 points
AI Bubble Index
Healthy · Frothy · Overheated · Bubble
Updated Apr 22 · Methodology · Free API
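The component breakdown above reads as an additive composite. A minimal sketch of how such an index could be assembled; the unweighted sum, the band cutoffs, and the function names are all assumptions for illustration, not BenchGecko's published methodology:

```python
# Hypothetical additive bubble-index composite.
# Component deltas mirror the dashboard above; band cutoffs are assumed.
COMPONENTS = {
    "valuation_premium": 2.1,
    "funding_acceleration": 1.5,
    "concentration_risk": 0.0,
    "revenue_quality": 1.4,
    "capex_gap": 0.3,
}

# (upper bound, label) pairs covering the 0-100 range — thresholds assumed.
BANDS = [(25, "healthy"), (50, "frothy"), (75, "overheated"), (100, "bubble")]

def composite_delta(components: dict[str, float]) -> float:
    """Unweighted sum of component contributions, rounded for display."""
    return round(sum(components.values()), 2)

def band(index_value: float) -> str:
    """Map a 0-100 index value to its qualitative band."""
    for cutoff, label in BANDS:
        if index_value <= cutoff:
            return label
    return "bubble"
```

Real indices typically weight components unevenly; the equal weighting here is only the simplest starting point.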

Cross-cutting signals

Full matrix
# · Model · Provider · Score · Price/M · Context · Benchmarks
1 · Claude Mythos Preview · Anthropic · 100.0 · n/a · 1000K · 14
2 · Qwen3.5 397B A17B · Alibaba Qwen · 96.3 · $0.39 · 262K · 11
3 · DeepSeek V3.2 Speciale · DeepSeek · 95.2 · $0.40 · 164K · 9
4 · GPT-5.4 Pro · OpenAI · 93.0 · $30.00 · 1050K · 8
5 · GPT-5.1-Codex-Max · OpenAI · 91.2 · $1.25 · 400K · 8
6 · Gemini 3.1 Pro Preview · Google DeepMind · 90.0 · $2.00 · 1049K · 23
7 · Step 3.5 Flash · stepfun · 89.5 · $0.10 · 262K · 10
8 · GPT-5 Chat · OpenAI · 89.0 · $1.25 · 128K · 7
9 · Qwen3.6 Plus · Alibaba Qwen · 88.7 · $0.33 · 1000K · 11
10 · DeepSeek R1 Distill Qwen 14B · DeepSeek · 88.3 · n/a · n/a · 11
11 · Qwen2.5 72B Instruct Abliterated · HA · 87.5 · n/a · n/a · 6
12 · GLM 5.1 · z-ai · 87.0 · $1.05 · 203K · 12
13 · GPT-5.2-Codex · OpenAI · 85.4 · $1.75 · 400K · 9
14 · Claude Instant · Anthropic · 84.6 · n/a · n/a · 4
15 · DeepSeek-V2 (MoE-236B, May 2024) · DeepSeek · 84.4 · n/a · n/a · 7
16 · GPT-5.4 · OpenAI · 83.4 · $2.50 · 1050K · 16
17 · Claude Opus 4.6 (Fast) · Anthropic · 83.3 · $30.00 · 1000K · 12
18 · GPT-5.1-Codex · OpenAI · 82.8 · $1.25 · 400K · 8
19 · MiMo-V2-Flash · xiaomi · 81.7 · $0.09 · 262K · 11
20 · Qwen2.5 32B Instruct · Alibaba · 81.3 · n/a · n/a · 7
Search 293 AI terms · from transformers to attention premium
Full methodology
How often is BenchGecko's data updated?

Models and benchmarks refresh daily from primary sources. Prices are collected continuously from each provider's API. Attention signals are aggregated weekly. The Pulse is recalculated at 00:00 UTC.

What is The Pulse?

A composite score from 0 to 100 for the health of the AI economy. It combines the inverse Bubble Index, benchmark velocity, price compression, attention diversity, and supply-chain strain into a single number. Lower is healthier.

How are benchmark scores normalized?

Each benchmark is min-max normalized across the full set of evaluated models. Leaderboards average the normalized scores over at least 3 benchmarks per model, to avoid over-weighting any single test.
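The two steps described here, per-benchmark min-max scaling followed by a per-model average with a minimum-coverage filter, can be sketched as follows. This is a reading of the description above, not BenchGecko's actual code; all function names are hypothetical:

```python
# Hypothetical sketch of min-max normalization plus leaderboard averaging.
def min_max(scores: dict[str, float]) -> dict[str, float]:
    """Scale one benchmark's raw scores to 0-100 across all models."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all scores tie
    return {model: 100 * (s - lo) / span for model, s in scores.items()}

def leaderboard(results: dict[str, dict[str, float]],
                min_benchmarks: int = 3) -> dict[str, float]:
    """results maps benchmark name -> {model: raw score}.

    Returns each model's mean normalized score, keeping only models
    evaluated on at least `min_benchmarks` benchmarks.
    """
    normalized: dict[str, list[float]] = {}
    for bench_scores in results.values():
        for model, score in min_max(bench_scores).items():
            normalized.setdefault(model, []).append(score)
    return {
        model: round(sum(vals) / len(vals), 1)
        for model, vals in normalized.items()
        if len(vals) >= min_benchmarks
    }
```

Normalizing per benchmark before averaging is what prevents a benchmark with a wide raw-score spread from dominating the ranking.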

Where does the pricing data come from?

From provider APIs · OpenRouter, OpenAI, Anthropic, Google, xAI, DeepSeek, Mistral, and others. Each snapshot is stored with source attribution on the model's detail page.

Can I cite BenchGecko data?

Yes. Every page includes a Share & Cite bar with APA, MLA, BibTeX, Chicago, and plain-text formats. Attribution is required on the free API plan and recommended everywhere else.

Sources · OpenRouter · Epoch AI · SWE-bench · MCP Registry · Chatbot Arena · HuggingFace · LiveBench · Artificial Analysis · SEAL · Aider
Updated 2h ago · 10+ authoritative sources · zero editorial content