The AI Economy Uptime Index
A single live number for the health of the AI economy · a composite across every tracked provider, re-pinged every five minutes. Anything below 100 means something, somewhere, is broken.
30 day timeline
Pattern match at a glance · green healthy, yellow degraded, red down
Provider health
Status + uptime + latency per provider · click through for detail
Latency distribution
p50 · p95 · p99 across all providers · 24h window
Incident archive
Every incident gets a permanent URL · citable for research
We eat our own dog food
Transparency · every data source and every cron job
- OpenRouter models · daily 05:00 UTC · last run 6h ago · target < 25h
- Pricing snapshot · daily 04:00 UTC · last run 7h ago · target < 25h
- Score history · daily · never run · target < 25h
- GitHub stars · daily · never run · target < 25h
- MCP registry · daily · last run 12d ago · target < 25h
- Provider pings · every 5 minutes · last run 1m ago · target < 10m
- daily-prices · /api/cron/daily-prices · last run 7h ago · daily 04:00 UTC
- daily-data · /api/cron/daily-data · last run 6h ago · daily 05:00 UTC
- rebuild · /api/cron/rebuild · never run · daily 06:00 UTC
- ping-providers · /api/cron/ping-providers · last run 1m ago · every 5 minutes
Frequently asked
Methodology, cadence, and how to cite
What is the AI Economy Uptime Index?
A single composite number between 0 and 100 that measures the live health of the AI economy. We ping a curated set of provider control-plane endpoints every five minutes and compute the percentage of providers currently reachable. Anything below 100 means something, somewhere, is broken.
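Concretely, the composite can be sketched as the share of reachable providers. The types, function name, and rounding below are illustrative assumptions, not the production code:

```typescript
// Illustrative sketch of the composite index: the percentage of
// providers whose latest ping was reachable, rounded to two decimals.
// Names and the empty-set behavior are assumptions, not BenchGecko's code.
type ProviderPing = { provider: string; reachable: boolean };

function uptimeIndex(pings: ProviderPing[]): number {
  if (pings.length === 0) return 100; // no data yet: report healthy (an assumption)
  const up = pings.filter((p) => p.reachable).length;
  return Math.round((up / pings.length) * 10000) / 100;
}
```

With 11 of 12 providers reachable, an index computed this way reads 91.67; only a fully green board shows 100.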
How often is the index updated?
Every 5 minutes. A Vercel cron job hits one endpoint per tracked provider, records the latency, and writes a new index snapshot to Supabase. The /status page is cached for 5 minutes and tag-revalidated by the cron after each run.
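One pass of that cron can be sketched as follows. The fetcher is injected so the latency bookkeeping is visible and testable (pass the global fetch in production); the result shape is an assumption, not the real Supabase schema:

```typescript
// Hypothetical sketch of one cron step: ping a provider endpoint,
// time the round trip, and build a snapshot row. The row shape and
// names are illustrative, not the actual implementation.
type PingResult = { provider: string; ok: boolean; latencyMs: number };

async function pingProvider(
  provider: string,
  url: string,
  fetcher: (url: string) => Promise<{ status: number }>,
): Promise<PingResult> {
  const start = Date.now();
  try {
    const res = await fetcher(url);
    // Any HTTP answer, even 401, proves the control plane responded;
    // treating 5xx as down is an assumption inferred from this page.
    return { provider, ok: res.status < 500, latencyMs: Date.now() - start };
  } catch {
    // Network error or timeout: no answer at all.
    return { provider, ok: false, latencyMs: Date.now() - start };
  }
}
```

After every provider is pinged, the cron writes the snapshot and revalidates the /status cache tag so the page shows the new number on its next request.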
Which providers are monitored?
OpenAI, Anthropic, Google Gemini, Groq, Together, Fireworks, DeepSeek, Mistral, xAI, Cohere, Cerebras, and OpenRouter. The list expands as we add more. Every provider is pinged from the same Vercel edge region so the numbers are comparable.
What counts as an incident?
Three consecutive failed pings for a provider automatically open an incident with a permanent slug. Two consecutive successful pings auto-resolve it. Every incident gets its own citable URL at /status/incident/[slug] so researchers can reference historical outages.
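Those thresholds amount to a small per-provider state machine. A minimal sketch, with hypothetical names:

```typescript
// Sketch of the open/resolve rule: three consecutive failures open an
// incident, two consecutive successes resolve it. Names are illustrative.
type IncidentState = { open: boolean; failStreak: number; okStreak: number };

const initialState: IncidentState = { open: false, failStreak: 0, okStreak: 0 };

function applyPing(s: IncidentState, pingOk: boolean): IncidentState {
  const failStreak = pingOk ? 0 : s.failStreak + 1;
  const okStreak = pingOk ? s.okStreak + 1 : 0;
  let open = s.open;
  if (!open && failStreak >= 3) open = true; // open after 3 straight failures
  if (open && okStreak >= 2) open = false;   // resolve after 2 straight recoveries
  return { open, failStreak, okStreak };
}
```

Keeping both streak counters in the state means a single flaky ping resets the failure count without opening anything, which matches the "consecutive" wording above.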
Do you call paid endpoints?
No. We use the public /v1/models listing endpoint on every provider, which returns sub-kilobyte responses and is free to call unauthenticated. Some providers return 401 without auth · we treat that as reachable because it proves the control plane is up.
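That rule reduces to a tiny predicate. The status-code cutoffs below are assumptions inferred from this description, not a confirmed implementation detail:

```typescript
// Sketch of the reachability rule: any HTTP answer, including
// 401 Unauthorized, proves the control plane responded. A null status
// (timeout / network error) does not; treating 5xx as down is an
// assumption, since only the 401 case is documented above.
function isReachable(status: number | null): boolean {
  if (status === null) return false; // no answer at all
  return status < 500;               // 2xx/3xx/4xx all count as reachable
}
```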
Can I export the incident history?
Yes. The Export incidents CSV button at the top of this page downloads the full archive with incident slug, provider, severity, status, start, end, duration, and peak latency. Attribution required · link back to benchgecko.ai/status.
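Under stated assumptions about field names and column order, the export can be sketched as:

```typescript
// Hypothetical shape of the exported CSV. The header mirrors the columns
// listed above, but the exact field names and units are assumptions.
type IncidentRow = {
  slug: string; provider: string; severity: string; status: string;
  start: string; end: string; durationMin: number; peakLatencyMs: number;
};

function incidentsToCsv(rows: IncidentRow[]): string {
  const header = "slug,provider,severity,status,start,end,duration_min,peak_latency_ms";
  const body = rows.map((r) =>
    [r.slug, r.provider, r.severity, r.status, r.start, r.end, r.durationMin, r.peakLatencyMs].join(","),
  );
  return [header, ...body].join("\n");
}
```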
See also
Keep exploring the BenchGecko graph