Beta
Live · 12 providers monitored · last check 1m ago

The AI Economy Uptime Index

0.00 pts vs 24h ago

A single live number for the health of the AI economy · composite across every tracked provider, repinged every five minutes. Lower than 100 means something somewhere is broken.

Providers up · 10/12 · 24h window
Incidents (24h) · 1 · active now
Median latency · 106ms · across all providers
Data freshness · 1m · since last ping

Pattern-match at a glance · green = healthy, yellow = degraded, red = down

Last 30 days
Provider uptime grid
Legend: up · degraded · down · no data
Anthropic
Cerebras
Cohere
DeepSeek
Fireworks
Google DeepMind
Groq
Mistral AI
OpenAI
OpenRouter
Together
xAI

Status + uptime + latency per provider · click through for detail

p50 · p95 · p99 across all providers · 24h window

Latency distribution
Across all providers · 24h
p50 (median): 110ms
p95: 477ms
p99 (tail): 986ms
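For reference, figures like these can be reproduced from the raw per-ping latencies with the nearest-rank method. This is only a sketch — the production aggregation may interpolate or weight per provider:

```typescript
// Nearest-rank percentile over raw ping latencies (ms).
// Sorts a copy, then picks the sample at rank ceil(p% * n).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative values, not real ping data:
percentile([95, 102, 110, 130, 480, 990], 50); // → 110
```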

Every incident gets a permanent URL · citable for research

Transparency · every data source and every cron job

Data freshness
Every source we track
  • OpenRouter models
    daily 05:00 UTC
    6h ago
    target < 25h
  • Pricing snapshot
    daily 04:00 UTC
    7h ago
    target < 25h
  • Score history
    daily
    never
    target < 25h
  • GitHub stars
    daily
    never
    target < 25h
  • MCP registry
    daily
    12d ago
    target < 25h
  • Provider pings
    every 5 minutes
    1m ago
    target < 10m
Cron health
BenchGecko scrapers
  • daily-prices
    /api/cron/daily-prices
    7h ago
    daily 04:00 UTC
  • daily-data
    /api/cron/daily-data
    6h ago
    daily 05:00 UTC
  • rebuild
    /api/cron/rebuild
    never
    daily 06:00 UTC
  • ping-providers
    /api/cron/ping-providers
    1m ago
    every 5 minutes

Methodology, cadence, and how to cite

What is the AI Economy Uptime Index?

A single composite number between 0 and 100 that measures the live health of the AI economy. We ping a curated set of provider control-plane endpoints every five minutes and compute the percentage of providers currently reachable. Lower than 100 means something somewhere is broken.
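In code, the computation described above is a short reduction over the latest ping batch. A minimal sketch, assuming a simple in-memory shape for ping results — the real Supabase schema is not shown here:

```typescript
// Illustrative shape; field names are assumptions, not the real schema.
type Ping = { provider: string; reachable: boolean };

// The index as described: share of providers currently reachable,
// scaled to 0–100 and rounded to two decimals.
function uptimeIndex(pings: Ping[]): number {
  if (pings.length === 0) return 0;
  const up = pings.filter((p) => p.reachable).length;
  return Math.round((up / pings.length) * 10000) / 100;
}
```

With 10 of 12 providers reachable, this yields 83.33 — any value below 100 signals at least one provider down.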

How often is the index updated?

Every 5 minutes. A Vercel cron job hits one endpoint per tracked provider, records the latency, and writes a new index snapshot to Supabase. The /status page is cached for 5 minutes and tag-revalidated by the cron after each run.

Which providers are monitored?

OpenAI, Anthropic, Google Gemini, Groq, Together, Fireworks, DeepSeek, Mistral, xAI, Cohere, Cerebras, and OpenRouter. The list expands as we add more providers. Every provider is pinged from the same Vercel edge region so the numbers are comparable.

What counts as an incident?

Three consecutive failed pings for a provider automatically open an incident with a permanent slug. Two consecutive successful pings auto-resolve it. Every incident gets its own citable URL at /status/incident/[slug] so researchers can reference historical outages.
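The open/resolve thresholds above can be sketched as a small state machine fed one ping result at a time. Field names are illustrative; slug generation and persistence are omitted:

```typescript
// Per-provider incident state: whether an incident is open, plus
// counters for consecutive failed and successful pings.
type IncidentState = { open: boolean; fails: number; oks: number };

// Threshold logic as described: 3 consecutive failed pings open an
// incident, 2 consecutive successful pings resolve it.
function step(state: IncidentState, pingOk: boolean): IncidentState {
  if (pingOk) {
    const oks = state.oks + 1;
    if (state.open && oks >= 2) return { open: false, fails: 0, oks: 0 };
    return { ...state, oks, fails: 0 };
  }
  const fails = state.fails + 1;
  if (!state.open && fails >= 3) return { open: true, fails, oks: 0 };
  return { ...state, fails, oks: 0 };
}
```

Counting only *consecutive* results means a single flaky ping neither opens nor closes an incident.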

Do you call paid endpoints?

No. We use the public /v1/models listing endpoint on every provider, which returns sub-kilobyte responses and does not cost anything to call unauthenticated. Some providers return 401 without auth · we treat that as reachable because it proves the control plane is up.
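A sketch of that reachability rule, assuming network errors surface as a null status and that 5xx responses count as down (the FAQ only specifies the 401 case explicitly):

```typescript
// "Reachable" per the FAQ: any successful response, plus 401 on
// providers that require auth for /v1/models — an auth error from the
// control plane still proves it is up. A network error or timeout
// (status null) counts as down. Treating 5xx as down is an assumption.
function isReachable(status: number | null): boolean {
  if (status === null) return false; // fetch threw: DNS, TLS, timeout
  return (status >= 200 && status < 300) || status === 401;
}
```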

Can I export the incident history?

Yes. The Export incidents CSV button at the top of this page downloads the full archive with incident slug, provider, severity, status, start, end, duration, and peak latency. Attribution required · link back to benchgecko.ai/status.

Keep exploring the BenchGecko graph