Live monitoring bootstrap in progress

The AI Economy Uptime Index

warming up

A single live number for the health of the AI economy · a composite across every tracked provider, re-pinged every five minutes. Anything below 100 means something somewhere is broken.

Providers up
24h window
Incidents 24h
0
all clear
Median latency
across all providers
Data freshness
since last ping

Status at a glance · green = healthy, yellow = degraded, red = down

30-day timeline
Collecting pings · the full grid fills in over 30 days of monitoring.

Status + uptime + latency per provider · click through for detail

Provider health
Monitoring bootstrap · first ping cycle in progress.

p50 · p95 · p99 across all providers · 24h window

Latency distribution
Across all providers · 24h
Awaiting first cycle of pings.

Every incident gets a permanent URL · citable for research

Incident archive
Nothing tracked yet · every incident we record here gets a permanent URL.

Transparency · every data source and every cron job

Data freshness
Every source we track · schedule · last refresh · freshness target
  • OpenRouter models · daily 05:00 UTC · last refresh: never · target < 25h
  • Pricing snapshot · daily 04:00 UTC · last refresh: never · target < 25h
  • Score history · daily · last refresh: never · target < 25h
  • GitHub stars · daily · last refresh: never · target < 25h
  • MCP registry · daily · last refresh: never · target < 25h
  • Provider pings · every 5 minutes · last refresh: never · target < 10m
Cron health
BenchGecko scrapers · endpoint · last run · schedule
  • daily-prices · /api/cron/daily-prices · last run: never · daily 04:00 UTC
  • daily-data · /api/cron/daily-data · last run: never · daily 05:00 UTC
  • rebuild · /api/cron/rebuild · last run: never · daily 06:00 UTC
  • ping-providers · /api/cron/ping-providers · last run: never · every 5 minutes

Methodology, cadence, and how to cite

What is the AI Economy Uptime Index?

A single composite number between 0 and 100 that measures the live health of the AI economy. We ping a curated set of provider control-plane endpoints every five minutes and compute the percentage of providers currently reachable. Anything below 100 means something somewhere is broken.
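As a sketch, the composite is just the share of providers whose latest ping succeeded. The function and type names below are illustrative, not BenchGecko's actual code:

```typescript
type PingResult = { provider: string; reachable: boolean };

// Composite index: percentage of tracked providers whose latest ping succeeded.
function computeUptimeIndex(results: PingResult[]): number {
  if (results.length === 0) return 100; // bootstrap: no data yet, assume healthy
  const up = results.filter((r) => r.reachable).length;
  // Round to one decimal so 11 of 12 providers up reads as 91.7, not 91.66666…
  return Math.round((up / results.length) * 1000) / 10;
}
```

With all providers up the index reads exactly 100, which is why any value below 100 signals a problem somewhere.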

How often is the index updated?

Every 5 minutes. A Vercel cron job hits one endpoint per tracked provider, records the latency, and writes a new index snapshot to Supabase. The /status page is cached for 5 minutes and tag-revalidated by the cron after each run.
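A hedged sketch of what one snapshot row might contain, assuming one ping per provider per cycle · the Supabase write and tag revalidation are elided, and all names are illustrative. It uses a nearest-rank percentile for the p50/p95/p99 figures shown on this page:

```typescript
type Ping = { provider: string; ok: boolean; latencyMs: number };

// Nearest-rank percentile on an ascending-sorted array.
function percentile(sorted: number[], p: number): number {
  if (sorted.length === 0) return 0;
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// One snapshot per cron run: index plus latency percentiles over successful pings.
function buildSnapshot(pings: Ping[], at: Date) {
  const ok = pings.filter((p) => p.ok);
  const lat = ok.map((p) => p.latencyMs).sort((a, b) => a - b);
  return {
    takenAt: at.toISOString(),
    index: pings.length ? Math.round((ok.length / pings.length) * 1000) / 10 : 100,
    p50: percentile(lat, 50),
    p95: percentile(lat, 95),
    p99: percentile(lat, 99),
  };
}
```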

Which providers are monitored?

OpenAI, Anthropic, Google Gemini, Groq, Together, Fireworks, DeepSeek, Mistral, xAI, Cohere, Cerebras, and OpenRouter. The list expands over time. Every provider is pinged from the same Vercel edge region so the numbers are comparable.

What counts as an incident?

Three consecutive failed pings for a provider automatically open an incident with a permanent slug. Two consecutive successful pings auto-resolve it. Every incident gets its own citable URL at /status/incident/[slug] so researchers can reference historical outages.
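The open/resolve rule can be sketched as a small per-provider state machine, assuming streak counters are carried between cycles · names are illustrative, not the actual BenchGecko schema:

```typescript
type ProviderState = { incidentOpen: boolean; failStreak: number; okStreak: number };

// Fold one ping result into the provider's incident state.
function applyPing(state: ProviderState, ok: boolean): ProviderState {
  const failStreak = ok ? 0 : state.failStreak + 1;
  const okStreak = ok ? state.okStreak + 1 : 0;
  let incidentOpen = state.incidentOpen;
  if (!incidentOpen && failStreak >= 3) incidentOpen = true; // open after 3 straight failures
  if (incidentOpen && okStreak >= 2) incidentOpen = false;   // auto-resolve after 2 straight successes
  return { incidentOpen, failStreak, okStreak };
}
```

Requiring a streak in both directions keeps a single dropped packet from opening an incident, and a single lucky ping from closing one.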

Do you call paid endpoints?

No. We use the public /v1/models listing endpoint on every provider, which returns sub-kilobyte responses and does not cost anything to call unauthenticated. Some providers return 401 without auth · we treat that as reachable because it proves the control plane is up.
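A minimal sketch of that reachability rule · a 2xx or a 401 counts as up, while a timeout or network error counts as down. How other status codes are classified is an assumption here, not confirmed behavior:

```typescript
// status is null when the request timed out or failed at the network layer.
function isReachable(status: number | null): boolean {
  if (status === null) return false; // timeout or network error: down
  if (status >= 200 && status < 300) return true;
  return status === 401; // control plane answered, it just wants auth
}
```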

Can I export the incident history?

Yes. The Export incidents CSV button at the top of this page downloads the full archive with incident slug, provider, severity, status, start, end, duration, and peak latency. Attribution required · link back to benchgecko.ai/status.
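For reference, one exported row might serialize like this · the column names and quoting rules are assumptions matching the fields listed above, not the exact export code:

```typescript
type IncidentRow = {
  slug: string; provider: string; severity: string; status: string;
  start: string; end: string; durationMin: number; peakLatencyMs: number;
};

// Serialize the archive to CSV, quoting fields that contain commas,
// quotes, or newlines (per common CSV conventions).
function toCsv(rows: IncidentRow[]): string {
  const header = "slug,provider,severity,status,start,end,duration_min,peak_latency_ms";
  const esc = (v: string | number) => {
    const s = String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const lines = rows.map((r) =>
    [r.slug, r.provider, r.severity, r.status, r.start, r.end, r.durationMin, r.peakLatencyMs]
      .map(esc)
      .join(",")
  );
  return [header, ...lines].join("\n");
}
```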

Keep exploring the BenchGecko graph