Cheapest 200K context LLMs
Every LLM with a 200,000+ token context window, ranked by input price per 1M tokens.
Models: 40
Cheapest: $0.00/M
Min context: 200K tokens
What this page is
200K context is the modern default for frontier models. This page lists every priced 200K+ model, cheapest first. For long docs that fit in 200K, this tier offers the best balance of price, recall, and model diversity.
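Per-1M-token pricing converts to a per-request cost with simple arithmetic. A minimal sketch (the prices and token counts below are illustrative, not taken from this list):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate one request's cost from per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Illustrative: summarize a 150K-token document into 2K output tokens
# at a hypothetical $0.15/M input and $0.60/M output.
cost = request_cost(150_000, 2_000, 0.15, 0.60)
print(f"${cost:.4f}")  # → $0.0237
```

Note that at long-context scale the input side dominates: here the 150K input tokens account for $0.0225 of the $0.0237 total, which is why this list ranks by input price.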
Ranked by input price
200K+ context models, cheapest first.
Top 3 cheapest 200K context LLMs
Cheapest 200K
Auto Router
input
Varies by routed model
output
Varies by routed model
Auto Router delivers 2.0M context with pricing that varies by the model it routes to. A flexible option for long docs without paying a fixed 1M-context premium.
Runner up · 200K
Elephant
input
$0.00/M
output
$0.00/M
Elephant delivers 262K context at $0.00/M input. A great sweet spot for long docs without paying the 1M-context premium.
Third · 200K
Free Models Router
input
$0.00/M
output
$0.00/M
Free Models Router delivers 200K context at $0.00/M input. A great sweet spot for long docs without paying the 1M-context premium.
The price gap · cheapest vs most expensive
Cheapest
Auto Router
Varies by routed model
$ per 1M input tokens
Why the gap
At the 200K tier, premium pricing buys stronger recall deep into the context window and better overall reasoning. For retrieval-heavy RAG workloads, cheap models often match premium ones.
Most expensive
Llama 4 Maverick
$0.15/M
$ per 1M input tokens
Frequently asked questions
Is 200K context enough for most long-document work?
Yes. 200K tokens handles a full novel, a medium-sized codebase, or 100+ pages of PDFs. Reach for 1M only when you truly need whole-repo or multi-book context.
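To check whether a given document actually fits in 200K, a common rough heuristic is ~4 characters per token for English text (real tokenizers vary, so treat this as an estimate, not a guarantee):

```python
def fits_in_context(text: str, context_window: int = 200_000,
                    chars_per_token: float = 4.0,
                    reserve_for_output: int = 8_000) -> bool:
    """Rough fit check using the ~4-chars-per-token heuristic,
    leaving headroom for the model's output tokens."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens + reserve_for_output <= context_window

# A typical 90K-word novel is roughly 500K characters ≈ 125K tokens.
novel = "x" * 500_000
print(fits_in_context(novel))  # → True under these assumptions
```

For an exact count, tokenize with the target model's own tokenizer before committing to a tier.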