Context: 33K tokens (~16 books)
Input $/1M: $0.01
Output $/1M: $0.02
Type: text
License: Open Source
Benchmarks: 0 tested
About
LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI's LFM2 family, built for fast, high-quality inference on edge hardware. It has 8.3B total parameters but activates only ~1.5B per token, delivering strong quality while keeping compute and memory usage low, which makes it well suited to phones, tablets, and laptops.
No benchmark data available yet.
BenchGecko API model ID: lfm2-8b-a1b
Specifications
- Type: text
- Context: 33K tokens (~16 books)
- Released: Oct 2025
- License: Open Source
- Status: Active
- Cost / Message: ~$0.000
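The "~$0.000" per-message figure follows directly from the listed token prices. A minimal sketch of the arithmetic, assuming a hypothetical message of 500 input and 200 output tokens (the token counts are illustrative assumptions; the per-million prices are from the listing above):

```python
# Per-million-token prices from the listing above.
INPUT_PRICE_PER_M = 0.01   # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 0.02  # $ per 1M output tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one message at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A hypothetical 500-in / 200-out message:
cost = message_cost(500, 200)
print(f"${cost:.6f}")  # $0.000009
```

At these rates a typical message costs well under a hundredth of a cent, which is why the listing rounds it to ~$0.000.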
Frequently Asked Questions
LFM2-8B-A1B is an open-source text AI model by Liquid AI, released in October 2025. Context window: 33K tokens.