How GMI Works
The Gecko Mindshare Index aggregates attention signals from seven public sources into a single score. Here is exactly how it works, what it measures, and what it does not.
GMI Formula
Seven weighted signals combine into a single 0-100 score
Reddit: Comment volume, reply depth, and upvote velocity across r/LocalLLaMA, r/MachineLearning, and r/artificial, weighted by comment quality and thread depth.
Twitter/X: Tweet volume, engagement rate, KOL amplification, and hashtag velocity, filtered for signal over marketing noise.
Hacker News: Front page appearances, point velocity, and comment depth; the site's strong engineering bias provides a technical quality signal.
GitHub: Weekly star acceleration, fork count, issue activity, and contributor growth. Measures what developers actually use.
arXiv: Paper submissions mentioning the model, citation velocity, and abstract references. A leading indicator of future relevance.
News media: Coverage from TechCrunch, The Verge, Bloomberg, Reuters, and other outlets, weighted by publication tier.
Package registries: Download velocity from npm and PyPI. A direct measurement of developer integration and usage.
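The seven signals above can be sketched as a weighted sum of normalized inputs. This is an illustrative reconstruction, not the actual GMI implementation: the signal names are hypothetical identifiers, each raw signal is assumed to be pre-normalized to the 0-1 range, and every weight except the stated 5% for package downloads is a placeholder.

```python
# Hypothetical sketch of GMI aggregation. Only the 5% package-download
# weight comes from the docs; all other weights are illustrative.
SIGNAL_WEIGHTS = {
    "reddit": 0.20,        # illustrative
    "twitter_x": 0.20,     # illustrative
    "hacker_news": 0.15,   # illustrative
    "github": 0.20,        # illustrative
    "arxiv": 0.10,         # illustrative
    "news_media": 0.10,    # illustrative
    "packages": 0.05,      # stated: 5% weight for npm/PyPI downloads
}

def gmi_score(signals: dict[str, float]) -> float:
    """Combine signals (each normalized to 0-1) into a 0-100 GMI score.

    Missing signals default to 0.0 rather than being re-weighted,
    so a model absent from a channel is penalized for it.
    """
    total = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                for name in SIGNAL_WEIGHTS)
    return round(100 * total, 1)

score = gmi_score({
    "reddit": 0.8, "twitter_x": 0.6, "hacker_news": 0.7,
    "github": 0.9, "arxiv": 0.3, "news_media": 0.5, "packages": 0.4,
})  # -> 66.5
```

Because the weights sum to 1.0 and each input is bounded by 1.0, the output is guaranteed to land in the 0-100 range.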
Update Frequency
GMI scores update daily, and historical snapshots are recorded weekly. Data is sourced from seven channels, each with its own polling interval, from every 2 hours (Twitter/X) to daily (GitHub, arXiv). The Pulse Score, Weather Report, and Power Rankings refresh each day at 06:00 UTC.
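The schedule above can be expressed as a small config plus a helper that finds the next daily refresh. Only the two stated bounds (2 hours for Twitter/X, daily for GitHub and arXiv) and the 06:00 UTC refresh come from the docs; the remaining intervals are illustrative placeholders.

```python
from datetime import datetime, time, timedelta, timezone

# Per-channel polling intervals in hours. Only twitter_x, github, and
# arxiv are stated in the docs; the rest are illustrative.
POLL_INTERVAL_HOURS = {
    "twitter_x": 2,    # stated: every 2 hours
    "reddit": 6,       # illustrative
    "hacker_news": 6,  # illustrative
    "news_media": 12,  # illustrative
    "packages": 12,    # illustrative
    "github": 24,      # stated: daily
    "arxiv": 24,       # stated: daily
}

def next_daily_refresh(now: datetime) -> datetime:
    """Next 06:00 UTC refresh of the Pulse Score, Weather Report,
    and Power Rankings, given a timezone-aware current time."""
    candidate = datetime.combine(now.astimezone(timezone.utc).date(),
                                 time(6, 0), tzinfo=timezone.utc)
    return candidate if candidate > now else candidate + timedelta(days=1)
```

For example, at 05:00 UTC the next refresh is 06:00 the same day; at 07:00 UTC it rolls over to 06:00 the following day.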
Limitations
- GMI measures attention, not quality. High mindshare does not mean a model is better.
- English-language sources are overrepresented. Chinese and other non-English AI communities (WeChat, Zhihu, Baidu Tieba) are not yet tracked.
- Twitter/X data may reflect marketing spend, not organic interest. Paid promotions and bot activity are filtered where possible but not eliminated.
- Sentiment analysis uses automated NLP. Sarcasm, irony, and nuanced criticism may be misclassified.
- Developer adoption (npm/PyPI downloads) carries only a 5% weight. Actual production usage is not directly observable.
Data Sources
One card per source, listing its API, polling frequency, and signal type