Beta

Changelog

Every change in the AI economy · tracked.

Anthropic officially released Claude Mythos Preview, which tops every major benchmark · 93.9% SWE-bench Verified, 94.5% GPQA Diamond, 97.6% USAMO, and 64.7% HLE with tools. It supports adaptive thinking at maximum effort with up to 1M tokens of context.

New model · High impact · anthropic

OpenAI released GPT-5.4 Nano · a lightweight model targeting edge deployments with a 128K context window at $0.10/$0.40 per 1M tokens.

New model · High impact · openai

OpenAI shipped GPT-5.4 Mini alongside the Nano variant · a mid-tier option with stronger reasoning at $0.50/$2.00 per 1M tokens.

New model · Medium impact · openai
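Per-1M-token pricing makes it easy to estimate request costs. Below is a minimal sketch comparing the two new OpenAI tiers at the prices quoted above; the model names as dictionary keys and the token counts are illustrative, not from any official SDK.

```python
# Rough per-request cost comparison for the new OpenAI tiers, using the
# per-1M-token prices quoted in the entries above. Token counts are
# made-up example values, not measurements.

PRICES = {                      # (input $/1M tokens, output $/1M tokens)
    "gpt-5.4-nano": (0.10, 0.40),
    "gpt-5.4-mini": (0.50, 2.00),
}

def request_cost(model, input_tokens, output_tokens):
    """Dollar cost of one request at the listed per-1M-token rates."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# Example: a 20K-token prompt with a 2K-token completion.
print(f"{request_cost('gpt-5.4-nano', 20_000, 2_000):.4f}")  # → 0.0028
print(f"{request_cost('gpt-5.4-mini', 20_000, 2_000):.4f}")  # → 0.0140
```

At this workload the Mini tier costs five times the Nano tier, so the cheaper model pays off whenever its output quality is acceptable.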

Anthropic cut Claude Opus 4.5 input pricing from $8.00 to $5.00 per 1M tokens, matching the Opus 4.6 price point.

$8.00/M input → $5.00/M input
Price cut · High impact · anthropic

Mistral AI released Mistral Small 4 · a compact model at $0.15/$0.60 per 1M tokens with strong multilingual performance.

New model · Medium impact · mistral

Google DeepMind's Gemini 2.5 Pro posted a new MMLU score of 94.1%, moving into second place behind GPT-5.4 Pro.

92.8% → 94.1%
Score update · High impact · google

xAI launched the multi-agent variant of Grok 4.20 · the first production multi-agent system tracked on BenchGecko.

New agent · High impact · xai

DeepSeek reduced V3.2 output pricing from $0.55 to $0.38 per 1M tokens, undercutting most mid-tier competitors.

$0.55/M output → $0.38/M output
Price cut · Medium impact · deepseek

The MCP registry grew by 7 servers this week · 4 in dev-tools, 2 in database, and 1 in cloud. Quality scores range from 55 to 72.

MCP servers · Low impact

Anthropic launched Claude Sonnet 4.6 at $3/$15 per 1M tokens · positioned as the default coding and analysis model in the Claude lineup.

New model · High impact · anthropic

Anthropic's new flagship · Claude Opus 4.6 · ships at $5/$25 per 1M tokens with extended thinking and improved agentic capabilities.

New model · High impact · anthropic

BenchGecko now tracks the OTIS Mock AIME 2024-2025 benchmark · a math reasoning evaluation with 14 models scored so far.

New benchmark · Medium impact

Anthropic raised Claude Opus 4.1 pricing from $12/$60 to $15/$75 per 1M tokens, reflecting its position as the legacy premium tier.

$12/$60 per 1M → $15/$75 per 1M
Price increase · Medium impact · anthropic

xAI released Grok 4.20 Beta at $2/$6 per 1M tokens · a major upgrade with improved code generation and multi-step reasoning.

New model · High impact · xai

Inception joined BenchGecko with Mercury 2 · their first model available via OpenRouter at competitive pricing.

New provider · Medium impact · inception

DeepSeek's R1 0528 model scored 87.2% on the GPQA Diamond benchmark, the highest score among open-weight models.

87.2%
Score update · Medium impact · deepseek

Three new AI/ML-focused MCP servers were registered · including integrations for model monitoring and prompt management.

MCP servers · Low impact

OpenAI deprecated the GPT-4o Audio Preview endpoint. Existing integrations will continue for 90 days before shutdown.

Deprecated · Medium impact · openai

Mistral AI reduced Medium 3.1 input pricing from $0.60 to $0.40 per 1M tokens · now the cheapest medium-tier model from Mistral.

$0.60/M input → $0.40/M input
Price cut · Low impact · mistral

DeepSeek launched V3.2 Speciale · a fine-tuned variant optimized for long-context tasks at $0.40/$1.20 per 1M tokens.

New model · Medium impact · deepseek

WeirdML · a creative reasoning benchmark testing unusual pattern matching · is now tracked with 8 models scored.

New benchmark · Low impact

NVIDIA shipped Nemotron 3 Super · a 120B parameter MoE model (12B active) at $0.10/$0.40 per 1M tokens. A free variant is also available.

New model · Medium impact · nvidia
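The "120B parameters, 12B active" figure reflects mixture-of-experts routing: a router selects a small subset of experts per token, so only a fraction of the total parameters run on any forward pass. The toy sketch below illustrates top-k routing in general; the expert counts and scores are invented for illustration and say nothing about Nemotron's actual architecture.

```python
# Toy illustration of mixture-of-experts (MoE) routing: only a few experts
# run per token, which is why a 120B-total model can have just 12B "active"
# parameters. All numbers here are illustrative, not NVIDIA's design.

TOTAL_EXPERTS = 10      # hypothetical expert count
ACTIVE_EXPERTS = 1      # experts selected per token (~1/10 of params run)

def route(token_scores):
    """Return the indices of the top-k experts for one token's router scores."""
    ranked = sorted(range(len(token_scores)), key=lambda i: -token_scores[i])
    return ranked[:ACTIVE_EXPERTS]

scores = [0.1, 0.9, 0.3, 0.2, 0.05, 0.4, 0.15, 0.25, 0.35, 0.12]
print(route(scores))  # → [1]: the highest-scoring expert handles this token
```

Because compute scales with active rather than total parameters, this is also why such models can be priced near small-model rates.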

Google DeepMind confirmed Flash Lite pricing · the cheapest Gemini model to date, targeting high-volume production workloads.

$0.10/$0.40 per 1M
Price cut · Medium impact · google

Mistral AI shipped their flagship Large 3 model (December 2025 checkpoint) at $0.50/$1.50 per 1M tokens with 128K context.

New model · Medium impact · mistral

xAI's dedicated coding agent · Grok Code Fast 1 · entered the SWE-bench leaderboard at $0.20/$1.50 per 1M tokens.

New agent · Medium impact · xai

Anthropic's Claude Sonnet 4.5 achieved 91.7% on MMLU · a strong result for a mid-tier model, surpassing several flagship competitors.

91.7%
Score update · Medium impact · anthropic

Largest weekly MCP registry growth · 12 servers covering search, finance, communication, database, and dev-tools categories.

MCP servers · Low impact

OpenAI released GPT-5.4 Pro · their most capable model yet, targeting enterprise and research use cases at premium pricing.

New model · High impact · openai

Alongside the Pro variant, OpenAI shipped the standard GPT-5.4 · positioned as the successor to GPT-5.3 Chat for general use.

New model · High impact · openai

xAI deprecated Grok 3 Mini · users are directed to Grok 4.1 Fast as the recommended replacement at $0.20/$0.50 per 1M tokens.

Deprecated · Low impact · xai

NVIDIA cut Nemotron Super 49B pricing to $0.10/$0.40 per 1M tokens · making it the cheapest 49B-class model available.

$0.20/$0.80 per 1M → $0.10/$0.40 per 1M
Price cut · Low impact · nvidia

Liquid joined BenchGecko with their LFM2-24B model · a 24B parameter mixture-of-experts architecture.

New provider · Low impact · liquid

MiniMax shipped M2.7 · a competitive mid-tier model with strong multilingual benchmarks and 128K context.

New model · Low impact · minimax

xAI's Grok 4 achieved 89.4% on the GPQA Diamond benchmark · a new high for the Grok family.

89.4%
Score update · Medium impact · xai

BenchGecko added LAMBADA · a language modeling benchmark measuring word prediction in long-range contexts · with 22 models scored.

New benchmark · Low impact

Google DeepMind reduced Gemini 2.5 Flash output pricing from $3.50 to $2.50 per 1M tokens, improving its cost-performance ratio.

$3.50/M output → $2.50/M output
Price cut · Medium impact · google

Inception released Mercury 2 · their second-generation model with improved reasoning capabilities, available via OpenRouter.

New model · Low impact · inception

Alibaba shipped Qwen3.5-Flash · a lightweight model targeting fast inference at competitive pricing for the Asian market.

New model · Medium impact · alibaba

Five new MCP servers registered this week, with a focus on financial data integrations and authentication providers.

MCP servers · Low impact