Agentic · Market trends · Snapshot 2026-04-14

One year of OpenRouter: who actually won 2025→2026

OpenRouter's rankings page ships a full year of weekly token data per vendor. We analyzed it. The story is brutal: while benchmark leaderboards talk about Claude, GPT-5, and Gemini, the actual inference market has shifted somewhere else entirely. Chinese open-weight labs went from 15% of tracked flow to more than half in twelve months, and the total market grew 11×.

  • 11.1× market growth (12 months)
  • 39× Chinese-lab token growth
  • 52% Chinese share now (up from 15%)
  • 53 weeks tracked

Vendor market share, week-by-week

Stacked share of total OpenRouter top-9 token volume. This is the "how many tokens" view — it undercounts premium vendors whose tokens cost 10–30× more. See the spend-share chart below for the dollar shape.

[Chart: stacked weekly token share, 2025-04 to 2026-04; vendors: google, anthropic, openai, minimax, deepseek, z-ai, xiaomi, qwen, x-ai, Other]

Vendor dollar share, week-by-week

Same weeks, same vendors, but each vendor's token count is multiplied by its current average blended $/M (computed from the live model catalog weighted by real usage). This is the "how much money" view. Anthropic's token share is 12.3% but its dollar share is 45.8% — that gap is the whole premium-lane thesis.

[Chart: stacked weekly dollar share, 2025-04 to 2026-04; vendors: anthropic, openai, google, z-ai, xiaomi, minimax, qwen, deepseek, x-ai, Other]

Vendor punch-weight: token share vs dollar share

Last 4 weeks. Gap = dollar share minus token share: positive means premium pricing (a bigger slice of dollars than of tokens), negative means a volume play (a bigger slice of tokens than of dollars).

Vendor         Token share   Dollar share   Gap (pts)   Avg blended $/M
google         13.3%         7.4%           -5.9        $1.16
xiaomi         13.0%         9.0%           -4.0        $1.45
qwen           12.7%         4.7%           -8.0        $0.77
anthropic      12.3%         45.8%          +33.4       $7.78
openai         9.8%          24.3%          +14.6       $5.23
minimax        9.5%          1.9%           -7.6        $0.42
deepseek       6.3%          0.9%           -5.4        $0.31
z-ai           6.0%          5.4%           -0.6        $1.88
stepfun        4.8%          0.4%           -4.4        $0.16
x-ai           1.0%          0.3%           -0.6        $0.69
nvidia         0.3%          0.0%           -0.3        $0.21
meta-llama     0.0%          0.0%           0.0
mistralai      0.0%          0.0%           0.0         $0.03
microsoft      0.0%          0.0%           0.0         $0.62
nousresearch   0.0%          0.0%           0.0         $0.20
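The gap column is mechanical: dollar share is token share reweighted by price, and the gap is their difference. A minimal sketch of that arithmetic — vendor names, token counts, and rates below are made up for illustration, not the measured values:

```python
def punch_weight(vendors):
    """vendors: {name: (tokens, blended $/M)} ->
    {name: (token share %, dollar share %, gap in points)}."""
    total_t = sum(t for t, _ in vendors.values())
    total_d = sum(t * r for t, r in vendors.values())
    return {
        name: (round(100 * t / total_t, 1),
               round(100 * t * r / total_d, 1),
               round(100 * t * r / total_d - 100 * t / total_t, 1))
        for name, (t, r) in vendors.items()
    }

# Hypothetical two-vendor market: one premium lane, one volume lane.
market = {"premium-lab": (9.0, 8.00), "volume-lab": (10.0, 0.50)}  # tokens in T
print(punch_weight(market))
```

The premium vendor ends up far above its token weight on dollars, the volume vendor far below — the same shape as the anthropic and minimax rows above.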

Method: per-vendor blended rate is computed from the live OpenRouter catalog weighted by the real usage mix in our app-level breakdown. Historical weeks are priced at the current rate — premium vendors used to be even more expensive (GPT-4 was $30/$60), so older weeks understate the premium-lane dominance rather than overstate it.
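A sketch of that blended-rate step, assuming hypothetical per-model token counts and $/M prices in place of the live catalog and real usage mix:

```python
def blended_rate(models):
    """Usage-weighted average $/M across one vendor's models.

    models: list of (tokens, dollars_per_million) pairs.
    """
    total_tokens = sum(t for t, _ in models)
    return sum(t * rate for t, rate in models) / total_tokens

# Invented vendor with one expensive flagship and one cheap workhorse:
vendor = [
    (3.0e12, 10.00),  # flagship: 3T tokens at $10/M blended
    (9.0e12, 1.00),   # workhorse: 9T tokens at $1/M blended
]
print(round(blended_rate(vendor), 2))  # usage-weighted, not a simple average
```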

What changed in 12 months

Chinese labs: 0 → dominant

Twelve months ago, Chinese labs made up 15% of tracked flow — almost all of it DeepSeek and a little Qwen. Today they're 52%. In absolute tokens that's 1.02T → 39.9T — roughly 39× growth. Xiaomi alone went from nonexistent to 13% share in under a year.

Western incumbents: absolute growth, relative collapse

Google and Anthropic didn't lose volume — both grew several-fold. But the market grew faster around them. Google's share fell from ~37% → ~13%. Anthropic from ~25% → ~12%. Meta, Mistral, and Microsoft vanished from the top-9 entirely. OpenAI held roughly flat at ~10% share. "Losing the market" and "making more money" are the same thing here.

Chinese labs share, weekly

Combined share of xiaomi, qwen, minimax, deepseek, z-ai, stepfun.

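The combined series is just the CN vendors' tokens over each week's total. A sketch with two fabricated weeks (token counts in trillions, invented for illustration):

```python
CN = {"xiaomi", "qwen", "minimax", "deepseek", "z-ai", "stepfun"}

def cn_share(week):
    """week: {vendor: tokens} -> combined Chinese-lab share in percent."""
    total = sum(week.values())
    return 100 * sum(t for v, t in week.items() if v in CN) / total

early = {"google": 2.5, "anthropic": 1.7, "deepseek": 0.4, "qwen": 0.1}
late = {"google": 10.0, "xiaomi": 9.9, "qwen": 9.7, "minimax": 7.2, "anthropic": 9.4}
print(round(cn_share(early), 1), round(cn_share(late), 1))
```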

Biggest share shifts

First 4 weeks of the series vs the last 4 weeks. Growth ratio is absolute tokens, not share.

Vendor         CN   Year-ago share   Now share   Δ points   Year-ago tokens   Now tokens   Absolute growth
google              37.0%            13.3%       -23.7      2.56T             10.15T       4.0×
xiaomi         CN   0.0%             13.0%       +13.0      0                 9.90T        new
qwen           CN   2.2%             12.7%       +10.5      154.1B            9.71T        62×
anthropic           24.7%            12.3%       -12.3      1.71T             9.44T        5.5×
openai              11.4%            9.8%        -1.6       788.1B            7.47T        9.5×
minimax        CN   0.0%             9.5%        +9.5       0                 7.24T        new
deepseek       CN   12.5%            6.3%        -6.2       864.8B            4.84T        5.6×
z-ai           CN   0.0%             6.0%        +6.0       0                 4.58T        new
stepfun        CN   0.0%             4.8%        +4.8       0                 3.66T        new
x-ai                0.4%             1.0%        +0.6       25.2B             735.7B       29×
nvidia              0.0%             0.3%        +0.3       0                 231.2B       new
meta-llama          5.6%             0.0%        -5.6       385.3B            0            -100%
mistralai           2.9%             0.0%        -2.9       197.7B            0            -100%
microsoft           0.8%             0.0%        -0.8       53.0B             0            -100%
nousresearch        0.3%             0.0%        -0.3       17.8B             0            -100%
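The windowing behind this table, and the "absolute growth, relative collapse" effect, can be sketched with fabricated weekly counts (trillions per week): a vendor whose tokens grow over 6× while its share still falls by half.

```python
def window_stats(vendor_weeks, total_weeks, n=4):
    """(share % over first n weeks, share % over last n weeks, absolute growth ratio)."""
    first_v, last_v = sum(vendor_weeks[:n]), sum(vendor_weeks[-n:])
    first_t, last_t = sum(total_weeks[:n]), sum(total_weeks[-n:])
    return (100 * first_v / first_t, 100 * last_v / last_t, last_v / first_v)

# Invented series: the vendor grows, the market grows much faster.
vendor = [1.0, 1.2, 1.5, 1.8, 2.0, 8.0, 9.0, 9.5, 10.0]
total = [4.0, 4.5, 5.0, 5.5, 6.0, 60.0, 70.0, 72.0, 75.0]
early, now, growth = window_stats(vendor, total)
print(f"{early:.1f}% -> {now:.1f}% ({now - early:+.1f} pts), {growth:.1f}x absolute tokens")
```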

Is the market actually following the benchmark leaders?

The short answer: no, and the longer answer has three parts.

1. The market is stratifying, not choosing. Claude Opus 4.6 is the #1 model on our inverted leaderboard by dollar spend — $25M/month across 24 apps — because it's priced at $5/$25 per million. But it's only ~4th by token volume (2.4T), and Anthropic as a vendor is ~12% of total tokens, down from ~25% last year. Both things are true at once: the premium lane still pays Anthropic real money, and the commodity lane has moved decisively elsewhere. Benchmark leaders don't lose revenue — they lose the long tail of workloads they used to own.

2. It's following price, not quality. The vendors gaining share (Qwen, Xiaomi, MiniMax, DeepSeek, Z.ai, StepFun) share one property: aggressive open-weight pricing, often sub-$1/M blended. On the inverted model leaderboard, Qwen3.6 Plus is in 27 of 30 apps and MiMo-V2-Pro handles 5.5T tokens at ~$1.50/M blended — a tiny fraction of what a comparable Claude Opus run would cost.

3. Benchmark-to-spend correlation is weak and inverted at the top. Among models with both token volume and a published benchmark score, the relationship between benchmark rank and market share is closer to anti-correlated. Premium models capture most of the dollar spend because they're priced 10–30× higher, but they don't capture tokens. Agents — when given a free choice through a router — route most tokens to "good enough and 20× cheaper" rather than "best and 20× more expensive".
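"Closer to anti-correlated" can be made concrete as a Spearman rank correlation between benchmark score and token share. A pure-Python sketch on fabricated (score, share) pairs, using the standard no-ties formula; the real join against our benchmark DB is still pending:

```python
def spearman(xs, ys):
    """Spearman rank correlation, assuming no tied values."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

scores = [92, 88, 85, 80, 75]           # invented benchmark scores, best first
shares = [2.0, 3.0, 9.0, 13.0, 12.0]    # invented token shares, %
print(round(spearman(scores, shares), 2))  # strongly negative on this toy data
```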

This cuts two ways. It says benchmark leaderboards overstate real-world adoption for frontier models. It also says the market may be under-weighting quality in agentic workflows where a wrong answer is expensive — we don't have the data yet to distinguish "agent routers are efficient" from "agent routers are cheap and the downstream users pay for the mistakes".

A more rigorous join of benchmark scores × usage is on the roadmap; it requires a clean model-ID match between OpenRouter permaslugs and our benchmark DB. Coming next.

Methodology & related pages

  • Data source: openrouter.ai/rankings RSC stream. 53 weeks of vendor-level tokens, 77 weeks of model-level. Captured 2026-04-14.
  • Top-9 slice: OpenRouter charts the top-9 vendors each week and rolls the rest into "Others". Vendors that fall out of top-9 show 0 in weeks where they were lower than the 9th-place vendor.
  • Chinese labs = Qwen, DeepSeek, MiniMax, Xiaomi, Z.ai (Zhipu), StepFun, MoonshotAI, TNG Tech. NVIDIA Nemotron and Microsoft Phi counted as Western.
  • Snapshot leaderboards: /agentic/openrouter-apps · /agentic/openrouter-models · /agentic/openrouter-categories

Spotted a market trend we missed?

Seeing a model, vendor, or app shift that isn't reflected here? Tell us — we reply within 48 hours and update the analysis.

Tell us what you found →