Agentic · Model adoption · Snapshot 2026-04-14

Which models do AI agents actually use?

OpenRouter publishes the top-20 model mix for every app that routes through it. We invert that data: for every model in the catalog, we now know exactly how many apps rely on it, how much token volume they drive through it, and — matched against the live pricing catalog — how much they spend on it. Four rankings below.
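The inversion itself is a plain group-by over (app, model) pairs. A minimal sketch of the idea in TypeScript; the `AppMix` shape and field names are illustrative, not OpenRouter's actual leaderboard schema:

```typescript
// Minimal sketch of the inversion. Each app contributes its token-sorted
// top-20 model mix; we aggregate per model. Shapes are illustrative, not
// OpenRouter's actual JSON schema.
interface AppMix {
  app: string;
  models: { model: string; tokens: number }[]; // sorted desc by tokens
}

interface ModelAgg {
  tokens: number;      // total 30d tokens across all apps
  apps: Set<string>;   // breadth of adoption
  topSlots: number;    // apps where this model holds the #1 slot
}

function invert(apps: AppMix[]): Map<string, ModelAgg> {
  const byModel = new Map<string, ModelAgg>();
  for (const { app, models } of apps) {
    models.forEach(({ model, tokens }, rank) => {
      const agg =
        byModel.get(model) ?? { tokens: 0, apps: new Set<string>(), topSlots: 0 };
      agg.tokens += tokens;
      agg.apps.add(app);
      if (rank === 0) agg.topSlots += 1; // first entry = the app's top token sink
      byModel.set(model, agg);
    });
  }
  return byModel;
}
```

Every ranking below is a different sort over the same aggregated rows.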

  • Distinct models in play: 99
  • Apps analysed: 30
  • Total tokens (30d): 35.56T
  • Total cost: $74.62M

Four different winners

  • Most dollars: Anthropic: Claude Opus 4.6 — $25.10M/month across 24 apps. Premium pricing compounds even at modest volume.
  • Most tokens: Xiaomi: MiMo-V2-Pro — 5.49T through 15 apps. High-volume cheap models absorb the long tail.
  • Most adopted: Qwen: Qwen3.6 Plus — shows up in 27 of 30 apps. "Everyone tries it at least a little" ≠ "anyone uses it primarily".
  • Most #1 slots: MiniMax: MiniMax M2.5 — the top model in 4 of 30 apps. This is the closest proxy for "what agents actually lean on when it matters".

Grouped by vendor

Same data, rolled up one level: every model from each vendor summed into one row. Blended $/M is weighted by real usage within each vendor, so premium vendors show the effective price per token they actually charge.
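Blended $/M means total dollars over total megatokens, not an average of list prices. A sketch, with an illustrative row shape:

```typescript
// Usage-weighted blended $/M: total spend divided by total megatokens.
// A premium model moves a vendor's blend only in proportion to the traffic
// it actually carries. Row shape is illustrative.
interface ModelRow {
  tokens: number; // raw token count over the window
  cost: number;   // dollars over the same window
}

function blendedPerMillion(models: ModelRow[]): number {
  const tokens = models.reduce((sum, m) => sum + m.tokens, 0);
  const cost = models.reduce((sum, m) => sum + m.cost, 0);
  return tokens === 0 ? 0 : cost / (tokens / 1e6);
}
```

For example, anthropic's $46.55M over 5.98T tokens blends to roughly $7.78/M, matching the table.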

| # | Vendor | Models | Apps | Tokens | Monthly cost | Avg blended $/M | #1 slots |
|---|---|---|---|---|---|---|---|
| 1 | anthropic | 8 | 27 | 5.98T | $46.55M | $7.78 | 6 |
| 2 | xiaomi | 3 | 16 | 6.19T | $9.00M | $1.45 | 2 |
| 3 | z-ai | 6 | 24 | 3.20T | $6.01M | $1.88 | 0 |
| 4 | openai | 21 | 24 | 747.5B | $3.90M | $5.22 | 4 |
| 5 | google | 13 | 29 | 2.08T | $2.42M | $1.16 | 3 |
| 6 | qwen | 13 | 21 | 3.05T | $2.35M | $0.77 | 4 |
| 7 | minimax | 2 | 20 | 5.41T | $2.29M | $0.42 | 4 |
| 8 | stepfun | 1 | 16 | 3.99T | $623K | $0.16 | 1 |
| 9 | moonshotai | 3 | 20 | 643.7B | $491K | $0.76 | 0 |
| 10 | deepseek | 6 | 24 | 1.46T | $453K | $0.31 | 4 |
| 11 | nvidia | 2 | 15 | 1.17T | $248K | $0.21 | 1 |
| 12 | arcee-ai | 1 | 11 | 604.7B | $240K | $0.40 | 0 |
| 13 | x-ai | 3 | 6 | 49.4B | $34K | $0.69 | 0 |
| 14 | amazon | 1 | 1 | 10.5B | $10K | $0.92 | 0 |
| 15 | mistralai | 3 | 2 | 196.3B | $5K | $0.03 | 1 |
| 16 | sao10k | 2 | 1 | 9.3B | $645 | $0.07 | 0 |
| 17 | aion-labs | 1 | 1 | 533.7M | $547 | $1.02 | 0 |
| 18 | microsoft | 1 | 1 | 864.1M | $536 | $0.62 | 0 |
| 19 | anthracite-org | 1 | 1 | 38.7M | $138 | $3.56 | 0 |
| 20 | thedrummer | 2 | 1 | 236.7M | $85 | $0.36 | 0 |
| 21 | nousresearch | 2 | 1 | 185.3M | $37 | $0.20 | 0 |
| 22 | kwaipilot | 1 | 1 | 58.2M | $32 | $0.55 | 0 |
| 23 | gryphe | 1 | 1 | 47.3M | $3 | $0.06 | 0 |
| 24 | openrouter | 2 | 13 | 744.8B | $0 | $0.00 | 0 |

Top 30 by monthly cost

Where the dollars actually go. Summed across every app that uses each model.

| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 2 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 3 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 4 | Z.ai: GLM 5 Turbo | z-ai | $1.20 | $4.00 | 2.84T | $5.64M | 7 | 0 | 2 |
| 5 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 6 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 7 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 8 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 9 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 10 | MiniMax: MiniMax M2.7 | minimax | $0.30 | $1.20 | 1.72T | $947K | 19 | 0 | 4 |
| 11 | Anthropic: Claude Haiku 4.5 | anthropic | $1.00 | $5.00 | 444.5B | $942K | 13 | 0 | 2 |
| 12 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 13 | Anthropic: Claude Opus 4.5 | anthropic | $5.00 | $25.00 | 83.1B | $881K | 10 | 0 | 0 |
| 14 | Google: Gemini 3.1 Pro Preview Custom Tools | google | $2.00 | $12.00 | 141.9B | $681K | 22 | 0 | 0 |
| 15 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 16 | MoonshotAI: Kimi K2.5 | moonshotai | $0.38 | $1.72 | 629.5B | $477K | 19 | 0 | 0 |
| 17 | Xiaomi: MiMo-V2-Omni | xiaomi | $0.40 | $2.00 | 466.0B | $395K | 3 | 0 | 1 |
| 18 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 19 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 20 | Arcee AI: Trinity Large Thinking | arcee-ai | $0.22 | $0.85 | 604.7B | $240K | 12 | 0 | 0 |
| 21 | Z.ai: GLM 5 | z-ai | $0.72 | $2.30 | 197.8B | $230K | 16 | 0 | 1 |
| 22 | Google: Gemini 2.5 Flash | google | $0.30 | $2.50 | 243.7B | $223K | 10 | 0 | 1 |
| 23 | Google: Gemini 2.5 Pro | google | $1.25 | $10.00 | 46.8B | $173K | 8 | 0 | 0 |
| 24 | Google: Gemini 2.5 Flash Lite | google | $0.10 | $0.40 | 591.4B | $109K | 3 | 0 | 2 |
| 25 | Z.ai: GLM 5.1 | z-ai | $0.95 | $3.15 | 57.7B | $90K | 13 | 0 | 0 |
| 26 | OpenAI: GPT-5.4 Mini | openai | $0.75 | $4.50 | 33.6B | $60K | 6 | 1 | 1 |
| 27 | OpenAI: GPT-4.1 Mini | openai | $0.40 | $1.60 | 49.5B | $36K | 3 | 0 | 0 |
| 28 | Xiaomi: MiMo-V2-Flash | xiaomi | $0.09 | $0.29 | 234.8B | $34K | 6 | 0 | 0 |
| 29 | DeepSeek: DeepSeek V3 0324 | deepseek | $0.20 | $0.77 | 94.6B | $34K | 4 | 0 | 2 |
| 30 | Z.ai: GLM 4.7 | z-ai | $0.39 | $1.75 | 42.2B | $33K | 4 | 0 | 0 |
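The Monthly cost column is spend reconstructed from the two price columns, which requires each model's input/output token split; the split is not in the published mix, so a real pipeline has to carry it separately. A hypothetical sketch of the arithmetic:

```typescript
// Dollars for one model from its token split and list prices (both quoted
// in $ per million tokens). The in/out split is an assumed input here: the
// app mix publishes combined tokens only.
function monthlyCost(
  inTokens: number,
  outTokens: number,
  inPerM: number,
  outPerM: number,
): number {
  return (inTokens / 1e6) * inPerM + (outTokens / 1e6) * outPerM;
}
```

At Claude Opus 4.6's $5.00/$25.00 pricing, a 9:1 input-heavy million tokens still costs $7.00: output is 10% of the volume but over a third of the bill.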

Top 30 by raw token volume

How much work (in tokens) each model carries. Cheap models lead here.

| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 2 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 3 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 4 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 5 | Z.ai: GLM 5 Turbo | z-ai | $1.20 | $4.00 | 2.84T | $5.64M | 7 | 0 | 2 |
| 6 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 7 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 8 | MiniMax: MiniMax M2.7 | minimax | $0.30 | $1.20 | 1.72T | $947K | 19 | 0 | 4 |
| 9 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 10 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 11 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 12 | openrouter/hunter-alpha | openrouter | — | — | 741.6B | $0 | 13 | 0 | 1 |
| 13 | MoonshotAI: Kimi K2.5 | moonshotai | $0.38 | $1.72 | 629.5B | $477K | 19 | 0 | 0 |
| 14 | Arcee AI: Trinity Large Thinking | arcee-ai | $0.22 | $0.85 | 604.7B | $240K | 12 | 0 | 0 |
| 15 | Google: Gemini 2.5 Flash Lite | google | $0.10 | $0.40 | 591.4B | $109K | 3 | 0 | 2 |
| 16 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 17 | Xiaomi: MiMo-V2-Omni | xiaomi | $0.40 | $2.00 | 466.0B | $395K | 3 | 0 | 1 |
| 18 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 19 | Anthropic: Claude Haiku 4.5 | anthropic | $1.00 | $5.00 | 444.5B | $942K | 13 | 0 | 2 |
| 20 | Google: Gemini 2.5 Flash | google | $0.30 | $2.50 | 243.7B | $223K | 10 | 0 | 1 |
| 21 | Xiaomi: MiMo-V2-Flash | xiaomi | $0.09 | $0.29 | 234.8B | $34K | 6 | 0 | 0 |
| 22 | Z.ai: GLM 5 | z-ai | $0.72 | $2.30 | 197.8B | $230K | 16 | 0 | 1 |
| 23 | Mistral: Mistral Nemo | mistralai | $0.02 | $0.04 | 195.7B | $5K | 1 | 1 | 1 |
| 24 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 25 | Google: Gemini 3.1 Pro Preview Custom Tools | google | $2.00 | $12.00 | 141.9B | $681K | 22 | 0 | 0 |
| 26 | DeepSeek: DeepSeek V3 0324 | deepseek | $0.20 | $0.77 | 94.6B | $34K | 4 | 0 | 2 |
| 27 | Anthropic: Claude Opus 4.5 | anthropic | $5.00 | $25.00 | 83.1B | $881K | 10 | 0 | 0 |
| 28 | Z.ai: GLM 4.5 Air | z-ai | $0.13 | $0.85 | 60.6B | $20K | 4 | 0 | 0 |
| 29 | Z.ai: GLM 5.1 | z-ai | $0.95 | $3.15 | 57.7B | $90K | 13 | 0 | 0 |
| 30 | OpenAI: GPT-4.1 Mini | openai | $0.40 | $1.60 | 49.5B | $36K | 3 | 0 | 0 |

Top 30 by number of apps using the model

Breadth of adoption — models that appear in the top-20 mix of the most apps.

| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 2 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 3 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 4 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 5 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 6 | Google: Gemini 3.1 Pro Preview Custom Tools | google | $2.00 | $12.00 | 141.9B | $681K | 22 | 0 | 0 |
| 7 | MiniMax: MiniMax M2.7 | minimax | $0.30 | $1.20 | 1.72T | $947K | 19 | 0 | 4 |
| 8 | MoonshotAI: Kimi K2.5 | moonshotai | $0.38 | $1.72 | 629.5B | $477K | 19 | 0 | 0 |
| 9 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 10 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 11 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 12 | Z.ai: GLM 5 | z-ai | $0.72 | $2.30 | 197.8B | $230K | 16 | 0 | 1 |
| 13 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 14 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 15 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 16 | openrouter/hunter-alpha | openrouter | — | — | 741.6B | $0 | 13 | 0 | 1 |
| 17 | Anthropic: Claude Haiku 4.5 | anthropic | $1.00 | $5.00 | 444.5B | $942K | 13 | 0 | 2 |
| 18 | Z.ai: GLM 5.1 | z-ai | $0.95 | $3.15 | 57.7B | $90K | 13 | 0 | 0 |
| 19 | Arcee AI: Trinity Large Thinking | arcee-ai | $0.22 | $0.85 | 604.7B | $240K | 12 | 0 | 0 |
| 20 | Google: Gemini 2.5 Flash | google | $0.30 | $2.50 | 243.7B | $223K | 10 | 0 | 1 |
| 21 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 22 | Anthropic: Claude Opus 4.5 | anthropic | $5.00 | $25.00 | 83.1B | $881K | 10 | 0 | 0 |
| 23 | Google: Gemini 2.5 Pro | google | $1.25 | $10.00 | 46.8B | $173K | 8 | 0 | 0 |
| 24 | Google: Gemini 3.1 Flash Lite Preview | google | $0.25 | $1.50 | 41.9B | $25K | 8 | 0 | 0 |
| 25 | Z.ai: GLM 5 Turbo | z-ai | $1.20 | $4.00 | 2.84T | $5.64M | 7 | 0 | 2 |
| 26 | Xiaomi: MiMo-V2-Flash | xiaomi | $0.09 | $0.29 | 234.8B | $34K | 6 | 0 | 0 |
| 27 | OpenAI: GPT-5.4 Mini | openai | $0.75 | $4.50 | 33.6B | $60K | 6 | 1 | 1 |
| 28 | Google: Nano Banana Pro (Gemini 3 Pro Image Preview) | google | $2.00 | $12.00 | 1.8B | $9K | 6 | 0 | 1 |
| 29 | xAI: Grok 4.1 Fast | x-ai | $0.20 | $0.50 | 32.3B | $9K | 5 | 0 | 0 |
| 30 | DeepSeek: DeepSeek V3.2 Exp | deepseek | $0.27 | $0.41 | 28.5B | $9K | 5 | 0 | 0 |

Models that rank #1 in an app's mix (14 models)

Strongest 'primary driver' signal. Being #1 in even one app beats showing up in a long tail.

| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 2 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 3 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 4 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 5 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 6 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 7 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 8 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 9 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 10 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 11 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 12 | Mistral: Mistral Nemo | mistralai | $0.02 | $0.04 | 195.7B | $5K | 1 | 1 | 1 |
| 13 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 14 | OpenAI: GPT-5.4 Mini | openai | $0.75 | $4.50 | 33.6B | $60K | 6 | 1 | 1 |

Methodology

  • Built by inverting the app leaderboard JSON — for every (app, model) pair in the top-20 slice, aggregate by model.
  • Permaslugs with date suffixes (anthropic/claude-4.6-opus-20260205) are matched back to the base catalog ID via vendor-scoped token-bag overlap so dollar totals aren't missed.
  • "#1 in N apps" means the model holds the top token slot for that app's 30-day window. It does not mean the model is objectively best — just that this particular app routes the most tokens to it.
  • Free models (Nemotron 3 Super, MiniMax M2.5 free tier) contribute to token and adoption rankings but $0 to spend. Stealth/alpha lanes without listed pricing are counted in adoption but not dollars.
  • Regenerate with: npx tsx scripts/fetch-openrouter-apps.ts
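The permaslug matching in the second bullet can be pictured as a vendor-scoped bag-of-tokens overlap. This is an illustrative reconstruction, not the script's actual code; the tokenisation, the 8-digit date filter, and the scoring rule are all assumptions:

```typescript
// Sketch of the permaslug → catalog-ID match: split both slugs into token
// bags, drop 8-digit date suffixes, then pick the same-vendor catalog ID
// whose tokens are best covered by the permaslug's bag. All heuristics here
// are illustrative assumptions.
function tokenBag(slug: string): Set<string> {
  return new Set(
    slug
      .toLowerCase()
      .split(/[\/\-_.]/)
      .filter((t) => t && !/^\d{8}$/.test(t)), // drop dates like 20260205
  );
}

function matchPermaslug(permaslug: string, catalog: string[]): string | null {
  const vendor = permaslug.split("/")[0];
  const bag = tokenBag(permaslug);
  let best: string | null = null;
  let bestScore = 0;
  for (const id of catalog) {
    if (id.split("/")[0] !== vendor) continue; // vendor-scoped
    const other = tokenBag(id);
    let hits = 0;
    for (const t of other) if (bag.has(t)) hits++;
    const score = hits / Math.max(other.size, 1); // fraction of ID tokens covered
    if (score > bestScore) {
      bestScore = score;
      best = id;
    }
  }
  return best;
}
```

Under these assumptions, anthropic/claude-4.6-opus-20260205 resolves to a hypothetical catalog ID like anthropic/claude-opus-4.6 even though the token order differs and the date is gone.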

Model getting ignored?

Think a model deserves a closer look or isn't priced correctly? Tell us and we'll re-run the match against our catalog.

Tell us what you found →