Model card
Kimi K2.5
Moonshot AI · API · Undisclosed params
§ 01 · Benchmarks
Every benchmark Kimi K2.5 has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | SWE-bench | Computer Code · Code Generation | resolve-rate-agentic | 76.8% | #10 | 2026-01-01 | |
| 02 | SWE-bench Verified | Computer Code · Code Generation | resolve-rate | 76.8% | #10 | — | source ↗ |
| 03 | MMLU-Pro | Reasoning · Commonsense Reasoning | accuracy | 87.1% | #11 | 2026-04-20 | source ↗ |
| 04 | SWE-bench | Computer Code · Code Generation | resolve-rate | 76.8% | #12 | 2026-01-01 | source ↗ |
| 05 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 76.8% | #13 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark + metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
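The ranking rule above can be sketched as follows. This is a minimal illustration, not the site's actual implementation; the model names and scores are hypothetical placeholders, with only Kimi K2.5's 76.8% taken from the table.

```python
from collections import defaultdict

def rank_models(results):
    """Rank models within each (benchmark, metric) group by score.

    results: list of (model, benchmark, metric, score) tuples.
    Returns {(model, benchmark, metric): rank}, where rank 1 is
    the highest score in that group (the current SOTA).
    """
    groups = defaultdict(list)
    for model, benchmark, metric, score in results:
        groups[(benchmark, metric)].append((model, score))

    ranks = {}
    for (benchmark, metric), entries in groups.items():
        # Higher score -> better (lower) rank number.
        for pos, (model, _) in enumerate(
            sorted(entries, key=lambda e: -e[1]), start=1
        ):
            ranks[(model, benchmark, metric)] = pos
    return ranks

# Illustrative data: only kimi-k2.5's score comes from the table above.
demo = [
    ("model-a", "SWE-bench Verified", "resolve-rate", 80.2),
    ("kimi-k2.5", "SWE-bench Verified", "resolve-rate", 76.8),
    ("model-b", "SWE-bench Verified", "resolve-rate", 74.0),
]
print(rank_models(demo)[("kimi-k2.5", "SWE-bench Verified", "resolve-rate")])  # → 2
```

Grouping by benchmark + metric matters because the same benchmark can be scored under different metrics (e.g. resolve-rate vs. resolve-rate-agentic above), and those produce separate leaderboards.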
§ 02 · Strengths by area
Where Kimi K2.5 performs strongly.
§ 03 · Papers
1 paper with results for Kimi K2.5.
- 2023-10-10 · Computer Code · 1 result
  SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
  Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other Moonshot AI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
| Source | Results |
|---|---|
| moonshot-agent | 1 |
| moonshot-blog | 1 |
| llm-stats | 1 |
| swebench-leaderboard | 1 |
| editorial | 1 |
2 of 5 rows marked verified · first result 2026-01-01, latest 2026-04-20.