Model card
GLM-5
Zhipu AI · open-source · 130B params
§ 01 · Benchmarks
Every benchmark GLM-5 has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | React Native Evals | Mobile Development · React Native Code Generation | animation-satisfaction | 66.0% | #4 | — | source ↗ |
| 02 | SWE-Bench | Computer Code · Code Generation | resolve-rate-agentic | 77.8% | #7 | 2026-01-01 | source ↗ |
| 03 | React Native Evals | Mobile Development · React Native Code Generation | requirement-satisfaction | 74.2% | #8 | — | source ↗ |
| 04 | React Native Evals | Mobile Development · React Native Code Generation | navigation-satisfaction | 86.7% | #8 | — | source ↗ |
| 05 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 77.8% | #9 | 2026-01-01 | source ↗ |
| 06 | React Native Evals | Mobile Development · React Native Code Generation | async-state-satisfaction | 73.8% | #9 | — | source ↗ |
| 07 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 77.8% | #11 | — | source ↗ |
| 08 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 82.0% | #16 | — | source ↗ |
| 09 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 91.0% | #21 | — | source ↗ |
| 10 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 88.0% | #28 | — | source ↗ |
| 11 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 80.0% | #33 | — | source ↗ |
| 12 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 81.0% | #37 | — | source ↗ |
| 13 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 72.0% | #39 | — | source ↗ |
| 14 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 66.0% | #47 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark + metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
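The rank rule described above (position among all models scored on the same benchmark + metric, higher score first) can be sketched as follows. This is an illustrative reconstruction, not Codesota's actual code; the `rank_for` helper and the example scores are hypothetical.

```python
# Hypothetical sketch of how a per-benchmark rank column could be computed:
# for one (benchmark, metric) pair, sort every model's score descending and
# take this model's 1-based position.
def rank_for(scores: dict[str, float], model: str) -> int:
    """scores maps model name -> metric value (higher is better)."""
    ordered = sorted(scores.values(), reverse=True)
    return ordered.index(scores[model]) + 1

# Example with made-up competitor scores on one benchmark + metric:
scores = {"GLM-5": 77.8, "model-a": 80.1, "model-b": 75.0}
print(rank_for(scores, "GLM-5"))  # → 2
```

A real leaderboard would also need a tie-breaking rule (e.g. by result date) for models with identical scores.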
§ 02 · Strengths by area
Where GLM-5 performs strongest, by area.
§ 03 · Papers
1 paper with results for GLM-5.
- 2023-10-10 · Computer Code · 1 result
  SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
  Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other Zhipu AI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
- sdadas/PLCC · 7 results
- Callstack Incubator · 4 results
- zhipu-agent · 1 result
- swebench-leaderboard · 1 result
- editorial · 1 result
14 of 14 rows marked verified.