Codesota · Models · GLM-4.7 · Zhipu AI · 7 results · 1 benchmark
Model card

GLM-4.7

Zhipu AI · open-source
§ 01 · Benchmarks

Every benchmark GLM-4.7 has a recorded score for.

# · Benchmark · Area · Task · Metric · Value · Rank · Source
01 · PLCC · Natural Language Processing · Polish Cultural Competency · geography · 88.0% · #27/165 · source ↗
02 · PLCC · Natural Language Processing · Polish Cultural Competency · culture-and-tradition · 79.0% · #39/165 · source ↗
03 · PLCC · Natural Language Processing · Polish Cultural Competency · history · 85.0% · #40/165 · source ↗
04 · PLCC · Natural Language Processing · Polish Cultural Competency · average · 73.5% · #50/165 · source ↗
05 · PLCC · Natural Language Processing · Polish Cultural Competency · art-and-entertainment · 64.0% · #50/165 · source ↗
06 · PLCC · Natural Language Processing · Polish Cultural Competency · grammar · 66.0% · #61/165 · source ↗
07 · PLCC · Natural Language Processing · Polish Cultural Competency · vocabulary · 59.0% · #75/165 · source ↗
The Rank column shows this model’s position versus all other models scored on the same benchmark and metric (total competitors after the slash); #1 indicates the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where GLM-4.7 actually performs.

Natural Language Processing · 1 benchmark · avg rank #48.9
§ 04 · Related models

Other Zhipu AI models scored on Codesota.

GLM-5 · 130B params · 3 results
GLM-4.5 · 2 results
GLM-4.5-Air · 1 result
GLM-4.6 · 1 result
GLM-4.7 · 1 result
GLM-4.7-Flash · 1 result
GLM-OCR · 1 result
§ 05 · Sources & freshness

Where these numbers come from.

sdadas/PLCC · 7 results · 7 of 7 rows marked verified.