Model card
Command-R7B
Cohere · open-source
§ 01 · Benchmarks
Every benchmark Command-R7B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 33.0% | #150 | — | source ↗ |
| 02 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 18.0% | #154 | — | source ↗ |
| 03 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 22.0% | #156 | — | source ↗ |
| 04 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 14.0% | #158 | — | source ↗ |
| 05 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 22.8% | #159 | — | source ↗ |
| 06 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 27.0% | #162 | — | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 23.0% | #163 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric; #1 means current SOTA. Rows are sorted by rank, then by newest result.
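
The "average" row is consistent with an unweighted mean of the six PLCC subset scores listed above. A minimal sketch, assuming that relationship, which reproduces the 22.8% figure from the per-subset values in the table:

```python
# Per-subset PLCC scores for Command-R7B, taken from the table above (percent).
subset_scores = {
    "geography": 33.0,
    "culture-and-tradition": 18.0,
    "vocabulary": 22.0,
    "art-and-entertainment": 14.0,
    "history": 27.0,
    "grammar": 23.0,
}

# Unweighted mean across subsets (assumption: the "average" row is computed this way).
average = sum(subset_scores.values()) / len(subset_scores)
print(f"average: {average:.1f}%")  # 22.8%
```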
§ 02 · Strengths by area
Where Command-R7B performs best.
§ 04 · Related models
Other Cohere models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
sdadas/PLCC · 7 results · 7 of 7 rows marked verified.
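
If you want to inspect the underlying PLCC results yourself, a minimal sketch is below. It assumes `sdadas/PLCC` is a Hugging Face dataset id; the split and column access shown here are guesses, so check the actual repository before relying on them.

```python
# Hypothetical sketch: pull the PLCC source data from the Hugging Face Hub.
# Assumptions: "sdadas/PLCC" is a dataset repo id; split names below are not confirmed.
from datasets import load_dataset

ds = load_dataset("sdadas/PLCC")   # may require adjusting config/split names
print(ds)                          # show available splits and columns
first_split = next(iter(ds.values()))
print(first_split[0])              # peek at one example record
```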