Model card
CYFRAGOVPL/pllum-12b-nc-chat-250715
open-source · 12.2B params
§ 01 · Benchmarks
All benchmarks for which pllum-12b-nc-chat-250715 has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | sentiment | 4.4% | #8 | — | source ↗ |
| 02 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | language-understanding | 4.0% | #19 | — | source ↗ |
| 03 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | average | 3.7% | #29 | — | source ↗ |
| 04 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | phraseology | 3.5% | #36 | — | source ↗ |
| 05 | Polish EQ-Bench | Natural Language Processing · Polish Emotional Intelligence | eq-score | 55.2% | #43 | — | source ↗ |
| 06 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | tricky-questions | 2.9% | #44 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark + metric. #1 marks the current state of the art. Rows are sorted by rank, then by newest result.
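A quick sanity check on the table above (a sketch, not part of the card's tooling): the CPTU-Bench "average" row appears to be the arithmetic mean of the four sub-metric rows, and the scores below are copied directly from the table.

```python
# Scores taken from the CPTU-Bench rows in the table above (percent).
scores = {
    "sentiment": 4.4,
    "language-understanding": 4.0,
    "phraseology": 3.5,
    "tricky-questions": 2.9,
}

# Arithmetic mean of the four sub-metrics.
average = sum(scores.values()) / len(scores)
print(round(average, 1))  # 3.7, matching the reported "average" row
```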
§ 04 · Related models
Other CYFRAGOVPL models scored on Codesota.
CYFRAGOVPL/Llama-PLLuM-8B-instruct · 8.03B params · 0 results
CYFRAGOVPL/PLLuM-12B-nc-chat · 12.2B params · 0 results
CYFRAGOVPL/PLLuM-12B-nc-instruct · 12.2B params · 0 results
CYFRAGOVPL/pllum-12b-nc-instruct-250715 · 12.2B params · 0 results
CYFRAGOVPL/Llama-PLLuM-70B-chat · 70.6B params · 0 results
CYFRAGOVPL/Llama-PLLuM-70B-instruct · 70.6B params · 0 results
CYFRAGOVPL/Llama-PLLuM-8B-chat · 8.03B params · 0 results
CYFRAGOVPL/PLLuM-12B-chat · 12.2B params · 0 results
§ 05 · Sources & freshness
Where these numbers come from.
SpeakLeash/CPTU-Bench · 5 results
SpeakLeash/Polish-EQ-Bench · 1 result
6 of 6 rows marked verified.