Codesota · Models · CYFRAGOVPL/Llama-PLLuM-8B-instruct · 5 results · 1 benchmark
Model card

CYFRAGOVPL/Llama-PLLuM-8B-instruct.

CYFRAGOVPL · open-source · 8.03B params
§ 01 · Benchmarks

Every benchmark CYFRAGOVPL/Llama-PLLuM-8B-instruct has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | phraseology | 3.5% | #34/93 | — | source ↗ |
| 02 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | average | 2.8% | #70/93 | — | source ↗ |
| 03 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | sentiment | 3.2% | #71/93 | — | source ↗ |
| 04 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | language-understanding | 2.9% | #74/93 | — | source ↗ |
| 05 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | tricky-questions | 1.7% | #76/93 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric (the total number of competitors appears after the slash). Rank #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

How CYFRAGOVPL/Llama-PLLuM-8B-instruct performs in each benchmarked area.

Natural Language Processing · 1 benchmark · avg rank #65.0
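The average rank above is the arithmetic mean of the five per-metric ranks listed in the benchmarks table. A minimal sketch of that calculation (illustrative only, not Codesota's actual aggregation code):

```python
# Per-metric ranks for CYFRAGOVPL/Llama-PLLuM-8B-instruct on CPTU-Bench,
# taken from the benchmarks table above.
ranks = [34, 70, 71, 74, 76]

# Area-level average rank = arithmetic mean of the per-metric ranks.
avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank:.1f}")  # avg rank #65.0
```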
§ 04 · Related models

Other CYFRAGOVPL models scored on Codesota.

CYFRAGOVPL/PLLuM-12B-nc-chat
12.2B params · 0 results
CYFRAGOVPL/PLLuM-12B-nc-instruct
12.2B params · 0 results
CYFRAGOVPL/pllum-12b-nc-instruct-250715
12.2B params · 0 results
Llama-PLLuM-70B-chat
70.6B params · 0 results
Llama-PLLuM-70B-instruct
70.6B params · 0 results
Llama-PLLuM-8B-chat
8.03B params · 0 results
PLLuM-12B-chat
12.2B params · 0 results
PLLuM-12B-instruct
12.2B params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

SpeakLeash/CPTU-Bench · 5 results · 5 of 5 rows marked verified.