Codesota · Models · Llama-PLLuM-70B-chat · PLLuM · 16 results · 2 benchmarks
Model card

Llama-PLLuM-70B-chat

PLLuM · open-source
§ 01 · Benchmarks

Every benchmark Llama-PLLuM-70B-chat has a recorded score for.

#  | Benchmark       | Area · Task                                               | Metric                | Value | Rank    | Source
01 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | extraction            | 9.4%  | #12/50  | source ↗
02 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | writing               | 8.1%  | #22/50  | source ↗
03 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | pl-score              | 6.8%  | #25/50  | source ↗
04 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | reasoning             | 5.2%  | #27/50  | source ↗
05 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | coding                | 4.8%  | #28/50  | source ↗
06 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | humanities            | 8.8%  | #30/50  | source ↗
07 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | stem                  | 8.2%  | #30/50  | source ↗
08 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | roleplay              | 6.6%  | #34/50  | source ↗
09 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | math                  | 2.9%  | #41/50  | source ↗
10 | PLCC            | Natural Language Processing · Polish Cultural Competency  | culture-and-tradition | 64.0% | #78/165 | source ↗
11 | PLCC            | Natural Language Processing · Polish Cultural Competency  | history               | 74.0% | #79/165 | source ↗
12 | PLCC            | Natural Language Processing · Polish Cultural Competency  | art-and-entertainment | 49.0% | #91/165 | source ↗
13 | PLCC            | Natural Language Processing · Polish Cultural Competency  | average               | 58.5% | #94/165 | source ↗
14 | PLCC            | Natural Language Processing · Polish Cultural Competency  | geography             | 68.0% | #95/165 | source ↗
15 | PLCC            | Natural Language Processing · Polish Cultural Competency  | vocabulary            | 46.0% | #100/165 | source ↗
16 | PLCC            | Natural Language Processing · Polish Cultural Competency  | grammar               | 50.0% | #112/165 | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric (the total number of competitors follows the slash). #1 in red means current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where Llama-PLLuM-70B-chat actually performs.

Natural Language Processing · 2 benchmarks · avg rank #56.1
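
The avg-rank figure above is simply the arithmetic mean of this model's rank across all 16 benchmark rows in § 01; a quick sketch reproducing it (ranks copied from that table):

```python
# Mean rank of Llama-PLLuM-70B-chat over its 16 scored metrics,
# as listed in the Benchmarks table above.
ranks = [12, 22, 25, 27, 28, 30, 30, 34, 41,   # Polish MT-Bench metrics
         78, 79, 91, 94, 95, 100, 112]          # PLCC metrics

avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank:.1f}")  # prints: avg rank #56.1
```

Note the mean is over ranks, not scores, so it mixes the 50-model MT-Bench-PL leaderboard with the 165-model PLCC leaderboard.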
§ 04 · Related models

Other PLLuM models scored on Codesota.

Llama-PLLuM-70B-chat-250801 · 0 results
Llama-PLLuM-8B-chat · 0 results
PLLuM-12B-chat · 0 results
PLLuM-12B-nc-chat · 0 results
PLLuM-12B-nc-chat-250715 · 0 results
PLLuM-8x7B-chat · 0 results
PLLuM-8x7B-nc-chat · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

SpeakLeash/MT-Bench-PL · 9 results
sdadas/PLCC · 7 results
16 of 16 rows marked verified.