Model card

Llama-4-Scout-17B-16E-Instruct

meta-llama · open-source · 109B params · 13 results · 2 benchmarks
§ 01 · Benchmarks

Every benchmark Llama-4-Scout-17B-16E-Instruct has a recorded score for.

# | Benchmark | Area · Task | Metric | Value | Rank | Date | Source
01 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polemo2-in | 89.3% | #1/490 |  | source ↗
02 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | phraseology | 3.9% | #15/93 |  | source ↗
03 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | cbd | 38.5% | #16/490 |  | source ↗
04 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | belebele | 91.2% | #20/490 |  | source ↗
05 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | sentiment | 4.1% | #22/93 |  | source ↗
06 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | average | 3.7% | #24/93 |  | source ↗
07 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | dyk | 69.7% | #29/489 |  | source ↗
08 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | language-understanding | 3.8% | #33/93 |  | source ↗
09 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | tricky-questions | 3.2% | #37/93 |  | source ↗
10 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | eq-bench | 59.3% | #41/299 |  | source ↗
11 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | average | 64.2% | #42/491 |  | source ↗
12 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polqa-open-book | 89.2% | #75/489 |  | source ↗
13 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | ppc | 75.9% | #98/490 |  | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric; the number after the slash is the total number of competitors. #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
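As an illustration only, here is a minimal Python sketch of that ranking and sorting rule. The record layout, the names, and the assumption that a higher score always ranks first are mine, not Codesota's:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Result:
    model: str
    benchmark: str   # e.g. "Open PL LLM Leaderboard"
    metric: str      # e.g. "polemo2-in"
    value: float     # score in percent; assumed higher-is-better
    scored_on: date

def rank_of(target: Result, results: list[Result]) -> tuple[int, int]:
    """Position of `target` among all models scored on the same
    benchmark + metric pair; returns (rank, field size), shown as #rank/size."""
    field = sorted(
        (r for r in results
         if r.benchmark == target.benchmark and r.metric == target.metric),
        key=lambda r: r.value,
        reverse=True,  # assumption: higher score ranks first
    )
    return field.index(target) + 1, len(field)

def table_order(rows: list[tuple[Result, int]]) -> list[tuple[Result, int]]:
    """Sort table rows by rank ascending, breaking ties by newest result."""
    return sorted(rows, key=lambda row: (row[1], -row[0].scored_on.toordinal()))
```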
§ 02 · Strengths by area

How Llama-4-Scout-17B-16E-Instruct actually performs, grouped by benchmark area.

Natural Language Processing · 2 benchmarks · avg rank #34.8 (a quick check of this figure follows below)
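The #34.8 figure is consistent with the § 01 table: it is the arithmetic mean of the 13 rank positions listed there.

```python
# Rank positions from the § 01 table, rows 01-13.
ranks = [1, 15, 16, 20, 22, 24, 29, 33, 37, 41, 42, 75, 98]

avg_rank = sum(ranks) / len(ranks)  # 453 / 13 ≈ 34.85
print(f"avg rank #{avg_rank:.1f}")  # prints: avg rank #34.8
```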
§ 04 · Related models

Other meta-llama models tracked on Codesota.

Llama-3.3-70B-Instruct · 70.6B params · 1 result
Llama-2-7b-chat-hf · 0 results
Llama-2-7b-hf · 0 results
Llama-3.2-1B · 0 results
Llama-3.2-1B-Instruct · 1.24B params · 0 results
Llama-3.2-3B · 0 results
Llama-3.2-3B-Instruct · 3.21B params · 0 results
Llama-4-Scout-17B-16E · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

speakleash/open_pl_llm_leaderboard · 8 results
SpeakLeash/CPTU-Bench · 5 results

13 of 13 rows marked verified.