Codesota · Models · Llama-3.1-70B
Meta · 7 results · 1 benchmark
Model card

Llama-3.1-70B

Meta · open-source
§ 01 · Benchmarks

Every benchmark Llama-3.1-70B has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 68.0% | #100/165 | | source ↗ |
| 02 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 42.0% | #113/165 | | source ↗ |
| 03 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 58.0% | #120/165 | | source ↗ |
| 04 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 47.8% | #120/165 | | source ↗ |
| 05 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 41.0% | #123/165 | | source ↗ |
| 06 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 44.0% | #137/165 | | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 34.0% | #139/165 | | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric (the total number of competitors appears after the slash). A rank of #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
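The PLCC "average" row is consistent with the unweighted mean of the six category scores in the table; a minimal sketch (assuming Codesota aggregates this way, which is an inference from the numbers, not documented behavior):

```python
# Assumption: the PLCC "average" metric is the unweighted mean of the
# six category scores listed in the benchmark table above.
scores = {
    "history": 68.0,
    "art-and-entertainment": 42.0,
    "geography": 58.0,
    "culture-and-tradition": 41.0,
    "grammar": 44.0,
    "vocabulary": 34.0,
}

average = sum(scores.values()) / len(scores)
print(f"average: {average:.1f}%")  # → average: 47.8%
```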
§ 02 · Strengths by area

Where Llama-3.1-70B actually performs.

Natural Language Processing
1 benchmark · avg rank #121.7
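The avg rank figure matches the arithmetic mean of the seven PLCC ranks from the § 01 table; a minimal sketch (assuming a plain unweighted mean, which is an inference from the displayed value):

```python
# Assumption: "avg rank" is the unweighted mean of this model's rank
# across every metric row in the benchmark table (§ 01).
plcc_ranks = [100, 113, 120, 120, 123, 137, 139]

avg_rank = sum(plcc_ranks) / len(plcc_ranks)
print(f"avg rank #{avg_rank:.1f}")  # → avg rank #121.7
```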
§ 04 · Related models

Other Meta models scored on Codesota.

DeiT-B Distilled · 86M params · 2 results · 1 SOTA
Llama 3 70B · 8 results
Llama 3.1 405B · 6 results
Llama-4-Maverick · 400B total / 17B active (128 experts) params · 6 results
Llama 3.1 70B · 4 results
Code Llama 34B · unknown params · 2 results
ConvNeXt V2 Huge · 650M params · 2 results
CodeLlama 70B · 70B params · 1 result
§ 05 · Sources & freshness

Where these numbers come from.

sdadas/PLCC · 7 results
7 of 7 rows marked verified.