Codesota · Models · Meta-Llama-3.1-70B-Instruct

Meta · 9 results · 1 benchmark
Model card

Meta-Llama-3.1-70B-Instruct

Meta · open-source
§ 01 · Benchmarks

Every benchmark Meta-Llama-3.1-70B-Instruct has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | extraction | 9.8% | #5/50 | — | source ↗ |
| 02 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | stem | 9.6% | #12/50 | — | source ↗ |
| 03 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | coding | 6.3% | #14/50 | — | source ↗ |
| 04 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | writing | 9.1% | #14/50 | — | source ↗ |
| 05 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | roleplay | 8.8% | #15/50 | — | source ↗ |
| 06 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | humanities | 9.5% | #15/50 | — | source ↗ |
| 07 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | pl-score | 8.2% | #17/50 | — | source ↗ |
| 08 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | reasoning | 6.2% | #19/50 | — | source ↗ |
| 09 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | math | 6.0% | #22/50 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric (total competitors after the slash); #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where Meta-Llama-3.1-70B-Instruct actually performs.

Natural Language Processing · 1 benchmark · avg rank #14.8
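The average rank above can be reproduced directly from the Rank column in § 01: take the model's rank on each metric and average them. A minimal sketch (the rank list is copied from the table above; the variable names are illustrative, not part of the site):

```python
# Ranks of Meta-Llama-3.1-70B-Instruct across the nine
# Polish MT-Bench metrics listed in section 01.
ranks = [5, 12, 14, 14, 15, 15, 17, 19, 22]

# Average rank, rounded to one decimal as the site displays it.
avg_rank = round(sum(ranks) / len(ranks), 1)
print(f"avg rank #{avg_rank}")  # → avg rank #14.8
```

This is a plain arithmetic mean over per-metric ranks; a lower value means the model places closer to the top across the benchmark's metrics.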
§ 04 · Related models

Other Meta models scored on Codesota.

- DeiT-B Distilled · 86M params · 2 results · 1 SOTA
- Llama 3 70B · 8 results
- Llama 3.1 405B · 6 results
- Llama-4-Maverick · 400B total / 17B active (128 experts) params · 6 results
- Llama 3.1 70B · 4 results
- Code Llama 34B · unknown params · 2 results
- ConvNeXt V2 Huge · 650M params · 2 results
- CodeLlama 70B · 70B params · 1 result
§ 05 · Sources & freshness

Where these numbers come from.

SpeakLeash/MT-Bench-PL · 9 results · 9 of 9 rows marked verified.