Model card
Llama 3.1 70B
Meta · open-source
§ 01 · Benchmarks
Every benchmark Llama 3.1 70B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 41.7% | #32 | — | source |
| 02 | MATH | Reasoning · Mathematical Reasoning | accuracy | 68.0% | #32 | — | source |
| 03 | HumanEval | Computer Code · Code Generation | pass@1 | 80.5% | #36 | — | source |
| 04 | MMLU | Reasoning · Multitask Language Understanding | accuracy | 82.0% | #40 | — | source |
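HumanEval is scored with pass@1 rather than plain accuracy. Below is a minimal sketch of the standard unbiased pass@k estimator from the Codex paper (Chen et al., 2021); the `n` and `c` counts are illustrative values chosen to reproduce the 80.5% row above, not the actual evaluation counts:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k), computed stably.

    n: samples generated per problem
    c: samples that pass the unit tests
    k: evaluation budget (k=1 for the table above)
    """
    if n - c < k:
        return 1.0  # too few failing samples for any k-subset to miss
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative counts only: 161 passing samples out of 200 gives
# pass@1 = 161/200 = 0.805, matching the HumanEval row above.
print(pass_at_k(n=200, c=161, k=1))  # 0.805
```

For k=1 the estimator reduces to the fraction of samples that pass, which is why pass@1 reads like an accuracy.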
The Rank column in the table above shows this model's position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
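For concreteness, here is a sketch of that ranking rule in Python. The record fields and the sample entries are assumptions for illustration, not Codesota's actual schema or data:

```python
from datetime import date

# Hypothetical result records; field names and values are illustrative.
results = [
    {"model": "Llama 3.1 70B", "benchmark": "MMLU", "metric": "accuracy",
     "value": 82.0, "date": date(2024, 7, 23)},
    {"model": "SomeOtherModel", "benchmark": "MMLU", "metric": "accuracy",
     "value": 88.7, "date": date(2024, 9, 12)},
]

def rank(target: dict, all_results: list[dict]) -> int:
    """Rank = 1 + number of models with a strictly higher score on the
    same benchmark + metric (higher is better for every metric here)."""
    peers = [r for r in all_results
             if r["benchmark"] == target["benchmark"]
             and r["metric"] == target["metric"]]
    return 1 + sum(r["value"] > target["value"] for r in peers)

def table_order(model_rows: list[dict], all_results: list[dict]) -> list[dict]:
    """Sort one model's rows by rank ascending, then newest result first."""
    return sorted(model_rows,
                  key=lambda r: (rank(r, all_results), -r["date"].toordinal()))
```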
§ 02 · Strengths by area
Where Llama 3.1 70B actually performs, by area (aggregated from the table above).

- Reasoning · 3 results — MMLU 82.0%, MATH 68.0%, GPQA 41.7%
- Computer Code · 1 result — HumanEval 80.5% (pass@1)
§ 04 · Related models
Other Meta models scored on Codesota.
| Model | Params | Results |
|---|---|---|
| DeiT-B Distilled | 86M | 2 results · 1 SOTA |
| Llama 3 70B | — | 8 results |
| Llama 3.1 405B | — | 6 results |
| Llama-4-Maverick | 400B total / 17B active (128 experts) | 6 results |
| Code Llama 34B | Unknown | 2 results |
| ConvNeXt V2 Huge | 650M | 2 results |
| CodeLlama 70B | 70B | 1 result |
| ConvNeXt V2 Base | 89M | 1 result |
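The "400B total / 17B active (128 experts)" entry for Llama-4-Maverick reflects mixture-of-experts accounting: every expert's weights count toward the total, but each token only runs through the routed expert(s) plus the shared layers. A back-of-envelope sketch; the per-component sizes and top-1 routing below are assumptions chosen to make the arithmetic land near the listed figures, not Meta's published breakdown:

```python
# Illustrative MoE parameter arithmetic (all component sizes assumed,
# not Meta's published Llama-4-Maverick breakdown).
num_experts = 128
top_k = 1                      # routed experts per token (assumed)
expert_params = 3.016e9        # parameters per expert (assumed)
shared_params = 14e9           # attention, embeddings, shared FFN (assumed)

total = shared_params + num_experts * expert_params   # stored on disk / in memory
active = shared_params + top_k * expert_params        # used per token at inference
print(f"total ≈ {total / 1e9:.0f}B, active per token ≈ {active / 1e9:.0f}B")
# -> total ≈ 400B, active per token ≈ 17B
```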
§ 05 · Sources & freshness
Where these numbers come from.
openai-simple-evals — 4 results · 0 of 4 rows marked verified.