Model card
Llama 3 70B
Meta · open-source LLM
Meta Llama 3, 70B-parameter instruct variant. Released April 2024.
§ 01 · Benchmarks
Every benchmark Llama 3 70B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CommonsenseQA | Reasoning · Commonsense Reasoning | accuracy | 80.9% | #3 | — | source ↗ |
| 02 | MAWPS | Reasoning · Arithmetic Reasoning | accuracy | 94.1% | #3 | — | source ↗ |
| 03 | SVAMP | Reasoning · Arithmetic Reasoning | accuracy | 89.5% | #3 | — | source ↗ |
| 04 | WinoGrande | Reasoning · Commonsense Reasoning | accuracy | 85.3% | #3 | — | source ↗ |
| 05 | HellaSwag | Reasoning · Commonsense Reasoning | accuracy | 88.0% | #5 | — | source ↗ |
| 06 | CoNLL-2003 | Natural Language Processing · Named Entity Recognition | f1 | 89.3% | #6 | 2024-07-31 | source ↗ |
| 07 | SNLI | Natural Language Processing · Natural Language Inference | accuracy | 89.7% | #7 | 2024-07-31 | source ↗ |
| 08 | ARC-Challenge | Reasoning · Commonsense Reasoning | accuracy | 93.0% | #10 | — | source ↗ |
| 09 | SQuAD v2.0 | Natural Language Processing · Question Answering | f1 | 85.3% | #20 | 2024-07-31 | source ↗ |
| 10 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 93.0% | #23 | — | source ↗ |
| 11 | HumanEval | Computer Code · Code Generation | pass@1 | 81.7% | #34 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark + metric pair; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
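The ranking rule described above can be sketched in a few lines. This is a minimal illustration, not the site's actual implementation; the model names and scores below are made up for the example.

```python
# Sketch of the ranking rule: a model's rank on a benchmark is its 1-based
# position when all recorded scores for the same benchmark + metric pair
# are sorted best-first (higher is better for accuracy-style metrics).

def rank_of(model: str, scores: dict[str, float]) -> int:
    """Return the 1-based rank of `model` among `scores` (higher is better)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(model) + 1

# Illustrative scores on a single (benchmark, metric) pair.
example_scores = {
    "model-a": 95.3,
    "model-b": 91.2,
    "Llama 3 70B": 88.0,
    "model-c": 87.1,
}

print(rank_of("Llama 3 70B", example_scores))  # → 3
```

For error-style metrics (lower is better), the same function works with `reverse=False`.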
§ 02 · Strengths by area
Where Llama 3 70B performs best, by area.
§ 03 · Papers
1 paper with results for Llama 3 70B.
- **The Llama 3 Herd of Models** · 2024-07-31 · Natural Language Processing · 3 results
§ 04 · Related models
Other Meta models scored on Codesota.
- **DeiT-B Distilled** · 86M params · 2 results · 1 SOTA
- **Llama 3.1 405B** · 6 results
- **Llama-4-Maverick** · 400B total / 17B active (128 experts) params · 6 results
- **Llama 3.1 70B** · 4 results
- **Code Llama 34B** · params unknown · 2 results
- **ConvNeXt V2 Huge** · 650M params · 2 results
- **CodeLlama 70B** · 70B params · 1 result
- **ConvNeXt V2 Base** · 89M params · 1 result
§ 05 · Sources & freshness
Where these numbers come from.
- meta-blog · 7 results
- arxiv · 3 results
- openai-simple-evals · 1 result
3 of 11 rows marked verified.