Model card
Llama 3.1 405B
Meta · open-source
Meta Llama 3.1, 405B-parameter instruct variant. Released July 2024.
§ 01 · Benchmarks
Every benchmark Llama 3.1 405B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | HellaSwag | Reasoning · Commonsense Reasoning | accuracy | 89.0% | #3 | — | source ↗ |
| 02 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-1 | 45.1% | #4 | 2024-07-31 | source ↗ |
| 03 | CNN/DailyMail | Natural Language Processing · Text Summarization | rouge-l | 42.3% | #4 | 2024-07-31 | source ↗ |
| 04 | CoNLL-2003 | Natural Language Processing · Named Entity Recognition | f1 | 90.6% | #4 | 2024-07-31 | source ↗ |
| 05 | SNLI | Natural Language Processing · Natural Language Inference | accuracy | 91.2% | #5 | 2024-07-31 | source ↗ |
| 06 | BIG-Bench Hard | Reasoning · Multi-step Reasoning | accuracy | 85.9% | #5 | — | source ↗ |
| 07 | SuperGLUE | Natural Language Processing · Text Classification | average-score | 86.7% | #6 | 2024-07-31 | source ↗ |
| 08 | ARC-Challenge | Reasoning · Commonsense Reasoning | accuracy | 96.9% | #6 | — | source ↗ |
| 09 | SQuAD v2.0 | Natural Language Processing · Question Answering | f1 | 88.7% | #12 | 2024-07-31 | source ↗ |
| 10 | HumanEval | Computer Code · Code Generation | pass@1 | 89.0% | #20 | — | source ↗ |
| 11 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 88.6% | #21 | — | source ↗ |
| 12 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 50.7% | #26 | — | source ↗ |
| 13 | MATH | Reasoning · Mathematical Reasoning | accuracy | 73.8% | #28 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark + metric; #1 indicates the current SOTA. Rows are sorted by rank, then by newest result.
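For context on the HumanEval metric above: pass@k is commonly estimated with the unbiased estimator from the Codex paper (Chen et al., 2021), which computes the probability that at least one of k samples passes, given n generations of which c are correct. A minimal sketch, with illustrative sample counts chosen only to match the 89.0% figure:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of
    k samples drawn from n generations (c correct) passes the tests."""
    if n - c < k:
        # Too few failures to fill a k-sample draw without a success.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 200 samples per problem, 178 correct
print(pass_at_k(200, 178, 1))  # 0.89
```

For k = 1 this reduces to the simple success rate c / n; the combinatorial form matters when reporting pass@k for k > 1 from a larger pool of samples.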
§ 02 · Strengths by area
Where Llama 3.1 405B performs strongest, by task area.
§ 03 · Papers
1 paper with results for Llama 3.1 405B.
- The Llama 3 Herd of Models · 2024-07-31 · Natural Language Processing · 6 results
§ 04 · Related models
Other Meta models scored on Codesota.
- DeiT-B Distilled · 86M params · 2 results · 1 SOTA
- Llama 3 70B · 8 results
- Llama-4-Maverick · 400B total / 17B active (128 experts) params · 6 results
- Llama 3.1 70B · 4 results
- Code Llama 34B · unknown params · 2 results
- ConvNeXt V2 Huge · 650M params · 2 results
- CodeLlama 70B · 70B params · 1 result
- ConvNeXt V2 Base · 89M params · 1 result
§ 05 · Sources & freshness
Where these numbers come from.
- arxiv · 6 results
- openai-simple-evals · 4 results
- meta-modelcard · 2 results
- llm-stats-bbh · 1 result
9 of 13 rows marked verified.