Model card
DeepSeek R1
DeepSeek · open-source · 671B MoE params
§ 01 · Benchmarks
Every benchmark DeepSeek R1 has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | AIME 2025 | Reasoning · Mathematical Reasoning | accuracy | 72.0% | #5 | — | source ↗ |
| 02 | ARC-Challenge | Reasoning · Commonsense Reasoning | accuracy | 97.1% | #5 | — | source ↗ |
| 03 | AIME 2024 | Reasoning · Mathematical Reasoning | accuracy | 79.8% | #6 | — | source ↗ |
| 04 | MATH | Reasoning · Mathematical Reasoning | accuracy | 97.3% | #6 | — | source ↗ |
| 05 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 90.8% | #8 | 2025-01-22 | source ↗ |
| 06 | LiveCodeBench Pro | Computer Code · Code Generation | elo | 1161.00 | #8 | — | source ↗ |
| 07 | HLE | Reasoning · Multi-step Reasoning | accuracy | 8.5% | #10 | — | |
| 08 | LiveCodeBench | Computer Code · Code Generation | pass@1 | 65.9% | #10 | — | source ↗ |
| 09 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 76.3% | #13 | 2025-12-01 | source ↗ |
| 10 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 71.5% | #16 | — | source ↗ |
| 11 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 97.3% | #16 | — | source ↗ |
| 12 | SWE-Bench Verified | Computer Code · Code Generation | resolve-rate | 49.2% | #33 | — | source ↗ |
| 13 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 74.0% | #34 | — | source ↗ |
| 14 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 72.0% | #39 | — | source ↗ |
| 15 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 85.0% | #40 | — | source ↗ |
| 16 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 84.0% | #45 | — | source ↗ |
| 17 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 76.0% | #45 | — | source ↗ |
| 18 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 66.0% | #47 | — | source ↗ |
| 19 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 75.0% | #53 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
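Two of the table's derived numbers can be reproduced in a few lines. The PLCC "average" row is the arithmetic mean of the six category scores, and pass@1 (as reported for LiveCodeBench) is conventionally computed with the unbiased pass@k estimator from Chen et al. (2021). A minimal sketch, assuming that convention holds here (the sample counts below are illustrative, not from the source):

```python
from math import comb
from statistics import mean

# PLCC category scores for DeepSeek R1, taken from the table above
# (the "average" row itself is excluded).
plcc = {
    "grammar": 74.0,
    "vocabulary": 72.0,
    "history": 85.0,
    "geography": 84.0,
    "art-and-entertainment": 66.0,
    "culture-and-tradition": 75.0,
}
assert mean(plcc.values()) == 76.0  # matches the table's "average" row

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0  # fewer failures than k: some correct sample always drawn
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 the estimator reduces to the plain success rate c/n.
print(pass_at_k(10, 7, 1))  # ≈ 0.7
```

The estimator avoids the bias of naively sampling k completions once: it computes the exact probability that at least one of k draws (without replacement) from the n generated samples is correct.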
§ 02 · Strengths by area
Where DeepSeek R1 performs best, by task area.
§ 03 · Papers
1 paper with results for DeepSeek R1.
- 2023-10-10 · Computer Code · 1 result
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other DeepSeek models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
- sdadas/PLCC · 7 results
- arxiv · 6 results
- swebench-leaderboard · 2 results
- deepseek-paper · 1 result
- livecodebench-pro-official · 1 result
- editorial · 1 result
- arxiv-2501.12948 · 1 result
16 of 19 rows marked verified · first result 2025-01-22 · latest result 2025-12-01.