Model card
Claude Opus 4
Anthropic · API · Undisclosed params · 2 current SOTA
§ 01 · Benchmarks
Every benchmark Claude Opus 4 has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | HCAST | Agentic AI · HCAST | success-rate | 55.0% | #1 | 2025-04-01 | source ↗ |
| 02 | METR Time Horizon | Agentic AI · Time Horizon | task-horizon-minutes | 60.0 min | #1 | 2025-04-01 | source ↗ |
| 03 | Defects4J | Computer Code · Program Repair | correct-patches | 89.0% | #2 | — | source ↗ |
| 04 | WebArena | Agentic AI · Web & Desktop Agents | success-rate | 55.0% | #3 | 2025-04-01 | source ↗ |
| 05 | MBPP | Computer Code · Code Generation | pass@1 | 92.0% | #3 | — | source ↗ |
| 06 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 76.7% | #11 | — | source ↗ |
| 07 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 98.0% | #11 | — | source ↗ |
| 08 | HumanEval | Computer Code · Code Generation | pass@1 | 92.2% | #13 | — | source ↗ |
| 09 | LiveCodeBench | Computer Code · Code Generation | pass@1 | 57.8% | #16 | 2024-03-12 | source ↗ |
| 10 | SWE-Bench | Computer Code · Code Generation | resolve-rate-agentic | 55.2% | #17 | 2025-03-01 | — |
| 11 | SWE-Bench Verified | Computer Code · Code Generation | resolve-rate | 72.5% | #17 | — | source ↗ |
| 12 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 88.8% | #19 | — | source ↗ |
| 13 | MATH | Reasoning · Mathematical Reasoning | accuracy | 89.2% | #20 | — | source ↗ |
| 14 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 55.2% | #23 | 2025-03-01 | source ↗ |
| 15 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 76.0% | #30 | — | source ↗ |
| 16 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 87.0% | #30 | — | source ↗ |
| 17 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 72.0% | #33 | — | source ↗ |
| 18 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 72.5% | #33 | — | source ↗ |
| 19 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 73.0% | #36 | — | source ↗ |
| 20 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 78.7% | #36 | — | source ↗ |
| 21 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 81.0% | #37 | — | source ↗ |
| 22 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 83.0% | #50 | — | source ↗ |
The Rank column shows this model's position versus all other models scored on the same benchmark + metric. #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
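As a rough illustration of that convention, here is a minimal Python sketch. The record fields, tie handling, and sort key are assumptions for illustration, not Codesota's actual schema or code.

```python
# Minimal sketch (assumed schema, not Codesota's actual code) of how the
# Rank column and row ordering above could be derived.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Result:
    model: str
    benchmark: str
    metric: str
    value: float                         # higher is assumed better for every metric shown here
    result_date: Optional[date] = None   # None when the page shows no date ("—")

def rank_of(target: Result, all_results: list[Result]) -> int:
    """Rank = 1 + the number of other models with a strictly better score
    on the same benchmark + metric; ties share the better rank (assumption)."""
    better = sum(
        1
        for r in all_results
        if r.benchmark == target.benchmark
        and r.metric == target.metric
        and r.model != target.model
        and r.value > target.value
    )
    return 1 + better

def order_rows(rows: list[Result], all_results: list[Result]) -> list[tuple[int, Result]]:
    """Sort one model's rows by rank ascending, then newest result first;
    undated rows sort after dated ones within the same rank."""
    ranked = [(rank_of(r, all_results), r) for r in rows]
    return sorted(
        ranked,
        key=lambda pair: (
            pair[0],
            -(pair[1].result_date.toordinal() if pair[1].result_date else 0),
        ),
    )
```

Under this reading, a row ranks #1 only when no competitor on the same benchmark + metric reports a higher value; how the live leaderboard breaks exact ties is not stated on the page.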
§ 02 · Strengths by area
Where Claude Opus 4 performs best.
§ 03 · Papers
3 papers with results for Claude Opus 4.
- 2025-04-01 · Agentic AI · 3 results
  METR: Measuring Autonomy in AI Systems (2025 Update)
- 2024-03-12 · Computer Code · 1 result
  LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code
- 2023-10-10 · Computer Code · 1 result
  SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
  Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other Anthropic models scored on Codesota.
- Claude Opus 4.5 · 3 results · 2 SOTA
- Claude Sonnet 5 · Undisclosed params · 2 results · 2 SOTA
- Claude Sonnet 4 · 10 results · 1 SOTA
- Claude Mythos Preview · 1 result · 1 SOTA
- Claude 3.5 Sonnet · Undisclosed params · 27 results
- Claude Opus 4.5 · Undisclosed params · 13 results
- Claude 3.7 Sonnet · 10 results
- Claude 3 Opus · 5 results
§ 05 · Sources & freshness
Where these numbers come from.
- sdadas/PLCC · 7 results
- official-leaderboard · 3 results
- official-model-card · 3 results
- anthropic-model-card · 3 results
- arxiv · 1 result
- aider · 1 result
- anthropic-blog · 1 result
- anthropic-announcement · 1 result
- sota-timeline · 1 result
- editorial · 1 result
19 of 22 rows marked verified · first result 2024-03-12 · latest result 2025-04-01.