Codesota · Models · Claude Opus 4.6
Anthropic · 4 results · 3 benchmarks
Model card

Claude Opus 4.6.

Anthropic · API · Undisclosed params
§ 01 · Benchmarks

Every benchmark Claude Opus 4.6 has a recorded score for.

# · Benchmark · Area · Task · Metric · Value · Rank · Date · Source
01 · SWE-Bench · Computer Code · Code Generation · resolve-rate-agentic · 80.8% · #2/25 · 2026-02-01 · source ↗
02 · SWE-Bench Verified · Computer Code · Code Generation · resolve-rate · 80.8% · #3/39 · 2026-02-17 · source ↗
03 · SWE-bench Verified · Agentic AI · SWE-bench · resolve-rate · 80.8% · #3/81 · source ↗
04 · SWE-Bench · Computer Code · Code Generation · resolve-rate · 79.8% · #7/32 · 2026-02-01 · source ↗
The Rank column shows this model’s position among all other models scored on the same benchmark + metric (the number of competitors appears after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where Claude Opus 4.6 actually performs.

Agentic AI · 1 benchmark · avg rank #3.0
Computer Code · 2 benchmarks · avg rank #4.0
§ 03 · Papers

1 paper with results for Claude Opus 4.6.

  1. 2023-10-10 · Computer Code · 1 result

    SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

    Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models

Other Anthropic models scored on Codesota.

Claude Opus 4 · Undisclosed params · 13 results · 2 SOTA
Claude Opus 4.5 · 3 results · 2 SOTA
Claude Sonnet 5 · Undisclosed params · 2 results · 2 SOTA
Claude Sonnet 4 · 10 results · 1 SOTA
Claude Mythos Preview · 1 result · 1 SOTA
Claude 3.5 Sonnet · Undisclosed params · 27 results
Claude Opus 4.5 · Undisclosed params · 13 results
Claude 3.7 Sonnet · 10 results
§ 05 · Sources & freshness

Where these numbers come from.

anthropic-internal · 1 result
anthropic-blog · 1 result
editorial · 1 result
swebench-leaderboard · 1 result
4 of 4 rows marked verified · first result 2026-02-01, latest 2026-02-17.