Codesota · Models · GPT-4.5 · OpenAI · 4 results · 3 benchmarks
Model card

GPT-4.5

OpenAI · API · Undisclosed params
§ 01 · Benchmarks

Every benchmark GPT-4.5 has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 98.2% | #10/32 | 2025-03-01 | source ↗ |
| 02 | SWE-Bench | Computer Code · Code Generation | resolve-rate-agentic | 62.0% | #16/25 | 2025-06-01 | unverified |
| 03 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 62.0% | #21/32 | 2025-06-01 | source ↗ |
| 04 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 38.0% | #75/81 | | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric (field size after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where GPT-4.5 actually performs.

Reasoning · 1 benchmark · avg rank #10.0
Computer Code · 1 benchmark · avg rank #18.5
Agentic AI · 1 benchmark · avg rank #75.0
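The per-area averages appear to be the mean of the rank positions from § 01 (Computer Code averages the two SWE-Bench metric rows: (16 + 21) / 2 = 18.5). A minimal sketch of that aggregation, assuming this is how Codesota computes it:

```python
from collections import defaultdict

# (area, rank) pairs taken from the § 01 benchmark table.
rows = [
    ("Reasoning", 10),
    ("Computer Code", 16),
    ("Computer Code", 21),
    ("Agentic AI", 75),
]

by_area: dict[str, list[int]] = defaultdict(list)
for area, rank in rows:
    by_area[area].append(rank)

# Mean rank per area, matching the figures shown above.
avg_rank = {area: sum(ranks) / len(ranks) for area, ranks in by_area.items()}
print(avg_rank)  # → {'Reasoning': 10.0, 'Computer Code': 18.5, 'Agentic AI': 75.0}
```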
§ 03 · Papers

1 paper with results for GPT-4.5.

  1. 2023-10-10 · Computer Code · 1 result

    SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

    Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models

Other OpenAI models scored on Codesota.

GPT-4o · Undisclosed params · 35 results · 9 SOTA
o3 · 16 results · 5 SOTA
o4-mini · 13 results · 3 SOTA
o3 (high) · 2 results · 1 SOTA
o4-mini (high) · 1 result · 1 SOTA
o1 · 11 results
GPT-5 · 8 results
o1-preview · Undisclosed params · 8 results
§ 05 · Sources & freshness

Where these numbers come from.

gsm8k-shadow-page · 1 result
codex · 1 result
sota-timeline · 1 result
editorial · 1 result
2 of 4 rows marked verified · first result 2025-03-01, latest 2025-06-01.