Codesota · Models · GPT-4.1 · OpenAI · 8 results · 8 benchmarks
Model card

GPT-4.1

OpenAI · api
§ 01 · Benchmarks

Every benchmark GPT-4.1 has a recorded score for.

#    Benchmark           Area · Task                          Metric        Value   Rank     Date         Source
01   MBPP                Computer Code · Code Generation      pass@1        90.9%   #4/19                 source ↗
02   HumanEval           Computer Code · Code Generation      pass@1        94.5%   #6/42                 source ↗
03   MMLU                Reasoning · Commonsense Reasoning    accuracy      90.2%   #13/41   2025-04-14   source ↗
04   LiveCodeBench       Computer Code · Code Generation      pass@1        54.4%   #17/30   2024-03-12   source ↗
05   GPQA                Reasoning · Multi-step Reasoning     accuracy      66.3%   #22/33                source ↗
06   MATH                Reasoning · Mathematical Reasoning   accuracy      82.1%   #25/34                source ↗
07   SWE-bench Verified  Computer Code · Code Generation      resolve-rate  54.6%   #31/39                source ↗
08   SWE-bench Verified  Agentic AI · SWE-bench               resolve-rate  54.6%   #62/81                source ↗
The Rank column shows this model's position among all models scored on the same benchmark + metric (total number of competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where GPT-4.1 performs, grouped by benchmark area.

Computer Code · 4 benchmarks · avg rank #14.5
Reasoning · 3 benchmarks · avg rank #20.0
Agentic AI · 1 benchmark · avg rank #62.0
§ 03 · Papers

1 paper with results for GPT-4.1.

  1. 2024-03-12 · Computer Code · 1 result

     LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

§ 04 · Related models

Other OpenAI models scored on Codesota.

GPT-4o · Undisclosed params · 35 results · 9 SOTA
o3 · 16 results · 5 SOTA
o4-mini · 13 results · 3 SOTA
o3 (high) · 2 results · 1 SOTA
o4-mini (high) · 1 result · 1 SOTA
o1 · 11 results
GPT-5 · 8 results
o1-preview · Undisclosed params · 8 results
§ 05 · Sources & freshness

Where these numbers come from.

openai-simple-evals · 4 results
official-model-card · 1 result
official-leaderboard · 1 result
swebench-leaderboard · 1 result
editorial · 1 result
4 of 8 rows marked verified · first result 2024-03-12, latest 2025-04-14.