Model card
o3 · OpenAI · API · 5 current SOTA results
§ 01 · Benchmarks
Every benchmark o3 has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 92.9% | #1 | 2025-04-16 | source ↗ |
| 02 | RE-Bench | Agentic AI · RE-Bench | normalized-score | 0.40 | #1 | 2025-04-01 | source ↗ |
| 03 | AIME 2024 | Reasoning · Mathematical Reasoning | accuracy | 96.7% | #1 | — | source ↗ |
| 04 | ARC-AGI-1 | Reasoning · Logical Reasoning | accuracy | 87.5% | #1 | — | source ↗ |
| 05 | ARC-Challenge | Reasoning · Commonsense Reasoning | accuracy | 98.1% | #1 | — | source ↗ |
| 06 | HCAST | Agentic AI · HCAST | success-rate | 49.0% | #2 | 2025-04-01 | source ↗ |
| 07 | METR Time Horizon | Agentic AI · Time Horizon | task-horizon-minutes | 30.0 | #2 | 2025-04-01 | source ↗ |
| 08 | AIME 2025 | Reasoning · Mathematical Reasoning | accuracy | 86.7% | #2 | — | source ↗ |
| 09 | ARC-AGI-2 | Reasoning · Logical Reasoning | accuracy | 4.0% | #2 | — | source ↗ |
| 10 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 99.0% | #3 | — | source ↗ |
| 11 | MATH | Reasoning · Mathematical Reasoning | accuracy | 97.8% | #4 | — | source ↗ |
| 12 | HumanEval | Computer Code · Code Generation | pass@1 | 94.8% | #5 | 2025-04-01 | source ↗ |
| 13 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 82.8% | #8 | — | source ↗ |
| 14 | LiveCodeBench Pro | Computer Code · Code Generation | elo | 1010 | #9 | — | source ↗ |
| 15 | LiveCodeBench | Computer Code · Code Generation | pass@1 | 65.3% | #11 | 2024-03-12 | source ↗ |
| 16 | SWE-bench Verified | Computer Code · Code Generation | resolve-rate | 69.1% | #21 | — | source ↗ |
| 17 | HumanEval | Computer Code · Code Generation | pass@1 | 87.4% | #26 | — | source ↗ |
| 18 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 69.1% | #44 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric. #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
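Several of the code rows above report pass@1. As a reference point (not part of this leaderboard's own tooling), the standard unbiased pass@k estimator from which pass@1 is the k=1 case can be sketched as:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples, drawn without replacement from n generations of which
    c are correct, passes the tests."""
    if n - c < k:
        # Fewer incorrect samples than k: some draw must include a correct one.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 10 generations, 5 correct -> pass@1 is the raw solve fraction.
print(pass_at_k(10, 5, 1))  # 0.5
```

With k=1 the estimator reduces to c/n, i.e. the fraction of generations that pass, which is what the pass@1 percentages in the table report per problem set.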
§ 02 · Strengths by area
Where o3 performs best, grouped by task area.
§ 03 · Papers
2 papers with results for o3.
- 2025-04-01 · Agentic AI · 3 results
  METR: Measuring Autonomy in AI Systems (2025 Update)
- 2024-03-12 · Computer Code · 1 result
  LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code
§ 04 · Related models
Other OpenAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
- openai-simple-evals · 6 results
- official-leaderboard · 3 results
- openai-system-card · 2 results
- arcprize-leaderboard · 2 results
- arxiv · 1 result
- shadow-page-humaneval · 1 result
- livecodebench-pro-official · 1 result
- openai-blog · 1 result
- editorial · 1 result
14 of 18 rows marked verified · first result 2024-03-12, latest 2025-04-16.