Codesota · Models · o1-preview · OpenAI · 8 results · 8 benchmarks
Model card

o1-preview

OpenAI · API · Undisclosed params · Reasoning LLM

OpenAI's reasoning-focused model.

§ 01 · Benchmarks

Every benchmark o1-preview has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|----|-----------|-------------|--------|-------|--------|------------|----------|
| 01 | AIME 2024 | Reasoning · Mathematical Reasoning | accuracy | 83.3% | #4/8 | | source ↗ |
| 02 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 90.8% | #8/41 | 2024-09-12 | source ↗ |
| 03 | HumanEval | Computer Code · Code Generation | pass@1 | 92.4% | #11/42 | | source ↗ |
| 04 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 97.8% | #12/32 | | source ↗ |
| 05 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 73.3% | #15/33 | | source ↗ |
| 06 | MATH | Reasoning · Mathematical Reasoning | accuracy | 85.5% | #23/34 | | source ↗ |
| 07 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 36.2% | #25/32 | 2024-10-01 | source ↗ |
| 08 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 41.3% | #71/81 | | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.

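The sort rule above (rank ascending, ties broken by newest result first) can be sketched as follows; the row tuples and field names are illustrative, not Codesota's actual data model:

```python
from datetime import date

# Hypothetical rows: (benchmark, rank, result_date), using values from the table above.
rows = [
    ("SWE-Bench", 25, date(2024, 10, 1)),
    ("AIME 2024", 4, date(2024, 9, 12)),
    ("MMLU", 8, date(2024, 9, 12)),
]

# Sort by rank ascending; for equal ranks, the newer date comes first
# (negating the ordinal flips date order within a single ascending sort).
rows.sort(key=lambda r: (r[1], -r[2].toordinal()))

print([name for name, _, _ in rows])  # → ['AIME 2024', 'MMLU', 'SWE-Bench']
```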
§ 02 · Strengths by area

Where o1-preview actually performs.

- Reasoning · 5 benchmarks · avg rank #12.4
- Computer Code · 2 benchmarks · avg rank #18.0
- Agentic AI · 1 benchmark · avg rank #71.0
§ 03 · Papers

1 paper with results for o1-preview.

  1. 2023-10-10 · Computer Code · 1 result

    SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

    Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.

§ 04 · Related models

Other OpenAI models scored on Codesota.

- GPT-4o · Undisclosed params · 35 results · 9 SOTA
- o3 · 16 results · 5 SOTA
- o4-mini · 13 results · 3 SOTA
- o3 (high) · 2 results · 1 SOTA
- o4-mini (high) · 1 result · 1 SOTA
- o1 · 11 results
- GPT-5 · 8 results
- GPT-4.1 · 7 results
§ 05 · Sources & freshness

Where these numbers come from.

- openai-simple-evals · 4 results
- openai-blog · 2 results
- sota-timeline · 1 result
- editorial · 1 result

3 of 8 rows marked verified · first result 2024-09-12, latest 2024-10-01.