GPT-5.1 · Model card
OpenAI
§ 01 · Benchmarks
All benchmarks with a recorded score for GPT-5.1.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | MMMU | Multimodal · Visual Question Answering | accuracy | 85.4% | #2 | 2025-11-13 | source ↗ |
| 02 | MMMU-Pro | Multimodal · Visual Question Answering | accuracy | 76.5% | #4 | 2025-11-13 | source ↗ |
| 03 | Tau2-Bench | Agentic AI · Tool Use | pass_rate | 59.0% | #5 | — | — |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
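The sort order described above can be sketched in a few lines of Python. This is an illustration only; the row fields and the handling of a missing date are assumptions, not Codesota’s actual implementation.

```python
from datetime import date

# Hypothetical rows mirroring the table above (field names are assumptions).
rows = [
    {"benchmark": "Tau2-Bench", "rank": 5, "date": date.min},  # missing date sorts last
    {"benchmark": "MMMU-Pro", "rank": 4, "date": date(2025, 11, 13)},
    {"benchmark": "MMMU", "rank": 2, "date": date(2025, 11, 13)},
]

# Sort by rank ascending, then by newest result first (date descending).
rows.sort(key=lambda r: (r["rank"], -r["date"].toordinal()))
print([r["benchmark"] for r in rows])
```

With ties on rank, the more recent result would surface first; here the ranks alone decide the order.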
§ 02 · Strengths by area
Where GPT-5.1 performs best.
§ 04 · Related models
Other OpenAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
llm-stats.com · 1 result
artificialanalysis.ai · 1 result
editorial · 1 result
2 of 3 rows marked verified.