Model card
Grok 4 · xAI
§ 01 · Benchmarks
Every benchmark Grok 4 has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | HLE | Reasoning · Multi-step Reasoning | accuracy | 24.5% | #3 | — | — |
| 02 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 94.0% | #3 | — | source ↗ |
| 03 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 90.0% | #3 | — | source ↗ |
| 04 | LiveCodeBench | Computer Code · Code Generation | pass@1 | 79.0% | #4 | — | source ↗ |
| 05 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 95.0% | #5 | — | source ↗ |
| 06 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 88.0% | #6 | — | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 90.5% | #7 | — | source ↗ |
| 08 | React Native Evals | Mobile Development · React Native Code Generation | animation-satisfaction | 59.4% | #8 | — | source ↗ |
| 09 | React Native Evals | Mobile Development · React Native Code Generation | requirement-satisfaction | 70.1% | #9 | — | source ↗ |
| 10 | React Native Evals | Mobile Development · React Native Code Generation | async-state-satisfaction | 73.8% | #9 | — | source ↗ |
| 11 | React Native Evals | Mobile Development · React Native Code Generation | navigation-satisfaction | 84.4% | #9 | — | source ↗ |
| 12 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 86.0% | #10 | — | source ↗ |
| 13 | MMLU-Pro | Reasoning · Commonsense Reasoning | accuracy | 86.6% | #14 | 2026-04-20 | source ↗ |
| 14 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 94.0% | #14 | — | source ↗ |
| 15 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 84.0% | #18 | — | source ↗ |
| 16 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 86.6% | #31 | — | source ↗ |
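The PLCC "average" row can be cross-checked against the six PLCC category scores listed above (rows 02, 03, 05, 12, 14, 15) — a quick sketch, using only values from this table:

```python
# Cross-check the PLCC "average" metric against the six category
# scores recorded in the benchmark table above.
categories = {
    "history": 94.0,
    "grammar": 90.0,
    "culture-and-tradition": 95.0,
    "art-and-entertainment": 86.0,
    "geography": 94.0,
    "vocabulary": 84.0,
}
average = sum(categories.values()) / len(categories)
print(average)  # 90.5, matching the "average" row
```

The unweighted mean of the six categories reproduces the reported 90.5% exactly, which suggests the "average" metric is a plain arithmetic mean over categories.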
Rank column shows this model’s position among all other models scored on the same benchmark + metric. #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
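The ranking rule described above can be sketched as follows. The competitor names and scores here are hypothetical placeholders; the actual leaderboard data is not part of this card:

```python
# Rank a model against all other models scored on the same
# benchmark + metric: rank = 1 + number of strictly higher scores,
# so the top score gets #1 (SOTA). Scores below are made up.
def rank_of(model: str, scores: dict[str, float]) -> int:
    return 1 + sum(1 for s in scores.values() if s > scores[model])

scores = {"Grok 4": 79.0, "Model A": 82.5, "Model B": 81.0, "Model C": 85.0}
print(rank_of("Grok 4", scores))  # 4
print(rank_of("Model C", scores))  # 1
```

Note that this simple counting scheme gives tied scores the same rank, which is consistent with several models sharing a rank on the same benchmark.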
§ 02 · Strengths by area
Where Grok 4 performs best.
§ 04 · Related models
Other xAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
| Source | Results |
|---|---|
| sdadas/PLCC | 7 |
| Callstack Incubator | 4 |
| xai-grok-4-announcement | 2 |
| editorial | 1 |
| pricepertoken | 1 |
| artificial-analysis | 1 |
11 of 16 rows marked verified.