Codesota · Models · mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8) · 5 results · 1 benchmark
Model card

mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8).

mistralai · open-source · 24B params
§ 01 · Benchmarks

Every benchmark for which mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8) has a recorded score.

#  | Benchmark  | Area · Task                                             | Metric                 | Value | Rank   | Source
01 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | phraseology            | 4.0%  | #11/93 | source ↗
02 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | language-understanding | 4.0%  | #14/93 | source ↗
03 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | average                | 3.8%  | #19/93 | source ↗
04 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | sentiment              | 4.0%  | #26/93 | source ↗
05 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | tricky-questions       | 3.3%  | #33/93 | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric (the total number of competitors appears after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8) actually performs.

Natural Language Processing · 1 benchmark · avg rank #20.6
§ 04 · Related models

Other mistralai models scored on Codesota.

Ministral-8B-Instruct-2410 · 0 results
Mistral-7B-Instruct-v0.1 · 0 results
Mistral-7B-Instruct-v0.2 · 0 results
Mistral-7B-Instruct-v0.3 · 7.25B params · 0 results
Mistral-7B-v0.3 · 0 results
Mistral-Large-Instruct-2407 · 123B params · 0 results
Mistral-Large-Instruct-2411 · 123B params · 0 results
Mistral-Nemo-Base-2407 · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

SpeakLeash/CPTU-Bench · 5 results · 5 of 5 rows marked verified.