Codesota · Models · Mistral-7B-Instruct-v0.3
Mistral · 9 results · 1 benchmark
Model card

Mistral-7B-Instruct-v0.3

Mistral · open-source
§ 01 · Benchmarks

Every benchmark Mistral-7B-Instruct-v0.3 has a recorded score for.

| #  | Benchmark       | Area · Task                                               | Metric     | Value | Rank   | Date | Source   |
|----|-----------------|-----------------------------------------------------------|------------|-------|--------|------|----------|
| 01 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | roleplay   | 7.3%  | #29/50 | —    | source ↗ |
| 02 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | extraction | 7.3%  | #32/50 | —    | source ↗ |
| 03 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | writing    | 7.3%  | #33/50 | —    | source ↗ |
| 04 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | stem       | 7.5%  | #35/50 | —    | source ↗ |
| 05 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | pl-score   | 5.8%  | #36/50 | —    | source ↗ |
| 06 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | coding     | 4.3%  | #36/50 | —    | source ↗ |
| 07 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | reasoning  | 3.8%  | #38/50 | —    | source ↗ |
| 08 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | humanities | 6.8%  | #43/50 | —    | source ↗ |
| 09 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | math       | 2.4%  | #45/50 | —    | source ↗ |
The Rank column shows this model’s position versus all other models scored on the same benchmark and metric (total competitors after the slash); #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where Mistral-7B-Instruct-v0.3 actually performs.

Natural Language Processing · 1 benchmark · avg rank #36.3
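The avg rank figure above can be reproduced from the nine ranks in the § 01 table. A minimal sketch, assuming the site simply takes the arithmetic mean of per-metric ranks (the exact aggregation method is not documented here):

```python
# Ranks taken from the § 01 benchmark table (one per Polish MT-Bench metric).
ranks = [29, 32, 33, 35, 36, 36, 38, 43, 45]

# Mean rank across all scored metrics, rounded to one decimal place.
avg_rank = round(sum(ranks) / len(ranks), 1)
print(avg_rank)  # → 36.3
```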
§ 04 · Related models

Other Mistral models scored on Codesota.

Mistral OCR 3 · 6 results
Codestral 22B · unknown params · 2 results
Devstral 2 · 1 result
Devstral Small 2505 · 1 result
Mistral Large 3 · 123B params · 1 result
Mistral OCR 2 · 1 result
Mixtral-8x22b · 1 result
Devstral Small · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

SpeakLeash/MT-Bench-PL · 9 results · 9 of 9 rows marked verified.