Codesota · Models · Mistral-7B-Instruct-v0.2
mistralai · 9 results · 2 benchmarks
Model card

Mistral-7B-Instruct-v0.2.

mistralai · open-source
§ 01 · Benchmarks

Every benchmark Mistral-7B-Instruct-v0.2 has a recorded score for.

# | Benchmark | Area · Task | Metric | Value | Rank | Source
01 | Polish EQ-Bench | Natural Language Processing · Polish Emotional Intelligence | eq-score | 47.0% | #61/101 | source ↗
02 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | eq-bench | 33.9% | #122/299 | source ↗
03 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polemo2-in | 77.4% | #135/490 | source ↗
04 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | dyk | 53.8% | #158/489 | source ↗
05 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | average | 40.4% | #209/491 | source ↗
06 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | cbd | 22.3% | #290/490 | source ↗
07 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | belebele | 67.0% | #315/490 | source ↗
08 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | ppc | 49.2% | #408/490 | source ↗
09 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polqa-open-book | 50.8% | #473/489 | source ↗
The Rank column shows this model's position against all other models scored on the same benchmark and metric (the total number of competitors follows the slash). #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
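As a rough illustration, a rank like #61/101 can be reproduced by pooling every model's score on the same benchmark + metric, sorting, and taking this model's position. The sketch below is a hypothetical reconstruction only: the Result type, field names, and the higher-score-is-better assumption are illustrative, not Codesota's actual code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Result:
    benchmark: str
    metric: str
    model: str
    score: float
    recorded: date

def rank_of(model: str, benchmark: str, metric: str, results: list[Result]) -> str:
    """Position of `model` among all models scored on the same benchmark + metric."""
    pool = [r for r in results if r.benchmark == benchmark and r.metric == metric]
    pool.sort(key=lambda r: r.score, reverse=True)  # assumes higher score = better
    position = next(i for i, r in enumerate(pool, start=1) if r.model == model)
    return f"#{position}/{len(pool)}"

def table_order(rows: list[tuple[int, date]]) -> list[tuple[int, date]]:
    """The table's stated order: rank ascending, then newest result first."""
    return sorted(rows, key=lambda row: (row[0], -row[1].toordinal()))
```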
§ 02 · Strengths by area

Where Mistral-7B-Instruct-v0.2 actually performs.

Natural Language Processing · 2 benchmarks · avg rank #241.2
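The average rank is consistent with the mean of the nine rank positions listed in § 01:

```python
# Rank positions from the § 01 table (Polish EQ-Bench + Open PL LLM Leaderboard rows).
ranks = [61, 122, 135, 158, 209, 290, 315, 408, 473]
avg_rank = sum(ranks) / len(ranks)
print(round(avg_rank, 1))  # 241.2
```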
§ 04 · Related models

Other mistralai models scored on Codesota.

Ministral-8B-Instruct-2410 · 0 results
Mistral-7B-Instruct-v0.1 · 0 results
Mistral-7B-Instruct-v0.3 · 7.25B params · 0 results
Mistral-7B-v0.3 · 0 results
Mistral-Large-Instruct-2407 · 123B params · 0 results
Mistral-Large-Instruct-2411 · 123B params · 0 results
Mistral-Nemo-Base-2407 · 0 results
Mistral-Nemo-Instruct-2407 · 12.2B params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

speakleash/open_pl_llm_leaderboard · 8 results
SpeakLeash/Polish-EQ-Bench · 1 result
9 of 9 rows marked verified.