Model card
Mistral-Large-Instruct-2407
mistralai · open-source · 123B params
§ 01 · Benchmarks
All benchmarks with a recorded score for Mistral-Large-Instruct-2407.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | Polish EQ-Bench | Natural Language Processing · Polish Emotional Intelligence | eq-score | 78.1% | #1 | — | source ↗ |
| 02 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | dyk | 75.9% | #2 | — | source ↗ |
| 03 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | average | 69.1% | #3 | — | source ↗ |
| 04 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | belebele | 92.6% | #4 | — | source ↗ |
| 05 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polemo2-in | 88.5% | #5 | — | source ↗ |
| 06 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | eq-bench | 62.6% | #14 | — | source ↗ |
| 07 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | average | 3.9% | #15 | — | source ↗ |
| 08 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | language-understanding | 4.0% | #15 | — | source ↗ |
| 09 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | sentiment | 4.2% | #15 | — | source ↗ |
| 10 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | cbd | 38.7% | #15 | — | source ↗ |
| 11 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | phraseology | 3.9% | #17 | — | source ↗ |
| 12 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | tricky-questions | 3.6% | #20 | — | source ↗ |
| 13 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polqa-open-book | 90.6% | #50 | — | source ↗ |
| 14 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | ppc | 77.5% | #70 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark + metric. Rank #1 marks the current state of the art (SOTA). Rows are sorted by rank, then by newest result.
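The ranking rule described above (position among all models scored on the same benchmark + metric, higher score first) can be sketched as follows. The function name and the score data are illustrative assumptions, not real leaderboard values beyond this model's own 78.1% Polish EQ-Bench score:

```python
def rank_of(model: str, scores: dict[str, float]) -> int:
    """1-based rank of `model` among all entries; higher score ranks better.

    Tied models share the better rank (standard competition ranking).
    """
    target = scores[model]
    return 1 + sum(1 for s in scores.values() if s > target)

# Illustrative scores for one benchmark + metric pair; competitor
# names and values are hypothetical placeholders.
polish_eq_scores = {
    "Mistral-Large-Instruct-2407": 78.1,
    "competitor-a": 74.3,
    "competitor-b": 70.0,
}

print(rank_of("Mistral-Large-Instruct-2407", polish_eq_scores))  # → 1
print(rank_of("competitor-b", polish_eq_scores))  # → 3
```

A model is re-ranked independently per benchmark + metric pair, which is why the same model appears with ranks from #1 to #70 in the table above.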
§ 02 · Strengths by area
Where Mistral-Large-Instruct-2407 performs best, by task area.
§ 04 · Related models
Other mistralai models scored on Codesota.
Ministral-8B-Instruct-2410 · 0 results
Mistral-7B-Instruct-v0.1 · 0 results
Mistral-7B-Instruct-v0.2 · 0 results
Mistral-7B-Instruct-v0.3 · 7.25B params · 0 results
Mistral-7B-v0.3 · 0 results
Mistral-Large-Instruct-2411 · 123B params · 0 results
Mistral-Nemo-Base-2407 · 0 results
Mistral-Nemo-Instruct-2407 · 12.2B params · 0 results
§ 05 · Sources & freshness
Where these numbers come from.
speakleash/open_pl_llm_leaderboard · 8 results
SpeakLeash/CPTU-Bench · 5 results
SpeakLeash/Polish-EQ-Bench · 1 result
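As a quick consistency check, the per-source result counts can be summed against the 14 benchmark rows in § 01. A minimal sketch (the dict literal mirrors the counts listed above):

```python
# Per-source result counts, as listed in this section.
source_counts = {
    "speakleash/open_pl_llm_leaderboard": 8,
    "SpeakLeash/CPTU-Bench": 5,
    "SpeakLeash/Polish-EQ-Bench": 1,
}

# Total should match the number of rows in the § 01 benchmark table.
total = sum(source_counts.values())
print(total)  # → 14
```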
14 of 14 rows marked verified.