Codesota · Models · Mistral-7B-Instruct-v0.1 (Mistral AI) · 3 results · 1 benchmark
Model card

Mistral-7B-Instruct-v0.1

Mistral AI · open-source · Mistral 7B with instruction tuning

Mistral-7B-Instruct-v0.1 was evaluated zero-shot on CNN/DailyMail in arXiv:2507.05123 (Jul 2025), which reports it as the best zero-shot open-source result among 7B-class models in that study.

§ 01 · Benchmarks

Every benchmark Mistral-7B-Instruct-v0.1 has a recorded score for.

#  | Benchmark        | Area · Task                                 | Metric  | Value | Rank   | Date       | Source
01 | CNN / Daily Mail | Natural Language Processing · Summarization | ROUGE-2 | 16.4% | #26/33 | 2025-07-01 | source ↗
02 | CNN / Daily Mail | Natural Language Processing · Summarization | ROUGE-1 | 37.4% | #31/33 | 2025-07-01 | source ↗
03 | CNN / Daily Mail | Natural Language Processing · Summarization | ROUGE-L | 24.5% | #33/33 | 2025-07-01 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark and metric (total after the slash). #1 in red marks current SOTA. Rows are sorted by rank, then by newest result.
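The ROUGE values in the table measure n-gram overlap between model-generated summaries and reference summaries. A minimal sketch of ROUGE-N as n-gram-overlap F1 (an illustration only, not the evaluation script used in the cited paper, which is not specified here):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=2):
    """F1 over n-gram overlap between a candidate and a reference summary."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 3 of 5 bigrams match in each direction, so P = R = F1 = 0.6
print(rouge_n("the cat sat on the mat", "the cat lay on the mat"))
```

Published ROUGE numbers additionally apply stemming and, for ROUGE-L, use longest-common-subsequence matching rather than fixed n-grams.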
§ 02 · Strengths by area

Where Mistral-7B-Instruct-v0.1 actually performs.

Natural Language Processing · 1 benchmark · avg rank #30.0
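The average rank is simply the mean of the three per-metric ranks from the benchmark table; a quick check:

```python
# Per-metric ranks from the benchmark table (ROUGE-2, ROUGE-1, ROUGE-L).
ranks = [26, 31, 33]
avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank}")  # avg rank #30.0
```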
§ 03 · Papers

1 paper with results for Mistral-7B-Instruct-v0.1.

  1. 2025-07-01 · Natural Language Processing · 3 results

     An Evaluation of Large Language Models on Text Summarization Tasks Using Prompt Engineering Techniques

§ 04 · Related models

Other Mistral AI models scored on Codesota.

Codestral 25.01
1 result
Devstral Medium
1 result
Devstral Small 1.1
1 result
Mistral 7B
Unknown params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

arXiv · 3 results
3 of 3 rows marked verified.