Model card
LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct
LGAI-EXAONE · open-source · 2.41B params
§ 01 · Benchmarks
Every benchmark with a recorded score for LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | language-understanding | 2.1% | #87 | — | source ↗ |
| 02 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | tricky-questions | 0.5% | #89 | — | source ↗ |
| 03 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | average | 1.7% | #90 | — | source ↗ |
| 04 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | phraseology | 2.1% | #90 | — | source ↗ |
| 05 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | sentiment | 1.9% | #93 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
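The `average` row above appears consistent with an unweighted mean of the four per-task scores. A minimal sanity check, assuming the leaderboard averages the tasks without weighting:

```python
# Per-task CPTU-Bench scores from the table above (in %).
scores = {
    "language-understanding": 2.1,
    "tricky-questions": 0.5,
    "phraseology": 2.1,
    "sentiment": 1.9,
}

# Unweighted mean across the four tasks (assumption: no task weighting).
mean = sum(scores.values()) / len(scores)
print(f"unweighted mean: {mean:.2f}%")  # 1.65%, close to the reported 1.7% average
```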
§ 02 · Strengths by area
Where LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct performs best, by area.
§ 05 · Sources & freshness
Where these numbers come from.
SpeakLeash/CPTU-Bench · 5 results · 5 of 5 rows marked verified.