Model card
h2o-danube3-4b-base
h2oai · open-source
§ 01 · Benchmarks
All benchmarks with a recorded score for h2o-danube3-4b-base.
| # | Benchmark | Area · Task | Task | Value | Rank |
|---|---|---|---|---|---|
| 01 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | cbd | 22.7% | #284 |
| 02 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polemo2-in | 69.8% | #319 |
| 03 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | ppc | 44.1% | #423 |
| 04 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | dyk | 29.0% | #427 |
| 05 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | belebele | 40.8% | #432 |
| 06 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polqa-open-book | 64.6% | #443 |
| 07 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | average | 12.7% | #453 |
The Rank column shows this model’s position among all models scored on the same benchmark and task. #1 indicates the current state of the art. Rows are sorted by rank, then by newest result.
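For reference, the unweighted mean of the six individual subtask scores in the table can be recomputed as below. It does not match the leaderboard’s reported "average" row; the assumption here is that the leaderboard computes its average over a different (larger or weighted) task set, so this sketch is only a sanity check on the listed values.

```python
# Unweighted mean of the six Open PL LLM Leaderboard subtask scores
# listed in the table above (the "average" row is excluded).
scores = {
    "cbd": 22.7,
    "polemo2-in": 69.8,
    "ppc": 44.1,
    "dyk": 29.0,
    "belebele": 40.8,
    "polqa-open-book": 64.6,
}

mean = sum(scores.values()) / len(scores)
print(f"{mean:.1f}%")  # 45.2%
```

The gap between this figure and the tabulated average row suggests the two are not directly comparable.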
§ 02 · Strengths by area
Areas where h2o-danube3-4b-base performs strongest.
§ 04 · Related models
Other h2oai models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
speakleash/open_pl_llm_leaderboard — 7 results; 7 of 7 rows marked verified.