Codesota · Models · LightGBM
Microsoft · 2 results · 2 benchmarks
Model card

LightGBM.

Microsoft · open-source · Gradient Boosted Trees (leaf-wise)

Fast gradient boosting with leaf-wise tree growth. State-of-the-art for many tabular tasks.
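The leaf-wise growth mentioned above can be pictured as a best-first search over leaves: rather than expanding every leaf at the current depth (level-wise), the tree repeatedly splits whichever leaf promises the largest gain. A minimal sketch of that idea, where the function name and the toy gain callback are illustrative, not LightGBM internals:

```python
import heapq

def grow_leaf_wise(root_gain, child_gains, num_leaves):
    """Best-first (leaf-wise) tree growth, sketched in miniature.

    Keep splitting the single leaf with the largest estimated gain
    until the leaf budget is spent. `child_gains(leaf_id)` is a
    hypothetical callback returning the gains of the two children a
    split would create. Returns leaf ids in the order they were split.
    """
    frontier = [(-root_gain, 0)]  # max-heap via negated gains
    next_id, leaves, split_order = 1, 1, []
    while leaves < num_leaves and frontier:
        neg_gain, leaf = heapq.heappop(frontier)
        if -neg_gain <= 0:
            break  # no profitable split remains
        split_order.append(leaf)
        for gain in child_gains(leaf):
            heapq.heappush(frontier, (-gain, next_id))
            next_id += 1
        leaves += 1  # one leaf became two: net +1
    return split_order

# Toy run: root gain 8, every split yields children with gains 3 and 2.
print(grow_leaf_wise(8.0, lambda leaf: [3.0, 2.0], num_leaves=4))  # [0, 1, 3]
```

In real LightGBM the leaf budget is the `num_leaves` parameter, and gains come from histogram-based split finding rather than a callback.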

§ 01 · Benchmarks

Every benchmark LightGBM has a recorded score for.

# | Benchmark | Area · Task | Metric | Value | Rank | Date | Source
01 | California Housing | Time Series · Tabular Regression | rmse | 0.4% | #2/2 | | source ↗
02 | OpenML-CC18 | Time Series · Tabular Classification | accuracy | 86.9% | #3/5 | 2025-06-01 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark and metric (total competitors after the slash); #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
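For reference, the two metrics in the benchmark table are the standard definitions; a minimal sketch (function names are illustrative):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error (lower is better), as in the regression row."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def accuracy(y_true, y_pred):
    """Fraction of exact matches (higher is better), as in the classification row."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(round(rmse([3.0, 2.0], [2.0, 4.0]), 3))  # 1.581
print(accuracy([1, 0, 1, 1], [1, 1, 1, 0]))    # 0.5
```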
§ 02 · Strengths by area

Where LightGBM actually performs.

Time Series · 2 benchmarks · avg rank #2.5
§ 03 · Papers

1 paper with results for LightGBM.

  1. 2025-06-01 · 1 result

    ConTextTab: A Semantics-Aware Tabular In-Context Learner

    Marco Spinaci
§ 04 · Related models

Other Microsoft models scored on Codesota.

RAD-DINO · 2 results · 1 SOTA
NaturalSpeech 3 · ~500M params · 1 result · 1 SOTA
Swin Transformer V2 Large · 197M params · 1 result · 1 SOTA
WavLM Large (SV) · 316M params · 1 result · 1 SOTA
ResNet-50 · 25M params · 3 results
Florence-2-Large · 2 results
KOSMOS-2.5 · 2 results
ResNet-152 · 60M params · 2 results
§ 05 · Sources & freshness

Where these numbers come from.

LightGBM scikit-learn benchmark · 1 result
ConTextTab Table 1 · 1 result
2 of 2 rows marked verified.