ML researchers just built a new ensemble technique.
It outperforms XGBoost, CatBoost, and LightGBM.
For years, gradient boosting has been the go-to for tabular learning. Not anymore.
TabM is a parameter-efficient ensemble that gives you:
- The speed of an MLP
- The accuracy of GBDT
Here's how it works:
In tabular ML, we've always had to choose between speed and accuracy. MLPs are fast but underperform. Deep ensembles are accurate but bloated. Transformers are powerful but impractical for most tables.
TabM solves this with a simple insight:
(refer to the image below as you read ahead)
Instead of training 32 separate MLPs, it uses one shared model with a lightweight adapter. This small tweak gives you the benefits of ensembling without the cost of training multiple networks.
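To make that concrete, here's a minimal PyTorch sketch of the shared-backbone-plus-adapters idea (BatchEnsemble-style multiplicative adapters). This is not the official TabM code; the class names, the layer sizes, and details like k=32 and prediction averaging are illustrative assumptions.

```python
# Sketch of parameter-efficient ensembling: one shared MLP whose linear layers
# carry k lightweight per-member adapters, so k "virtual" members share almost
# all parameters. Names (LinearEnsemble, TabMSketch) are illustrative.
import torch
import torch.nn as nn


class LinearEnsemble(nn.Module):
    """Shared Linear layer plus k multiplicative adapters:
    y_i = ((x_i * r_i) @ W^T) * s_i + b_i for member i."""

    def __init__(self, d_in: int, d_out: int, k: int):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out, bias=False)  # weights shared by all members
        self.r = nn.Parameter(torch.ones(k, d_in))         # per-member input scaling
        self.s = nn.Parameter(torch.ones(k, d_out))        # per-member output scaling
        self.bias = nn.Parameter(torch.zeros(k, d_out))    # per-member bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, k, d_in) -> (batch, k, d_out); adapters broadcast over the batch
        return self.shared(x * self.r) * self.s + self.bias


class TabMSketch(nn.Module):
    """One shared MLP that acts like k ensemble members in a single forward pass."""

    def __init__(self, d_in: int, d_hidden: int, d_out: int, k: int = 32):
        super().__init__()
        self.k = k
        self.backbone = nn.Sequential(
            LinearEnsemble(d_in, d_hidden, k), nn.ReLU(),
            LinearEnsemble(d_hidden, d_hidden, k), nn.ReLU(),
            LinearEnsemble(d_hidden, d_out, k),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Replicate each row across the k members, run the shared backbone once,
        # then average the k per-member predictions (a simple inference scheme).
        x = x.unsqueeze(1).expand(-1, self.k, -1)  # (batch, k, d_in)
        return self.backbone(x).mean(dim=1)        # (batch, d_out)


model = TabMSketch(d_in=20, d_hidden=256, d_out=1)
print(model(torch.randn(64, 20)).shape)  # torch.Size([64, 1])
```

Each member adds only a few small vectors per layer, so memory and compute stay close to a single MLP while you still get ensemble-like predictions.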
The results:
Against 15+ models across 46 datasets, TabM achieved an average rank of 1.7, ahead of XGBoost, CatBoost, and LightGBM. Complex models like FT-Transformer and SAINT ranked much lower despite being more expensive to train.
I've shared the research paper and benchmarks in the next tweet.
Research paper →
