AutoML is dead and LLMs have killed it? MLGym is a benchmark and framework testing this theory. Roberta Raileanu and Deepak Nathani discuss how well current LLMs are doing at solving ML tasks, what the biggest roadblocks are, and what that means for AutoML generally. Check out the paper: https://arxiv.org/pdf/2502.14499 More on Roberta: https://rraileanu.github.io/ More on Deepak: https://dnathani.net/
Today we’re talking to Noah Hollmann and Samuel Muller about their paper on TabPFN, which is an incredible spin on AutoML based on Bayesian inference and transformers. [Quick note on audio quality]: Some of the tracks did not record perfectly, but I felt the content was too important not to release. Sorry for any ear-strain! In the episode, we spend some time discussing posterior predictive probabilities before discussing how exactly they’ve pre-fitted their network, how they g...
The AutoML Podcast