AutoML is dead and LLMs have killed it? MLGym is a benchmark and framework for testing this theory. Roberta Raileanu and Deepak Nathani discuss how well current LLMs are doing at solving ML tasks, what the biggest roadblocks are, and what that means for AutoML generally. Check out the paper: https://arxiv.org/pdf/2502.14499 More on Roberta: https://rraileanu.github.io/ More on Deepak: https://dnathani.net/
Designing algorithms by hand is hard, so Chris Lu and Matthew Jackson talk about how to meta-learn them for reinforcement learning. Many of the concepts in this episode apply to meta-learning approaches as a whole, though: "how expressive can we be and still perform well?", "how can we get the necessary data to generalize?" and "how do we make the resulting algorithm easy to apply in practice?" are problems that come up for any learning-based approach to AutoML and some of the...