The AutoML Podcast
AutoML Media
43 episodes
1 week ago
AutoML is dead and LLMs have killed it? MLGym is a benchmark and framework testing this theory. Roberta Raileanu and Deepak Nathani discuss how well current LLMs are doing at solving ML tasks, what the biggest roadblocks are, and what that means for AutoML generally. Check out the paper: https://arxiv.org/pdf/2502.14499 More on Roberta: https://rraileanu.github.io/ More on Deepak: https://dnathani.net/
Technology
Episodes (20/43)
MLGym: A New Framework and Benchmark for Advancing AI Research Agents
AutoML is dead and LLMs have killed it? MLGym is a benchmark and framework testing this theory. Roberta Raileanu and Deepak Nathani discuss how well current LLMs are doing at solving ML tasks, what the biggest roadblocks are, and what that means for AutoML generally. Check out the paper: https://arxiv.org/pdf/2502.14499 More on Roberta: https://rraileanu.github.io/ More on Deepak: https://dnathani.net/
1 week ago
1 hour 28 minutes

Leverage Foundational Models for Black-Box Optimization
Where and how can we use foundation models in AutoML? Richard Song, researcher at Google DeepMind, has some answers. Starting from his position paper on leveraging foundation models for optimization, we chat about what makes foundation models valuable for AutoML, what the next steps could look like, and also why the community is not currently embracing the topic as much as it could. Paper Link: https://arxiv.org/abs/2405.03547 Richard's website: https://xingyousong.github.io/
1 month ago
56 minutes

Nyckel - Building an AutoML Startup
Oscar Beijbom is talking about what it's like to run an AutoML startup: Nyckel. Beyond that, we chat about the differences between academia and industry, what truly matters in application and more. Check out Nyckel at: https://www.nyckel.com/
8 months ago
1 hour 20 minutes

Neural Architecture Search: Insights from 1000 Papers
Colin White, head of research at Abacus AI, takes us on a tour of Neural Architecture Search: its origins, important paradigms and the future of NAS in the age of LLMs. If you're looking for a broad overview of NAS, this is the podcast for you!
11 months ago
1 hour 15 minutes

Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How
There are so many great foundation models in many different domains - but how do you choose one for your specific problem? And how can you best finetune it? Sebastian Pineda has an answer: Quicktune can help select the best model and tune it for specific use cases. Listen to find out when this will be a Huggingface feature and if hyperparameter optimization is even important in finetuning models (spoiler: very much so)!
1 year ago
53 minutes

Discovering Temporally-Aware Reinforcement Learning Algorithms
Designing algorithms by hand is hard, so Chris Lu and Matthew Jackson talk about how to meta-learn them for reinforcement learning. Many of the concepts in this episode apply to meta-learning approaches as a whole, though: "how expressive can we be and still perform well?", "how can we get the necessary data to generalize?" and "how do we make the resulting algorithm easy to apply in practice?" are problems that come up for any learning-based approach to AutoML and some of the...
1 year ago
51 minutes

X Hacking: The Threat of Misguided AutoML
AutoML can be a tool for good, but there are pitfalls along the way. Rahul Sharma and David Selby tell us how AutoML systems can be used to give us false impressions about the explainability metrics of ML systems - maliciously, but also by accident. While this episode isn't about a new exciting AutoML method, it can tell us a lot about what can go wrong in applying AutoML and what we should think about when we build tools for ML novices to use.
1 year ago
54 minutes

Introduction To New Co-Host, Theresa Eimer
In today's episode, we're introducing the very special Theresa Eimer to the show. Theresa will be taking over hosting many of the future episodes. Theresa has already recorded multiple episodes and we are stoked to air those shortly. We also spend a few moments explaining my relative absence over the last few months (since the war in the Middle East erupted) and what I'm up to now. Theresa, we are all so excited to be doing this together! To learn more about Theresa, follow her on T...
1 year ago
13 minutes

AutoGluon: The Story
Today we're talking with Nick Erickson from AutoGluon. We discuss AutoGluon's fascinating origin story, its unique point of view, the science and engineering behind some of its unique contributions, Amazon's Machine Learning University, AutoGluon's multi-layer stack ensembler in all its detail, their feature preprocessing pipeline, their feature type inference, their adaptive approach to early stopping, controlling for inference speeds, the different multi-modal architectures, the ML culture...
2 years ago
3 hours 13 minutes

How to Integrate Logic and Argumentation into Human-Centric AutoML
Today we're talking with Joseph Giovanelli about his work on integrating logic and argumentation into AutoML systems. Joseph is a PhD student at the University of Bologna. He was more recently in Hannover working on ethics and fairness with Marius’ team. The paper he published presents his framework, HAMLET, which stands for Human-centric AutoML via Logic and Argumentation. It allows a user to iteratively specify constraints in a formal manner and, once defined, those constraints become logi...
2 years ago
43 minutes

How to Design an AutoML System using Error Decomposition
Today we're talking with Caitlin Owen, a post-doc at the University of Otago, about her work on error decomposition. She recently published a paper titled "Towards Explainable AutoML Using Error Decomposition" about how a more granular view of the components of error can lead to the construction of better AutoML systems. Read her paper here: https://link.springer.com/chapter/10.1007/978-3-031-22695-3_13 Follow her on Twitter here: @CaitAshfordOwen Connect with her on LinkedIn here: https://www...
2 years ago
28 minutes

The Semantic Layer and AutoML
Today we're talking with Gaurav Rao, the EVP & GM of Machine Learning and AI at AtScale, a company centered around the semantic layer. For some time now, I've been feeling that there is a deep connection between a formal articulation of business context and the realization of the dream of AutoML, so I searched for people in the space who can help shine light on this direction. Gaurav is one of the few who can speak about this. As you'll hear, he's extremely pedagogic and he's walking us...
2 years ago
57 minutes

Foundation Models: The term and its origins
Today Ankush Garg is speaking with Rishi Bommasani, PhD student at Stanford and one of the originators of the term Foundation Models. They’re talking about the origins of the term Foundation Model, which he and his group advanced in the paper "On the Opportunities and Risks of Foundation Models". They’ll talk about self-supervision, issues of scale, the motivation behind the terminology, the origins of the Research for Foundation Models Institute at Stanford, outcome homogenization, emergenc...
2 years ago
1 hour 10 minutes

The Business and Engineering of AutoML Products with Raymond Peck
Today we're talking with Raymond Peck, a senior engineer and director in the AutoML space. He spent time at H2O, dotData, Alteryx and many other places. This is a fascinating conversation about the business, engineering, and science of machine learning automation in production. Learning about his experience is crucial for understanding the history of the space. We discuss the early motivations behind AutoML, the initial value propositions that propelled the first movers in the market, the...
2 years ago
2 hours 1 minute

TabPFN: A Revolution in AutoML?
Today we’re talking to Noah Hollmann and Samuel Muller about their paper on TabPFN - which is an incredible spin on AutoML based on Bayesian inference and transformers. [Quick note on audio quality]: Some of the tracks did not record perfectly, but I felt that the content was too important not to release. Sorry for any ear-strain! In the episode, we spend some time discussing posterior predictive probabilities before discussing how exactly they’ve pre-fitted their network, how they g...
2 years ago
1 hour 16 minutes

How financial institutions manage model risk
Today we’re talking to Sean Sexton, the Director of Modeling and Analytics Consulting at KPMG, about the role of models in financial institutions and how the risks associated with them are managed. This turned out to be an incredibly deep and interesting topic, and we really only scratched the surface of it. Sean has a unique ability to summarize developments across an entire space. If you're interested in learning more about modeling in financial institutions and about the history of how we got he...
2 years ago
1 hour 12 minutes

How to solve dynamical systems by fusing data and mechanism
Today we’re talking to Matt Levine. Matt is a PhD student in computing and mathematical sciences at Caltech, and he focuses on improving the prediction and inference of physical systems by blending together both mechanistic modeling and machine learning. This episode is one of my favorites: we go pretty deep into dynamical systems, and into Matt's new framework for solving them by blending traditional, mechanistic, approaches with machine learning. This is a fascinating use of machine ...
2 years ago
1 hour 9 minutes

DASH: How to Search Over Convolutions
Today we’re chatting with Junhong Shen, a PhD student at Carnegie Mellon. Junhong and her team are working on the generalizability of NAS algorithms across a diverse set of tasks. We'll be talking about DASH, a NAS algorithm that puts task diversity at its center. To implement DASH, Junhong and her team developed three clever ideas that she'll share with us. Efficient Architecture Search for Diverse Tasks - https://arxiv.org/pdf/2204.07554.pdf Tackling Diverse Tasks ...
2 years ago
1 hour 18 minutes

Human-Centered AutoML: The New Paradigm
Today we're speaking with Marius Lindauer and it is certainly one of my favorite episodes! As you’ll hear, Marius is full of ideas for where AutoML systems can and should go. These ideas are crystallized in a blog post, published here: https://www.automl.org/rethinking-automl-advancing-from-a-machine-centered-to-human-centered-paradigm/ If you’re searching for research directions, this conversation will leave you with dozens of ideas, as it did me. Marius and his team are doing phenomenal work to make AutoML syst...
2 years ago
1 hour 10 minutes

BERT-Sort: How to use language models to semantically order categorical values
Today Ankush Garg is talking to Mehdi Bahrami about his recent project: BERT-Sort. BERT-Sort is an example of how large language models can add useful context to tabular datasets, and to AutoML systems. Mehdi is a Member of Research Staff at Fujitsu and, as he describes, he began using AutoML systems for his research, yet he came across some crucial limitations of existing solutions. The modifications he made highlight a promising future for the relationship between language models and Auto...
2 years ago
40 minutes
