Send us a text

The Causal Gap: Truly Responsible AI Needs to Understand the Consequences

Why do LLMs systematically drive themselves to extinction, and what does it have to do with evolution, moral reasoning, and causality? In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning.

In this episode, we discuss:
- Zhijing's new work on...
Causal AI & Individual Treatment Effects | Scott Mueller Ep. 20 | CausalBanditsPodcast.com
Causal Bandits Podcast
52 minutes
1 year ago
Send us a text

Can we say something about YOUR personal treatment effect? The estimation of individual treatment effects is the Holy Grail of personalized medicine. It's also extremely difficult. Yet, Scott is not discouraged from studying this topic. In fact, he quit a pretty successful business to study it. In a series of papers, Scott describes how combining experimental and observational data can help us understand individual causal effects. Although this sounds enigmatic to many, the intuition ...