The Causal Gap: Truly Responsible AI Needs to Understand the Consequences Why do LLMs systematically drive themselves to extinction, and what does this have to do with evolution, moral reasoning, and causality? In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning. In this episode, we discuss: - Zhijing's new work on...
All content for Causal Bandits Podcast is the property of Alex Molak.
MSFT Scientist: Agents, Causal AI & Future of DoWhy | Amit Sharma S2E4 | CausalBanditsPodcast.com
Causal Bandits Podcast
1 hour 10 minutes
7 months ago
*Agents, Causal AI & The Future of DoWhy* The idea of agentic systems taking over more complex human tasks is compelling. New "production-grade" frameworks for building agentic systems keep appearing, suggesting that we're close to fully automating these challenging multi-step tasks. But is the underlying agentic technology itself ready for production? And if not, can LLM-based systems help us make better decisions? Recent developments in the DoWhy/PyWhy ecosyste...