The Most Interesting People I Know
Garrison Lovely
37 episodes
7 months ago
Interviews with interesting people on science, ethics, and politics, with a focus on guests from Effective Altruism and Left-wing communities.
Politics, Society & Culture, Philosophy, News
35 - Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future”
The Most Interesting People I Know
47 minutes 57 seconds
1 year ago
I'm really excited to come out of hiatus to share this conversation with you. You may have noticed people are talking a lot about AI, and I've started focusing my journalism on the topic. I recently published a 9,000-word cover story in Jacobin’s winter issue called “Can Humanity Survive AI?” and was fortunate to talk to over three dozen people coming at AI and its possible risks from basically every angle. You can find a full episode transcript here.

My next guest is about as responsible as anybody for the state of AI capabilities today. But he's recently begun to wonder whether the field he spent his life helping build might lead to the end of the world. Following in the tradition of the Manhattan Project physicists who later opposed the hydrogen bomb, Dr. Yoshua Bengio started warning last year that advanced AI systems could drive humanity extinct.

(I’ve started a Substack since my last episode was released. You can subscribe here.)

The Jacobin story asked whether AI poses an existential threat to humanity, but it also introduced the roiling three-sided debate around that question. Two of the sides, AI ethics and AI safety, are often pitched as standing in opposition to one another. It's true that the AI ethics camp often argues that we should focus on the immediate harms posed by existing AI systems, and that the existential-risk arguments overhype those systems' capabilities and distract from their immediate harms. It's also the case that many of the people working to mitigate existential risks from AI don't really focus on those immediate harms. But Dr. Bengio is a counterexample to both of these points. He has spent years focusing on AI ethics and the immediate harms from AI systems, yet he also worries that advanced AI systems pose an existential risk to humanity. And he argues in our interview that the choice between AI ethics and AI safety is a false one: it's possible to have both.

Yoshua Bengio is the second-most cited living scientist and one of the so-called “Godfathers of deep learning.” He and the other “Godfathers,” Geoffrey Hinton and Yann LeCun, shared the 2018 Turing Award, computing’s Nobel Prize.

In November, Dr. Bengio was commissioned to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI,” the first significant attempt to create something like the Intergovernmental Panel on Climate Change (IPCC) for AI. I spoke with him last fall while reporting the Jacobin cover story. Dr. Bengio made waves last May when he and Geoffrey Hinton began warning that advanced AI systems could drive humanity extinct.
We discuss:
- His background and what motivated him to work on AI
- Whether there's evidence for existential risk (x-risk) from AI
- How he initially thought about x-risk
- Why he started worrying
- How the machine learning community's thoughts on x-risk have changed over time
- Why reading more on the topic made him more concerned
- Why he thinks Google co-founder Larry Page’s AI aspirations should be criminalized
- Why labs are trying to build artificial general intelligence (AGI)
- The technical and social components of aligning AI systems
- The why and how of universal, international regulations on AI
- Why good regulations will help with all kinds of risks
- Why loss of control doesn't need to be existential to be worth worrying about
- How AI enables power concentration
- Why he thinks the choice between AI ethics and safety is a false one
- Capitalism and AI risk
- The "dangerous race" between companies
- Leading indicators of AGI
- Why the way we train AI models creates risks

Background

Since we had limited time, we jumped straight into things and didn’t cover much of the basics of the idea of AI-driven existential risk, so I’m including some quotes and background in the intro. If you’re familiar with these ideas, you can skip straight to the interview at 7:24. Unless stated otherwise, the below are quotes…