© 2024 PodJoint
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/17/bd/b5/17bdb5a1-b0d0-1b26-9cfa-2a8cae8c8842/mza_17542379518000041042.jpg/600x600bb.jpg
Consistently Candid
Sarah Hastings-Woodhouse
19 episodes
2 months ago
In this episode, I chatted with Frances Lorenz, events associate at the Centre for Effective Altruism. We covered our respective paths into AI safety, the emotional impact of learning about x-risk, what it's like to be female in a male-dominated community, and more! Follow Frances on Twitter. Subscribe to her Substack. Apply for EAG London!
Technology, Society & Culture, Philosophy
All content for Consistently Candid is the property of Sarah Hastings-Woodhouse and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk
1 hour 36 minutes
7 months ago
Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI. We discussed why AI poses an existential risk to humanity, what makes this problem very hard to solve, why Gabe believes we need to prevent the development of superintelligence for at least the next two decades, and more. Follow Gabe on Twitter. Read The Compendium and A Narrow Path.