AXRP - the AI X-risk Research Podcast
Daniel Filan
59 episodes
1 week ago
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
Technology
Science
43 - David Lindner on Myopic Optimization with Non-myopic Approval
AXRP - the AI X-risk Research Podcast
1 hour 40 minutes 59 seconds
4 months ago