Techno-philosopher Eliezer Yudkowsky recently went on Ezra Klein's podcast to argue that if we continue on our current path toward superintelligent AI, these machines will destroy humanity. In this episode, Cal responds to Yudkowsky's argument point by point, concluding with a broader claim that this style of discussion suffers from what he calls "the philosopher's fallacy" and distracts us from the real problems AI is causing right now. He then answers listener questions about AI, responds to listener comments from an earlier AI episode, and ends by discussing Alpha schools, which claim to use AI to double the speed of education.