
What if machines could reason like humans? We're racing toward that reality. It's called artificial general intelligence. The question is, can we build it safely?
In the second part of our conversation with Dr. Zico Kolter, head of Carnegie Mellon University's Machine Learning Department and an OpenAI board member who chairs the company's Safety and Security Committee, we explore the critical challenges facing AI development today.
Dr. Kolter addresses deepfakes and the erosion of trust in media, explaining how AI accelerates existing problems while also offering potential technological solutions. We examine privacy concerns, debunking common misconceptions about how chatbots use personal data. The discussion also covers data scarcity, infrastructure challenges, and the massive energy demands of AI systems.
We also explore bias in AI models, the psychological impact of human-AI relationships on vulnerable populations, and the concept of artificial general intelligence (AGI). Dr. Kolter shares his optimistic yet cautious vision for the next five years, emphasizing the importance of building AI systems that safely serve humanity's best interests.
This is the Season Two finale of Where What If Becomes What's Next from Carnegie Mellon University.