This weekly podcast explores how Artificial Intelligence is impacting the future of humanity by diving into such topics as the AI Alignment Problem, Responsible AI, AI in Education, and AI in Healthcare.
This week, we’re diving into one of the most important—and tricky—questions in Responsible AI: Are Responsible AI principles culturally relative? If so, what does that mean for companies trying to implement Responsible AI for their products?
For example, if fairness means something different in Norway, the United States, or a Muslim-majority country, how can companies possibly implement Responsible AI in a way that works for everyone?
To help unpack this, I’m joined this week by Pouria Akbari, a PhD student who has published several papers on the practical challenges of implementing Responsible AI.
OpenAI Changes Everything