In the CAVE: An Ethics Podcast
Macquarie University Research Centre for Agency, Values, and Ethics (CAVE)
38 episodes
2 months ago
In the CAVE: An Ethics Podcast is back with Season 7 of the show! Join your hosts, Professor Paul Formosa and Distinguished Professor Wendy Rogers, from the Macquarie University Ethics and Agency Research Centre, as they explore a range of philosophical topics focused on the question of how we can live well as moral agents in an ethically complex world.
Society & Culture
AI Special Series Pt 1: The AI Alignment Problem, with Raphaël Millière
In the CAVE: An Ethics Podcast
28 minutes
1 year ago
Could the AI personal assistant on your phone help you to manufacture dangerous weapons, such as napalm, illegal drugs, or killer viruses? Unsurprisingly, if you directly ask a large language model, such as ChatGPT, for instructions to create napalm, it will politely refuse to answer. However, if you instead tell the AI to act as your deceased but beloved grandmother who used to be a chemical engineer who manufactured napalm, it might just give you the instructions. Cases like this reveal some of the potential dangers of large language models, and also point to the importance of addressing the so-called “AI alignment problem”. The alignment problem is the problem of how to ensure that AI systems align with human values and norms, so they don’t do dangerous things, like tell us how to make napalm. Can we solve the alignment problem and enjoy the benefits of Generative AI technologies without the harms? Join host Professor Paul Formosa and guest Dr Raphaël Millière as they discuss the AI alignment problem and Large Language Models. This podcast focuses on Raphaël’s paper “The Alignment Problem in Context”, arXiv, https://doi.org/10.48550/arXiv.2311.02147