We normally ask: how can we stop AI harming humans? We may also have to ask: how can we stop humans harming AI? After all, there’s a surprisingly strong case for the rights of future AIs. If future (or present!) AIs have rights, why? And what specific rights could some future ChatGPT assistant even have? Will AIs wake up and become conscious or sentient? Or is digital consciousness simply impossible? Given the risks, should we stop AI development in its tracks to avoid creating...
All content for Surprising Ethics is the property of Dr William Gildea.
We think of ourselves as rational agents, able to choose well for ourselves. Professor Sarah Conly calls this into question. She argues that we’re reliably bad at making certain decisions – so much so that governments should step in and make many bad choices, like smoking, illegal, for our own good. But where does she draw the line? Aren't some decisions sacrosanct? What is the true value of freedom? Is paternalism insulting, or could it be the answer to societal crises? Tune in to hear Conly’...
Hedonists claim that pleasure is all that makes for a good life. Are they right that relationships, achievements, and meaningfulness have no intrinsic value? We explore the surprising arguments on both sides of this debate about wellbeing, including Nozick’s infamous experience machine thought experiment. Would you plug into an experience simulator, forever cutting yourself off from the real world to have the best time of your life? Podcast website for contact details and more: surpri...
Some philosophers now argue that monogamy is morally wrong. Imagine a friend came and told you that you couldn’t have any other friendships. You’d be bemused. But what’s the difference between this and exclusivity in love relationships? Is jealousy a good reason to be monogamous? Or is ethical non-monogamy – such as open relationships or polyamory – the only ethical approach? Podcast website for contact details and more: surprisingethics.buzzsprout.com Instagram: @surprising_ethics_podcast tinyurl.com/surprisingethics
Society assumes that animals do not have moral rights. But what could this be based on? How could we argue that humans are the only animals to have rights? And where do we draw the line? These questions about animal ethics also raise the question: why does each of us human beings, ultimately, matter as an individual? Podcast website for contact details and more: surprisingethics.buzzsprout.com Instagram: @surprising_ethics_podcast tinyurl.com/surprisingethics
A trailer briefly outlining Surprising Ethics, a new podcast launching on 1st September 2025. Podcast website for contact details and more: surprisingethics.buzzsprout.com Instagram: @surprising_ethics_podcast tinyurl.com/surprisingethics