We normally ask: how can we stop AI harming humans? We may also have to ask: how can we stop humans harming AI? After all, there’s a surprisingly strong case for the rights of future AIs. If future (or present!) AIs have rights, why? And what specific rights could some future ChatGPT assistant even have? Will AIs wake up and become conscious or sentient? Or is digital consciousness simply impossible? Given the risks, should we stop AI development in its tracks to avoid creating...
Should the state ban smoking, restrict calories, and stop us harming ourselves? | Prof Sarah Conly on paternalism | Ep. 4
Surprising Ethics
58 minutes
1 month ago
We think of ourselves as rational agents, able to choose well for ourselves. Professor Sarah Conly calls this into question. She argues that we’re reliably bad at making certain decisions – so much so that governments should step in and make many bad choices, like smoking, illegal for our own good. But where does she draw the line? Aren't some decisions sacrosanct? What is the true value of freedom? Is paternalism insulting, or could it be the answer to societal crises? Tune in to hear Conly’...