
What are the biggest obstacles in the way of incorporating ethical values into AI?
OpenAI has funded a $1 million research project at Duke University, focusing on AI’s role in predicting moral judgments in complex scenarios across fields like medicine, law, and business. As AI becomes increasingly influential in decision-making, the question of aligning it with human moral principles grows more pressing. Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
We’re all aware that morality itself is a complex idea, shaped by countless personal, cultural, and contextual factors. Philosophical frameworks like utilitarianism (which prioritizes outcomes) and deontology (which emphasizes following moral rules) offer contrasting views on ethical decisions. Each camp has its own take on resolving dilemmas such as a self-driving car choosing between saving pedestrians or passengers. Then there are cultural differences, such as those documented in studies comparing American and Chinese ethical judgments.
AI’s technical limitations also hinder its alignment with ethics. AI systems lack emotional intelligence and rely on patterns in data, which often contain biases. Early experiments, such as the Allen Institute’s “Ask Delphi,” showed AI’s inability to grasp nuanced ethical contexts, leading to biased or inconsistent results.
To address these challenges, researchers are developing techniques like Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), and Constitutional AI. Each method has strengths and weaknesses, but none offers a perfect solution.
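To make one of these techniques concrete: DPO trains a model directly on human preference pairs (a chosen and a rejected response) without a separate reward model. Below is a minimal illustrative sketch of the standard DPO loss for a single preference pair; the function and variable names are our own, and the beta value is an arbitrary example, not taken from any particular implementation.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are log-probabilities of the chosen and rejected responses
    under the policy being trained and under a frozen reference model.
    """
    # Implicit "reward": how far the policy has shifted toward each
    # response relative to the reference model.
    reward_chosen = beta * (logp_chosen - ref_logp_chosen)
    reward_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the reward margin: the loss shrinks as
    # the policy learns to prefer the human-chosen response.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this loss over many such pairs nudges the model toward the responses humans preferred, which is one way moral judgments collected from people can be folded into training.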
One promising initiative is Duke University's AI research on kidney allocation. This AI system is designed to assist medical professionals in making ethically consistent decisions by reflecting both personal and societal moral standards. While still in its early stages, the project represents a step toward AI systems that work alongside humans, enhancing decision-making while respecting human values.
The future of ethical AI lies in tools that aid, rather than replace, human judgment. Instead of trying to make ourselves redundant, we need technology that brings diverse ethical perspectives into decision-making processes.
Key Topics:
More info, transcripts, and references can be found at ethical.fm