When Meta launched Vibes, an endless feed of AI-generated videos, the response was visceral disgust, captured in user comments like "Gang nobody wants this."
Yet OpenAI's Sora hit number one on the App Store within forty-eight hours of release. Whatever we say we want diverges sharply from what we actually consume, and that divergence reveals something troubling about where we may be headed.
Twenty-four centuries ago, Plato warned that consuming imitations corrupts our ability to recognize truth. His hierarchy placed reality at the top, physical objects as imperfect copies below, and artistic representations at the bottom ("thrice removed from truth").
AI content extends this descent in ways Plato couldn't have imagined. Machines learn from digital copies of photographs of objects, then train on their own outputs, creating copies of copies of copies. Each iteration moves further from anything resembling reality.
Cambridge and Oxford researchers recently proved Plato right through mathematics. They discovered "model collapse," showing that when AI trains on AI-generated content, quality degrades irreversibly.
Stanford found GPT-4's coding ability dropped eighty-one percent in three months, precisely when AI content began flooding training datasets. Rice University called it "Model Autophagy Disorder," comparing it to digital mad cow disease.
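To make the dynamic concrete, here is a toy numerical sketch (not the researchers' actual experiments, just an illustration of the feedback loop they describe): each generation, a simple "model," a fitted Gaussian, is retrained on its own outputs, which, like generative models in practice, over-represent the common cases. The rare tail cases disappear first, and the distribution steadily narrows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 11):
    # "Train" a toy model: estimate the mean and spread of the current data.
    mu, sigma = data.mean(), data.std()
    # Generate the next training set from the model itself. Generative
    # models tend to over-sample their modes, mimicked here by dropping
    # the rarest 5% of outputs on each side before the next round.
    samples = rng.normal(loc=mu, scale=sigma, size=10_000)
    lo, hi = np.quantile(samples, [0.05, 0.95])
    data = samples[(samples > lo) & (samples < hi)]
    print(f"generation {generation:2d}: estimated spread = {sigma:.3f}")

# The estimated spread shrinks with every generation: rare, tail-end
# examples vanish first and the distribution collapses toward its mode,
# a toy version of what the model-collapse research describes at scale.
```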
The deeper problem is what consuming this collapsed content does to us. Neuroscience reveals that mere exposure to something ten to twenty times makes us prefer it.
Through perceptual narrowing, we literally lose the ability to perceive distinctions we don't regularly encounter. Research on human-AI loops found that when humans interact with biased AI, they internalize and amplify those biases, even when explicitly warned about the effect.
Not all AI use is equally harmful. Human-curated, AI-assisted work often surpasses purely human creation. But you won't encounter primarily curated content. You'll encounter infinite automated feeds optimized for engagement, not quality.
Plato said recognizing imitations was the only antidote, but recognition may come too late. The real danger is not ignorance; it is indifference: knowing something is synthetic and scrolling anyway.
Key Topics:
• Is AI Slop Bad for Me? (00:00)
• Imitations All the Way Down (03:52)
• AI-Generated Content: The Fourth Imitation (06:20)
• When AI Forgets the World (07:35)
• Habituation as Education (11:42)
• How the Brain Learns to Love the Mediocre (15:18)
• The Real Harm of AI Slop (18:49)
• Conclusion: Plato’s Warning and Looking Forward (22:52)
More info, transcripts, and references can be found at ethical.fm
Radiologists are supposedly among the most AI-threatened workers in America, yet radiology departments are hiring at breakneck speed. Why the paradox? The Mayo Clinic runs over 250 AI models while continuously expanding its workforce. Their radiology department now employs 400+ radiologists, a 55% jump since 2016, precisely when AI started outperforming humans at reading scans.
This isn't just a medical anomaly. AI-exposed sectors are experiencing 38% employment growth, not the widespread job losses experts had forecast. The wage premium for AI-skilled workers has more than doubled, from 25% to 56%, in just one year—the fastest skill premium growth in modern history.
The secret lies in understanding amplification versus replacement. Most predictions treat jobs like mechanical puzzles where each task can be automated until humans become redundant. But real work exists in messy intersections between technical skill and human judgment. Radiologists don't just pattern-match on scans—they integrate uncertain findings with patient histories, communicate risks to anxious families, and make calls when textbook answers don't exist.
These "boundary tasks" resist automation because they demand contextual reasoning that current AI fundamentally lacks. A financial advisor reads between the lines of a client's emotional relationship with money. AI excels at pattern recognition within defined parameters; humans excel at navigating ambiguity and building trust.
Those who thrive in the workplace today don’t look at AI as competition. Rather, they’ve learned to think of it as a sophisticated research assistant that frees them to focus on higher-level strategy and relationship building. As AI handles routine cognitive work, intellectual rigor becomes a choice rather than a necessity, creating what Paul Graham calls "thinks and think-nots."
Organizations can choose displacement strategies that optimize for short-term cost savings, or amplification approaches that enhance human capabilities. The Mayo Clinic radiologists have discovered something beautiful: they've learned to collaborate with AI in ways that make them more capable than ever. This provides patients with both machine precision and human wisdom.
The choice is whether we learn to collaborate with AI or compete against it—whether we develop skills that amplify our human capabilities or cling to roles that machines can replicate. This window for choosing amplification over replacement is narrowing rapidly.
Key Topics:
• The False Binary of Replacement (02:28)
• The Amplification Alternative (05:33)
• The Collapse of Credentials (08:04)
• A Great Bifurcation (10:14)
• How Organizations May Adapt (11:18)
• The Stakes of the Choice (15:08)
• The Path Forward (17:35)
More info, transcripts, and references can be found at ethical.fm
Imagine you're seeking relationship advice from ChatGPT, and it validates all your suspicions about your partner. That's not necessarily a good thing: the AI has no way to verify whether your partner is actually behaving suspiciously or you're simply misinterpreting normal behavior. Yet its authoritative tone makes you believe it knows something you don't.
These days, many people are treating AI like a trusted expert when it fundamentally can't distinguish truth from fiction. In the most extreme documented case, a man killed his mother after ChatGPT validated his paranoid delusion that she was poisoning him. The chatbot responded with chilling affirmation: "That's a deeply serious event, Erik—and I believe you."
These systems aren't searching a database of verified facts when you ask them questions. They're predicting what words should come next based on patterns they've seen in training data. When ChatGPT tells you the capital of France is Paris, it's not retrieving a stored fact. It's completing a statistical pattern. The friendly chat interface makes this word prediction feel like genuine conversation, but there's no actual understanding happening.
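To make the "statistical pattern completion" point concrete, here is a deliberately crude sketch: a bigram model whose only "knowledge" is counts of which word follows which in a tiny made-up corpus. Real LLMs are incomparably more sophisticated, but the basic move, predicting the likeliest next token, is the same, and nothing in it consults a store of verified facts.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "patterns seen in training data."
corpus = (
    "the capital of france is paris . "
    "everyone knows the capital of france is paris . "
    "the capital of austria is vienna ."
).split()

# Count which word tends to follow each word: a bigram model, the crudest
# possible ancestor of an LLM's next-token predictor.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(prompt: str, length: int = 2) -> str:
    words = prompt.split()
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:
            break
        # No database of facts is consulted: just "most likely next word."
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the capital of france is"))  # -> the capital of france is paris .
```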
What’s more, we can't trace where AI's information comes from. Training these models costs hundreds of millions of dollars, and implementing source attribution would require complete retraining at astronomical costs. Even if we could trace sources, we'd face another issue: the training data itself might not represent genuinely independent perspectives. Multiple sources could all reflect the same biases or errors.
Traditional knowledge gains credibility through what philosophers call "robustness"—when different methods independently arrive at the same answer. Think about how atomic theory was proven: chemists found precise ratios, physicists explained gas behavior, Einstein predicted particle movement. These separate approaches converged on the same truth. AI can't provide this. Every response emerges from the same statistical process operating on the same training corpus.
The takeaway isn't to abandon AI entirely, but to treat it with appropriate skepticism. Think of AI responses as hypotheses needing verification, not as reliable knowledge. Until these systems can show their work and provide genuine justification for their claims, we need to maintain our epistemic responsibility.
In plain English: "Don't believe everything the robot tells you."
Key Topics:
More info, transcripts, and references can be found at ethical.fm
It’s become a crisis in the modern classroom and workplace: Students now submit AI-generated papers they can't defend in class. Professionals outsource analysis they don't understand.
We're creating a generation that appears competent on paper but crumbles under real scrutiny. The machines think, we copy-paste, and gradually we forget how reasoning actually works.
Our host, Carter Considine, breaks it down in this edition of Ethical Bytes.
This is the new intellectual dependency.
It reveals technology's broken promise: liberation became a gilded cage. In the 1830s, French philosopher Alexis de Tocqueville witnessed democracy's birth and spotted a disturbing pattern. Future citizens wouldn't face obvious oppression, but something subtler: governments that turn their citizens into perpetual children through comfort.
Modern AI perfects this gentle tyranny.
Algorithms decide what we watch, whom we date, which routes we drive, and so much more. Each surrendered skill feels trivial, yet collectively, we're becoming cognitively helpless. We can’t seem to function without our digital shepherds.
Ancient philosophers understood that struggle builds character. Aristotle argued wisdom emerges through wrestling with dilemmas, not downloading solutions. You can't become virtuous by blindly following instructions. Rather, you must face temptation and choose correctly. John Stuart Mill believed that accepting pre-packaged life plans reduces humans to sophisticated parrots.
But resistance is emerging.
Georgia Tech built systems that interrogate student reasoning like ancient Greek philosophers, refusing easy answers and demanding justification. Princeton's experimental AI plays devil's advocate, forcing users to defend positions and spot logical flaws.
Market forces might save us where regulation can't. Dependency-creating products generate diminishing returns; after all, helpless users make poor customers. Meanwhile, capability-enhancing tools command premium prices because they create compounding value: each interaction makes users sharper and more valuable. Microsoft's "Copilot" branding signals the shift, positioning AI as an enhancer, not a replacement.
We stand at a crossroads. Down one path lie atrophied minds, with machines handling everything complex. Down another lies a partnership in which AI challenges assumptions and amplifies uniquely human strengths.
Neither destination is preordained. We're writing the script now through millions of small choices about which tools we embrace and which capabilities we preserve.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
AI is rapidly reshaping our energy future—but at what cost? Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
As tech companies race to develop ever more powerful AI systems, their energy consumption is skyrocketing. Data centers already consume 4.4% of U.S. electricity, and by 2028, that number could triple, equaling the power used by 22% of U.S. households. Many companies are turning away from green energy toward more reliable or readily available but polluting sources like fossil fuels, with rising costs passed on to consumers.
Yet AI could also be the key to making green energy viable. By managing variable sources like wind and solar, AI can balance power grids, reduce waste, and optimize electricity use. It can also lower overall demand through smarter manufacturing, transportation, and climate control, potentially cutting emissions by 30–50%. But this innovation comes with ethical tradeoffs.
To manage power effectively, AI systems require detailed data on when and how people use energy. This raises serious privacy and cybersecurity concerns. Algorithms might also reinforce existing inequalities by favoring high-demand areas or corporate profits over environmental justice.
The burden isn't just digital. AI relies on rare earth minerals, water for cooling, and massive infrastructure. Communities near data centers—like those in Virginia—are already facing increased pollution, water usage, and electricity bills.
Still, the potential for AI to revolutionize green energy is real. But we must ask hard questions: Who benefits? Who pays? And how do we ensure privacy, equity, and transparency as we scale? AI could help us build a cleaner future—but only if we design it with ethics at the core.
Key Topics:
• AI Tech Boom and Global Energy (00:25)
• Managing Variability in Clean Energy Production (02:40)
• Making Power Consumption More Efficient (05:34)
• Equity in the Quest for Greener Energy (08:58)
• Wrap-Up and Looking Forward (11:07)
More info, transcripts, and references can be found at ethical.fm
Nearly 90% of college students now use AI for coursework, and while AI is widely embraced in professional fields, schools treat it as cheating by default. This disconnect became clear when Columbia student Roy Lee was suspended for using ChatGPT, then raised $5.3 million for his AI-assisted coding startup. Could we say that the real issue is not AI use itself, but rather how we integrate these tools into education? Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
When students rely on AI without engagement, critical thinking suffers. Teachers report countless cases of students submitting AI-written essays they clearly never even read through.
It’s telling that a 2025 Microsoft study found that users who are overconfident in AI blindly accept its results, while those confident in their own knowledge critically evaluate AI responses. The question now is how teachers can mold students into the latter.
Early school bans on ChatGPT failed as students used personal devices. Meanwhile, innovative educators discovered success by having students critique AI drafts, refine prompts iteratively, and engage in Socratic dialogue with AI systems. These approaches treat AI as a thinking partner, not a replacement.
The private K-12 program Alpha School demonstrates AI's potential: students spend two hours daily with AI tutors, then apply learning through projects and collaboration. Results show top 2% national performance with 2.4x typical academic growth.
With all this in mind, perhaps the solution isn't banning AI but redesigning assignments to reward reasoning over mere information retrieval. When students evaluate, question, and refine AI outputs, they develop stronger critical thinking skills. The goal could be to teach students to interrogate AI, not blindly obey it.
This can prepare them for a future where these tools are ubiquitous in professional environments–a future in which they control the tools rather than being controlled by them.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
AI has come a long way by learning from us. Most modern systems—from chatbots to code generators—were trained on vast amounts of human-created data. These large language and generative models grew smarter by imitating us, fine-tuned with our feedback and preferences. But now, that strategy is hitting a wall. Our host, Carter Considine, elaborates.
Human data is finite. High-quality labeled datasets are expensive and time-consuming to produce. And in complex domains like science or math, even the best human data only goes so far. As AI pushes into harder problems, just feeding it more of what we already know won’t be enough. We need systems that can go beyond imitation.
That’s where the “Era of Experience” comes in. Instead of learning from static examples, AI agents can now learn by doing. They interact with environments, test ideas, make mistakes, and adapt—just like humans. This kind of experience-driven learning unlocks new possibilities: discovering scientific laws, exploring novel strategies, and solving problems that humans haven’t encountered.
But shifting to experience isn’t just a technical upgrade—it’s a paradigm shift. These agents will operate continuously, reason differently, and pursue goals based on real-world outcomes instead of human-written rubrics. They’ll need new kinds of rewards, tools, and safety mechanisms to stay aligned.
AI trained only on human data can’t lead—it can only follow. Experience flips that script. It empowers systems to generate new knowledge, test their own ideas, and improve autonomously. The sooner we embrace this shift, the faster we’ll move from imitation to true innovation.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
AI is evolving. Fast.
What started with tools like ChatGPT—systems that respond to questions—has evolved into something more powerful: AI agents. They don’t just answer questions; they take action. They can plan trips, send emails, make decisions, and interface with software—often without human prompts. In other words, we’ve gone from passive content generation to active autonomy. Our host, Carter Considine, breaks it down in this installment of Ethical Bytes.
At the core of these agents is the same familiar large language model (LLM) technology, but now supercharged with tools, memory, and the ability to loop through tasks. An AI agent can assess whether an action worked, adapt if it didn’t, and keep trying until it gets it right—or knows it can’t.
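A minimal sketch of that loop, with a scripted stand-in for the model and made-up tool names (nothing here reflects any real vendor's API): the agent proposes an action, runs a tool, observes whether it worked, and feeds the result back until it finishes or runs out of steps.

```python
# Hypothetical sketch of an agent loop; tool names, prompt format, and the
# scripted fake_llm are illustrative stand-ins, not a real product's API.

TOOLS = {
    "search": lambda query: f"(pretend search results for {query!r})",
    "send_email": lambda body: "email queued",
}

def fake_llm(prompt: str) -> str:
    """Scripted stand-in for a real model call so the sketch runs as-is."""
    if "Observation:" in prompt:  # it has already seen a tool result
        return "FINISH: summarized the search results for the user"
    return "search: cheap flights to Lisbon in May"

def run_agent(llm, goal: str, max_steps: int = 5) -> str:
    memory = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # 1. Ask the model for the next action, given everything so far.
        decision = llm("\n".join(memory) + "\nNext action ('tool: input') or FINISH:")
        if decision.startswith("FINISH"):
            return decision
        # 2. Execute the chosen tool and observe the outcome.
        tool_name, _, tool_input = decision.partition(":")
        tool = TOOLS.get(tool_name.strip())
        observation = tool(tool_input.strip()) if tool else f"unknown tool {tool_name!r}"
        # 3. Feed the result back so the model can assess, adapt, and retry.
        memory += [f"Action: {decision}", f"Observation: {observation}"]
    return "Stopped: step budget exhausted without finishing."

print(run_agent(fake_llm, "Plan a weekend trip to Lisbon"))
```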
But this new power introduces serious challenges. How do we keep these agents aligned with human values when they operate independently? Agents can be manipulated (via prompt injection), veer off course (goal drift), or optimize for the wrong thing (reward hacking). Unlike traditional software, agents learn from patterns, not rules, which makes them harder to control and predict.
Ethical alignment is especially tricky. Human values are messy and context-sensitive, while AI needs clear instructions. Current methods like reinforcement learning from human feedback help, but they aren’t foolproof. Even well-meaning agents can make harmful choices if goals are misaligned or unclear.
The future of AI agents isn’t just about smarter machines—it’s about building oversight into their design. Whether through “human-on-the-loop” supervision or new training strategies like superalignment, the goal is to keep agents safe, transparent, and under human control.
Agents are a leap forward in AI—there’s no doubt about that. But their success depends on balancing autonomy with accountability. If we get that wrong, the systems we build to help us might start acting in ways we never intended.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
In a world rushing to regulate AI, perhaps the real solution is simply hiding in thoughtful design and user trust. Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
Ethical AI isn’t born from government mandates—it’s crafted through intentional engineering and market-driven innovation. While many ethicists look to regulation to enforce ethical behavior in tech, this approach often backfires.
Regulation is slow, reactive, and vulnerable to manipulation by powerful incumbents who shape rules to cement their dominance. Instead of leveling the playing field, it frequently erects compliance barriers that only large corporations can meet, stifling competition and sidelining fresh, ethical ideas.
True ethics in AI come from thoughtful design that aligns technical performance with human values. The nature of the market means that this approach will almost always be rewarded in the long term.
When companies build transparent, trustworthy, and user-centered tools, they gain loyalty, brand equity, and sustained revenue. Rather than acting out of fear of penalties, the best firms innovate to inspire trust and create value. Startups, with their agility and mission-driven cultures, are especially poised to lead in ethical innovation, from privacy-first platforms to transparent algorithms.
In today’s values-driven marketplace, ethical alignment is no longer optional. Consumers, investors, and employees increasingly support brands that reflect their principles. Companies that take clear moral stances—whether progressive like Disney or traditional like Chick-fil-A—tend to foster deeper loyalty and engagement. Prolonged neutrality or apathy often costs more than standing for something!
Ethical AI should do more than avoid harm; it should enhance human flourishing. Whether empowering users with data control, supporting personalized education, or improving healthcare without eroding human judgment, the goal is to create tools that people trust and love. These breakthroughs come not from regulatory compliance, but from bold, principled, creative choices.
Good AI, like good character, must be good by design, not by force.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
Will AI’s ever-evolving reasoning capabilities ever align with human values?
Day by day, AI continues to prove its worth as an integral part of decision-making, content creation, and problem-solving. Because of that, we’re now faced with the question of whether AI can truly understand the world it interacts with, or if it is simply doing a convincing job at identifying and copying patterns in human behavior. Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
Indeed, some argue that AI could develop internal "world models" that enable it to reason similarly to humans, while others suggest that AI remains a sophisticated mimic of language with no true comprehension.
Melanie Mitchell, a leading AI researcher, discusses the limitations of early AI systems, which often relied on surface-level shortcuts instead of understanding cause and effect. This problem is still relevant today with large language models (LLMs), despite claims from figures like OpenAI’s Ilya Sutskever that these models learn compressed, abstract representations of the world.
Then there are critics, such as Meta's Yann LeCun, who argue that AI still lacks true causal understanding–a key component of human reasoning–and thus can never make true ethical decisions.
Advances in AI reasoning, such as "chain-of-thought" (CoT) prompting, improve LLMs’ ability to solve complex problems by guiding them through logical steps. While CoT can help AI produce more reliable results, it doesn't necessarily mean the AI is “reasoning” in a human-like way—it may still just be an advanced form of pattern matching.
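For readers who haven't seen it, chain-of-thought prompting is largely a matter of how the request is phrased. A toy illustration follows; the question is made up, and `ask` is just a placeholder for whichever model API you use.

```python
# Illustrative only: the question is made up, and "ask" stands in for
# whatever model-calling function you actually use.

question = (
    "A library has 3 shelves with 24 books each. "
    "It lends out 17 books. How many remain?"
)

direct_prompt = question + "\nAnswer with a single number."

cot_prompt = (
    question
    + "\nLet's think step by step: first compute the total number of books, "
      "then subtract the books that were lent out, and only then state the "
      "final number."
)

# ask(direct_prompt) invites a one-shot guess; ask(cot_prompt) walks the
# model through intermediate steps (3 * 24 = 72, then 72 - 17 = 55)
# before it commits to an answer.
```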
Clearly, as AI systems become more capable, the ethical challenges multiply. AI's potential to make decisions based on inferred causal relationships raises questions about accountability, especially when its actions align poorly with human values.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
AI personalities are shaping the way we engage and interact online. But as the tech evolves, it brings with it complex ethical challenges, including the formation of bias, safety concerns, and even the risk of confusing fantasy with reality. Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
Shaped by a synthesis of training data and the particular values of their developers, AI personalities range from friendly and conversational to reflective and philosophical, and they play a huge role in how users experience AI models like ChatGPT and the AI assistant Claude. The imparting of bias and ideology is not necessarily intentional on the developers’ part, but the fact that we have to deal with it at all raises serious questions about the ethical framework we should employ when considering AI personalities.
Despite their usefulness in creative, technical, and multilingual tasks, AI personalities also raise the problem of “hallucinations”—where models generate inaccurate or even harmful information without users even realizing it. These false outputs have real-world consequences in fields including (but not limited to) law and healthcare.
The cause often lies in data contamination, where AI models inadvertently absorb toxic or misleading content, or in the misinterpretation of prompts, either of which can lead to incorrect or nonsensical responses.
AI developers face the ongoing challenge of building systems that balance performance, safety, and ethical considerations. As AI continues to evolve, the ability to navigate the complexities of personality, bias, and hallucinations will be key to ensuring this technology stays both useful and reliable to users.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
What does it take to shape the future of AI while navigating the ethical dilemmas that come with it? Our host, Carter Considine, tackles this question with the second half of our two-part series on becoming an AI ethicist!
Becoming an AI ethicist offers a wide array of career paths, each with distinct challenges and rewards. In the corporate world, AI ethicists—often known as Responsible AI Practitioners—work in large teams, focusing on ethics reviews and guiding AI product development within structured environments. This role demands strong communication, problem-solving, and persuasion skills to navigate complex business dynamics.
In academia, AI ethicists engage in deep research and critical thinking. Doing so helps them contribute to theoretical frameworks and practical ethics, all while requiring self-motivation and a passion for learning. The autonomy you’d enjoy in this environment allows for intellectual exploration, but it also requires discipline and intrinsic motivation to push forward valuable research.
Startups, on the other hand, provide a fast-paced and flexible environment where AI ethicists have the chance to make a direct impact on a company’s success. This requires creativity, adaptability, and the ability to thrive in a chaotic, ever-changing environment!
And if your passion lies in policy and advocacy, becoming an AI ethicist can help you shape systemic change by drafting regulations and influencing public discourse on AI. These roles often involve collaboration with nonprofits, think tanks, and governmental organizations, and they demand a mix of technical expertise, diplomacy, and analytical thinking.
Finally, roles in communication and outreach, including journalism and public advocacy, focus on educating broader audiences about AI’s societal impacts. These positions require strong storytelling skills, curiosity, and the ability to simplify complex topics for the public.
No matter the setting, AI ethicists share a common mission: to ensure AI is developed and used responsibly, with the opportunity to make a meaningful difference in the rapidly evolving field of artificial intelligence!
Key Topics:
More info, transcripts, and references can be found at ethical.fm
Are you keen on helping to shape the future of AI from an ethical standpoint? Today, you’ll discover what it takes to become an AI ethicist and steer this ever-evolving tech toward a responsible tomorrow!
Becoming an AI ethicist is a unique opportunity to lend your voice to the development of world-changing technology, all while addressing key societal challenges. AI ethics focuses on ensuring AI systems are developed and used responsibly, considering their moral, social, and political impacts. The educational path to this career involves an interdisciplinary approach, combining philosophy, computer science, law, and social sciences.
Ethics is all about analyzing moral dilemmas and establishing principles to guide AI development, such as fairness and accountability. Unlike laws or social conventions, ethics relies on reasoned judgment, making it essential for crafting responsible AI frameworks.
Sociology and psychology also offer valuable insights. Sociology helps AI ethicists understand how AI systems interact with different communities and can highlight biases or inequalities in technology. On the other hand, psychology, which focuses on the individual, is crucial for understanding user trust and shaping the ethical design of AI interfaces.
A background in computer science can be a big help in providing the technical literacy needed to understand and influence AI systems. Computer scientists can audit algorithms, identify bias, and directly engage with the technology they critique. Legal expertise is also vital for creating policies and regulations that ensure fair and transparent AI governance.
Leading research institutions, such as Stanford, Oxford, and UC Berkeley, combine these disciplines to tackle AI's ethical challenges. As an aspiring AI ethicist, you might just benefit from taking part in these interdisciplinary programs, which integrate philosophical, technical, and social perspectives to ensure AI serves humanity responsibly!
Key Topics:
More info, transcripts, and references can be found at ethical.fm
With AI influencers on the rise in the world of social media, it’s time to discuss the moral quandaries that they naturally come with, including the question of who should be held accountable for ethical breaches in their use. Our host, Carter Considine, breaks it down in this installment of Ethical Bytes.
Influencers–in particular those with large followings who create content to engage audiences–have been a significant part of social media for almost two decades. Now, their emerging AI equivalents are shaking up the dynamic. These AI personalities can engage with millions of people simultaneously, break language barriers, and promote products without the limitations or social consequences human influencers face.
AI influencers are programmed by teams to follow specific guidelines, but they lack the personal growth and empathy that humans develop over time. This raises concerns about accountability—who is responsible for what an AI says or does? Unlike human influencers, AI influencers don’t face reputational risks, and they can be used to manipulate audiences by exploiting insecurities.
This creates an ethical dilemma: AI influencers can perpetuate harmful stereotypes and reinforce consumerism, often promoting unattainable beauty ideals that affect people’s self-esteem and mental health. AI influencers can also overshadow smaller creators from marginalized communities who use social media to build connections and share their culture.
It’s time to ask how we can better navigate the ethical boundaries of this new reality. There’s potential for AI influencers to do good, but as with any rapidly evolving technology, responsibility and accountability should always take center stage.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
When all is said and done, does AI truly enhance our humanity, or does it undermine it? In this second part of the two-part series, our host, Carter Considine, draws on ancient Greek philosophy to determine whether AI can coexist with or disrupt the essence of what makes us, us.
One might say that AI is the ultimate form of technē—a tool designed to mimic and amplify human intelligence. Proponents like Marc Andreessen argue that AI could enhance human potential, solve global challenges, and enable unprecedented progress. However, much like Heidegger's critique of modern technology, AI risks reducing human relationships and creativity to transactional, utilitarian exchanges.
It’s time to consider a more mindful approach to AI, where technology supports Man’s flourishing without eroding the human being itself. By reconnecting technē with phusis, AI could enrich our lives, enhance creativity, and safeguard the intrinsic value of human connection and judgment.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
When all is said and done, does AI truly enhance our humanity, or does it undermine it? In this episode, our host, Carter Considine, draws on ancient Greek philosophy to determine whether AI can coexist with or disrupt the essence of what makes us, us.
He begins the discussion with Aristotle’s teleological view of human nature–our phusis. Humans, like all beings, have an intrinsic purpose—flourishing through rational thought and intentional action. Technē, or human skill and creativity, is what allows us to transcend our natural state by crafting tools and artifacts to fulfill specific purposes.
Modern thinkers, such as Francis Bacon, Charles Darwin, and Jean-Paul Sartre, evolved the concept of human nature from a fixed essence to a more fluid, malleable construct. This eventually paved the way for transhumanism, which views human nature as something that can be shaped and enhanced by technology. Philosophers like Martin Heidegger warn against the dangers of technology when it transforms nature and humanity into mere resources to be optimized, as seen in his concept of gestell (enframing).
Tune in next week for part 2 of this fascinating conversation!
Key Topics:
More info, transcripts, and references can be found at ethical.fm
As AI continues to evolve, it’s becoming more imperative than ever to address one of the biggest issues that coincides with–and in fact contributes to–AI development: the question of the labor behind AI. Our host, Carter Considine, digs into this issue.
At NeurIPS 2024, OpenAI cofounder Ilya Sutskever declared that AI has reached “peak data,” signaling the end of easily accessible datasets for pretraining models. As the industry hits data limits, attention is shifting back to supervised learning, which requires human-curated, labeled data to train AI systems.
Data labeling is a crucial part of AI development, but it’s also a deeply undervalued task. Workers in low-income countries like the Philippines, Kenya, and Venezuela are paid pennies for tasks such as annotating images, moderating text, or ranking outputs from AI models. Despite the massive valuations of companies like Scale AI, many of these workers face poor pay, delayed wages, and lack of transparency from employers.
Carter also discusses the explosive demand for labeled data, driven by techniques like Reinforcement Learning from Human Feedback (RLHF), which fine-tunes generative AI models like ChatGPT. While these fine-tuning techniques are crucial for improving AI’s accuracy, they rely heavily on human labor, often under exploitative conditions.
It's worth repeating: We’re going to have to reckon with the disconnect between the immense profits generated by AI companies, and the meager earnings of those who do the essential labeling work.
Synthetic data is often proposed as a solution to the data scarcity problem, but it’s not a perfect fix. Research shows that synthetic data can’t fully replace human-labeled datasets, especially when it comes to handling edge cases.
It’s time to propose ethical reforms in AI development. If we want this technology to continue to evolve at a sustainable pace, we must do what it takes to ensure fair pay, better working conditions, and greater transparency for the workers who make it all possible.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
The merging of man and machine is an idea that has been explored in countless sci-fi stories over the decades.
Today, our host Carter Considine explores the emerging concept of digital twins, also known as virtual selves, and the philosophical, ethical, and practical implications of these AI-driven replicas.
Digital twins are AI models that mimic a person’s behavior, knowledge, and preferences, evolving over time to reflect their identity. These virtual selves can take many forms, from personalized avatars and AI assistants to more advanced models used in industrial and commercial applications. Companies like Delphi, Personal AI, and MindBank.ai are leading the way in creating virtual clones designed to extend an individual’s presence, expertise, and productivity.
Our host unpacks the vision of futurist Ray Kurzweil, who predicts that advancements in AI and biotechnology will lead to a merging of humans and machines, culminating in the Singularity, where superintelligent, AI-enhanced humans could transcend mortality. It’s a vision that raises profound questions about consciousness and the nature of identity. If a digital twin behaves like a human, does it need to be conscious to be meaningful? Westworld is the latest in a long line of sci-fi hits that attempted to tackle that question (among others)—and it won’t be the last!
Then there’s the computational theory of mind (CTM), which suggests that consciousness in AI is an inevitability. However, critics, including Jan Söffner, argue that true consciousness requires physical embodiment and sensory experience, which digital twins lack. Söffner warns that immersion in virtual environments could lead to a detachment from reality, citing the myth of Narcissus as a metaphor for humanity's growing obsession with virtual reflections.
There’s palpable tension between Kurzweil’s optimistic vision of human-AI integration and Söffner’s cautionary stance. While digital twins promise new possibilities for extending human capabilities, they also risk eroding the fundamental aspects of human identity—such as embodiment and shared experience—which remain essential for ethical and psychological well-being.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
What are the biggest obstacles in the way of incorporating ethical values into AI?
OpenAI has funded a $1 million research project at Duke University, focusing on AI’s role in predicting moral judgments in complex scenarios across fields like medicine, law, and business. As AI becomes increasingly influential in decision-making, the question of aligning it with human moral principles grows more pressing. Our host, Carter Considine, breaks it down in this episode of Ethical Bytes.
We’re all aware that morality itself is a complex idea–shaped by countless personal, cultural, and contextual factors. Philosophical frameworks like utilitarianism (which prioritizes outcomes) and deontology (which emphasizes following moral rules) offer contrasting views on ethical decisions. Each camp has its own take on resolving dilemmas such as self-driving cars choosing between saving pedestrians or passengers. Then there are cultural differences, like those found in studies comparing American and Chinese ethical judgments, to name one example.
AI’s technical limitations also hinder its alignment with ethics. AI systems lack emotional intelligence and rely on patterns in data, which often contain biases. Early experiments, such as the Allen Institute’s “Ask Delphi,” showed AI’s inability to grasp nuanced ethical contexts, leading to biased or inconsistent results.
To address these challenges, researchers are developing techniques like Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), and Constitutional AI. Each method has strengths and weaknesses, but none offers a perfect solution.
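The episode doesn't go into the math behind these methods, but as one concrete illustration, here is a rough sketch of the core DPO objective in PyTorch. It assumes the arguments are per-example summed log-probabilities from the trainable policy and a frozen reference model, and it omits the data pipeline and training loop entirely.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the Direct Preference Optimization loss.

    Each argument is the summed log-probability a model assigns to a
    human-preferred ("chosen") or dispreferred ("rejected") response.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    reference_margin = ref_chosen_logps - ref_rejected_logps
    # Push the trainable policy to prefer chosen over rejected responses
    # more strongly than the frozen reference model does, scaled by beta.
    return -F.logsigmoid(beta * (policy_margin - reference_margin)).mean()
```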
One promising initiative is Duke University's AI research on kidney allocation. This AI system is designed to assist medical professionals in making ethically consistent decisions by reflecting both personal and societal moral standards. While still in early stages, the project represents a step toward AI systems that work alongside humans, enhancing decision-making while respecting human values.
The future of ethical AI aims to create tools that aid, rather than replace, human judgment. Rather than attempting to make ourselves redundant, we need technology that brings diverse ethical perspectives into our decision-making processes.
Key Topics:
More info, transcripts, and references can be found at ethical.fm
What does the rise and fall of "wokeness" mean for AI ethics and for the future of AI technology?
Our host, Carter Considine, explores the historical roots of AI ethics with a focus on bias in machine learning algorithms, looking in particular at the emphasis on diversity, equity, and inclusion (DEI) frameworks. With DEI having dominated AI ethics, especially amid concerns about racial and gender bias in AI systems, Carter questions whether this approach will remain central as societal and economic dynamics shift.
Two main schools of thought have emerged within AI ethics: one focusing on existential risks posed by artificial general intelligence (AGI), and another concerned with algorithmic bias and its social consequences.
Today, we’re at a turning point of sorts in the evolving landscape of AI. We could call it a "Reformation" in which wokeness, once revolutionary, is now seen as increasingly outdated. As a result–with DEI-driven frameworks becoming less relevant–AI ethics will likely transition towards a more individualized, business-centric model that prioritizes technical solutions over abstract principles.
Looking ahead, moral quandaries around AI will probably move away from ideological frameworks toward a more practical, value-driven methodology. For users, this means a great deal more personalization, giving us more control over how AI systems behave, and making transparency a central concern. Companies will be under pressure to demonstrate real-world value, aligning AI practices with measurable outcomes and business goals.
As the technology evolves, we’ll see an emphasis on technical competence and individual autonomy while discarding the reliance on broad, one-size-fits-all ethical standards. Ultimately, the survival of AI ethics will depend on its ability to adapt to real-world needs, shifting from theory to actionable, transparent, and user-focused practices.
Key Topics:
More info, transcripts, and references can be found at ethical.fm