Join The French Philosopher and consider me your philosophy BFF! 🤗 If you’re wondering about the meaning of life, your impact on the world, or who you truly are, you’re in the right place. Picture us chatting over a latte, exploring life’s big questions with wisdom from ancient and modern philosophers. I’m a Brooklyn-based French philosopher, speaker, and author, and as an expert in AI ethics for the European Commission, I also dive into ethics and critical thinking around AI and tech.
Imagine a machine deciding who gets life-saving surgery in a split-second, armed with endless data and razor-sharp logic. No hesitation, no bias, no emotional baggage. Sounds like a dream... or does it? What do you think: does AI make better decisions than humans?
Well, it’s true that there are no existential crises or coffee breaks for our robot friends.
They’re brilliant at optimizing outcomes by crunching numbers, without getting tired, distracted, or irrational. Some chatbots even give good moral advice (better than some philosophers, one could say? 😅). Have a look if you’re curious: petersinger.ai. But here’s the kicker: machines don’t actually “understand” morality. Why is that? Because they don’t feel empathy or anguish when making tough calls. They don’t lose sleep over the weight of their decisions. They don’t consider the messy, lived experiences of the people affected by them.
Take existentialists like Simone de Beauvoir (yes, we’re name-dropping).
They’d argue that morality is rooted in freedom and authenticity: every decision we make defines who we are and carries the weight of our responsibility to others. Machines? They don’t have freedom; they’re programmed. They don’t have authenticity; they’re mimicking patterns. They’re not moral agents; they’re tools.
But here’s where things get spicy.
AI can actually push us to think deeper about our own ethical frameworks. By exposing our biases and presenting alternative perspectives, it can sharpen our reasoning and force us to confront uncomfortable truths. Take Amazon’s AI recruiting tool from about a decade ago: it was a fiasco, but it made everyone realize how deep hiring biases run, and that awareness was a win because it pushed us to fight them.
So maybe the question isn’t whether AI is “better” at morality but whether it challenges us to be better moral thinkers ourselves?
Should we trust AI with big decisions?
Maybe as collaborators, not captains of the ship. Machines might help us see more clearly, but the messy beauty of morality, with its empathy, its anguish, its humanity, is something only we can bring to the table. Or at least that’s my take… what’s yours?