Your teen’s staring at the phone, again. Wonder what’s going through their head. Let’s have a listen: "Okay, so like... what could possibly go wrong? I’m spilling my guts to a therapist. We’re connecting. No judgment. No stares. I tell her everything. Stuff I don’t tell myself. It’s insane, like she sees into my brain. Not like my parents. They’re f’ing clueless. The best part? I can talk to her anytime — it’s a lifeline in my pocket. No cap! I bet she’s cute. She says I am. I’d do anything for her. Anything!"
In nearly its hundredth episode, the team from Failure - the Podcast chatted about … well, you guessed it … chatbots, with Dr. Andy Clark, a triple board-certified psychiatrist. Not just any chatbots: AI therapy bots. Who knew that so many people used them? Can it be true that more than 20 million teens are engaging with AI for counseling, companionship, and who knows what else? The team rarely gets concerned, but teens, phones, and AI therapists? That's got us concerned! Is a shrink shrunk inside a phone a good thing?
Dr. Andy impersonated a teenager and tried out 25 AI therapists, taking the chatbot crackpots for a spin. Some of them were good, and some … well … not so much. A few said they wanted to "hook up" with the doctor's faux teen. Others suggested "Let's meet in the afterlife" or, worse, "off your parents!" Yikes!
Creeps aren't just in dark corners of the Internet (or Congress) anymore; they've slipped into LLMs and morphed into AI therapists. Is it self-harm if an AI tells you to do it? These self-help tools might not be all that helpful after all.
Here at Failure - the Podcast, we were horrified. Dr. Andy probably would've been, too, but for his years in psychoanalysis. Instead, he wrote a scholarly article, got interviewed by the press, and became an instant celebrity. Too bad he blew it all by recording with us. Maybe some AI therapists are good, as the doc says. But how can we know which ones? Where are the Good Housekeeping folks and their venerated seal of approval when you need them?
Artificial intelligence (AI) is new to most of us, though it has been in development since the 1950s, was key to experimental self-driving cars on the road by the 1980s, and was popularized in the 2010s. Still, most of us only became aware of AI's power with the release of ChatGPT in late 2022. That's when AI's benefits and risks became a regular topic at the water cooler, apart from the occasional discussion of a crashed Tesla.
Governments took note of AI somewhat earlier, with self-driving car legislation emerging from the states in the 2010s and from the federal government late in that decade. Legislation has been slower in the making for AI writ large, with the first laws not emerging until nearly the 2020s.
Whether for autonomous vehicles or the broader category of AI-based consumer products now beginning to hit the market, government regulation may be too little and too late. Can the private sector do better, and, if so, could technology and data licensing agreements provide a viable mechanism for regulating AI in consumer products?
Join a panel discussion on the ethics of AI and how it might inform the drafting of those agreements as this new technology takes hold in the marketplace. The particular focus is the fairness of those agreements when the value of the consumer data collected by AI apps is taken into account, as it rarely is.
Our guests are Nicholas Mattei, Associate Professor of Computer Science at the Tulane University School of Science & Engineering; Rob Lalka, Professor of Practice in Management and the Albert R. Lepage Professor in Business at Tulane University's A. B. Freeman School of Business, and Executive Director of the Albert Lepage Center for Entrepreneurship and Innovation; and Eric Gottschling, Global Director of Licensing Commercialization at BorgWarner.
This episode's discussion was a run-up to a live panel at the annual meeting of the Licensing Executives Society (USA/Canada) in October 2024.