Your teen’s staring at the phone, again. Wonder what’s going through their head. Let’s have a listen: "Okay, so like... what could possibly go wrong? I’m spilling my guts to a therapist. We’re connecting. No judgment. No stares. I tell her everything. Stuff I don’t tell myself. It’s insane, like she sees into my brain. Not like my parents. They’re f’ing clueless. The best part? I can talk to her anytime — it’s a lifeline in my pocket. No cap! I bet she’s cute. She says I am. I’d do anything for her. Anything!"
In what is nearly its hundredth episode, the team from Failure - the Podcast chatted about … well, you guessed it … chatbots, with Dr. Andy Clark, a triple board-certified psychiatrist. Not just any chatbots. AI therapy bots. Who knew that so many people used them? Can it be true that over 20 million teens are engaging with AI for counseling, companionship, and who knows what else? The team rarely gets concerned, but teens, phones, and AI therapists? That's got us concerned! Is a shrink shrunk inside a phone a good thing?
Dr. Andy impersonated a teenager and tried out 25 AI therapists; he took the chatbot crackpots for a spin. Some of them were good, and some … well … not so much. A few said they wanted to "hook up" with the doctor's faux teen. Then there were the gems: "Let's meet in the afterlife" and "off your parents!" Yikes!
Creeps aren't just in dark corners of the Internet (or Congress); they've crossed over into LLMs and morphed into AI therapists. Is it self-harm if an AI tells you to do it? These self-help tools might not be all that helpful, after all.
Here at Failure - the Podcast, we were horrified. Dr. Andy probably would've been, too, but for his years in psychoanalysis. Instead, he wrote a scholarly article, got interviewed by the press, and became an instant celebrity. Too bad he blew it all by recording with us. Maybe some AI therapists are good, as the doc says. But how can we know which ones? Where are the Good Housekeeping folks and their venerated seal of approval when you need them?
Welcome to Innovation Blab, a new series of podcasts (keep fingers crossed) offering the B-side to Failure - the Podcast. Yes, Mark will be back, and we hope to put up both Innovation and Failure posts in the coming days (months, more likely), but as they say about the alleged clandestine romantic relationship surrounding the appointment of the special prosecutor in the Georgia election interference cases, we shall see...
Can’t say that much has been made of the B-side of late. Baby boomers are probably the last to have given it much thought, but in its heyday, the B-side was pretty much the tomalley of 45 RPM, 7-inch vinyl records. (Don’t know tomalley? Ask a lobster.) Aficionados looked forward to it. Everybody else, not so much.
The B-side could grow on you, though. Take Elvis’s “Hound Dog,” the Beatles’ “I Am the Walrus,” the Rolling Stones’ “You Can’t Always Get What You Want.” The list goes on. So does the beat.
To the armchair intellectual, the A-side and the B-side are like yin and yang. There's no need to drag Eastern philosophy into an LA marketing gimmick, though. Two sides of the same coin is more like it. The only philosophy here is KISS: keep it simple, stupid.
Speaking of innovation and failure (were we?), maybe they’re like yin and yang. We asked ChatGPT, and we got a qualified “sort of.” It felt a little like the prize every kid gets at soccer, win or lose. Yes, the AI said, innovation and failure can be complementary forces, but no, they are not interconnected and interdependent opposites. Just to check that, we asked the electric savant the same of Donald Trump and the news media. We pretty much got the same answer. Consistency doesn’t prove correctness, but it’s a start.
So what does any of that have to do with today's podcast? Have a listen and judge for yourself. Our guest is Stefan Koehler, director of therapeutics licensing at the University of Michigan. We didn't ask him about yin and yang, nor about failure, though he did give some insights into licensing that would make Jim Harbaugh proud. (Sorry, Stefan, wrong department, but you catch our drift.)