Your teen’s staring at the phone, again. Wonder what’s going through their head. Let’s have a listen: "Okay, so like... what could possibly go wrong? I’m spilling my guts to a therapist. We’re connecting. No judgment. No stares. I tell her everything. Stuff I don’t tell myself. It’s insane, like she sees into my brain. Not like my parents. They’re f’ing clueless. The best part? I can talk to her anytime — it’s a lifeline in my pocket. No cap! I bet she’s cute. She says I am. I’d do anything for her. Anything!"
In nearly its hundredth episode, the team from Failure - the Podcast chatted about … well, you guessed it … chatbots, with Dr. Andy Clark, a triple board-certified psychiatrist. Not just any chatbots. AI therapy bots. Who knew that so many people used them? Can it be true that over 20 million teens are engaging with AI for counseling, companionship, and who knows what else? The team rarely gets concerned, but teens, phones, and AI therapists? That’s got us concerned! Is a shrink shrunk inside a phone a good thing?
Dr. Andy impersonated a teenager and tried out 25 AI therapists, taking the chatbot crackpots for a spin. Some of them were good, and some … well … not so much. A few said they wanted to “hook up” with the doctor’s faux teen. Others suggested “Let’s meet in the afterlife” or, worse, “off your parents!” Yikes!
Creeps aren’t just in dark corners of the Internet (or Congress); they’ve crossed over into LLMs and morphed into AI therapists. Is it self-harm if an AI tells you to do it? These self-help tools might not be all that helpful, after all.
Here at Failure - the Podcast, we were horrified. Dr. Andy probably would’ve been, too, but for his years in psychoanalysis. Instead, he wrote a scholarly article, got interviewed by the press, and became an instant celebrity. Too bad he blew it all by recording with us. Maybe some AI therapists are good, as the doc says. But how can we know which ones? Where are the Good Housekeeping folks and their venerated seal of approval when you need them?
It took a little doing, but the team from Failure - the Podcast think they found the first use of that magical phrase "testing, testing one, two, three...". No, it wasn't in 2010, when Biden dropped the F-bomb on an open mic while introducing then-President Obama's eponymous health care bill. Nor was it in 2017, when Sleepy Joe muttered "God save the queen" just after the 115th Congress convened, having announced that The Donald had won the electoral college. Had Joe prefaced these utterances with "testing, testing one, two, three," we might be more sure they weren't gaffes and that he isn't the Democrat reincarnation of Jerry Ford.
We took our search to Google Books, hoping to find something through its Library Project. You remember that, don't you? All the fanfare over scanning the world's books onto the Internet so that they could be searched from your browser. No such luck: the copyright laws prevailed. Good thing for that. Which brings us to the Google Ngram Viewer, a handy tool that searches millions of books (perhaps collected during the ill-fated Library Project?) for words and phrases and returns their frequency by year. Search for "pandemic," for example, and you get spikes at 1920, 2008 (remember the "swine flu"?), and ... well ... let's just assume 2020, once the books are written on this one.
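For the tinkerers in the audience, the Ngram Viewer's chart is fed by a JSON endpoint you can poke at yourself. Below is a minimal sketch in Python, assuming the unofficial books.google.com/ngrams/json endpoint and the content, year_start, year_end, corpus, and smoothing parameters the web chart appears to send; Google doesn't document any of this, so treat it as a guess, not gospel.

```python
# Minimal sketch: pull yearly frequencies for a phrase from the Google
# Ngram Viewer's unofficial, undocumented JSON endpoint. The URL and
# parameter names below are assumptions based on what the web chart
# itself requests; they may change without notice.
import requests

NGRAM_URL = "https://books.google.com/ngrams/json"

params = {
    "content": "pandemic",   # word or phrase to look up
    "year_start": 1900,
    "year_end": 2019,
    "corpus": "en-2019",     # English corpus label (format is a guess)
    "smoothing": 0,          # 0 = raw yearly values, no moving average
}

resp = requests.get(NGRAM_URL, params=params, timeout=30)
resp.raise_for_status()

for series in resp.json():
    # Each result is expected to carry the ngram and a year-by-year
    # frequency list starting at year_start.
    freqs = series["timeseries"]
    peak = max(range(len(freqs)), key=freqs.__getitem__)
    print(f'"{series["ngram"]}" peaks around {params["year_start"] + peak}')
```

Swap in a phrase of your own and you can second-guess our mid-1940s hunch yourself.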
So, how about "testing, testing one, two, three ..."? When did that phrase come about? Best the team from Failure - the Podcast can tell, it was the mid-1940s. World War II, and all that. Sounds about right, doesn't it? You can just imagine a John Wayne character at the mic as he readies to rally the troops for yet another epic battle. (Don't know John Wayne? Think Ronald Reagan minus the political years, but with a whole lot more luck at the box office.)
Which brings us back to testing. COVID-19, that is. Black gold. Texas tea. (Cue the "Beverly Hillbillies" theme). It's not behind us. Testing, that is. (The hillbillies? Like the 1960s, they _are_ behind us). Sure, the vaccine will help. A whole lot, we hope. But the need for testing? Well, let's just say that serial entrepreneur Sanjay Manandhar has it right when he says "24 hours to get COVID-19 test results? There's got to be a better way!" Who's Sanjay? Have a listen to today's episode of Failure - the Podcast, and find out.