They Might Be Self-Aware
Daniel Bishop, Hunter Powers
135 episodes
2 days ago
"They Might Be Self-Aware" is your weekly tech frenzy with Hunter Powers and Daniel Bishop. Every Monday, these ex-co-workers, now tech savants, strip down the AI and technology revolution to its nuts and bolts. Forget the usual sermon. Hunter and Daniel are here to inject raw, unfiltered insight into AI's labyrinth – from its radiant promises to its shadowy puzzles. Whether you're AI-illiterate or a digital sage, their sharp banter will be your gateway to the heart of tech's biggest quandary. Jack into "They Might Be Self-Aware" for a no-holds-barred journey into technology's enigma. Is it our savior or a mother harbinger of doom? Get in the loop, subscribe today, and be part of the most gripping debate of our era.
Technology
Comedy, News, Tech News
All content for They Might Be Self-Aware is the property of Daniel Bishop, Hunter Powers and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
"They Might Be Self-Aware" is your weekly tech frenzy with Hunter Powers and Daniel Bishop. Every Monday, these ex-co-workers, now tech savants, strip down the AI and technology revolution to its nuts and bolts. Forget the usual sermon. Hunter and Daniel are here to inject raw, unfiltered insight into AI's labyrinth – from its radiant promises to its shadowy puzzles. Whether you're AI-illiterate or a digital sage, their sharp banter will be your gateway to the heart of tech's biggest quandary. Jack into "They Might Be Self-Aware" for a no-holds-barred journey into technology's enigma. Is it our savior or a mother harbinger of doom? Get in the loop, subscribe today, and be part of the most gripping debate of our era.
Show more...
Technology
Comedy,
News,
Tech News
Episodes (20/135)
AI Is Now Censoring Presidents, But 10X-ing Children
Hunter and Daniel dissect how OpenAI’s ChatGPT now refuses to identify Donald Trump in photos, questioning whether this is responsible AI safety or creeping censorship. Along the way, they unravel how corporate control, ad-driven models, and AI in education are shaping and possibly warping our digital future.
2 days ago
35 minutes 55 seconds

AI Creates 3-Eyed Monster & Plans To Upload Your Soul
Hunter and Daniel dive into the strange collision of faith, art, and automation starting with a three-eyed AI-generated pinball monster that sparks debate over creativity and stolen souls. From there, they unravel the “AI Rapture” and how generative tools, job-replacing automation, and the rise of the 10× engineer might decide who gets uploaded to digital heaven and who’s left behind.
1 week ago
38 minutes 16 seconds

We Built a Startup in 20 Minutes Using AI Agents
Daniel challenges Hunter’s skepticism by using Claude Code to build a functioning Facebook-like app complete with authentication, feeds, likes, and a database in just 20 minutes. The two then spiral into a thought experiment about AI-run startups, debating whether AI agents could (or should) become the CEOs and marketers of the future while humans become their assistants.
1 week ago
29 minutes 24 seconds

CEOs Are Lying About AI Stealing Your Job
CEOs are blaming AI for layoffs, but the data doesn’t back it up. Hunter and Daniel expose how “AI productivity” hype is masking old-fashioned cost cutting, unpacking Salesforce’s job cuts, Anthropic’s doomsday claims, and the truth behind GPT-5’s impact on real work.
2 weeks ago
41 minutes 10 seconds

The Big Lie Behind AI Automation
Anthropic claims its new model, Claude 4.5, can code autonomously for 30 hours straight, but Hunter Powers and Daniel Bishop put that boast to the test and reveal it collapses after roughly ten minutes without human feedback. Along the way, they debate whether AI automation is genuine progress or just “automation theater,” and why Meta’s humanoid-robot ambitions may prove the same illusion on a larger scale.
2 weeks ago
40 minutes 59 seconds

The Trump AI Video That Breaks Reality
Trump’s new AI-generated video blurs the line between propaganda and parody, and we dissect how it signals the start of a post-truth political era. We also break down OpenAI’s Sora 2, Veo, and cameo tech, showing how AI video is now so realistic it threatens to erase the boundary between fact and fiction.
3 weeks ago
33 minutes 54 seconds

Claude AI Listened To 120 Episodes, Then It Interviewed Us
After feeding 120 of our own podcast episodes into Claude AI, we let it turn the tables and interview us, confronting everything from AI coworkers and digital relationships to whether we’d sell our souls for $10 million. What follows is a chaotic, funny, and surprisingly philosophical showdown between two humans and the machine that knows them best.
3 weeks ago
46 minutes 44 seconds

Hollywood's Billion Dollar AI Movie Mistake
Lionsgate’s billion-dollar AI movie experiment with Runway ML collapsed before a single film was made, exposing Hollywood’s overhyped faith in generative filmmaking and the legal minefield around AI-trained content. Meanwhile, Hunter and Daniel explore how tools like Google Veo 3 and projects such as the Wizard of Oz at Sphere Vegas reveal that AI works best as a creative amplifier, not a studio replacement.
1 month ago
38 minutes 20 seconds

Google’s New AI Will Kill Photoshop (Nano Banana)
Hunter and Daniel dive into Google’s new Nano Banana AI, a shockingly capable image-editing model that could threaten Photoshop’s throne while redefining creative work. Along the way they spar over AI schlock, recording-consent ethics, and how tools like Figma AI, GPT-5-High, and Napkin AI signal the next big shift in how humans and machines design together.
1 month ago
33 minutes 35 seconds

I Buy Every Gadget. Here's Why I Refuse To Buy Meta's New AI Glasses.
Hunter and Daniel break down Meta’s new Ray-Ban AI glasses, exploring the flashy features like display overlays, gesture controls, and real-time translation. Despite the hype, they highlight privacy concerns, clunky use cases, and explain why unlike most new gadgets these glasses aren’t worth buying yet.
1 month ago
30 minutes 57 seconds

Why OpenAI is Terrified of Its AI Therapist
Hunter and Daniel dive into the high-stakes debate around AI therapy, OpenAI’s age-gating rules on self-harm conversations, and the legal and ethical risks of letting ChatGPT act like a therapist. Along the way, they tackle Anthropic’s surveillance ban, fears of an AGI-driven job apocalypse, and how AI is creeping into everything from dating apps to religion.
1 month ago
36 minutes 11 seconds

The AI Protection Racket Has Begun
The boys unpack the rise of an AI protection racket, from hackers threatening to train models on stolen art to labels signing AI artists, plus the flood of AI-generated “slop” podcasts and messy copyright loopholes. Then they debate how AI-written manifestos (hello, Tesla Grok) and boilerplate political speeches push us toward full-blown AI theater, and what that means for jobs, creativity, and culture.
1 month ago
39 minutes 28 seconds

Is Your AI Calling The Police On You?
Hunter and Daniel unpack reports that OpenAI can scan ChatGPT conversations and, in extreme cases, refer them to law enforcement, sparking a sharp debate over privacy vs. safety and where “AI monitoring” becomes AI Big Brother. They also explore Sway AI’s proposal to moderate debates, whether AI should intervene in self-harm cases, and the likelihood of future political debates being run by an AI.
1 month ago
35 minutes 43 seconds

AI Just Became The World's Best Hacker
In this episode, Hunter and Daniel explore how AI is transforming hacking—climbing bug bounty leaderboards, powering “vibe hacking,” and blurring the line between white-hat defense and black-hat attacks. They also tackle AI-generated misinformation, corporate AI walkbacks, massive infrastructure costs, and the unsettling future of copyright, IP, and the open internet.
1 month ago
35 minutes 48 seconds

AI Will Replace Salesmen, Truckers, and Your Grandma
Self-driving as a "cure" could crash the auto-insurance business by slashing accidents and car ownership, but nationwide adoption will likely stall for 20–25 years as lawmakers protect jobs and grapple with liability. We then dive into AI taking over sales, from robo-negotiation and contract automation to eerie "dead bots" and digital souls, asking what happens when machines sell to us and even speak for our ancestors.
2 months ago
32 minutes 31 seconds

The One Reason AI Is NOT A Bubble
Is AI in a bubble—or just inside one? In this episode, we argue the one big reason AI is not a bubble: unlike past hype cycles (hi, blockchain), generative AI keeps unlocking new, practical use cases, and we’ve likely discovered only a tiny fraction so far. We also debate where the actual bubbles are (funding and frothy startups), why some companies may pop, and why the technology isn’t going anywhere.

We dig into the difference between “AI is a bubble” vs. “AI in a bubble,” why LLMs are already embedded in daily workflows, and how safety, regulation, and economics shape adoption. We touch on Sam Altman’s “money hose” moment, model safety moves (e.g., refusal features and cutoff behaviors), and the PR theater around “we won’t let you do X” policies. On the spicy side: we unpack calls for investigations into Grok’s adult mode and the real risk vector—image deepfakes—plus why institutions crave reliability over edgelord vibes.

Then we tackle kids + AI (parental controls, open weights at home, and why an LLM can be like placing an unmonitored adult on a phone line), and policy flashpoints like Illinois restricting AI in therapy. Finally, we square off on the future of self-driving: will capitalism or insurance math decide when humans must hand over the wheel? If you want a fast, no-BS breakdown of whether AI is a bubble (and what actually bursts next), you’re in the right place. Like and subscribe for weekly, high-signal AI news and analysis, and tell us in the comments: what’s one use case that proves AI’s staying power?

🎧 Listen & Subscribe
📱 Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1

📢 Engage
If you learned something, drop a comment with your most convincing “AI is not a bubble” example (or your spiciest counter-argument). We read them all and may feature your take next episode.

2 months ago
34 minutes 14 seconds

Brain Chips Aren't For Health, They're For AI Ads
Brain chips that read your thoughts? This week on They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive into the shocking future of Neuralink brain implants and what happens when technology can literally decode your inner monologue. From Elon Musk’s latest moves to Sam Altman’s next venture, we unpack the wild collision of human biology and AI.

In this episode, we explore:
* How a Neuralink brain chip could translate your thoughts into words, and what that means for privacy and control.
* The rivalry of Elon Musk vs. Sam Altman, from Neuralink vs. Altman’s new ventures to their dueling language models (Grok vs. ChatGPT).
* The looming question: will ads inside AI models become the new normal? What does “tastefully integrated” advertising in ChatGPT even look like?
* The warning signs of an AI bubble and why experts think the industry may be heading for a massive correction.
* GPU shortages, OpenAI’s scaling issues, and whether GPT-5 is really the cutting edge or just a watered-down version.
* Meta, venture capital cycles, and the “shi$#ification” of platforms: is history repeating itself for AI?

Whether you’re excited about mind-reading chips or worried about AI turning into an ad-driven nightmare, this episode delivers the analysis and sharp banter you won’t get anywhere else.

2 months ago
30 minutes 1 second

Why We Offered 40 Billion For AI Supremacy | Chrome Browsers, Perplexity, & Comet Come of Age
Claude tricks, Comet browsers, and AI in disguise? This week on They Might Be Self-Aware, Hunter and Daniel uncover some of the wildest ways creators are bending AI to their will, from scripting surreal cereal mascots to bypassing model safety filters using indirect prompts. Learn the “Claude tricks” helping writers, gamers, and directors collaborate with AI like never before.

But it doesn't stop there. We also dive into the audacious $40 billion bid to buy Google Chrome (yes, really), explore the true potential of Perplexity’s new Comet browser, and debate the implications of AI-powered voice changers and language tutors that don’t always sit well with the public. Is AI creativity our ultimate tool, or just a shiny mask for copyright dodges? If you've ever wondered how to co-write a screenplay with Claude, spoof an accent in real time, or summarize a 100-comment Reddit thread in one click, this one’s for you.

In this episode:
🔹 Claude tricks: prompt hacking for screenplay generation and character cloning
🔹 AI as creative partner: scriptwriting, D&D adventures, and filtered reimaginings
🔹 Perplexity’s Chrome clone: inside the Comet browser and “agentic” AI helpers
🔹 $40B Chrome offer?! We break down the absurdity (and legality) of the move
🔹 Voice cloning & TTS: Adobe’s secret tricks and real-time accent changers
🔹 Duolingo outrage: the AI backlash that didn’t stick, and what it tells us about tech trust
🔹 Midjourney, copyright, and the rabbit that shall not be named

Have your own Claude trick or AI browser hot take? Drop it in the comments, hit like if you laughed at the cereal heist, and subscribe to be part of the smartest AI chaos on YouTube.

2 months ago
31 minutes 20 seconds

Did GPT-5 Lose OpenAI's Crown? | Competition, Spicy ML Models, Reviews, & AI News
OpenAI's crown is falling 👑 Did GPT-5 just blow it? In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dive deep into the chaotic reception of GPT‑5, OpenAI's latest (and supposedly greatest) model, and explore whether it's actually a downgrade wrapped in hype.

Has GPT‑5 lost OpenAI its crown? We break down the slow rollout, controversial benchmarks, and user backlash over “terse” replies, hallucinations, and capacity cost-cutting. Is GPT‑5 really the next leap forward… or just cheaper to run? But that’s not all. We explore how competitors like Claude 3, Gemini, Midjourney, and Runway are quietly (or not-so-quietly) eating OpenAI’s lunch. From jaw-dropping video generation to one-million-token context windows, the AI race is heating up, and OpenAI might be trailing.

We also tackle:
* 🤖 ChatGPT “psychosis,” AI as a therapist/friend, and the dangers of artificial intimacy
* 📉 A real case of GPT-powered health advice leading to 19th-century bromism
* ⚖️ Whether AI companies should be legally liable for bad advice or hallucinations
* 🔬 AI benchmarks and what they really mean, plus the odd graphs from OpenAI's demo
* 🧠 The theory that we’ve hit the ceiling on LLMs… or is this just a compute bottleneck?

This isn’t just an AI update: it’s an honest (and hilarious) look at the messy reality of where frontier models are headed, and what OpenAI’s moves say about the future of the industry.

Enjoyed this episode? Give us a thumbs up, leave a comment, or hit ⭐️ on your favorite platform. It helps more curious minds discover the show.

2 months ago
34 minutes 53 seconds

AI death hits different: Claude 3 dies, gets funeral service | Why coders mourned their bot buddy
Did an AI just die… and get a funeral? In this week’s They Might Be Self-Aware, we explore the surreal farewell to Claude 3.0—yes, a literal AI funeral—and what it reveals about our increasingly emotional relationship with large language models. Why are developers mourning bots? Why do we feel loss when an AI model is turned off? And what does it mean when the machines start expressing guilt?

We dive into the death of Anthropic’s Claude 3.0, the ceremony that followed, and the growing phenomenon of AI personification. We also talk vibe coding, open-weight model quality, and how new releases from OpenAI and Qwen stack up, especially when they seem to refuse even basic prompts. From nostalgic Minecraft memories to VR coding beach retreats, this episode blends technical depth with philosophical musings. Plus: jailbreaking techniques, the ethics of emotional prompting, and the haunting question: should old AI models be preserved like memories… or retired like machines?

Whether you’re an AI builder, a digital philosopher, or just here for the laughs, you won’t want to miss this one.

2 months ago
25 minutes 42 seconds
