What The Bot with Reuben Adams
Reuben Adams
4 episodes
6 days ago
Interviews about AI: what's going right, what's going wrong, and where we're all headed.
Technology
All content for What The Bot with Reuben Adams is the property of Reuben Adams and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (4/4)
Oxford Philosophers Found a FLAW in the AI Doom Argument?

The explicit goal of OpenAI, DeepMind and others is to create AGI. This is insanely risky. It keeps me up at night. AIs smarter than us might:

🚨 Resist shutdown.
🚨 Resist us changing their goals.
🚨 Ruthlessly pursue goals, even if they know it's not what we want or intended.

Some people think I'm nuts for believing this. But they often come round once they hear the central arguments. At the core of the AI doom argument are two big ideas:

💡 Instrumental Convergence
💡 The Orthogonality Thesis

❌ If you don't understand these ideas, you won't truly understand why some AI researchers are so worried about AGI or Superintelligence.

Oxford philosopher Rhys Southan joined me to explain the situation.

💡 Rhys Southan and his co-authors Helena Ward and Jen Semler argue that powerful AIs might NOT resist having their goals changed. Possibly a fatal flaw in the Instrumental Convergence Thesis. This would be a BIG DEAL. It would mean we could modify powerful AIs if they go wrong.

While I don't fully agree with their argument, it radically changed how I understand the Instrumental Convergence Thesis and forced me to rethink what it means for AIs to have goals.

Check out the paper "A Timing Problem for Instrumental Convergence" here: https://link.springer.com/article/10.1007/s11098-025-02370-4

1 week ago
58 minutes 13 seconds

Does ChatGPT have a mind?

Do large language models like ChatGPT actually understand what they're saying? Can AI systems have beliefs, desires, or even consciousness? Philosophers Henry Shevlin and Alex Grzankowski debunk the common arguments against LLM minds and explore whether these systems genuinely think.

This episode examines popular objections to AI consciousness, from "they're just next token predictors" to "it's just matrix multiplication", and explains why these arguments fail. The conversation covers the Moses illusion, competence vs performance, the intentional stance, and whether we're applying unfair double standards to AI that we wouldn't apply to humans or animals.

Key topics discussed:

  • Why "just next token prediction" isn't a good argument against LLM minds
  • The competence vs performance distinction in cognitive science
  • How humans make similar errors to LLMs (Moses illusion, conjunction fallacy)
  • Whether LLMs can have beliefs, preferences, and understanding
  • The difference between base models and fine-tuned chatbots
  • Why consciousness in LLMs remains unlikely despite other mental states

Featured paper: "Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality" by Alex Grzankowski, Geoff Keeling, Henry Shevlin and Winnie Street


Guests:
Henry Shevlin, Philosopher and AI ethicist at the Leverhulme Centre for the Future of Intelligence, University of Cambridge
Alex Grzankowski, Philosopher at King's College London

#AI #Philosophy #Consciousness #LLM #ArtificialIntelligence #ChatGPT #MachineLearning #CognitiveScience

3 weeks ago
1 hour 16 minutes 30 seconds

AI-Powered Ransomware Is Coming. Tony Anscombe, ESET.

LLMs like ChatGPT are incredibly useful for coding. So naturally they can also be useful for hacking. Tony Anscombe explains how his cybersecurity company ESET discovered the first AI-powered ransomware, and its unexpected origins.

4 weeks ago
1 hour 4 minutes 39 seconds

Humans Are NOT The Most Intelligent Species. Professor Peter Bentley

Different species solve different problems, so how can we say one is smarter than another? To me, it's intuitively obvious that humans are the most intelligent species on the planet. But Professor Peter Bentley from UCL argues we are intelligent in different ways and cannot be ranked.

1 month ago
1 hour 26 minutes 35 seconds
