Ethical Bytes | Ethics, Philosophy, AI, Technology
Carter Considine
31 episodes
6 days ago
Ethical Bytes explores the combination of ethics, philosophy, AI, and technology. More info: ethical.fm
Society & Culture
Does AI Actually Tell Me the Truth?
18 minutes 19 seconds
1 month ago

Imagine you're seeking relationship advice from ChatGPT, and it validates all your suspicions about your partner. That isn't necessarily a good thing: the AI has no way to verify whether your partner is actually behaving suspiciously or whether you're simply misinterpreting normal behavior. Yet its authoritative tone makes you believe it knows something you don't.

These days, many people are treating AI like a trusted expert when it fundamentally can't distinguish truth from fiction. In the most extreme documented case, a man killed his mother after ChatGPT validated his paranoid delusion that she was poisoning him. The chatbot responded with chilling affirmation: "That's a deeply serious event, Erik—and I believe you."

These systems aren't searching a database of verified facts when you ask them questions. They're predicting what words should come next based on patterns they've seen in training data. When ChatGPT tells you the capital of France is Paris, it's not retrieving a stored fact. It's completing a statistical pattern. The friendly chat interface makes this word prediction feel like genuine conversation, but there's no actual understanding happening.
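The pattern-completion idea can be made concrete with a toy sketch. The bigram model below is vastly simpler than a real language model, but it illustrates the same principle: the "answer" is just the statistically most frequent continuation in the training text, not a retrieved fact. The corpus and function names here are illustrative, not from the episode.

```python
from collections import Counter, defaultdict

# Tiny "training corpus". The model will store no facts about
# geography -- only which word tends to follow which.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of france is paris ."
).split()

# Count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return counts[word].most_common(1)[0][0]

# "paris" wins only because it appears more often after "is"
# in the corpus -- a pattern completion, not a lookup.
print(predict_next("is"))
```

If the corpus had repeated a falsehood more often than the truth, `predict_next` would confidently output the falsehood, which is exactly the worry the episode raises.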

What’s more, we can't trace where AI's information comes from. Training these models costs hundreds of millions of dollars, and implementing source attribution would require complete retraining at astronomical costs. Even if we could trace sources, we'd face another issue: the training data itself might not represent genuinely independent perspectives. Multiple sources could all reflect the same biases or errors.

Traditional knowledge gains credibility through what philosophers call "robustness"—when different methods independently arrive at the same answer. Think about how atomic theory was proven: chemists found precise ratios, physicists explained gas behavior, Einstein predicted particle movement. These separate approaches converged on the same truth. AI can't provide this. Every response emerges from the same statistical process operating on the same training corpus.

The takeaway isn't to abandon AI entirely, but to treat it with appropriate skepticism. Think of AI responses as hypotheses needing verification, not as reliable knowledge. Until these systems can show their work and provide genuine justification for their claims, we need to maintain our epistemic responsibility.

In plain English: "Don't believe everything the robot tells you."


Key Topics:


  • The Mechanism Behind Epistemic Opacity (02:57)
  • The Illusion of Conversational Training (04:09)
  • Why Training Data Matters More Than Models (05:44)
  • The Convoluted Path from Data to Output (06:27)
  • The Epistemological Challenge of AI Authority (08:44)
  • When Multiple, Independent Paths Lead to Truth (09:33)
  • AI's Structural Inability to Provide Robustness (11:45)
  • Toward Epistemic Responsibility in the Age of AI (16:03)


More info, transcripts, and references can be found at ethical.fm

