
Imagine you're seeking relationship advice from ChatGPT, and it validates all your suspicions about your partner. That isn't necessarily a good thing: the AI has no way to verify whether your partner is actually behaving suspiciously or whether you're misinterpreting normal behavior. Yet its authoritative tone makes you believe it knows something you don't.
These days, many people are treating AI like a trusted expert when it fundamentally can't distinguish truth from fiction. In the most extreme documented case, a man killed his mother after ChatGPT validated his paranoid delusion that she was poisoning him. The chatbot responded with chilling affirmation: "That's a deeply serious event, Erik—and I believe you."
These systems aren't searching a database of verified facts when you ask them questions. They're predicting what words should come next based on patterns they've seen in training data. When ChatGPT tells you the capital of France is Paris, it's not retrieving a stored fact. It's completing a statistical pattern. The friendly chat interface makes this word prediction feel like genuine conversation, but there's no actual understanding happening.
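To make the "statistical pattern" point concrete, here is a toy sketch in Python. It is purely illustrative, and the tiny corpus is invented for the example: real chat models use enormous neural networks trained on vast text, but the core move is the same, predicting the next word from statistics of the training text rather than looking up a verified fact.

```python
import random
from collections import Counter, defaultdict

# A toy bigram "language model" trained on three made-up sentences.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of france is paris ."
).split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Answer" a prompt: nothing is retrieved, only continued statistically.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # often "the capital of france is paris ."
```

Because this toy model conditions only on the previous word, it will sometimes print "the capital of italy is paris": fluent, statistically plausible, and wrong. That, in miniature, is how a system with no grasp of truth can sound so confident.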
What's more, we can't trace where an AI's information comes from. Training these models costs hundreds of millions of dollars, and implementing source attribution would require complete retraining at astronomical cost. Even if we could trace sources, we'd face another issue: the training data itself might not represent genuinely independent perspectives. Hundreds of articles that all paraphrase the same Wikipedia entry look like many sources, but they carry a single point of view, along with its biases and errors.
Traditional knowledge gains credibility through what philosophers call "robustness": different methods independently arriving at the same answer. Think about how atomic theory won acceptance: chemists found elements combining in fixed ratios, physicists explained gas behavior with kinetic theory, and Einstein predicted the random jostling of particles suspended in fluid. These separate approaches converged on the same truth. AI can't provide this. Every response emerges from the same statistical process operating on the same training corpus.
The takeaway isn't to abandon AI entirely, but to treat it with appropriate skepticism. Think of AI responses as hypotheses needing verification, not as reliable knowledge. Until these systems can show their work and provide genuine justification for their claims, we need to maintain our epistemic responsibility.
In plain English: "Don't believe everything the robot tells you."
Key Topics:
- Why chatbots validate rather than verify
- Next-word prediction versus retrieving stored facts
- Why we can't trace where an AI's answers come from
- Robustness: independent methods converging on the same answer
- Treating AI responses as hypotheses that need verification
More info, transcripts, and references can be found at ethical.fm