Welcome to Building a Better Geek, where we explore the intersection of technology, psychology and well-being. It’s for high-functioning introverts who are finding their audience and who like humans at least as much as machines. If you want to go deep on leadership, communication and all the things that go into building you, let’s grok on!
Available in Video: https://www.youtube.com/watch?v=iOklnPvvDUo
TruthAmp Episode 5: How Do I Know If AI Is Lying to Me?
In this episode of TruthAmp, communication expert Emmanuella Grace and tech expert Craig Lawton tackle one of the most pressing questions about AI: how can you tell when AI gives you false information?
Key Takeaways
AI Doesn't "Lie" - It Hallucinates
Craig explains that AI doesn't intentionally deceive. Instead, AI models are probabilistic systems that sometimes produce "hallucinations" - confident-sounding but inaccurate responses generated from statistical patterns in their training data.
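To make the "probabilistic system" point concrete, here is a toy Python sketch. The word probabilities are invented for illustration; they do not come from the episode or from any real model:

```python
import random

# Invented probabilities for the word that follows "The capital of Australia is".
# A real model learns distributions like this from training data; these numbers are made up.
next_word_probs = {"Canberra": 0.60, "Sydney": 0.35, "Melbourne": 0.05}

# The model samples a continuation instead of looking up a verified fact,
# so some of the time it asserts the wrong city in the same confident tone.
words, weights = zip(*next_word_probs.items())
print("The capital of Australia is", random.choices(words, weights=weights, k=1)[0])
```

Real models work over tokens and vastly larger vocabularies, but the failure mode is the same: the output is a plausible continuation of the prompt, not a checked fact.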
The Swiss Cheese Problem
AI knowledge has gaps, like the holes in Swiss cheese. When your question hits one of these gaps, the AI fills in the blanks with plausible-sounding but potentially false information, especially in specialized domains like psychology, medicine, or law.
Experts Aren't Immune
Even domain experts can be caught off guard. Emmanuella shares how AI nearly fooled her with an incomplete psychology summary that seemed authoritative but was missing crucial information.
The Generational Divide
Many older users treat AI responses as infallible truth, unaware of AI's limitations. This creates a responsibility gap: who should educate users about AI's fallibility?
Practical Tips to Get Better AI Responses
Turn on web search in your AI settings so it can access current information
Specify timeframes in your prompts (e.g., "information from 2025") - see the sketch after this list
Learn better prompting techniques to avoid reinforcing your existing biases
Understand AI training bias - models reflect historical data, which may contain outdated information
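Here is a minimal sketch of the timeframe tip in Python, assuming the OpenAI SDK with an API key in the environment. The model name and prompt wording are our own illustrations, not something prescribed in the episode:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_with_timeframe(question: str, timeframe: str = "2025") -> str:
    """Anchor the question to a timeframe and ask the model to flag stale knowledge."""
    prompt = (
        f"Using information from {timeframe} where possible, answer this:\n"
        f"{question}\n\n"
        "If your training data may be outdated on this topic, say so explicitly."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask_with_timeframe("What prompting techniques reduce hallucinations?"))
```

Note that web search (the first tip) is usually a settings toggle in the chat interface rather than something a prompt can switch on; the wrapper above only handles the timeframe and asks the model to disclose stale knowledge.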
The Bottom Line
While the tech industry figures out responsibility and regulation, users need to take charge of their AI education. Media and educational institutions have a role to play in teaching AI literacy, especially around understanding biases, limitations, and effective prompting strategies.
TruthAmp explores complex topics through the lens of human communication and technology expertise. New episodes weekly.