Building a Better Geek
Emmanuella Grace & Craig Lawton
26 episodes
6 days ago
Welcome to Building a Better Geek, where we explore the intersection of technology, psychology and well-being. For high-functioning introverts finding an audience and who like humans at least as much as machines. If you want to go deep on leadership, communication and all the things that go into building you; let’s grok on!
Self-Improvement
Education
TruthAmp: Episode 5 - Would AI Lie to You
Building a Better Geek
16 minutes
1 month ago
Available in video: https://www.youtube.com/watch?v=iOklnPvvDUo

TruthAmp Episode 5: How Do I Know If AI Is Lying to Me?

In this episode of TruthAmp, communication expert Emmanuella Grace and tech expert Craig Lawton tackle one of the most pressing questions about AI: how can you tell when AI gives you false information?

Key Takeaways

AI Doesn't "Lie" - It Hallucinates. Craig explains that AI doesn't intentionally deceive. AI models are probabilistic systems that sometimes produce "hallucinations": confident-sounding but inaccurate responses based on statistical patterns in their training data.

The Swiss Cheese Problem. AI knowledge has gaps, like the holes in Swiss cheese. When your question hits one of those gaps, the AI fills in the blanks with plausible-sounding but potentially false information, especially in specialized domains such as psychology, medicine, or law.

Experts Aren't Immune. Even domain experts can be caught off guard. Emmanuella shares how AI nearly fooled her with an incomplete psychology summary that seemed authoritative but was missing crucial information.

The Generational Divide. Many older users treat AI responses as infallible truth, unaware of AI's limitations. This creates a responsibility gap: who should educate users about AI's fallibility?

Practical Tips for Better AI Responses

- Turn on web search in your AI settings so it can access current information.
- Specify timeframes in your prompts (e.g., "information from 2025").
- Learn better prompting techniques to avoid reinforcing your existing biases.
- Understand AI training bias: models reflect historical data, which may contain outdated information.

The Bottom Line

While the tech industry figures out responsibility and regulation, users need to take charge of their own AI education. Media and educational institutions have a role to play in teaching AI literacy, especially around understanding biases, limitations, and effective prompting strategies.

TruthAmp explores complex topics through the lens of human communication and technology expertise. New episodes weekly.