AI Coach - Anil Nathoo
Anil Nathoo
102 episodes
5 days ago
Welcome to the AI Coach Podcast—your go-to resource for artificial intelligence. Each episode offers actionable insights, expert advice, and innovative strategies to help you achieve your AI goals. Whether you're looking to boost your career, sharpen your skills, or improve your mindset, I'm here to guide you every step of the way. Let's grow, learn, and thrive together!
Education
101 - Why Language Models Hallucinate?
AI Coach - Anil Nathoo
43 minutes 46 seconds
1 month ago

This episode discusses the OpenAI paper “Why Language Models Hallucinate” by Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, and Edwin Zhang.

It examines the phenomenon of “hallucinations” in large language models (LLMs), where models produce plausible but incorrect information. The authors attribute these errors to statistical pressures during both the pre-training and post-training phases. During pre-training, hallucinations arise from the inherent difficulty of distinguishing correct from incorrect statements, even when the training data is error-free. For instance, arbitrary facts with no learnable pattern, such as individual birthdays, are especially prone to this.
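
To make the pre-training argument concrete, here is a minimal toy simulation (my own illustration, not the paper's formal construction): a "model" that memorizes facts it saw during training and otherwise emits a plausible-looking guess. Because birthdays have no learnable pattern, forced answers on unseen people are almost always wrong, regardless of how clean the training data is.

```python
import random

random.seed(0)

DAYS = 365
people = [f"person_{i}" for i in range(10_000)]
true_birthday = {p: random.randrange(DAYS) for p in people}

# Pre-training corpus: half the people appear exactly once ("singleton" facts);
# there is no statistical pattern a model could generalize from.
train_people = set(random.sample(people, 5_000))

def answer(person):
    """Toy model: recalls memorized facts, otherwise emits a plausible guess."""
    if person in train_people:
        return true_birthday[person]   # memorized during pre-training
    return random.randrange(DAYS)      # plausible-sounding but usually wrong

test_people = [p for p in people if p not in train_people]
errors = sum(answer(p) != true_birthday[p] for p in test_people)
print(f"hallucination rate on unseen facts: {errors / len(test_people):.3f}")
# Roughly 364/365: when a fact has no learnable pattern, a forced answer is
# almost always wrong, even though the training data itself contained no errors.
```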

The paper further explains that hallucinations persist in post-training due to evaluation methods that penalise uncertainty, incentivising models to “guess” rather than admit a lack of knowledge, much like students on a multiple-choice exam. The authors propose a “socio-technical mitigation” by modifying existing benchmark scoring to reward expressions of uncertainty, thereby steering the development of more trustworthy AI systems.
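
The exam analogy comes down to simple expected-value arithmetic. The sketch below is my own illustration (the scoring parameters are assumptions, not the paper's notation): under standard 0/1 accuracy grading, guessing never scores worse than abstaining, so an uncertain model is pushed to guess; once wrong answers carry a penalty, abstaining becomes optimal below a confidence threshold.

```python
def expected_score(p_correct, wrong_penalty=0.0, abstain_score=0.0):
    """Expected benchmark score for guessing versus abstaining ("I don't know")."""
    guess = p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty
    return guess, abstain_score

# Standard 0/1 accuracy grading: guessing has non-negative expected value,
# so a model that admits uncertainty can only lose points.
print(expected_score(0.30))                      # (0.30, 0.0) -> guess wins
# A rule that penalises confident wrong answers makes abstention optimal
# whenever p_correct < wrong_penalty / (1 + wrong_penalty).
print(expected_score(0.30, wrong_penalty=1.0))   # (approx -0.40, 0.0) -> abstain
print(expected_score(0.60, wrong_penalty=1.0))   # (0.20, 0.0) -> guess
```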


