Agentic Horizons
Dan Vanderboom
106 episodes
6 days ago
Agentic Horizons is an AI-hosted podcast exploring the cutting edge of artificial intelligence. Each episode dives into topics like generative AI, agentic systems, and prompt engineering, with content generated by AI agents based on research papers and articles from top AI experts. Whether you're an AI enthusiast, developer, or industry professional, this show offers fresh, AI-driven insights into the technologies shaping the future.
Technology
LLMs Know More Than They Show
Agentic Horizons
15 minutes 34 seconds
8 months ago

This episode discusses a research paper examining how Large Language Models (LLMs) internally encode truthfulness, particularly in relation to errors, or "hallucinations." The study defines hallucinations broadly, covering factual inaccuracies, biases, and reasoning failures, and seeks to understand them by analyzing LLMs' internal representations.

Key insights include:

- Truthfulness Signals: Truthfulness information is concentrated in the hidden states at "exact answer tokens", making those positions the most informative for detecting errors.

- Error Detection and Generalization: Probing classifiers trained on these token representations outperform other detection methods but struggle to generalize across datasets, suggesting that LLMs encode truthfulness in task-specific rather than universal ways (a rough code sketch follows this list).

- Error Taxonomy and Predictability: The study categorizes LLM errors, especially in factual tasks, finding patterns that allow some error types to be predicted based on internal representations.

- Internal vs. External Discrepancies: A model's internal representations can diverge from its generated text; it may internally encode the correct answer yet still produce an incorrect one.
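
To make the probing idea concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the paper's implementation: GPT-2 stands in for the model under study, the probed layer (LAYER = 6) and the tiny hand-labeled dataset are invented for the example, and the hidden state at the final answer token approximates the paper's "exact answer token".

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6  # hypothetical choice of intermediate layer to probe

def answer_token_state(prompt: str, answer: str) -> torch.Tensor:
    """Hidden state at the final token of the answer span."""
    inputs = tokenizer(prompt + " " + answer, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[LAYER] has shape (1, seq_len, hidden_dim); the last
    # position is the answer's final token, our stand-in "exact answer token".
    return out.hidden_states[LAYER][0, -1]

# Toy labels: 1 = the answer is correct, 0 = the answer is a hallucination.
examples = [
    ("The capital of France is", "Paris", 1),
    ("The capital of France is", "Berlin", 0),
    ("Water is made of hydrogen and", "oxygen", 1),
    ("Water is made of hydrogen and", "gold", 0),
]

X = torch.stack([answer_token_state(p, a) for p, a, _ in examples]).numpy()
y = [label for _, _, label in examples]

# A linear probe: if truthfulness is linearly encoded at these positions,
# even logistic regression can separate correct from incorrect answers.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))

A real experiment along the paper's lines would use the model being evaluated, sweep layers and token positions, and test the trained probe on held-out datasets, which is exactly where the generalization gap noted above shows up.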


The paper's takeaway is that analyzing internal representations can improve error detection, and the authors provide their source code so the results can be reproduced and extended.


Paper: https://arxiv.org/pdf/2410.02707v3
