Papers Read on AI
Rob
200 episodes
9 months ago
Tech News
News
Internal Consistency and Self-Feedback in Large Language Models: A Survey
Papers Read on AI
1 hour 20 minutes 28 seconds
1 year ago
Large language models (LLMs) often exhibit deficient reasoning or generate hallucinations. To address these issues, a line of studies prefixed with "Self-", such as Self-Consistency, Self-Improve, and Self-Refine, has emerged. They share a commonality: the LLM evaluates and updates itself. Nonetheless, these efforts lack a unified summarizing perspective, as existing surveys predominantly focus on categorization. In this paper, we adopt the unified perspective of internal consistency, which offers explanations for both reasoning deficiencies and hallucinations. Internal consistency refers to the consistency of expressions among an LLM's latent, decoding, and response layers, assessed via sampling methodologies. We then introduce an effective theoretical framework for mining internal consistency, named Self-Feedback. This framework consists of two modules: Self-Evaluation and Self-Update. The former captures internal-consistency signals, while the latter leverages those signals to enhance either the model's response or the model itself. The framework has been employed in numerous studies. We systematically classify these studies by task and line of work, summarize relevant evaluation methods and benchmarks, and delve into the concern "Does Self-Feedback Really Work?" We also propose several critical viewpoints, including the "Hourglass Evolution of Internal Consistency", the "Consistency Is (Almost) Correctness" hypothesis, and "The Paradox of Latent and Explicit Reasoning". The relevant resources are open-sourced at https://github.com/IAAR-Shanghai/ICSFSurvey.

2024: Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang, Qingchen Yu, Xunkai Li, Rong-Hua Li, Feiyu Xiong, Zhiyu Li

https://arxiv.org/pdf/2407.14507v3
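The two-module framework described in the abstract maps naturally onto a short loop. Below is a minimal Python sketch of that loop, not the paper's specific method: `sample_response` is a hypothetical caller-supplied stand-in for an LLM call, and sampling-based agreement is used as one simple instance of a Self-Evaluation signal, in the spirit of Self-Consistency.

```python
from collections import Counter
from typing import Callable, List

def self_feedback(
    prompt: str,
    sample_response: Callable[[str], str],  # hypothetical LLM sampler
    n_samples: int = 5,
    agreement_threshold: float = 0.6,
    max_rounds: int = 3,
) -> str:
    """A minimal Self-Evaluation / Self-Update loop (illustrative sketch).

    Self-Evaluation: sample several responses and measure their agreement,
    a crude response-layer internal-consistency signal.
    Self-Update: if agreement is low, fold the disagreeing candidates back
    into the prompt and ask again.
    """
    answer = ""
    for _ in range(max_rounds):
        # Self-Evaluation: sample and take the majority answer as the signal.
        responses: List[str] = [sample_response(prompt) for _ in range(n_samples)]
        answer, votes = Counter(responses).most_common(1)[0]
        if votes / n_samples >= agreement_threshold:
            return answer  # consistent enough: accept the majority answer
        # Self-Update: feed the disagreement back and re-ask.
        prompt = (
            f"{prompt}\n\nPrevious attempts disagreed: {sorted(set(responses))}. "
            "Reconsider and answer again."
        )
    return answer  # fall back to the last majority answer
```

Here the consistency signal is used only to update the response; per the abstract, Self-Update may instead update the model itself (e.g., via further training on its own evaluated outputs).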