NLP Highlights
Allen Institute for Artificial Intelligence
145 episodes
9 months ago
Curious about the safety of LLMs? 🤔 Join us for an insightful new episode featuring Suchin Gururangan, Young Investigator at Allen Institute for Artificial Intelligence and Data Science Engineer at Appuri. 🚀 Don't miss out on expert insights into the world of LLMs!
Science
129 - Transformers and Hierarchical Structure, with Shunyu Yao
35 minutes 43 seconds
4 years ago
In this episode, we talk to Shunyu Yao about recent insights into how transformers can represent hierarchical structure in language. Bounded-depth hierarchical structure is thought to be a key feature of natural languages, motivating Shunyu and his coauthors to show that transformers can efficiently represent bounded-depth Dyck languages, which can be thought of as a formal model of the structure of natural languages. We went on to discuss some of the intuitive ideas that emerge from the proofs, connections to RNNs, and insights about positional encodings that may have practical implications. More broadly, we also touched on the role of formal languages and other theoretical tools in modern NLP.

Papers discussed in this episode:
- Self-Attention Networks Can Process Bounded Hierarchical Languages (https://arxiv.org/abs/2105.11115)
- Theoretical Limitations of Self-Attention in Neural Sequence Models (https://arxiv.org/abs/1906.06755)
- RNNs can generate bounded hierarchical languages with optimal memory (https://arxiv.org/abs/2010.07515)
- On the Practical Computational Power of Finite Precision RNNs for Language Recognition (https://arxiv.org/abs/1805.04908)

Shunyu Yao's webpage: https://ysymyth.github.io/

The hosts for this episode are William Merrill and Matt Gardner.
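For readers unfamiliar with the formal-language side of the discussion: a Dyck language is the set of well-balanced bracket strings over some number of bracket types, and the bounded-depth variant additionally caps how deeply brackets may nest. The sketch below is an illustrative membership checker (the function name and bracket alphabet are our own choices, not anything from the papers, which study how transformers *represent* these languages rather than how to recognize them procedurally):

```python
def is_bounded_dyck(s, max_depth, pairs=None):
    """Check membership in a bounded-depth Dyck language: balanced
    brackets over the given pair types, with nesting depth <= max_depth.
    Illustrative sketch only."""
    if pairs is None:
        pairs = {"(": ")", "[": "]"}  # two bracket types (k = 2)
    stack = []
    for ch in s:
        if ch in pairs:                  # opening bracket: push expected closer
            stack.append(pairs[ch])
            if len(stack) > max_depth:   # nesting exceeds the depth bound
                return False
        elif ch in pairs.values():       # closing bracket: must match top of stack
            if not stack or stack.pop() != ch:
                return False
        else:
            return False                 # symbol outside the bracket alphabet
    return not stack                     # accept only if every bracket was closed
```

For example, `is_bounded_dyck("([])[]", 2)` accepts (maximum nesting depth is 2), while `is_bounded_dyck("((()))", 2)` rejects because the nesting reaches depth 3. The depth bound is what makes these languages plausible as a model of natural language, where center-embedding is empirically limited.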