Deep Papers
Arize AI
50 episodes
15 hours ago
Join us as we discuss Accurate KV Cache Quantization with Outlier Tokens Tracing, a deep dive into improving the efficiency of LLM inference. The authors enhance KV Cache quantization, a technique for reducing memory and compute costs during inference, by introducing a method to identify and exclude outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance. Paper: https://arxiv.org/abs/2505.10938 Slides: https://bit.ly/45wolpr Join us for Ar...
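The core idea described above, quantizing the KV cache while keeping tokens that would distort the quantization scale in full precision, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the max-magnitude outlier criterion and the int8 scheme here are assumptions chosen for clarity.

```python
# Toy sketch of KV cache quantization with outlier-token tracing.
# Tokens whose peak magnitude exceeds a threshold (hypothetical criterion)
# are traced and kept in full precision; the rest are quantized to int8.

def quantize_kv_cache(kv, outlier_threshold=5.0):
    """kv: list of per-token float vectors. Returns (quantized, outliers)."""
    quantized = {}   # token index -> (int8 values, scale)
    outliers = {}    # token index -> original full-precision vector
    for i, vec in enumerate(kv):
        peak = max(abs(x) for x in vec)
        if peak > outlier_threshold:
            outliers[i] = vec            # trace outlier token, skip quantization
            continue
        scale = peak / 127 if peak else 1.0
        quantized[i] = ([round(x / scale) for x in vec], scale)
    return quantized, outliers

def dequantize(quantized, outliers, num_tokens):
    """Rebuild the cache: outliers come back exactly, others approximately."""
    out = []
    for i in range(num_tokens):
        if i in outliers:
            out.append(outliers[i])
        else:
            q, scale = quantized[i]
            out.append([v * scale for v in q])
    return out
```

Excluding the outlier tokens keeps the per-token scale small for everything else, which is the balance between efficiency and accuracy the episode discusses.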
Mathematics, Technology, Business, Science
All content for Deep Papers is the property of Arize AI and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Training Large Language Models to Reason in Continuous Latent Space
Deep Papers
24 minutes
3 months ago
LLMs have typically been restricted to reasoning in "language space," where chain-of-thought (CoT) is used to solve complex reasoning problems. But a new paper argues that language space may not always be the best medium for reasoning. In this paper read, we cover an exciting new technique from a team at Meta called Chain of Continuous Thought, also known as "Coconut." The paper, "Training Large Language Models to Reason in a Continuous Latent Space," explores the potential of allowing LLMs to rea...
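The contrast with token-space CoT can be sketched conceptually: instead of decoding a token and re-embedding it at every reasoning step, the model's last hidden state is fed back directly as the next input. This is a conceptual toy, not Meta's implementation; `model_step` is a hypothetical stand-in for one transformer forward pass.

```python
# Conceptual sketch of "Coconut"-style latent reasoning: hidden states loop
# back as inputs, and decoding to language happens only at the end.

def model_step(hidden):
    # Hypothetical stand-in for a transformer forward pass:
    # a fixed affine map on the hidden vector.
    return [0.5 * h + 0.1 for h in hidden]

def reason_in_latent_space(input_embedding, num_thought_steps):
    """Chain of continuous thought: iterate in hidden space, no token decoding."""
    hidden = input_embedding
    for _ in range(num_thought_steps):
        hidden = model_step(hidden)   # hidden state becomes the next input
    return hidden                     # decode to a token only after the loop
```

The point of the design is that intermediate "thoughts" never get collapsed to discrete tokens, so no information is lost between reasoning steps.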