Large Language Model (LLM) Talk
AI-Talk
66 episodes
1 week ago
AI Explained breaks down the world of AI in just 10 minutes. Get quick, clear insights into AI concepts and innovations, without any complicated math or jargon. Perfect for your commute or spare time, this podcast makes understanding AI easy, engaging, and fun—whether you're a beginner or tech enthusiast.
Technology
Mixture-of-Recursions (MoR)
Large Language Model (LLM) Talk
16 minutes 43 seconds
3 months ago

Mixture-of-Recursions (MoR) is a unified framework built on a Recursive Transformer architecture, designed to enhance the efficiency of large language models. It achieves this by combining three core paradigms: parameter sharing (reusing shared layers across recursion steps), adaptive computation (dynamically assigning different processing depths to individual tokens via lightweight routers), and efficient Key-Value (KV) caching (selectively storing or sharing KV pairs). This integrated approach enables MoR to deliver large-model quality with significantly reduced computational and memory overhead, improving efficiency for both training and inference.
