Tech made Easy
Tech Guru
27 episodes
6 days ago
"Welcome to Tech Made Easy, the podcast where we dive deep into cutting-edge technical research papers, breaking down complex ideas into insightful discussions. Each episode, two tech enthusiasts explore a different research paper, simplifying the jargon, debating key points, and sharing their thoughts on its impact on the field. Whether you're a professional or a curious learner, join us for a geeky yet accessible journey through the world of technical research."
Technology
Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network
Tech made Easy
11 minutes 4 seconds
11 months ago

This paper provides a thorough explanation of Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks, two popular machine learning architectures for processing sequential data. It starts by deriving the canonical RNN equations from differential equations, establishing a clear foundation for understanding the behaviour of these networks. It then explores the concept of "unrolling" an RNN, demonstrating how a long sequence can be approximated by a series of shorter, independent sub-sequences, before addressing the main challenges of training RNNs: vanishing and exploding gradients. From the canonical RNN, the paper meticulously constructs the Vanilla LSTM cell, introducing gating mechanisms that control the flow of information within the cell and mitigate the vanishing gradient problem. It then extends the Vanilla LSTM into the Augmented LSTM by incorporating features such as a recurrent projection layer, a non-causal input context window, and an input gate. Finally, it details the backward pass equations for the Augmented LSTM, used to train the network with the Backpropagation Through Time algorithm.
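
For listeners who want to follow along in code, here is a minimal NumPy sketch of a Vanilla LSTM cell's forward step, together with a tiny unrolling loop. The gate names (f, i, o), the weight layout (W, U, b), and all dimensions are generic textbook conventions chosen for illustration rather than the paper's exact notation, and the Augmented LSTM extensions (recurrent projection layer, non-causal context window) are omitted.

import numpy as np

def sigmoid(x):
    # Squashes activations into (0, 1) so gates act as soft switches.
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_forward(x_t, h_prev, c_prev, params):
    # One forward step of a generic Vanilla LSTM cell (illustrative
    # textbook form, not the paper's exact formulation).
    W, U, b = params["W"], params["U"], params["b"]

    # Forget, input, and output gates regulate information flow.
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])

    # Candidate update, analogous to the canonical RNN's state update.
    g_t = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])

    # The additive cell-state path (f_t * c_prev + i_t * g_t) is what
    # mitigates the vanishing-gradient problem of the plain RNN.
    c_t = f_t * c_prev + i_t * g_t
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Unrolling: the same cell is applied step by step along a sequence.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 8, 16
params = {
    name: {gate: rng.standard_normal(shape) * 0.1 for gate in "fiog"}
    for name, shape in [("W", (hidden_dim, input_dim)),
                        ("U", (hidden_dim, hidden_dim)),
                        ("b", (hidden_dim,))]
}
h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
for x_t in rng.standard_normal((5, input_dim)):  # 5-step input sequence
    h, c = lstm_cell_forward(x_t, h, c, params)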

Link to the Paper: https://www.sciencedirect.com/science/article/abs/pii/S0167278919305974
