In this episode we talk about the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean.
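For listeners who want the core mechanism in code, here is a minimal NumPy sketch of the paper's noisy top-k gating: a gating network scores all experts, keeps only the k highest (noise-perturbed) scores, and mixes just those experts' outputs. The names, shapes, and the toy linear experts below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_topk_gate(x, W_g, W_noise, k, rng):
    """Noisy top-k gating: keep the k largest (noise-perturbed) expert
    logits, mask the rest to -inf, then softmax so non-selected experts
    get exactly zero weight."""
    h = x @ W_g + rng.standard_normal(W_g.shape[1]) * softplus(x @ W_noise)
    topk = np.argsort(h)[-k:]            # indices of the k largest logits
    masked = np.full_like(h, -np.inf)
    masked[topk] = h[topk]
    return softmax(masked)

def moe_layer(x, experts, W_g, W_noise, k, rng):
    """Output is the gate-weighted sum over the k selected experts only;
    the other experts are never evaluated, which is the source of the
    conditional-computation savings."""
    g = noisy_topk_gate(x, W_g, W_noise, k, rng)
    return sum(g[i] * experts[i](x) for i in np.flatnonzero(g))

# Toy usage: 4 experts, each a random linear map (hypothetical sizes).
d, n_experts = 8, 4
rng = np.random.default_rng(0)
experts = [lambda x, W=rng.standard_normal((d, d)): x @ W
           for _ in range(n_experts)]
x = rng.standard_normal(d)
y = moe_layer(x, experts,
              rng.standard_normal((d, n_experts)),
              rng.standard_normal((d, n_experts)),
              k=2, rng=rng)
```

Because the softmax is taken after masking, the gate outputs are sparse, so only the selected experts contribute to (and receive gradients from) each input.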