In this episode we talk about the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean.
All content for Argmax is the property of Vahe Hagopian, Taka Hasegawa, Farrukh Rahman and is served directly from their servers
with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
6: Deep Reinforcement Learning at the Edge of the Statistical Precipice
Argmax
1 hour 1 minute
3 years ago
We discuss the NeurIPS Outstanding Paper Award winner, covering important topics around evaluation metrics and reproducibility.