Towards Data Science
The TDS team
130 episodes
5 days ago
Note: The TDS podcast's current run has ended. Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.
115. Irina Rish - Out-of-distribution generalization
Towards Data Science
50 minutes 12 seconds
3 years ago

Imagine, for example, an AI that’s trained to identify cows in images. Ideally, we’d want it to learn to detect cows based on their shape and colour. But what if the cow pictures we put in the training dataset always show cows standing on grass?

In that case, there's a spurious correlation between grass and cows, and if we're not careful, our AI might learn to be a grass detector rather than a cow detector. Worse, we might only discover that this has happened after deployment, when the system encounters its first cow that isn't standing on grass.
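
To make the failure mode concrete, here is a minimal toy sketch (my illustration, not code from the episode). A two-feature logistic regression is trained on data where a near-perfect spurious "grass" feature crowds out the noisier but genuinely predictive "shape" feature; once the correlation breaks at test time, accuracy collapses. The feature names and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Labels: 1 = cow, 0 = not a cow.
y_train = rng.integers(0, 2, n)

# "shape" is the true but noisy signal; "grass" is spuriously correlated
# with the label because every training photo of a cow shows grass.
shape = y_train + rng.normal(0.0, 1.0, n)
grass = y_train + rng.normal(0.0, 0.1, n)
X_train = np.column_stack([shape, grass])

# At test time the correlation breaks: grass no longer tracks the label.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    y_test + rng.normal(0.0, 1.0, n),  # shape still carries signal
    rng.normal(0.5, 0.1, n),           # grass is now pure noise
])

# A tiny logistic regression fit by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * X_train.T @ (p - y_train) / n
    b -= 0.5 * np.mean(p - y_train)

def accuracy(X, y):
    return np.mean(((X @ w + b) > 0) == y)

print("weights [shape, grass]:", w)  # the grass weight dominates
print(f"train accuracy {accuracy(X_train, y_train):.2f}, "
      f"OOD test accuracy {accuracy(X_test, y_test):.2f}")
```

Running this, the model does near-perfectly on the training set while leaning almost entirely on the grass feature, then drops sharply on the out-of-distribution test set where that correlation no longer holds. That gap is exactly what this episode is about.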

So how do you build AI systems that can learn robust, general concepts that remain valid outside the context of their training data?

That’s the problem of out-of-distribution generalization, and it’s a central part of the research agenda of Irina Rish, a core member of Mila (the Quebec AI Institute) and Canada Excellence Research Chair in Autonomous AI. Irina’s research explores many different strategies for overcoming the out-of-distribution problem, from empirical AI scaling efforts to more theoretical work, and she joined me to talk about just that on this episode of the podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

  • 2:00 Research, safety, and generalization
  • 8:20 Invariant risk minimization
  • 15:00 Importance of scaling
  • 21:35 Role of language
  • 27:40 AGI and scaling
  • 32:30 GPT versus ResNet-50
  • 37:00 Potential revolutions in architecture
  • 42:30 Inductive bias aspect
  • 46:00 New risks
  • 49:30 Wrap-up