Should powerful foundational AI models be kept under lock and key, or shared openly with the world?
In this episode of Approximately Correct, we sit down with Joelle Pineau, a professor at McGill University, former head of AI research at Meta, and current chief AI officer at Cohere. She led the development and release of the early versions of Meta’s Llama models, a series of open-weight models that challenged the closed-door approach of other AI teams.
Pineau argues that openness is the fastest path to better, safer AI, and that diversity in foundational models is essential to avoiding an ‘algorithm monoculture’.
On this episode of Approximately Correct, we talk with Michael Littman about the importance of making AI accessible and fun for everyone.
A former director of the AI division at the National Science Foundation, Michael shares his unique perspective on AI policy, communication, and his career in reinforcement learning. He also discusses his new role as Associate Provost of Artificial Intelligence at Brown University, where he is working to coordinate AI research and teaching across the entire university.
This AI doesn't replace jobs; it collaborates with human experts to get the job done.
On this episode of Approximately Correct, we talk with Revan McQueen about the future of industrial control and AI's role in making it safer and better.
Is it possible to make building a video game as easy as writing a story? What if artificial intelligence could be more than just a tool, and instead become a true creative partner?
In this episode of Approximately Correct, we dive into computational creativity with Amii Fellow and Canada CIFAR AI Chair, Matthew Guzdial. We explore how AI is being developed to collaborate with artists and designers, breaking down the technical barriers that can stand in the way of a great idea.
Learn about the future of human-AI collaboration, the philosophical questions behind AI art, and how research that starts with video games can end up solving problems in finance and even medicine.
How can we teach robots to safely navigate our unpredictable world?
On this special live episode of Approximately Correct, recorded at Upper Bound 2025, we talk with Mo Chen about combining classical and modern AI to create smarter, safer, and more robust robots.
Is AI this year's MVP? How is machine learning changing sports? On this episode of Approximately Correct, we talk with the Chicago Blackhawks' David Radke about how machine learning is transforming the analysis of sports like hockey.
Could AI-powered ultrasounds save lives in remote areas? On this episode of Approximately Correct we talk with Dr. Jacob Jaremko about AI's revolutionary impact on medical imaging, particularly ultrasound technology, and its potential to transform healthcare.
In the latest episode of Approximately Correct, we’re taking the time to celebrate with Amii Fellow, Chief Scientific Advisor, and Canada CIFAR AI Chair Rich Sutton, newly-minted winner of the A.M. Turing Award, a prize that is often referred to as the “Nobel Prize of Computer Science.”
Discover the secret to training AI with less data!
On this episode of Approximately Correct, we talk with Amii Fellow and Canada CIFAR AI Chair Lili Mou about the challenges of training large language models and how his research on Flora addresses memory footprint concerns.
AI-powered prosthetics are changing lives, but it takes more than just technology. In this episode, an Amii Fellow and Canada CIFAR AI Chair talks about the work going on in his lab designing AI controls for bionic limbs, and explains the unique partnership between researchers and users that's creating AI made with humans in mind.
Approximately Correct: An AI Podcast from Amii is hosted by Alona Fyshe and Scott Lilwall. It is produced by Lynda Vang, with video production by Chris Onciul.
How can machine learning revolutionize healthcare? In this episode, Amii Fellow and Canada CIFAR AI Chair Russ Greiner explores how AI is transforming survival prediction, giving doctors and patients personalized health insights that were never before possible. From creating tailored survival curves to improving treatment decisions, Greiner reveals the groundbreaking potential of AI in medicine.
In this episode, reinforcement learning legend Rich Sutton argues the focus on non-continual learning over the past 40 years is now holding AI back. Listen to one of the leading minds in machine learning explain what needs to change.
Approximately Correct: An AI Podcast from Amii is hosted by Alona Fyshe and Scott Lilwall, produced by Lynda Vang, with video production by Chris Onciul.
Artificial intelligence has been a part of video games for decades. But new advances in machine learning and generative AI could radically change how we make — and play — games. In this episode, Andrew Butcher, co-founder of Artificial Agency, talks about how the company is creating technology to make games more playful, intelligent, and fun. Tune in to discover how AI is changing game development and what that means for developers and players alike.
A team of Amii researchers recently published a paper in the prestigious scientific journal Nature investigating a mysterious problem in deep learning that can hinder long-term continual learning. In this Sidebar episode, we talk to two of the co-authors about the problem of Loss of Plasticity, what their findings mean for advanced AI, and the journey from idea to Nature.
Read more about Loss of Plasticity: https://www.amii.ca/latest-from-amii/amii-researchers-investigate-ai-mystery-new-nature-paper-loss-plasticity/
Reinforcement learning is being used to make water treatment more efficient. On this week’s episode, we’re joined by Amii Fellow and Canada CIFAR AI Chair Martha White to talk about co-founding RL Core Technologies, which is exploring how RL can be used to increase efficiency in water treatment plants and other industrial control systems. Find out more about how AI is being used in the real world and its potential large-scale impact.
How do we get the best results when AI and human beings work together? In this episode of Approximately Correct, we’re looking into Human-In-The-Loop (HITL) AI with Matt Taylor.
The Amii Fellow and Canada CIFAR AI Chair talks about the importance of human input in AI decision making, the need to recognize the strengths and weaknesses of both natural and artificial intelligence, and how he thinks HITL will be vital if people are to trust AI in their lives.
Production Credits: Lynda Vang - Producer, Chris Onciul - Video Production
Music Credits: Main Theme - Brooklyn Bridge by Lunareh
Medicine is built on data. Clinical studies, patient charts, test results, X-rays - data helps diagnose us when we’re unhealthy and understand how to treat us. Amii Fellow and Canada CIFAR AI Chair Ross Mitchell explores how machine learning can use this data, pulling out insights to help medical professionals work better and provide better patient outcomes.
In this Sidebar episode, we present a conversation between Mitchell and machine learning scientist Jubair Sheik about AI's massive potential in medicine and the increasing accuracy of medical AI models.
Production Credits: Lynda Vang - Producer, Chris Onciul - Video Production
Music Credits: Main Theme - Brooklyn Bridge by Lunareh
This week’s episode of Approximately Correct looks at how the work of Marlos C. Machado is taking reinforcement learning to new heights.
Marlos sits down with hosts Alona Fyshe and Scott Lilwall to share the insights from his work using AI to control balloons in Earth's stratosphere, and what it teaches us about how reinforcement learning works in the real world.
Production Credits: Lynda Vang - Producer, Chris Onciul - Video Production
Music Credits: Main Theme - Brooklyn Bridge by Lunareh
Dialogues have been a way of exploring complex philosophical and moral questions since the time of Plato. And now, they might offer new ways of exploring intelligence in the age of powerful large language models.
Amii Fellow and Canada CIFAR AI Chair Geoffrey Rockwell joins hosts Alona Fyshe and Scott Lilwall to talk about where philosophy and artificial intelligence intersect, and how learning more about non-human intelligence can teach us more about ourselves.
Production Credits: Lynda Vang - Producer, Chris Onciul - Video Production
Music Credits: Main Theme - Brooklyn Bridge by Lunareh
With generative AI becoming more and more powerful, seeing is no longer believing. Amii Fellow and Canada CIFAR AI Chair James Wright joins hosts Alona Fyshe and Scott Lilwall to tell the truth about telling lies. Wright talks about his work on disinformation, how artificial intelligence is affecting how we view information online, and why it’s much more complicated than just a technical question. He also shares his experience in behavioural economics and how studying humans leads to advancing AI.
Production Credits: Lynda Vang - Producer, Chris Onciul - Video Production, Jen Tomski - Social Media
Music Credits: Main Theme - Brooklyn Bridge by Lunareh