Are today’s AI systems truly global — or just Western by design? 🌍
In this episode of Women in AI Research, Jekaterina Novikova and Malikeh Ehghaghi speak with Dr. Vered Shwartz (Assistant Professor at UBC and CIFAR AI Chair at the Vector Institute) about the cultural blind spots in today’s large language and vision-language models.
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
In this conversation, Jekaterina Novikova and Malikeh Ehghaghi interview Ana Marasović, an expert in AI trustworthiness, about the complexities of explainability, the realities of academic research, and the dynamics of human-AI collaboration. We discuss the importance of intrinsic and extrinsic trust in AI systems, the challenges of evaluating AI performance, and the implications of synthetic evaluations.
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
Can open-source large language models really outperform closed ones like Claude 3.5? 🤔
In this episode of the Women in AI Research podcast, Jekaterina Novikova and Malikeh Ehghaghi engage with Valentina Pyatkin, a postdoctoral researcher at the Allen Institute for AI.
We dive deep into the future of open science, LLM research, and extending model capabilities.
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
How can we go beyond accuracy to truly understand large language models?
In this episode of the Women in AI Research podcast, hosts Jekaterina Novikova and Malikeh Ehghaghi sit down with Simeng Sophia Han (PhD candidate at #yaleuniversity, Research Scientist Intern at #metaai, ex-#google #deepmind, ex-#aws) to explore the future of 𝐋𝐋𝐌 𝐫𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠, 𝐞𝐯𝐚𝐥𝐮𝐚𝐭𝐢𝐨𝐧, 𝐚𝐧𝐝 𝐞𝐱𝐩𝐥𝐚𝐢𝐧𝐚𝐛𝐥𝐞 𝐀𝐈.
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
In this episode of the Women in AI Research Podcast, hosts Jekaterina Novikova and Malikeh Ehghaghi engage with Abhilasha Ravichander to discuss the complexities of LLM hallucinations, the development of factuality benchmarks, and the importance of data transparency and machine unlearning in AI. The conversation also delves into personal experiences in academia and the future directions of research in responsible AI.
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
#LLMHallucinations #FactualityBenchmarks #MachineUnlearning #DataTransparency #ModelMemorization #ResponsibleAI #GenerativeAI #NLPResearch #WomenInAI #AIResearch #WiAIR #wiairpodcast
A must-listen episode with Dr. Dieuwke Hupkes, a research scientist at #Meta AI Research, where we dive into generalization, robustness, and evaluation of large language models.
We explore how LLMs handle grammar and hierarchy, how they generalize across tasks and languages, and what consistency tells us about AI alignment.
We also talk about Dieuwke’s journey from physics to NLP, the challenges of peer review, and sustaining a career in research—plus, how pole dancing helps with focus 💪
Chapters
00:00 Introduction to Dieuwke Hupkes and Her Journey
05:15 Navigating Challenges in Research
07:17 The Peer Review Process: Insights and Frustrations
16:23 Being a Woman in AI: Representation and Challenges
19:57 Balancing Research and Personal Life
23:37 Exploring Consistency and Generalization in Language Models
33:31 Generalization Across Modalities
35:15 Exploring Generalization Taxonomy
40:55 Challenges in Evaluating Generalization
44:12 Data Contamination and Generalization
50:43 Consistency in Language Models
57:23 The Intersection of Consistency and Alignment
01:01:15 Current Research Directions
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
WiAIR website:
♾️ https://women-in-ai-research.github.io
Follow us at:
♾️ Bluesky
♾️ X (Twitter)
#LLMs #AIgeneralization #LLMrobustness #AIalignment #ModelEvaluation #MetaAIResearch #WiAIR #WiAIRpodcast
🔍 𝐂𝐚𝐧 𝐭𝐡𝐞 𝐟𝐮𝐭𝐮𝐫𝐞 𝐨𝐟 𝐀𝐈 𝐛𝐞 𝐝𝐞𝐜𝐞𝐧𝐭𝐫𝐚𝐥𝐢𝐳𝐞𝐝? 𝐇𝐨𝐰 𝐝𝐨 𝐰𝐞 𝐬𝐜𝐚𝐥𝐞 𝐛𝐞𝐲𝐨𝐧𝐝 𝐬𝐜𝐚𝐥𝐢𝐧𝐠 𝐥𝐚𝐰𝐬? 𝐀𝐧𝐝 𝐰𝐡𝐚𝐭 𝐝𝐨𝐞𝐬 𝐢𝐭 𝐫𝐞𝐚𝐥𝐥𝐲 𝐭𝐚𝐤𝐞 𝐭𝐨 𝐛𝐮𝐢𝐥𝐝 𝐢𝐧𝐜𝐥𝐮𝐬𝐢𝐯𝐞, 𝐦𝐮𝐥𝐭𝐢𝐥𝐢𝐧𝐠𝐮𝐚𝐥 𝐋𝐋𝐌𝐬?
In this episode of the #WiAIRpodcast, Wanru Zhao discusses decentralized and collaborative AI methods, the limitations of scaling laws, fine-tuning strategies, data attribution challenges in LLMs, and multilingual learning in federated settings—all while reflecting on her experiences across the UK and Canadian research ecosystems. She also shares her academic journey, her vision for the future of AI, and how she navigates research roadblocks with creativity and curiosity.
🧠 Whether you're building #llms, exploring #FederatedLearning, or just passionate about more inclusive and sustainable AI research—this episode is packed with insights, encouragement, and visionary thinking.
👉 Watch now and be part of the future of AI that’s collaborative, global, and radically inclusive.
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
WiAIR website:
♾️ https://women-in-ai-research.github.io
#FederatedLearning #AIResearch #LLM #WomenInAI #ScalingLaws #DecentralizedAI #MultilingualAI #NeurIPS2024 #ICLR2024 #DataCentricAI #ModelMerging #WiAIR #WiAIRpodcast #TechForInclusion #RepresentationMatters #MachineLearning #NLP #WomenInSTEM #OpenScience #VectorInstitute #UniversityOfCambridge #AWS #MicrosoftResearch
How can we build AI systems that are fair, explainable, and truly responsible?
In this episode of the #WiAIR podcast, we sit down with Dr. Faiza Khan Khattak, CTO of an innovative AI startup, who brings a rich background in both academia and industry. From fairness in machine learning to the realities of ML deployment in healthcare, this conversation is packed with insights, real-world challenges, and powerful reflections.
👉 Whether you're an AI researcher, a developer working on LLMs, or someone passionate about Responsible AI, this episode is for you.
📌 Subscribe to hear more inspiring stories and cutting-edge ideas from women leading the future of AI.
Follow us at:
♾️ Bluesky
♾️ X (Twitter)
#WomenInAI #WiAIR #ResponsibleAI #FairnessInAI #AIHealthcare #ExplainableAI #LLMs #AIethics #BiasMitigation #MachineUnlearning #InterpretableAI #AIstartup #AIforGood
Dr. Angelica Lim is an Assistant Professor at Simon Fraser University and Director of the SFU Rosie Lab.
Can robots have feelings? In this episode, we explore the intersection of robotics, machine learning, and developmental psychology, and consider both the technical challenges and philosophical questions surrounding emotional AI. This conversation offers a glimpse into the future of human-robot interaction and the potential for machines to understand and respond to human emotions.
WiAIR website:
♾️ https://women-in-ai-research.github.io
Follow us at:
♾️ Bluesky
♾️ X (Twitter)
#AI #SocialRobotics #EmpathyInAI #EthicalAI #HumanCenteredDesign #WiAIR
Aparna Balagopalan is a PhD student in the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology.
In this episode, we explore the intersection of AI and healthcare. Aparna shares her research on developing fair, interpretable, and robust models for healthcare applications. We discuss the unique challenges of applying AI in medical contexts, including data quality, collaboration with clinicians, and the critical importance of model transparency. The conversation covers both the technical innovations and the ethical frameworks necessary for responsible AI deployment in healthcare settings.
WiAIR website:
♾️ https://women-in-ai-research.github.io
Dr. Amanda Cercas Curry is a researcher at the CENTAI Institute, where she works on applied NLP, fairness, and evaluation.
In this episode, we explore the critical issue of bias in AI systems. Amanda shares her expertise on identifying, measuring, and mitigating various forms of bias in language models and other AI applications. We discuss the social and ethical implications of biased AI and how researchers are working to create more fair and inclusive systems. This episode highlights the importance of diverse perspectives in AI development and the ongoing challenges in the field.
WiAIR website:
♾️ https://women-in-ai-research.github.io
Check out the podcast website for teasers, the episode schedule, and more information.
Nouha Dziri is an AI research scientist at the Allen Institute for AI, ex-Google DeepMind and ex-Microsoft Research.
In this episode, we dive deep into the limitations of transformer models. Nouha shares insights from her research on understanding the capabilities and constraints of LLMs. We discuss the challenges in reasoning, factuality, and the ethical considerations that come with deploying these powerful AI systems. This conversation explores both the technical aspects and the broader implications for the future of AI research.
WiAIR website:
♾️ https://women-in-ai-research.github.io
Check out the podcast website for teasers, the episode schedule, and more information.
We are starting a new podcast!
It's Women in AI Research, or simply WiAIR. Get ready for inspiring stories from leading women in AI research and their groundbreaking work. Learn from their experience, hear their powerful stories, and join a community of AI researchers that values diversity.