Women in AI Research (WiAIR)
WiAIR
13 episodes
4 days ago
Women in AI Research (WiAIR) is a podcast dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our mission is to challenge the prevailing perception that AI research is predominantly male-driven. Our goal is to empower early career researchers, especially women, to pursue their passion for AI and make an impact in this rapidly growing field. You will learn from women at different career stages, stay updated on the latest research and advancements, and hear powerful stories of overcoming obstacles and breaking stereotypes.
Education
Episodes (13/13)
Women in AI Research (WiAIR)
Why AI Doesn’t Understand Your Culture? Dr. Vered Shwartz on Cultural Bias in LLMs

Are today’s AI systems truly global — or just Western by design? 🌍


In this episode of Women in AI Research, Jekaterina Novikova and Malikeh Ehghaghi speak with Dr. Vered Shwartz (Assistant Professor at UBC and CIFAR AI Chair at the Vector Institute) about the cultural blind spots in today’s large language and vision-language models.


REFERENCES:

  • Vered Shwartz Google Scholar profile
  • Book "Lost in Automatic Translation"
  • Elevator Recognition, by The Scottish Comedy Channel
  • Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of LGBT People Portrayals on Wikipedia
  • ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual Knowledge Transfer
  • WikiGap: Promoting Epistemic Equity by Surfacing Knowledge Gaps Between English Wikipedia and other Language Editions
  • Is It Bad to Work All the Time? Cross-Cultural Evaluation of Social Norm Biases in GPT-4
  • Towards Measuring the Representation of Subjective Global Opinions in Language Models
  • I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models
  • From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models
  • CulturalBench: A Robust, Diverse, and Challenging Cultural Benchmark by Human-AI CulturalTeaming


🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website


Follow us at:

  • LinkedIn
  • Bluesky
  • X (Twitter)
4 days ago
1 hour 17 minutes 17 seconds

Women in AI Research (WiAIR)
Can We Trust AI Explanations? Dr. Ana Marasović on AI Trustworthiness, Explainability & Faithfulness

In this conversation, Jekaterina Novikova and Malikeh Ehghaghi interview Ana Marasović, an expert in AI trustworthiness, focusing on the complexities of explainability, the realities of academic research, and the dynamics of human-AI collaboration. We discuss the importance of intrinsic and extrinsic trust in AI systems, the challenges of evaluating AI performance, and the implications of synthetic evaluations.


REFERENCES:

  • On Evaluating Explanation Utility for Human-AI Decision Making in NLP
  • Effective Human-AI Teams via Learned Natural Language Rules and Onboarding
  • Chain-of-Thought Unfaithfulness as Disguised Accuracy
  • On Measuring Faithfulness or Self-consistency of Natural Language Explanations
  • What Has Been Lost with Synthetic Evaluation?



🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website


Follow us at:

  • LinkedIn
  • Bluesky
  • X (Twitter)
3 weeks ago
1 hour 22 minutes 37 seconds

Women in AI Research (WiAIR)
Open Science and LLMs, with Dr. Valentina Pyatkin

Can open-source large language models really outperform closed ones like Claude 3.5? 🤔


In this episode of the Women in AI Research podcast, Jekaterina Novikova and Malikeh Ehghaghi engage with Valentina Pyatkin, a postdoctoral researcher at the Allen Institute for AI.


We dive deep into the future of open science, LLM research, and extending model capabilities.


🔑 Topics we cover:

  • Why open-source LLMs sometimes beat closed models
  • The value of releasing datasets, recipes, and training infrastructure
  • The role of open science in accelerating NLP innovation
  • Insights from Valentina’s award-winning research journey


REFERENCES:

  • Valentina’s Google Scholar profile
  • OLMo: Accelerating the Science of Language Models
  • Tulu 3: Pushing Frontiers in Open Language Model Post-Training
  • open-instruct
  • Generalizing Verifiable Instruction Following
  • RewardBench 2: Advancing Reward Model Evaluation


🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.


  • WiAIR website
  • LinkedIn
  • Bluesky
  • X (Twitter)
1 month ago
1 hour 11 minutes 37 seconds

Women in AI Research (WiAIR)
Unlocking LLM Reasoning, with Simeng Sophia Han

How can we go beyond accuracy to truly understand large language models?


In this episode of the Women in AI Research podcast, hosts Jekaterina Novikova and Malikeh Ehghaghi sit down with Simeng Sophia Han (PhD candidate at Yale University, Research Scientist Intern at Meta AI, ex-Google DeepMind, ex-AWS) to explore the future of LLM reasoning, evaluation, and explainable AI.


🌟 What you’ll learn in this episode:

  • Why evaluating reasoning goes beyond correctness
  • How brain teasers uncover hidden strengths and weaknesses of LLMs
  • The importance of symbolic reasoning for complex problem solving
  • The role of mentorship and early research experiences in shaping careers
  • Why consistency in AI outputs is essential for building trust
  • How humans combine brute force and intuition — and what this means for AI


REFERENCES:

  • Simeng Sophia Han - Google Scholar profile
  • Creativity or Brute Force? Using Brainteasers as a Window into the Problem-Solving Abilities of Large Language Models
  • HYBRIDMIND: Meta Selection of Natural Language and Symbolic Language for Enhanced LLM Reasoning
  • P-FOLIO: Evaluating and Improving Logical Reasoning with Abundant Human-Written Reasoning Chains
  • Folio: Natural Language Reasoning with First-Order Logic


🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website

Follow us at:

  • LinkedIn
  • Bluesky
  • X (Twitter)
2 months ago
53 minutes 39 seconds

Women in AI Research (WiAIR)
LLM Hallucinations and Machine Unlearning, with Dr. Abhilasha Ravichander

In this episode of the Women in AI Research Podcast, hosts Jekaterina Novikova and Malikeh Ehghaghi engage with Abhilasha Ravichander to discuss the complexities of LLM hallucinations, the development of factuality benchmarks, and the importance of data transparency and machine unlearning in AI. The conversation also delves into personal experiences in academia and the future directions of research in responsible AI.


REFERENCES:

  • Abhilasha Ravichander -- Google Scholar profile
  • WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries
  • HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
  • FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation
  • What's In My Big Data?
  • Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models
  • RESTOR: Knowledge Recovery in Machine Unlearning
  • Model State Arithmetic for Machine Unlearning


🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.


WiAIR website


Follow us at:

  • LinkedIn
  • Bluesky
  • X (Twitter)


#LLMHallucinations #FactualityBenchmarks #MachineUnlearning #DataTransparency #ModelMemorization #ResponsibleAI #GenerativeAI #NLPResearch #WomenInAI #AIResearch #WiAIR #wiairpodcast

2 months ago
1 hour 3 minutes 21 seconds

Women in AI Research (WiAIR)
Generalization in AI, with Dr. Dieuwke Hupkes

A must-listen episode with Dr. Dieuwke Hupkes, a research scientist at Meta AI Research, in which we dive into AI generalization, LLM robustness, and the evaluation of large language models.


We explore how LLMs handle grammar and hierarchy, how they generalize across tasks and languages, and what consistency tells us about AI alignment.


We also talk about Dieuwke’s journey from physics to NLP, the challenges of peer review, and sustaining a career in research—plus, how pole dancing helps with focus 💪


REFERENCES:

  • Dieuwke Hupkes - Google Scholar profile
  • A taxonomy and review of generalization research in NLP
  • What's In My Big Data?
  • GenBench workshop (YouTube, website)
  • Separating form and meaning: Using self-consistency to quantify task understanding across multiple senses
  • From form(s) to meaning: Probing the semantic depths of language models using multisense consistency
  • MultiLoKo: a multilingual local knowledge benchmark for LLMs spanning 31 languages
  • How much do language models memorize?


Chapters

00:00 Introduction to Dieuwke Hupkes and Her Journey

05:15 Navigating Challenges in Research

07:17 The Peer Review Process: Insights and Frustrations

16:23 Being a Woman in AI: Representation and Challenges

19:57 Balancing Research and Personal Life

23:37 Exploring Consistency and Generalization in Language Models

33:31 Generalization Across Modalities

35:15 Exploring Generalization Taxonomy

40:55 Challenges in Evaluating Generalization

44:12 Data Contamination and Generalization

50:43 Consistency in Language Models

57:23 The Intersection of Consistency and Alignment

01:01:15 Current Research Directions



🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.


WiAIR website:

♾️ https://women-in-ai-research.github.io


Follow us at:

♾️ LinkedIn

♾️ Bluesky

♾️ X (Twitter)


#LLMs #AIgeneralization #LLMrobustness #AIalignment #ModelEvaluation #MetaAIResearch #WiAIR #WiAIRpodcast

3 months ago
1 hour 9 minutes 21 seconds

Women in AI Research (WiAIR)
Decentralized AI, with Wanru Zhao

🔍 Can the future of AI be decentralized? How do we scale beyond scaling laws? And what does it really take to build inclusive, multilingual LLMs?



In this episode of the WiAIR podcast, Wanru Zhao discusses decentralized and collaborative AI methods, the limitations of scaling laws, fine-tuning strategies, data attribution challenges in LLMs, and multilingual learning in federated settings, all while reflecting on her experiences across UK and Canadian research ecosystems. She also shares her academic journey, her vision for the future of AI, and how she navigates research roadblocks with creativity and curiosity.


🧠 Whether you're building LLMs, exploring federated learning, or just passionate about more inclusive and sustainable AI research, this episode is packed with insights, encouragement, and visionary thinking.


👉 Watch now and be part of the future of AI that’s collaborative, global, and radically inclusive.



🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.


WiAIR website:

  • https://women-in-ai-research.github.io


Follow us at:

  • LinkedIn: https://www.linkedin.com/company/women-in-ai-research
  • Bluesky: https://bsky.app/profile/wiair.bsky.social
  • X (Twitter): https://x.com/WiAIR_podcast



#FederatedLearning #AIResearch #LLM #WomenInAI #ScalingLaws #DecentralizedAI #MultilingualAI #NeurIPS2024 #ICLR2024 #DataCentricAI #ModelMerging #WiAIR #WiAIRpodcast #TechForInclusion #RepresentationMatters #MachineLearning #NLP #WomenInSTEM #OpenScience #VectorInstitute #UniversityOfCambridge #AWS #MicrosoftResearch

4 months ago
52 minutes 15 seconds

Women in AI Research (WiAIR)
Interpretable AI, with Dr. Faiza Khan Khattak

How can we build AI systems that are fair, explainable, and truly responsible?


In this episode of the WiAIR podcast, we sit down with Dr. Faiza Khan Khattak, the CTO of an innovative AI startup, who brings a rich background in both academia and industry. From fairness in machine learning to the realities of ML deployment in healthcare, this conversation is packed with insights, real-world challenges, and powerful reflections.


REFERENCES:

  • MLHOps: Machine Learning Health Operations
  • Using Chain-of-Thought Prompting for Interpretable Recognition of Social Bias
  • Dialectic Preference Bias in Large Language Models
  • The Impact of Unstated Norms in Bias Analysis of Language Models
  • Can Machine Unlearning Reduce Social Bias in Language Models?
  • BiasKG: Adversarial Knowledge Graphs to Induce Bias in Large Language Models

👉 Whether you're an AI researcher, a developer working on LLMs, or someone passionate about Responsible AI, this episode is for you.


📌 Subscribe to hear more inspiring stories and cutting-edge ideas from women leading the future of AI.


WiAIR website.


Follow us at:

♾️ LinkedIn

♾️ Bluesky

♾️ X (Twitter)


#WomenInAI #WiAIR #ResponsibleAI #FairnessInAI #AIHealthcare #ExplainableAI #LLMs #AIethics #BiasMitigation #MachineUnlearning #InterpretableAI #AIstartup #AIforGood

5 months ago
58 minutes 17 seconds

Women in AI Research (WiAIR)
Robots with Empathy, with Dr. Angelica Lim

Dr. Angelica Lim is an Assistant Professor at Simon Fraser University and Director of the SFU Rosie Lab.

Can robots have feelings? In this episode, we explore the intersection of robotics, machine learning, and developmental psychology, and consider both the technical challenges and philosophical questions surrounding emotional AI. This conversation offers a glimpse into the future of human-robot interaction and the potential for machines to understand and respond to human emotions.


REFERENCES:

  • On Designing User-Friendly Robots: Angelica Lim at TEDxKyoto 2012
  • Systematic Review of Social Robots for Health and Wellbeing: A Personal Healthcare Journey Lens
  • Towards Inclusive HRI: Using Sim2Real to Address Underrepresentation in Emotion Expression Recognition (paper, project website, Github)
  • Contextual Emotion Recognition using Large Vision Language Models (paper, project website)
  • EmoStyle: One-Shot Facial Expression Editing Using Continuous Emotion Parameters (paper, project website)



WiAIR website:

♾️ https://women-in-ai-research.github.io


FOLLOW US:

♾️ LinkedIn

♾️ Bluesky

♾️ X (Twitter)


#AI #SocialRobotics #EmpathyInAI #EthicalAI #HumanCenteredDesign #WiAIR

5 months ago
49 minutes 7 seconds

Women in AI Research (WiAIR)
Responsible AI for Health, with Aparna Balagopalan

Aparna Balagopalan is a PhD student in the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology.


In this episode, we dive into the intersection of AI and healthcare. Aparna shares her research on developing fair, interpretable, and robust models for healthcare applications. We explore the unique challenges of applying AI in medical contexts, including data quality, collaboration with clinicians, and the critical importance of model transparency. The conversation covers both the technical innovations and the ethical frameworks necessary for responsible AI deployment in healthcare settings.


REFERENCES:

  • Aparna's Google Scholar profile
  • Machine learning for healthcare that matters: Reorienting from technical novelty to equitable impact
  • Judging facts, judging norms: Training machine learning models to judge humans requires a modified approach to labeling data
  • LEMoN: Label Error Detection using Multimodal Neighbors


WiAIR website:

  • https://women-in-ai-research.github.io


Follow us at:

  • LinkedIn
  • Bluesky
  • X (Twitter)
6 months ago
38 minutes 59 seconds

Women in AI Research (WiAIR)
Bias in AI, with Amanda Cercas Curry

Dr. Amanda Cercas Curry is a researcher at the CENTAI Institute, where she works on applied NLP, fairness, and evaluation. In this episode, we explore the critical issue of bias in AI systems. Amanda shares her expertise on identifying, measuring, and mitigating various forms of bias in language models and other AI applications. We discuss the social and ethical implications of biased AI, and how researchers are working to create fairer and more inclusive systems. This episode highlights the importance of diverse perspectives in AI development and the ongoing challenges in the field.


References

  • Let's Chat Ethics podcast on Spotify
  • How We Analyzed the COMPAS Recidivism Algorithm
  • Impoverished Language Technology: The Lack of (Social) Class in NLP
  • Signs of Social Class: The Experience of Economic Inequality in Everyday Life
  • Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
  • How a chatbot encouraged a man who wanted to kill the Queen


WiAIR website

Check out the podcast website for teasers, the episode schedule, and more information:

  • https://women-in-ai-research.github.io


Follow us at

  • YouTube
  • LinkedIn
  • Bluesky
  • X (Twitter)
7 months ago
49 minutes 18 seconds

Women in AI Research (WiAIR)
Limits of Transformers, with Nouha Dziri

Nouha Dziri is an AI research scientist at the Allen Institute for AI, ex-Google DeepMind, ex-Microsoft Research. In this episode, we dive deep into the limitations of transformer models with her. Nouha shares insights from her research on understanding the capabilities and constraints of LLMs. We discuss the challenges in reasoning, factuality, and the ethical considerations that come with deploying these powerful AI systems. This conversation explores both technical aspects and broader implications for the future of AI research.


References

  • Nouha's Google Scholar profile
  • The Generative AI Paradox: "What It Can Create, It May Not Understand"
  • The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
  • Faith and Fate: Limits of Transformers on Compositionality


WiAIR website

Check out the podcast website for teasers, the episode schedule, and more information:

  • https://women-in-ai-research.github.io


Follow us at

  • YouTube
  • LinkedIn
  • Bluesky
  • X (Twitter)
7 months ago
49 minutes 9 seconds

Women in AI Research (WiAIR)
The new WiAIR podcast - Trailer

We are starting a new podcast!

It's Women in AI Research, or simply WiAIR. Get ready for inspiring stories of leading women in AI research and their groundbreaking work. Learn from them, hear powerful stories, and join a community of AI researchers who value diversity.

8 months ago
2 minutes
