
A must-listen episode with Dr. Dieuwke Hupkes, a research scientist at #Meta AI Research, where we dive into generalization, robustness, and evaluation in large language models.
We explore how LLMs handle grammar and hierarchy, how they generalize across tasks and languages, and what consistency tells us about AI alignment.
We also talk about Dieuwke’s journey from physics to NLP, the challenges of peer review, and sustaining a career in research, plus how pole dancing helps with focus 💪
Chapters
00:00 Introduction to Dieuwke Hupkes and Her Journey
05:15 Navigating Challenges in Research
07:17 The Peer Review Process: Insights and Frustrations
16:23 Being a Woman in AI: Representation and Challenges
19:57 Balancing Research and Personal Life
23:37 Exploring Consistency and Generalization in Language Models
33:31 Generalization Across Modalities
35:15 Exploring Generalization Taxonomy
40:55 Challenges in Evaluating Generalization
44:12 Data Contamination and Generalization
50:43 Consistency in Language Models
57:23 The Intersection of Consistency and Alignment
01:01:15 Current Research Directions
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
WiAIR website:
♾️ https://women-in-ai-research.github.io
Follow us at:
♾️ Bluesky
♾️ X (Twitter)
#LLMs #AIgeneralization #LLMrobustness #AIalignment #ModelEvaluation #MetaAIResearch #WiAIR #WiAIRpodcast