Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
In this episode, we’ll learn what overfitting is, how to detect it using loss curves, why model complexity matters, and how techniques like L2 regularization help improve generalization to unseen data.
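If you want to try this yourself, here's a minimal sketch (ours, not from the course; the synthetic data, polynomial degree, and alpha value are illustrative choices) showing how an L2 penalty narrows the train/test gap that signals overfitting:

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(-3, 3, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=40)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, reg in [("no regularization", LinearRegression()),
                  ("L2 regularization", Ridge(alpha=1.0))]:
    model = make_pipeline(PolynomialFeatures(degree=12), reg)
    model.fit(X_tr, y_tr)
    # A large gap between train and test scores is the classic overfitting signature.
    print(name, "train R^2:", round(model.score(X_tr, y_tr), 3),
          "test R^2:", round(model.score(X_te, y_te), 3))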
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
In this episode, we break down why splitting data into training, validation, and test sets is key to building machine learning models that learn effectively. Understand the purpose of each set, why this separation matters, and how it helps reduce overfitting while improving generalization.
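As a quick illustration (our sketch, not course code; the 60/20/20 proportions are just one common choice), scikit-learn's train_test_split can be applied twice to carve out all three sets:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)  # stand-in features
y = np.arange(1000) % 2             # stand-in labels

# Hold out the test set first, then split the remainder into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200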
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
This episode explores how categorical data is transformed into usable features for machine learning models. From understanding one-hot encoding to tackling real-world labeling challenges and applying feature crosses, we break down key techniques to handle non-numeric data effectively. Perfect for anyone aiming to build better models using human-defined categories.
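Here's a minimal pandas sketch (ours, not from the course; the tiny city/weather frame is invented for illustration) of one-hot encoding and a simple feature cross:

import pandas as pd

df = pd.DataFrame({"city": ["Rome", "Oslo", "Rome", "Oslo"],
                   "weather": ["sunny", "rainy", "rainy", "sunny"]})

# One-hot encoding: each category becomes its own 0/1 column.
one_hot = pd.get_dummies(df, columns=["city", "weather"])

# Feature cross: combine two categories so the model can learn their interaction.
df["city_x_weather"] = df["city"] + "_" + df["weather"]
crossed = pd.get_dummies(df["city_x_weather"])

print(one_hot)
print(crossed)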
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
In this episode, we dive into the world of classification in machine learning—exploring how models make decisions and how we evaluate their performance. You'll learn what a confusion matrix is, how thresholds affect predictions, and what metrics like accuracy, precision, recall, and F1 score really mean in practice. Whether you're new to ML or brushing up on the fundamentals, this episode will give you the clarity you need to confidently interpret model results.
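To see the threshold effect in action, here's a small sketch (our example, not from the course; the labels and scores are made up) using scikit-learn's metrics:

import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true   = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_scores = np.array([0.9, 0.4, 0.65, 0.3, 0.2, 0.55, 0.8, 0.1])

for threshold in (0.5, 0.7):  # raising the threshold trades recall for precision
    y_pred = (y_scores >= threshold).astype(int)
    print("threshold:", threshold)
    print(confusion_matrix(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred),
          "recall:", recall_score(y_true, y_pred),
          "F1:", round(f1_score(y_true, y_pred), 3))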
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
Before feeding data into a machine learning model, it’s crucial to understand it. This episode walks you through the essential first steps: visualizing data, calculating basic statistics like mean and percentiles, and spotting outliers that could skew your model. Whether you're using pandas or plotting histograms, these techniques lay the foundation for effective ML pipelines.
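Here's a minimal pandas sketch of those first steps (ours, not course code; the price column is synthetic with two planted outliers):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"price": np.append(rng.normal(100, 15, 500), [900.0, 950.0])})

# Basic statistics: count, mean, standard deviation, and percentiles in one call.
print(df["price"].describe(percentiles=[0.25, 0.5, 0.75, 0.95]))

# A common outlier check: values far outside the interquartile range.
q1, q3 = df["price"].quantile([0.25, 0.75])
iqr = q3 - q1
print(df[(df["price"] < q1 - 1.5 * iqr) | (df["price"] > q3 + 1.5 * iqr)])

df["price"].hist(bins=30)  # quick histogram (needs matplotlib installed)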
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
In this episode, we go beyond accuracy and dive into how to evaluate model performance across all thresholds using ROC curves and AUC. We also break down prediction bias—why your model might look accurate but still be off-target—and how to detect early signs of bias caused by data, features, or training bugs. Tune in to learn how to make more reliable and fair classification models!
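As a companion sketch (ours, not from the course; the labels and scores are invented), here's what AUC and a basic prediction-bias check look like in scikit-learn:

import numpy as np
from sklearn.metrics import roc_auc_score

y_true   = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_scores = np.array([0.1, 0.35, 0.4, 0.8, 0.2, 0.9, 0.5, 0.7])

# AUC summarizes ranking quality across every possible threshold.
print("AUC:", roc_auc_score(y_true, y_scores))

# Prediction bias: mean predicted probability vs. observed positive rate.
print("mean prediction:", y_scores.mean(), "| positive rate:", y_true.mean())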
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
In this episode, we break down logistic regression—a core algorithm used for classification. You'll learn how it calculates probabilities instead of direct values, what loss functions are used to measure mistakes, and how regularization helps prevent overfitting. It’s all about teaching machines to say “yes” or “no” with confidence.
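Here's a minimal sketch (our illustration, not course code; the synthetic dataset and C value are arbitrary) of logistic regression predicting probabilities, scored with log loss under an L2 penalty:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# scikit-learn applies L2 regularization by default;
# C is the inverse strength, so smaller C means a stronger penalty.
model = LogisticRegression(C=1.0).fit(X, y)

probs = model.predict_proba(X)[:, 1]   # probabilities, not raw values
print("log loss:", round(log_loss(y, probs), 4))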
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
What drives a machine learning model to learn? In this episode, we explore gradient descent, the optimization engine behind linear regression, and the crucial role of hyperparameters like learning rate, batch size, and epochs. Understand how models reduce error step by step, and why tuning hyperparameters can make or break performance. Whether you're a beginner or reviewing the basics, this episode brings clarity with real-world analogies and practical takeaways.
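For the hands-on curious, here's a bare-bones sketch (ours, not from the course; the data, learning rate, batch size, and epoch count are illustrative) of mini-batch gradient descent fitting a line:

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 100)
y = 2.5 * X + 1.0 + rng.normal(0, 1, 100)          # ground truth: w=2.5, b=1.0

w, b = 0.0, 0.0
learning_rate, batch_size, epochs = 0.01, 20, 200  # the key hyperparameters

for _ in range(epochs):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        err = (w * X[batch] + b) - y[batch]
        # Step against the gradient of mean squared error.
        w -= learning_rate * 2 * np.mean(err * X[batch])
        b -= learning_rate * 2 * np.mean(err)

print("learned w, b:", round(w, 2), round(b, 2))   # should approach 2.5 and 1.0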
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
Linear Regression - Yale University
Linear and Logistic Regression - Stanford University
In this episode, we'll demystify Linear Regression, exploring its power in predicting continuous values and understanding its core mechanics, from the "best-fit line" to the critical role of the least squares method. Discover real-world applications where predicting "how much" is key, and learn how to evaluate its performance effectively.
Then, we'll pivot to Logistic Regression, a cornerstone for classification tasks. Understand how it tackles "yes/no" questions by predicting probabilities using the elegant sigmoid function. We'll delve into its distinct mathematical underpinnings and uncover its vital role in scenarios ranging from spam detection to medical diagnostics, alongside its unique evaluation metrics.
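To ground both halves, here's a small sketch (ours, not from the lectures; the numbers are invented) of a least-squares fit and the sigmoid function:

import numpy as np

# Least squares: find the best-fit line minimizing squared residuals.
X = np.column_stack([np.ones(5), [1, 2, 3, 4, 5]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
print("intercept:", round(intercept, 2), "slope:", round(slope, 2))

# Sigmoid: squashes any real-valued score into a (0, 1) probability.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # approx [0.018, 0.5, 0.982]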
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course
In this episode, we explore what machine learning (ML) is and why it's at the heart of today’s most innovative technologies, from language translation and content recommendation to autonomous vehicles and generative AI.
You’ll learn how ML shifts the problem-solving paradigm: instead of manually programming every rule, we teach software to learn from data. Through relatable examples like predicting rainfall, we compare traditional methods with data-driven ML models that uncover patterns and make smart predictions.
Whether you’re new to AI or brushing up on core concepts, this episode lays a strong foundation for understanding how ML helps answer complex questions and power real-world applications.
Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.
This podcast is based on learnings from the following resources -
Cuncic, A. (2023, December 6). How to manage public speaking anxiety. Verywell Mind. https://www.verywellmind.com/tips-for-managing-public-speaking-anxiety-3024336
Gershman, S. (2019, September 17). To overcome your fear of public speaking, stop thinking about yourself. Harvard Business Review. https://hbr.org/2019/09/to-overcome-your-fear-of-public-speaking-stop-thinking-about-yourself
Mayo Clinic Staff. (2024, December 20). Fear of public speaking: How can I overcome it? Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/specific-phobias/expert-answers/fear-of-public-speaking/faq-20058416
Montijo, S. (2022, March 8). Public speaking anxiety: What is it and tips to overcome it. Psych Central. https://psychcentral.com/anxiety/public-speaking-anxiety
Moseley, J. (n.d.). How I overcame my fear of public speaking [Video]. TEDx Talks. YouTube. https://www.youtube.com/watch?v=aImrjNPrh30
National Social Anxiety Center. (n.d.). Public speaking anxiety. https://nationalsocialanxietycenter.com/social-anxiety/public-speaking-anxiety/
Reddit user dannyjerome0. (n.d.). Speaking anxiety is killing my career [Online forum post]. Reddit, r/PublicSpeaking. https://www.reddit.com/r/PublicSpeaking/comments/15el5w3/speaking_anxiety_is_killing_my_career/
U.S. National Library of Medicine. (2013). Cognitive behavioral therapy vs. exposure therapy in public speaking anxiety: A comparative clinical trial [PDF]. Neuropsychiatric Disease and Treatment, 9, 609–619. https://pmc.ncbi.nlm.nih.gov/articles/PMC3647380/pdf/ndt-9-609.pdf
The provided sources offer a comprehensive look at public speaking anxiety, also known as glossophobia, a common social fear. Dr. Justin Moseley's TEDx talk provides a personal narrative of overcoming this fear, emphasizing the importance of sharing one's message, shifting focus from self to audience, and transforming fear into excitement. This personal account is complemented by two medical/psychological articles from Psych Central and Verywell Mind, which define public speaking anxiety as a social anxiety disorder, list its psychological and physical symptoms, discuss potential causes and risk factors, and outline various treatment options, including therapy (CBT, VR exposure) and medication (beta-blockers like Propranolol). Finally, a Reddit thread from r/PublicSpeaking showcases real-world experiences, with individuals sharing their struggles, seeking advice, and discussing the effectiveness of various strategies, including medication, in managing their public speaking anxiety.
What started as a simple Slack bot turned into 8,000+ internal AI assistants—without a single data scientist on the team. In this episode, AI Product Manager Julian Sara Joseph shares how her team quietly scaled generative AI inside a large enterprise, built trust, and let the product speak for itself. No hype. No heavy marketing. Just real usage, smart engineering, and a platform that worked.
If you’ve ever wondered what it takes to build AI people actually use, this one’s for you.
Guest Speaker - Julian Joseph
Host - Priti Y.
AI on Trial is a special episode of Human in the Loop, where we take a deep dive into Model Autophagy Disorder (MAD)—a growing risk in artificial intelligence systems. From feedback loops to synthetic data overload, we unpack how models trained on their own outputs begin to degrade in performance and reliability. With real-world examples, emerging research, and ethical implications, this episode explores what happens when AI starts learning from itself—and what we can do to prevent it.
💡 Whether you're an AI engineer, researcher, or just AI-curious, this episode gives you the tools to recognize, explain, and respond to MAD.
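If you want to see the core feedback-loop effect for yourself, here's a toy simulation (our sketch, not from the episode or the papers; sample sizes and generation counts are arbitrary) of a model repeatedly trained on its own outputs:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)   # "real" data

# Fit a Gaussian, sample from the fit, refit on the samples, repeat.
# The spread tends to drift and shrink across generations, a toy
# version of the degradation MAD describes.
for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=50)        # train on own outputs
    print(f"gen {generation}: std = {data.std():.3f}")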
Featured Tool:
Try out the companion tool featured in the episode:
MADGuard – AI Explorer
A lightweight diagnostic app to visualize feedback loops, compare input sources, and score MAD risks.
Read the deeper explainer blog: 🔗 What Is Model Autophagy Disorder? – Human in Loop Blog
A plain-language breakdown of the research, risks, and terminology.
Other Detection Tools & Frameworks
DVC – Data Version Control
https://dvc.org/
Label Studio – Open-Source Data Labeling Tool
https://labelstud.io/
DetectGPT – Classify AI-generated Text
https://arxiv.org/abs/2301.11305
Grover – Neural Fake News Detector (Allen AI)
https://rowanzellers.com/grover/
References -
Alemohammad et al. (2023). Self-Consuming Generative Models Go MAD. Paper introducing MAD and simulating performance collapse in generative models. 🔗 arXiv:2307.01850
Yang et al. (2024). Model Autophagy Analysis to Explicate Self-consumption. Bridges human-AI interaction with MAD dynamics. 🔗 arXiv:2402.11271
UCLA Livescu Initiative – Model Autophagy Disorder (MAD) Portal. Research hub on epistemic risk and feedback loop governance. 🔗 https://livescu.ucla.edu/model-autophagy-disorder/
Earth.com (2024) – Could Generative AI Go MAD and Wreck Internet Data? Reports on future data degradation and the "hall of mirrors" risk. 🔗 https://www.earth.com/news/could-generative-ai-go-mad-and-wreck-internet-data/
New York Times (2023) – Avianca Airline Lawsuit Involving ChatGPT Briefs. Legal case where synthetic text led to real-world sanctions. 🔗 https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
In this episode, we explore the measurable expenses of pushing AI boundaries, from CO2 emissions to workforce stress, drawing on recent research.
Key Learnings:
Speaker - Priti — Founder, Human in Loop Podcasts
Check the references below. Dig deeper if you’d like! We all see things our own way, so feel free to explore.
References:
Discover how Agentic AI moves beyond reactive systems to proactive, goal-driven intelligence.
Host Priti and guest Vino explore real-world uses, ethical challenges, and human-AI collaboration.
Learn how Agentic AI supports productivity, especially for neurodivergent individuals, and what's next in multi-agent systems.
Key Learnings:
What defines Agentic AI and how it differs from traditional AI
Real-world applications and productivity benefits
Guest Speaker - Vinodhini Ravikumar — Engineering Leader at Microsoft & Founder of Mind Mosaic AI
Check the references below. Dig deeper if you’d like! We all see things our own way, so feel free to explore.
References -
ReAct: Synergizing Reasoning and Acting in Language Models (ICLR 2023) — Yao et al.
Talk to Right Specialists: Routing and Planning in Multi-agent System for Question Answering (2025) — Wu et al.
Language Models as Zero-Shot Planners (ICML 2022) — Huang et al.
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances (2022) — Ahn et al.
Code as Policies: Language Model Programs for Embodied Control (ICRA 2023) — Liang et al.
BOLAA: Benchmarking and Orchestrating LLM-Augmented Autonomous Agents (ICLR 2024 Workshop) — Liu et al.
Agent AI Towards a Holistic Intelligence (2023) — Huang et al.
Ever Wonder Why Stuff Slips Your Mind So Quick? Seriously, why do we blank on things we just learned? It’s not you; it’s just your brain’s quirky wiring. Microlearning’s here to save the day, and it’s crazy easy. Let’s dig into it on this podcast!
Note: This is a demo episode with voices from an AI avatar, packed with solid research—check the references below. Dig deeper if you’d like! We all see things our own way, so feel free to explore.
References -