Human in loop podcasts
Priti Y.
16 episodes
4 days ago
Educational podcasts blending human research and AI-generated content, promoting curiosity, critical thinking, and lifelong learning.
Technology
RSS
All content for Human in loop podcasts is the property of Priti Y. and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (16/16)
Demystifying Overfitting & Model Complexity in ML

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course


In this episode, we’ll learn what overfitting is, how to detect it using loss curves, why model complexity matters, and how techniques like L2 regularization help improve generalization to unseen data.
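The L2 idea mentioned above can be shown in a few lines. This is a toy sketch (synthetic data, numpy only, not code from the course): a high-degree polynomial is fit with and without an L2 penalty, and the penalty visibly shrinks the weight vector, which is what curbs overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy target

X = np.vander(x, 10)  # degree-9 polynomial features: lots of model complexity

# Unregularized least squares: free to use huge weights to chase the noise.
w_plain = np.linalg.lstsq(X, y, rcond=None)[0]

# L2-regularized (ridge) closed form: (X^T X + lam * I) w = X^T y.
lam = 1e-3
w_l2 = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# The penalty shrinks the weight vector; smaller weights generalize better.
print(np.linalg.norm(w_plain), np.linalg.norm(w_l2))
```

For full-rank problems the ridge solution's norm strictly decreases as the penalty grows, so the second number is always the smaller one.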


Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
6 minutes 22 seconds

Data Prep in Machine Learning: Training, Validation & Test Sets Explained

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course


In this episode, we break down how machine learning models learn effectively by splitting data into training, validation, and test sets. Understand the purpose of each set, why this separation matters, and how it helps reduce overfitting while improving generalization.
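The three-way split described here can be sketched directly. A toy example (synthetic data, numpy only; the 70/15/15 ratios are a common choice, not a rule from the course): shuffle first, then slice.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(1000, 5))      # 1,000 examples with 5 features each

idx = rng.permutation(len(data))       # shuffle indices before splitting
train_end = int(0.70 * len(data))      # first 70% -> training set
val_end = int(0.85 * len(data))        # next 15% -> validation set

train = data[idx[:train_end]]          # fit model parameters here
val = data[idx[train_end:val_end]]     # tune hyperparameters here
test = data[idx[val_end:]]             # touch only once, for the final score

print(len(train), len(val), len(test))  # 700 150 150
```

Because every index appears in exactly one slice, no example leaks from one set into another.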


Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
6 minutes 26 seconds

Working with Categorical Data in Machine Learning

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course


This episode explores how categorical data is transformed into usable features for machine learning models. From understanding one-hot encoding to tackling real-world labeling challenges and applying feature crosses, we break down key techniques to handle non-numeric data effectively. Perfect for anyone aiming to build better models using human-defined categories.
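One-hot encoding and feature crosses, mentioned above, fit in a few lines of plain Python. The categories here are hypothetical toy values, not data from the course.

```python
def one_hot(value, vocabulary):
    """Return a one-hot vector for `value` over an ordered vocabulary."""
    return [1 if v == value else 0 for v in vocabulary]

colors = ["red", "green", "blue"]
sizes = ["S", "M"]

print(one_hot("green", colors))  # [0, 1, 0]

# A feature cross pairs every (color, size) combination into one new feature,
# letting even a linear model learn interactions between the two categories.
crossed_vocab = [f"{c}_x_{s}" for c in colors for s in sizes]
print(one_hot("green_x_M", crossed_vocab))  # [0, 0, 0, 1, 0, 0]
```

Note the cost: the crossed vocabulary has len(colors) × len(sizes) entries, which is why crosses are used selectively.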


Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
7 minutes 18 seconds

Classification Metrics Made Simple: Precision, Recall & More

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course

In this episode, we dive into the world of classification in machine learning—exploring how models make decisions and how we evaluate their performance. You'll learn what a confusion matrix is, how thresholds affect predictions, and what metrics like accuracy, precision, recall, and F1 score really mean in practice. Whether you're new to ML or brushing up on the fundamentals, this episode will give you the clarity you need to confidently interpret model results.
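The metrics named above come straight from the confusion-matrix counts. A toy sketch (hypothetical labels and scores, no library calls) that also shows where the threshold enters:

```python
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_prob = [0.9, 0.2, 0.6, 0.4, 0.1, 0.7, 0.8, 0.3, 0.55, 0.45]

threshold = 0.5  # moving this trades precision against recall
y_pred = [1 if p >= threshold else 0 for p in y_prob]

# The four cells of the confusion matrix.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)           # of predicted positives, how many were right
recall = tp / (tp + fn)              # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

Raising the threshold converts some false positives into true negatives (precision up) at the cost of new false negatives (recall down).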

Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
7 minutes 27 seconds

First Steps with Numerical Data in Machine Learning

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course


Before feeding data into a machine learning model, it’s crucial to understand it. This episode walks you through the essential first steps: visualizing data, calculating basic statistics like mean and percentiles, and spotting outliers that could skew your model. Whether you're using pandas or plotting histograms, these techniques lay the foundation for effective ML pipelines.
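The first-look statistics mentioned above take only a few lines. A toy sketch with made-up values (numpy; pandas' `describe()` reports the same quantities in one call), including a common IQR rule of thumb for flagging outliers:

```python
import numpy as np

values = np.array([12, 14, 13, 15, 14, 13, 120, 14, 15, 13], dtype=float)

mean = values.mean()
p25, p50, p75 = np.percentile(values, [25, 50, 75])
iqr = p75 - p25

# Rule of thumb: flag points beyond 1.5 * IQR outside the quartiles.
outliers = values[(values < p25 - 1.5 * iqr) | (values > p75 + 1.5 * iqr)]

print(f"mean={mean:.1f}, median={p50}, outliers={outliers}")
```

Note how one outlier (120) drags the mean to 24.3 while the median stays at 14, which is exactly why both are worth checking before training.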


Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
8 minutes 32 seconds

Understanding Model Confidence: ROC, AUC & Prediction Bias

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course

In this episode, we go beyond accuracy and dive into how to evaluate model performance across all thresholds using ROC curves and AUC. We also break down prediction bias—why your model might look accurate but still be off-target—and how to detect early signs of bias caused by data, features, or training bugs. Tune in to learn how to make more reliable and fair classification models!
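AUC has a definition you can compute directly, with no curve plotting: the probability that a randomly chosen positive is scored above a randomly chosen negative. A toy sketch (hypothetical scores), plus the simple mean-prediction-vs-mean-label check for prediction bias:

```python
def auc(y_true, y_score):
    """AUC as the probability a random positive outranks a random negative."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 0, 1, 0, 0]
y_score = [0.9, 0.7, 0.6, 0.55, 0.3, 0.2]
print(auc(y_true, y_score))  # one positive (0.55) loses to one negative (0.6)

# Prediction bias: mean prediction minus mean label should be near zero.
bias = sum(y_score) / len(y_score) - sum(y_true) / len(y_true)
print(bias)
```

Because AUC only depends on rankings, it is the same at every threshold, which is what makes it a threshold-free summary.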

Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
7 minutes 10 seconds

Making Predictions with Logistic Regression

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course

In this episode, we break down logistic regression—a core algorithm used for classification. You'll learn how it calculates probabilities instead of direct values, what loss functions are used to measure mistakes, and how regularization helps prevent overfitting. It’s all about teaching machines to say “yes” or “no” with confidence.
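The two pieces described above, the probability calculation and the loss that scores mistakes, look like this in miniature. The weights here are hypothetical, not a trained model:

```python
import math

def sigmoid(z):
    """Squash a linear score into a probability between 0 and 1."""
    return 1 / (1 + math.exp(-z))

def log_loss(y, p):
    """Log loss: penalizes confident wrong answers far more than hesitant ones."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

w, b = [2.0, -1.0], 0.5          # hypothetical weights and bias
x = [1.0, 3.0]                   # one example
z = sum(wi * xi for wi, xi in zip(w, x)) + b
p = sigmoid(z)                   # probability of the positive class

print(f"p={p:.3f}, loss if y=1: {log_loss(1, p):.3f}")
```

In training, L2 regularization adds a penalty on the size of `w` to the loss, the same overfitting guard covered in the earlier episodes.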

Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
11 minutes 18 seconds

Gradient Descent & Hyperparameters

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course

What drives a machine learning model to learn? In this episode, we explore gradient descent, the optimization engine behind linear regression, and the crucial role of hyperparameters like learning rate, batch size, and epochs. Understand how models reduce error step by step, and why tuning hyperparameters can make or break performance. Whether you're a beginner or reviewing the basics, this episode brings clarity with real-world analogies and practical takeaways.
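The "reduce error step by step" loop can be shown on a one-parameter toy loss. The learning rate and epoch count below are hypothetical values, chosen only to illustrate the mechanics:

```python
def gradient_descent(target, w=0.0, learning_rate=0.1, epochs=20):
    """Minimize the squared error (w - target)^2 by following its gradient."""
    for _ in range(epochs):
        grad = 2 * (w - target)      # derivative of (w - target)^2
        w -= learning_rate * grad    # step downhill, scaled by the learning rate
    return w

print(gradient_descent(target=3.0))  # approaches 3.0 as epochs grow
```

Each step shrinks the remaining error by a constant factor here; a learning rate that is too large makes that factor exceed 1 and the error grows instead, which is the "make or break" tuning the episode describes.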


Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
7 minutes 19 seconds

Linear vs. Logistic Regression: A Quick Guide

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course

Additional resources:

• Linear Regression - Yale University
• Linear and Logistic Regression - Stanford University

In this episode, we'll demystify Linear Regression, exploring its power in predicting continuous values and understanding its core mechanics, from the "best-fit line" to the critical role of the least squares method. Discover real-world applications where predicting "how much" is key, and learn how to evaluate its performance effectively.

Then, we'll pivot to Logistic Regression, a cornerstone for classification tasks. Understand how it tackles "yes/no" questions by predicting probabilities using the elegant sigmoid function. We'll delve into its distinct mathematical underpinnings and uncover its vital role in scenarios ranging from spam detection to medical diagnostics, alongside its unique evaluation metrics.
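The contrast can be compressed into one toy example (made-up points, plain Python): least squares fits the "how much" line, and the sigmoid turns the same kind of linear score into a "yes/no" probability.

```python
import math

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# Least-squares slope and intercept for y = a*x + b.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"linear: y = {a:.2f}x + {b:.2f}")

# Logistic regression reuses a linear score but squashes it to a probability.
prob = 1 / (1 + math.exp(-(a * 2.5 + b)))
print(f"logistic-style: P(class=1 | x=2.5) = {prob:.2f}")
```

The weights of a real logistic model are fit with log loss rather than least squares; this sketch only shows how the sigmoid changes the output's meaning.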

Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
7 minutes 20 seconds

Introduction to Machine Learning

Based on the “Machine Learning” crash course from Google for Developers: https://developers.google.com/machine-learning/crash-course

In this episode, we explore what machine learning (ML) is and why it's at the heart of today’s most innovative technologies, from language translation and content recommendation to autonomous vehicles and generative AI.

You’ll learn how ML shifts the problem-solving paradigm: instead of manually programming every rule, we teach software to learn from data. Through relatable examples like predicting rainfall, we compare traditional methods with data-driven ML models that uncover patterns and make smart predictions.

Whether you’re new to AI or brushing up on core concepts, this episode lays a strong foundation for understanding how ML helps answer complex questions and power real-world applications.

Disclaimer: This podcast is generated using an AI avatar voice. At times, you may notice overlapping sentences or background noise. That said, all content is directly based on the official course material to ensure accuracy and alignment with the original learning experience.

3 months ago
8 minutes 21 seconds

Overcome Your Fear of Public Speaking

This podcast is based on learnings from the following resources:


Cuncic, A. (2023, December 6). How to manage public speaking anxiety. Verywell Mind. https://www.verywellmind.com/tips-for-managing-public-speaking-anxiety-3024336

Gershman, S. (2019, September 17). To overcome your fear of public speaking, stop thinking about yourself. Harvard Business Review. https://hbr.org/2019/09/to-overcome-your-fear-of-public-speaking-stop-thinking-about-yourself

Mayo Clinic Staff. (2024, December 20). Fear of public speaking: How can I overcome it? Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/specific-phobias/expert-answers/fear-of-public-speaking/faq-20058416

Montijo, S. (2022, March 8). Public speaking anxiety: What is it and tips to overcome it. Psych Central. https://psychcentral.com/anxiety/public-speaking-anxiety

Moseley, J. (n.d.). How I overcame my fear of public speaking [Video]. TEDx Talks. YouTube. https://www.youtube.com/watch?v=aImrjNPrh30

National Social Anxiety Center. (n.d.). Public speaking anxiety. https://nationalsocialanxietycenter.com/social-anxiety/public-speaking-anxiety/

Reddit user dannyjerome0. (n.d.). Speaking anxiety is killing my career [Online forum post]. Reddit, r/PublicSpeaking. https://www.reddit.com/r/PublicSpeaking/comments/15el5w3/speaking_anxiety_is_killing_my_career/

U.S. National Library of Medicine. (2013). Cognitive behavioral therapy vs. exposure therapy in public speaking anxiety: A comparative clinical trial [PDF]. Neuropsychiatric Disease and Treatment, 9, 609–619. https://pmc.ncbi.nlm.nih.gov/articles/PMC3647380/pdf/ndt-9-609.pdf


The provided sources offer a comprehensive look at public speaking anxiety, also known as glossophobia, a common social fear. Dr. Justin Moseley's TEDx talk provides a personal narrative of overcoming this fear, emphasizing the importance of sharing one's message, shifting focus from self to audience, and transforming fear into excitement. This personal account is complemented by two medical/psychological articles from Psych Central and Verywell Mind, which define public speaking anxiety as a social anxiety disorder, list its psychological and physical symptoms, discuss potential causes and risk factors, and outline various treatment options, including therapy (CBT, VR exposure) and medication (beta-blockers like Propranolol). Finally, a Reddit thread from r/PublicSpeaking showcases real-world experiences, with individuals sharing their struggles, seeking advice, and discussing the effectiveness of various strategies, including medication, in managing their public speaking anxiety.

3 months ago
17 minutes 4 seconds

From Slack Bot to 8,000 Co-Pilots—In 6 Months (Part 1)

What started as a simple Slack bot turned into 8,000+ internal AI assistants—without a single data scientist on the team. In this episode, AI Product Manager Julian Sara Joseph shares how her team quietly scaled generative AI inside a large enterprise, built trust, and let the product speak for itself. No hype. No heavy marketing. Just real usage, smart engineering, and a platform that worked.

If you’ve ever wondered what it takes to build AI people actually use, this one’s for you.

Guest Speaker - Julian Joseph

Host - Priti Y.
4 months ago
13 minutes 49 seconds

AI on Trial: Decoding the Autophagy Disorder

AI on Trial is a special episode of Human in the Loop, where we take a deep dive into Model Autophagy Disorder (MAD)—a growing risk in artificial intelligence systems. From feedback loops to synthetic data overload, we unpack how models trained on their own outputs begin to degrade in performance and reliability. With real-world examples, emerging research, and ethical implications, this episode explores what happens when AI starts learning from itself—and what we can do to prevent it.

💡 Whether you're an AI engineer, researcher, or just AI-curious, this episode gives you the tools to recognize, explain, and respond to MAD.
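The degradation loop described above can be made concrete with an entirely hypothetical toy: a Gaussian "model" that samples, keeps only its most typical outputs (a stand-in for likelihood-seeking generation), and retrains on that synthetic set. Diversity collapses within a few generations.

```python
import random

random.seed(0)

def generate(mean, std, n):
    """Toy 'model': draw n samples from a Gaussian with the current parameters."""
    return [random.gauss(mean, std) for _ in range(n)]

std_history = []
mean, std = 0.0, 1.0
for gen in range(5):
    outputs = generate(mean, std, 200)
    # Keep only the most "typical" half of the outputs, then retrain on it.
    outputs.sort(key=lambda v: abs(v - mean))
    kept = outputs[:100]
    mean = sum(kept) / len(kept)
    std = (sum((v - mean) ** 2 for v in kept) / len(kept)) ** 0.5
    std_history.append(std)
    print(f"generation {gen}: std={std:.4f}")
```

This is a cartoon of the MAD dynamics in the papers below, not their experimental setup; the real results concern large generative models, but the shrinking-variance mechanism is the same.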

Featured Tool:
Try out the companion tool featured in the episode:
MADGuard – AI Explorer
A lightweight diagnostic app to visualize feedback loops, compare input sources, and score MAD risks.

Read the deeper explainer blog: 🔗 What Is Model Autophagy Disorder? – Human in Loop Blog
A plain-language breakdown of the research, risks, and terminology.

Other Detection Tools & Frameworks

• DVC – Data Version Control
  https://dvc.org/

• Label Studio – Open-Source Data Labeling Tool
  https://labelstud.io/

• DetectGPT – Classify AI-generated Text
  https://arxiv.org/abs/2301.11305

• Grover – Neural Fake News Detector (Allen AI)
  https://rowanzellers.com/grover/

• SynthID – AI Watermarking by DeepMind

References -

• Alemohammad et al. (2023). Self-Consuming Generative Models Go MAD. Paper introducing MAD and simulating performance collapse in generative models. 🔗 arXiv:2307.01850

• Yang et al. (2024). Model Autophagy Analysis to Explicate Self-consumption. Bridges human-AI interaction with MAD dynamics. 🔗 arXiv:2402.11271

• UCLA Livescu Initiative – Model Autophagy Disorder (MAD) Portal. Research hub on epistemic risk and feedback loop governance. 🔗 https://livescu.ucla.edu/model-autophagy-disorder/

• Earth.com (2024) – Could Generative AI Go MAD and Wreck Internet Data? Reports on future data degradation and the "hall of mirrors" risk. 🔗 https://www.earth.com/news/could-generative-ai-go-mad-and-wreck-internet-data/

• New York Times (2023) – Avianca Airline Lawsuit Involving ChatGPT Briefs. Legal case where synthetic text led to real-world sanctions. 🔗 https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html




5 months ago
6 minutes 14 seconds

Beyond the Hype: The Invisible Cost of AI Curiosity

In this episode, we explore the measurable expenses of pushing AI boundaries, from CO2 emissions to workforce stress, drawing on recent research.

Key Learnings:

• The energy cost of LLMs, with training and usage emitting thousands of tons of CO2, comparable to powering entire countries by 2026.
• Career impacts, including stress and adaptation demands on professionals navigating automation and new roles.
• Operational challenges, such as mitigating hallucination and bias, requiring resource-intensive techniques like retrieval methods and dataset curation.
• The financial scale of AI investment, with $1 trillion at risk, contrasted by potential environmental gains if experimentation focuses on efficiency.
• The importance of evaluating these costs independently to understand their implications for AI/ML, data management, and career trajectories.

Speaker - Priti — Founder, Human in Loop Podcasts

Check the references below. Dig deeper if you’d like! We all see things our own way, so feel free to explore.

References:

• LLM Reliability (Rush Shahani).
• Carbon emissions of the ChatGPT usage: environmental impacts of the ChatGPT in different regions (Scientific Reports, 2024).
• Explained: Generative AI’s environmental impact (MIT News, January 2025).
• 9 top AI and machine learning trends to watch in 2025 (TechTarget, January 2025).
• Energy and Policy Considerations for Deep Learning in NLP (University of Massachusetts, 2019) — relevant for historical energy context.
• ChatGPT says our GPUs are melting as it puts limit on image generation requests (The Verge, March 2025) — relevant for GPU strain from trends.
6 months ago
9 minutes 11 seconds

The Rise of Agentic AI with Vinodhini Ravikumar

Discover how Agentic AI moves beyond reactive systems to proactive, goal-driven intelligence. Host Priti and guest Vino explore real-world uses, ethical challenges, and human-AI collaboration. Learn how Agentic AI supports productivity, especially for neurodivergent individuals, and what's next in multi-agent systems.

Key Learnings:

• What defines Agentic AI and how it differs from traditional AI
• Real-world applications and productivity benefits
• Ethical concerns: bias, safety, and explainability
• The role of intelligent routing and agent collaboration
• Tools and frameworks to start building your own AI agents

Guest Speaker - Vinodhini Ravikumar — Engineering Leader at Microsoft & Founder of Mind Mosaic AI

Check the references below. Dig deeper if you’d like! We all see things our own way, so feel free to explore.

References -

• ReAct: Synergizing Reasoning and Acting in Language Models (ICLR 2023) — Yao et al.
• Talk to Right Specialists: Routing and Planning in Multi-agent System for Question Answering (2025) — Wu et al.
• Language Models as Zero-Shot Planners (ICML 2022) — Huang et al.
• Do As I Can, Not As I Say: Grounding Language in Robotic Affordances (2022) — Ahn et al.
• Code as Policies: Language Model Programs for Embodied Control (ICRA 2023) — Liang et al.
• BOLAA: Benchmarking and Orchestrating LLM-Augmented Autonomous Agents (ICLR 2024 Workshop) — Liu
• Agent AI Towards a Holistic Intelligence (2023) — Huang et al.

6 months ago
11 minutes 3 seconds

Microlearning: Little Lessons, Big Wins

Ever wonder why stuff slips your mind so quick? Seriously, why do we blank on things we just learned? It’s not you—it’s your brain’s quirky way. Microlearning’s here to save the day, and it’s crazy easy. Let’s dig into it on this podcast!

Note: This is a demo episode with voices from an AI avatar, packed with solid research—check the references below. Dig deeper if you’d like! We all see things our own way, so feel free to explore.


References -

• The Effectiveness of Microlearning to Improve Students’ Learning Ability
• Exploring learner satisfaction and the effectiveness of microlearning in higher education
• Microlearning in Diverse Contexts: A Bibliometric Analysis
• Using Micro-learning on Mobile Applications to Increase Knowledge Retention and Work Performance: A Review of Literature
• Why Microlearning Is a Game-changer for Corporate Training
• Microlearning: Responding to New Learning Styles
• Seven Statistics that Prove the Value of Microlearning for Corporate Training
• Knowledge Retention: 8 Main Strategies To Improve It
• 8 studies that prove microlearning can't be ignored
• Microlearning in Health Professions Education: Scoping Review
• Ways Microlearning Increases Attention and Retention
• Comparing the Effectiveness of Microlearning and eLearning Courses in the Education of Future Teachers
7 months ago
11 minutes 29 seconds