
AI is moving fast, but reliable agents are still rare. On this episode of the Data Neighbor Podcast, we sit down with Jigyasa Grover, ML Engineer at Uber, author of Sculpting Data for ML: The First Act of Machine Learning, and member of Google's ML Advisory Board, to unpack why most AI agents fail and what it really takes to build ones you can count on.

Jigyasa shares how to design, evaluate, and secure reliable agent systems - from memory management and adversarial testing to using human judgment without slowing down innovation.
Connect with the team (tell us YouTube sent you!):
- Shane Butler: https://linkedin.openinapp.co/b02fe
- Sravya Madipalli: https://linkedin.openinapp.co/9be8c
- Hai Guan: https://linkedin.openinapp.co/4qi1r

Connect with Jigyasa: https://www.linkedin.com/in/jigyasa-grover/

In this episode, Jigyasa explains how agents evolve beyond simple workflows into autonomous systems, why evals are at the heart of reliable AI, and how developers can prevent silent failures through better design, testing, and observability.

You'll learn about:
- Why most AI agents fail and how to engineer reliability from day one
- Workflow agents vs. LLM-based agents
- How evals, memory hygiene, and adversarial testing improve reliability
- When to use traditional ML instead of LLMs
- Designing for human judgment, security, and recovery in agent systems
#aipodcast #aiagents #aidevelopment #aiengineering #llm #mlops #datascience #agentdesign #workflowagents #memory #evaluation #productstrategy #aiproductmanagement #autonomousagents #aiethics #aideployments #reliableai #dataneighbor #jigyasagrover #agenticai