
AI is changing product management, from how teams prototype to how they measure success. In this episode of the Data Neighbor Podcast, we’re joined by Aman Khan, Head of Product at Arize AI (LLM evaluation & observability). Aman breaks down the three emerging AI PM archetypes (AI-native PM, AI platform PM, and AI-powered PM), how to move from “vibe coding” to eval-driven development (EDD), and why aligning evals to business outcomes matters more than any single accuracy score. He also shares hard-won tactics for handling subjectivity in LLM outputs, setting user expectations in UX, and deciding when rigor is worth slowing down for (and when it isn’t).

In this episode, you’ll learn:
- The three ways AI shows up in PM work, and how those roles are converging.
- A practical ladder from “vibe checks” to EDD (evals in development and production), including LLM-as-a-judge and when to trust it.
- How to tie evals to business metrics (trust, value, speed) and resolve “good eval, bad outcome” conflicts.
- UX patterns for long-running agent tasks (progress, ETAs, checkpoints) that preserve trust.
- Where AI coding tools help most (and least) across engineers, PMs, and data teams.

Connect with Aman Khan:
LinkedIn: https://www.linkedin.com/in/amanberkeley/
🌐 Website: https://amank.ai
🏢 Arize AI: https://arize.com/

Connect with Shane, Sravya, and Hai (let us know which platform sent you!):
👉 Shane Butler: https://linkedin.openinapp.co/b02fe
👉 Sravya Madipalli: https://linkedin.openinapp.co/9be8c
👉 Hai Guan: https://linkedin.openinapp.co/4qi1r
#aiproductmanagement #aievals #llmobservability #productmanagement #datascience #mlops #aiagents #evaluation #productstrategy #dataneighbor #arizeai #llms