Just Now Possible
Teresa Torres
8 episodes
2 days ago
How AI products come to life—straight from the builders themselves. In each episode, we dive deep into how teams spotted a customer problem, experimented with AI, prototyped solutions, and shipped real features. We dig into everything from workflows and agents to RAG and evaluation strategies, and explore how their products keep evolving. If you’re building with AI, these are the stories for you.
Technology, Business, Entrepreneurship
Debugging AI Products: From Data Leakage to Evals with Hamel Husain
1 hour 26 minutes
1 month ago
How do you know if your AI product is actually any good? Hamel Husain has been answering that question for over 25 years. As a former machine learning engineer and data scientist at Airbnb and GitHub (where he worked on research that paved the way for GitHub Copilot), Hamel has spent his career helping teams debug, measure, and systematically improve complex systems.

In this episode, Hamel joins Teresa Torres to break down the craft of error analysis and evaluation for AI products. Together, they trace his journey from forecasting guest lifetime value at Airbnb to consulting with startups like Nurture Boss, an AI-native assistant for apartment complexes. Along the way, they dive into:

- Why debugging AI starts with thinking like a scientist
- How data leakage undermines models (and how to spot it)
- Using synthetic data to stress-test failure modes
- When to rely on code-based assertions vs. LLM-as-judge evals
- Why your CI/CD suite should always include known broken cases
- How to prioritize failure modes without drowning in them

Whether you're a product manager, engineer, or designer, this conversation offers practical, grounded strategies for making your AI features more reliable, and for staying sane while you do it.
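For readers who want a concrete picture of the assertion-vs-judge distinction the episode covers, here is a minimal Python sketch. It is not code from the episode: the leasing-assistant scenario (loosely echoing the Nurture Boss example), the specific checks, and the `call_llm` placeholder are all illustrative assumptions.

```python
# Illustrative sketch only: contrasts the two eval styles discussed
# in the episode. Function names, checks, and sample data are assumptions.
import json
import re


def _is_json(text: str) -> bool:
    """Return True if the text parses as JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False


def assertion_eval(output: str) -> dict:
    """Code-based assertions: cheap, deterministic checks for hard
    requirements (valid format, no leaked template variables)."""
    checks = {
        "is_valid_json": _is_json(output),
        "no_template_leak": "{{" not in output,  # unrendered template vars
        "mentions_unit": bool(re.search(r"\bunit\s*\d+", output, re.I)),
    }
    return {"passed": all(checks.values()), "checks": checks}


def llm_judge_eval(question: str, output: str, call_llm) -> bool:
    """LLM-as-judge: for fuzzy criteria (accuracy, helpfulness, tone)
    that assertions cannot capture. `call_llm` is a stand-in for
    whatever model client you use; a binary PASS/FAIL verdict keeps
    judge outputs easy to audit against human labels."""
    prompt = (
        "You are grading a reply from an AI leasing assistant.\n"
        f"Question: {question}\n"
        f"Reply: {output}\n"
        "Answer PASS if the reply is accurate and helpful, otherwise FAIL."
    )
    return call_llm(prompt).strip().upper().startswith("PASS")


if __name__ == "__main__":
    sample = '{"reply": "Unit 204 is available to tour on Tuesday at 2pm."}'
    print(assertion_eval(sample))  # deterministic checks run first and free
    # llm_judge_eval would then run only on cases assertions cannot decide,
    # e.g. llm_judge_eval("When can I tour?", sample, call_llm=my_client)
```

Read this way, the episode's point about CI including broken cases amounts to keeping a fixed set of known-failure inputs in the test suite and asserting, on every change, that they stay fixed.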