Only 50% of companies monitor their ML systems. Building observability for AI is not simple: it goes beyond 200 OK pings. In this episode, Sylvain Kalache sits down with Conor Bronsdon (Galileo) to unpack why observability, monitoring, and human feedback are the missing links to making large language models (LLMs) reliable in production. Conor dives into the shift from traditional test-driven development to evaluation-driven development, where metrics like context adherence, completeness, and ac...
I Want My Shoes Fast! Observability, SRE Burnout, and OTel with Dynatrace’s Adriana Villela
Humans of Reliability
34 minutes
8 months ago
In this episode, we sit down with Adriana Villela, Principal DevRel at Dynatrace and OpenTelemetry contributor, to break down how observability impacts reliability. We dive into what contributes to SRE burnout and how managers can create psychologically safer spaces for responders. Adriana also shares her perspective on AI as an observability buddy for navigating incidents. SHOW LINKS: Video and takeaways · Adriana's podcast: Geeking Out with Adriana · Podcast with Hazel Weakly mentio...