Join host Rohit (Facets Cloud) in conversation with Sanjeev Ganjihal, Senior Specialist Solutions Architect - Containers at AWS and an early Kubernetes expert. They discuss the rapid evolution of AI and DevOps, Kubernetes as the new operating system, generative AI in engineering, and the shifting landscape of roles like DevOps, SRE, and AIOps. Sanjeev shares practical advice on using AI assistants, agentic tools, and self-hosted models, and on the balancing act between automation, productivity, and upskilling in today’s cloud-native world.
This podcast features a discussion with Nathan Hamiel, Director of Research at Kudelski Security, an expert with 25 years in the cybersecurity space, focusing specifically on AI security.
The conversation centers on navigating the generative AI revolution with a grounded, security-first perspective, particularly for product developers and the security community.
Ultimately, the podcast serves as a grounding discussion for product engineers on how to build and integrate AI solutions in a secure and responsible manner, emphasizing that AI tools should be used to solve tasks effectively rather than chasing a path to superintelligence.
In this episode, Facets.cloud co-founders Rohit and Anshul dive deep into the Model Context Protocol (MCP), explaining how AI assistants have evolved from basic chat interfaces to using standardized tool connectors for AI-driven DevOps. You’ll learn best practices for designing MCP servers, naming conventions that reduce hallucinations, dry-run workflows for safe automation, and when and why to adopt MCP within your organization.
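To make the naming and dry-run ideas concrete, here is a minimal sketch of an MCP tool using the MCP TypeScript SDK. The server name, tool name, and parameters are hypothetical, and the exact SDK surface may vary by version; treat it as an illustration rather than a drop-in server.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical DevOps-flavored MCP server.
const server = new McpServer({ name: "deploy-tools", version: "0.1.0" });

// A descriptive, unambiguous tool name plus a precise description leaves the
// model less room to guess (and hallucinate) what the tool actually does.
server.tool(
  "restart_service_in_environment",
  "Restart a named service in a given environment. When dryRun is true, only report what would happen.",
  {
    service: z.string().describe("Service name, e.g. 'payments-api'"),
    environment: z.enum(["dev", "staging", "prod"]),
    dryRun: z.boolean().default(true).describe("Preview the action without applying it"),
  },
  async ({ service, environment, dryRun }) => {
    if (dryRun) {
      // Dry-run workflow: the agent (and the human reviewing it) sees the plan first.
      return { content: [{ type: "text", text: `[dry run] would restart ${service} in ${environment}` }] };
    }
    // ...call the real restart API here once the dry run has been reviewed...
    return { content: [{ type: "text", text: `Restarted ${service} in ${environment}` }] };
  },
);

// Expose the server over stdio so an MCP-capable assistant can connect to it.
await server.connect(new StdioServerTransport());
```

Defaulting dryRun to true means an over-eager agent previews a change before anything is applied, which is the spirit of the safe-automation workflows discussed in the episode.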
In the very first episode of the AI x DevOps Podcast, we dive into how AI is actually changing infrastructure, not hypothetically, but line by line.
Rohit Raveendran is joined by Vincent De Smet, DevOps engineer at Handshakes.ai, and together they explore what happens when LLMs start writing Terraform, the difference between deterministic and vibe-coded infra, and why CDK might offer a more AI-friendly future than raw HCL (a rough sketch below contrasts the two).
They talk about the trade-offs of trust, the future of platform engineering in an AI-powered world, and how inner-sourced guardrails could become the foundation for safe, scalable self-service. And yes, they touch on the scary parts too, like what happens when your AI agent starts doing more than you asked.
If you're wondering what it actually looks like to bring AI into DevOps without losing control, this one’s for you.
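To give a flavor of the HCL-versus-CDK contrast from this episode, here is a minimal CDK for Terraform (CDKTF) sketch in TypeScript. The stack and bucket names are made up, and it assumes the cdktf, constructs, and @cdktf/provider-aws packages; consider it an illustration of the style, not a recommended setup.

```typescript
import { App, TerraformStack } from "cdktf";
import { Construct } from "constructs";
import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";

// Roughly the CDKTF equivalent of this raw HCL:
//   resource "aws_s3_bucket" "artifacts" {
//     bucket = "my-team-artifacts"
//   }
class ArtifactsStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    new AwsProvider(this, "aws", { region: "us-east-1" });
    new S3Bucket(this, "artifacts", { bucket: "my-team-artifacts" });
  }
}

const app = new App();
new ArtifactsStack(app, "artifacts");
app.synth(); // emits Terraform JSON for the usual plan/apply pipeline
```

Because this is ordinary TypeScript, the type checking, abstractions, and idioms that LLMs already handle well carry over directly, which is one way to read the episode’s argument for CDK over raw HCL.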
Wondering how AI-ready your DevOps is? Take a 2-minute survey here to find out.