Join Florencio Cano Gabarda in Mind the Machine, where we dive into the critical intersection of AI security and safety. Explore how to protect AI systems from cyber threats, use AI to enhance IT security, and tackle the ethical challenges of AI safety—covering issues like ethics, bias, and trustworthiness. Tune in to navigate the complexities of building secure and safe AI.
Welcome everyone to this tenth episode of Mind the Machine, a podcast about AI security and safety. I’m Florencio Cano. Today we are going to talk about the security risks and security controls of LLM code generators.