A podcast that takes a deeper look at today’s most important issues in cyber security, and beyond.
Artificial intelligence is often described as a "black box": we can see what goes in and what comes out, but not how the model arrives at its results.
And, unlike conventional software, large language models are non-deterministic. The same input can produce different outputs.
This makes it hard to secure AI systems, and to assure their users that they are secure.
There is already growing evidence that malicious actors are using AI to find vulnerabilities, carry out reconnaissance, and fine-tune their attacks.
But the risks posed by AI systems themselves could be even greater.
Our guest this week has set out to secure AI by developing red-team testing methods that take into account both the nature of AI and the unique risks it poses.
Peter Garraghan is a professor at Lancaster University, and founder and CEO of Mindgard.
Interview by Stephen Pritchard