Home
Categories
EXPLORE
True Crime
Comedy
Society & Culture
Business
News
Sports
TV & Film
About Us
Contact Us
Copyright
© 2024 PodJoint
Podjoint Logo
US
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/13/43/bd/1343bdee-c3b3-4b88-8841-ff026aeafd6e/mza_12244864361700508791.jpg/600x600bb.jpg
Pop Goes the Stack
F5
17 episodes
1 day ago
Explore the evolving world of application delivery and security. Each episode will dive into technologies shaping the future of operations, analyze emerging trends, and discuss the impacts of innovations on the tech stack.
Show more...
Technology
RSS
All content for Pop Goes the Stack is the property of F5 and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Explore the evolving world of application delivery and security. Each episode will dive into technologies shaping the future of operations, analyze emerging trends, and discuss the impacts of innovations on the tech stack.
Show more...
Technology
https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/13/43/bd/1343bdee-c3b3-4b88-8841-ff026aeafd6e/mza_12244864361700508791.jpg/600x600bb.jpg
Crossing the streams
Pop Goes the Stack
20 minutes
4 weeks ago
Crossing the streams

Prompt injection isn't some new exotic hack. It’s what happens when you throw your admin console and your users into the same text box and pray the intern doesn’t find the keys to production. Vendors keep chanting about “guardrails” like it’s a Harry Potter spell, but let’s be real—if your entire security model is “please don’t say ignore previous instructions,” you’re not doing security, you’re doing improv. 


So we're digging into what it actually takes to keep agentic AI from dumpster-diving its own system prompts: deterministic policy engines, mediated tool use, and maybe—just maybe—admitting that your LLM is not a CISO. Because at the end of the day, you can’t trust a probabilistic parrot to enforce your compliance framework. That’s how you end up with a fax machine defending against a DDoS—again.


The core premise here is that prompt injection is not actually injection, it's system prompt manipulation—but it's not a bug, it's by design. There's a GitHub repo full of system prompts extracted by folks and a number of articles on "exfiltration" of system prompts. Join F5's Lori MacVittie, Joel Moses, and Jason Williams as they explain why it's so easy, why it's hard to prevent, and possible mechanisms for constraining AI to minimize damage. Cause you can't stop it. At least not yet. 

Pop Goes the Stack
Explore the evolving world of application delivery and security. Each episode will dive into technologies shaping the future of operations, analyze emerging trends, and discuss the impacts of innovations on the tech stack.