Pop Goes the Stack
F5
17 episodes
1 day ago
Explore the evolving world of application delivery and security. Each episode will dive into technologies shaping the future of operations, analyze emerging trends, and discuss the impacts of innovations on the tech stack.
Technology
All content for Pop Goes the Stack is the property of F5 and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (17/17)
LLM-as-a-Judge: Bias, Preference Leakage, and Reliability

Here's the newest bright idea in AI: don’t pay humans to evaluate model outputs, let another model do it. This is the “LLM-as-a-judge” craze. Models not just spitting answers but grading them too, like a student slipping themselves the answer key. It sounds efficient, until you realize you’ve built the academic equivalent of letting someone’s cousin sit on their jury. The problem is called preference leakage. Li et al. nailed it in their paper “Preference Leakage: A Contamination Problem in LLM-as-a-Judge.” They found that when a model judges an output that looks like itself—same architecture, same training lineage, or same family—it tends to give a higher score. Not because the output is objectively better, but because it “feels familiar.” That’s not evaluation, that’s model nepotism. 

 

In this episode of Pop Goes the Stack, F5's Lori MacVittie, Joel Moses, and Ken Arora explore the concept of preference leakage in AI judgment systems. Tune in to understand the risks, the impact on the enterprise, and actionable strategies to improve model fairness, security, and reliability.
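The familiarity bias described above can be made concrete with a toy audit: score each judged output by whether it came from the judge's own model family, then measure the gap. The records and family labels below are invented for illustration, not real benchmark data.

```python
# Hypothetical preference-leakage audit: does a judge model score outputs
# from its own family higher than outputs from other families?
from statistics import mean

def leakage_gap(records, judge_family):
    """Mean score the judge gives its own family minus other families."""
    own = [r["score"] for r in records if r["family"] == judge_family]
    other = [r["score"] for r in records if r["family"] != judge_family]
    return mean(own) - mean(other)

# Illustrative scores only.
records = [
    {"family": "gpt", "score": 8.7},
    {"family": "gpt", "score": 9.1},
    {"family": "claude", "score": 8.2},
    {"family": "llama", "score": 8.0},
]

gap = leakage_gap(records, judge_family="gpt")
print(f"same-family score gap: {gap:+.2f}")  # a persistent positive gap suggests leakage
```

In practice you would run this over many prompts and judges; a gap that stays positive only when judge and author share a lineage is the "model nepotism" signal.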

1 day ago
22 minutes

We're on a brief hiatus, we'll be back soon

We’re on a brief hiatus. But don’t worry—we’ll be back shortly with more sharp insights, expert takes, and of course Lori's signature snark to help you navigate the ever-evolving world of application delivery and security.

2 weeks ago

Bots vs Business: AI Fraud & Defending Your Margins

A North Carolina musician was arrested after using AI to generate fake bands and bots to stream their songs—racking up over a billion plays and pocketing $10 million in fraudulent royalties. It’s the first U.S. case of AI-driven music streaming fraud, and it’s less about music than it is about bots exploiting business models. 


For enterprises, the lesson is simple: if you treat all traffic as legitimate, bots will eat your margins. With AI making bot behavior increasingly human-like, traditional defenses like packet filtering or basic behavior analysis are no longer enough.


In this episode, Lori MacVittie is joined by Principal Threat Researcher, Malcolm Heath, to dive into the challenges of defending against AI-driven bots, especially as tools and agentic AI make attacks more sophisticated. They uncover key strategies to identify and neutralize bots while exploring the evolving role of observability and behavioral detection in enterprise security.

Learn how you can stay ahead of the curve and keep your stack whole with additional insights on app security, multicloud, AI, and emerging tech:  https://www.f5.com/company/octo

Read more about the AI Music Fraud case: https://www.wired.com/story/ai-bots-streaming-music/
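One behavioral-detection angle hinted at above can be sketched in a few lines: bots often fire requests at metronomic intervals, while human traffic is jittery. The heuristic and threshold below are a toy illustration, not a production detector.

```python
# Toy behavioral heuristic: flag sessions whose inter-request timing is
# suspiciously uniform. Threshold and sample data are illustrative only.
from statistics import mean, pstdev

def looks_botlike(request_times, cv_threshold=0.1):
    """Return True if inter-request intervals are near-uniform."""
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    if len(gaps) < 3:
        return False  # not enough evidence to judge
    cv = pstdev(gaps) / mean(gaps)  # coefficient of variation of the gaps
    return cv < cv_threshold

print(looks_botlike([0.0, 1.0, 2.0, 3.0, 4.0]))  # perfectly periodic requests
print(looks_botlike([0.0, 0.7, 3.1, 3.4, 9.0]))  # human-like jitter
```

Real defenses layer many such signals (and AI-driven bots deliberately add jitter), which is why the episode argues single-signal filtering is no longer enough.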

3 weeks ago
21 minutes

Crossing the streams

Prompt injection isn't some new exotic hack. It’s what happens when you throw your admin console and your users into the same text box and pray the intern doesn’t find the keys to production. Vendors keep chanting about “guardrails” like it’s a Harry Potter spell, but let’s be real—if your entire security model is “please don’t say ignore previous instructions,” you’re not doing security, you’re doing improv. 


So we're digging into what it actually takes to keep agentic AI from dumpster-diving its own system prompts: deterministic policy engines, mediated tool use, and maybe—just maybe—admitting that your LLM is not a CISO. Because at the end of the day, you can’t trust a probabilistic parrot to enforce your compliance framework. That’s how you end up with a fax machine defending against a DDoS—again.


The core premise here is that prompt injection is not actually injection, it's system prompt manipulation—and it's not a bug, it's by design. There's a GitHub repo full of system prompts extracted by folks and a number of articles on "exfiltration" of system prompts. Join F5's Lori MacVittie, Joel Moses, and Jason Williams as they explain why it's so easy, why it's hard to prevent, and possible mechanisms for constraining AI to minimize damage. 'Cause you can't stop it. At least not yet.
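The "mediated tool use" idea above can be sketched simply: the model proposes tool calls, but a deterministic policy layer outside the model decides what actually runs. The tool names and policy table below are invented for illustration.

```python
# Sketch of a deterministic policy engine mediating LLM tool calls:
# the model never executes anything directly; this table does not care
# what the prompt said. Tool names and rules are hypothetical.
POLICY = {
    "search_docs": {"allowed": True},
    "send_email":  {"allowed": True, "max_recipients": 1},
    "delete_user": {"allowed": False},  # never callable from the LLM path
}

def mediate(tool_call):
    """Return (approved, reason) for a proposed {'tool': ..., 'args': ...} call."""
    rule = POLICY.get(tool_call["tool"])
    if rule is None or not rule["allowed"]:
        return False, "tool not permitted from model context"
    if tool_call["tool"] == "send_email":
        if len(tool_call["args"].get("to", [])) > rule["max_recipients"]:
            return False, "recipient limit exceeded"
    return True, "ok"

print(mediate({"tool": "delete_user", "args": {"id": 42}}))
print(mediate({"tool": "search_docs", "args": {"q": "quarterly report"}}))
```

The point of the design: even if an attacker fully hijacks the prompt, the blast radius is capped by code the model cannot talk its way around.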

4 weeks ago
20 minutes

Agentic APIs Have PTSD

Your APIs were designed for humans and orderly machines: clean request, tidy response, stateless, rate-limited. Then along came agentic AI—recursive, stateful, jittery little things that retry forever, chain calls together, and dream up new query paths at 3 a.m.

 

The result? Your APIs start looking less like infrastructure and more like trauma patients. Rate limits collapse. Monitoring floods. Security controls meant for human logins don’t make sense when the caller is a bot acting on its own intent. 

 

The punchline: enterprises aren’t serving users anymore, they’re serving swarms of other AIs. If you don’t rethink throttling, observability, and runtime policy, your endpoints are going to get steamrolled.
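Rethinking throttling usually starts with something like a token bucket in front of the endpoint. A minimal sketch, with illustrative parameters (a real deployment would key buckets per agent identity, not globally):

```python
# Minimal token-bucket throttle for retry-happy agent traffic.
# rate/capacity values are illustrative, not recommendations.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(20)]  # a burst of 20 back-to-back calls
print(results.count(True))  # roughly the burst capacity gets through
```

Against a recursive agent that retries forever, the interesting design question is what the agent does with the rejections; pairing the bucket with `Retry-After` hints tends to tame well-behaved clients.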

 

Join host Lori MacVittie and F5 guest Connor Hicks to explore how enterprises can adapt and thrive—hit play now to future-proof your APIs!

Read AI Agentic workflows and Enterprise APIs: Adapting API architectures for the age of AI agents: https://arxiv.org/abs/2502.17443

1 month ago
22 minutes

When Context Eats Your Architecture

Anthropic lobbed a million-token grenade into the coding wars, and suddenly every AI startup with a “clever context management” pitch looks like it’s selling floppy disks in a cloud world. If your entire differentiator was “we chunk code better than the other guy,” congratulations—you’ve been chunked. This is what happens when the model itself shows up to the fight with a bigger backpack. 

 

But here’s the twist—this isn’t just about writing bigger code files without losing track of your variables. For enterprises, context size is an architectural shift. A million-token window means you can shove your entire compliance manual, last year’s customer interactions, and that dusty COBOL spec into one call—no brittle session stitching, no RAG duct tape. It collapses architectural complexity… and replaces it with new headaches: governance of massive payloads, cost blowouts if you treat tokens like they’re free, and rethinking model routing strategies. Context isn’t just memory anymore—it’s a first-class infrastructure decision. 

 

Press play to hear F5 hosts Lori MacVittie and Joel Moses, joined by special guest Vishal Murgai, unravel what's next for enterprise AI.

1 month ago
21 minutes

The DPU Awakening: Silicon Muscle for AI Mayhem

This week on Pop Goes the Stack, we crack open the next frontier of enterprise infrastructure: DPUs (Data Processing Units). AI factories aren’t just stressing your network—they’re setting it on fire. With east-west traffic exploding and inference storms growing by the day, CPUs and legacy firewalls just can’t keep up. Enter the DPU: purpose-built to offload, secure, and accelerate the chaos. 

 

We break down:

- Why AI workloads are crushing traditional networking and security architectures 

- How DPUs deliver line-rate telemetry, policy enforcement, and microsegmentation 

- Where companies like NVIDIA (BlueField-3), AMD (Pensando), Intel, Marvell, Fungible, Microsoft (Azure Boost), and Cisco (Hypershield) are racing to redefine infrastructure 

- Why financial institutions, hospitals, and hyperscalers are already deploying DPUs at scale 

- What this means for your observability, east-west controls, and AI agent governance 

 

The $5.5B DPU market isn’t a footnote—it’s a warning shot. If your stack isn’t built to segment, inspect, and enforce in real-time, it’s not ready for AI. And the next wave of agentic systems isn’t going to wait.

1 month ago
22 minutes

Less small talk, more substance

Everyone’s chasing generative AI for flash, but a quiet revolution is happening where the real money is: predictive AI. In this episode, F5's Lori MacVittie, Joel Moses, and Dmitry Kit dig into how a team of researchers used machine learning—not an LLM—to design a paint that passively cools buildings by up to 20 degrees. No prompts. No hallucinations. Just real-world impact through smart pattern recognition. Listen in as we unpack what this means for enterprise leaders chasing efficiency, and why your ops and sales teams should be looking for better recipes—not better word salad. It's not about generating magic. It's about discovering truth at scale.
  
Learn how you can stay ahead of the curve and keep your stack whole with additional insights on app security, multicloud, AI, and emerging tech:  https://www.f5.com/company/octo

1 month ago
20 minutes

The perimeter has shifted

The perimeter isn’t where you left it. Agents are on the move, APIs are on fire, and your infrastructure is about as ready for this as a fax machine is for a DDoS. In this week's episode, Lori, Joel and F5 Field CISO, Chuck Herrin, are talking guardrails—real ones—for the age of agentic AI.

 

Because while your dashboards were busy sipping metrics, the vendors got serious. Recent product launches show a clear pivot toward AI-specific defenses and infrastructure support like: an AI firewall, AI runtime protection, semantic observability, and AI policy and rule generation. 

 

Turns out, stateless APIs weren’t built for recursive agents with infinite retries and zero chill. If your architecture still thinks AI means ‘autocomplete,’ you’re going to want to tune in for actionable steps to stay ahead in an AI-dominated future. It’s not just about security. It’s survival. Let’s go. 

2 months ago
25 minutes

AI Joel: Who owns him?

In this episode of Pop Goes the Stack, F5's Lori MacVittie, Joel Moses, and Ken Arora delve into the complex issue of ownership with respect to your AI-driven digital twin. As organizations consider the use of AI avatars and AI twins, explore the nuances of employment contracts, intellectual property, and the potential for creating AI models based on an employee's data. The discussion ranges from corporate IP ownership to legal precedents from the entertainment industry, touching on futuristic concepts like posthumous digital replicas and their ethical implications. Tune in to find out how your everyday work data could be shaping the AI models of tomorrow and who owns the rights in this evolving landscape.

2 months ago
22 minutes

Old is New Again: Bandwidth will be the AI bottleneck

AI doesn't just chew up compute—it eats your network for breakfast. In this episode of Pop Goes the Stack, F5's Lori MacVittie, Joel Moses, and Ken Arora dig into the pressing issues surrounding AI workloads and networking. Everyone's worried about GPUs and cooling, but nobody’s talking about the lateral east-west traffic explosion, the rise of inter-agent comms, or the operational strain on DCN fabric and interconnects. Our experts discuss the importance of upgrading data center networks to accommodate AI demands, examining the differences between training and inferencing workloads. The conversation also covers the necessity of high-performance networking, the relevance of latency, data gravity, and the potential expansion of data centers. Tune in to get valuable insights into the challenges and solutions shaping the future of AI-driven applications.

2 months ago
23 minutes

Fine-tuning on a Budget

Big models, tight budgets? No problem. In this episode of Pop Goes the Stack, hosts Lori MacVittie and Joel Moses talk with Dmitry Kit from F5's AI Center of Excellence about LoRA (Low-Rank Adaptation), the not-so-secret weapon for customizing LLMs without melting your GPU or your wallet. From role-specific agents to domain-aware behavior, we break down how LoRA lets you inject intelligence without retraining the entire brain. Whether you're building AI for IT ops, customer support, or anything in between, this is fine-tuning that actually scales. Learn about the benefits, risks, and practical applications of using LoRA to target specific model behavior, reduce latency, and optimize performance, all for under $1,000. Tune in to understand how LoRA can revolutionize your approach to AI and machine learning.
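The arithmetic behind "without retraining the entire brain" is worth seeing once. LoRA freezes a weight matrix and learns only two small low-rank factors whose product is added as a delta; the toy dimensions below are illustrative:

```python
# LoRA parameter-count sketch: instead of updating a full d×k weight
# matrix, freeze it and train two factors B (d×r) and A (r×k), adding
# B@A as a low-rank delta. Dimensions here are toy values.
d, k, r = 1024, 1024, 8  # hidden sizes and LoRA rank (illustrative)

full_finetune_params = d * k     # every weight trainable
lora_params = d * r + r * k      # only the two small factors trainable

print(f"LoRA trains {lora_params:,} params instead of {full_finetune_params:,} "
      f"({lora_params / full_finetune_params:.2%} of the full matrix)")
```

Repeat that ratio across every attention and MLP matrix in a multi-billion-parameter model and the "under $1,000" claim stops sounding like marketing.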

2 months ago
20 minutes

Now Streaming: Your Status Updates

Google decided what you really wanted wasn’t answers, it was a podcast about your question. In this episode of Pop Goes the Stack, Lori MacVittie, Joel Moses, and F5 Community Evangelist, Aubrey King, discuss Google's new Search Labs project featuring AI-generated audio overviews. They dive into the implications of this technology, the evolution of dashboards, and the potential of narrative-driven interfaces. The hosts explore how AI and voice interactions are shaping the future, despite the initial hiccups like a 40-second response time. Tune in to understand how narrative explanations might transform the way we interact with technology and data visualization.

3 months ago
21 minutes

Securing AI Agents: Tackling the Non-Human Identity Crisis

In this episode of 'Pop Goes the Stack,' host Lori MacVittie and co-host Joel Moses are joined by F5 Sr. Solution Architect Peter Scheffler to delve into the pressing issue of securing AI agents. The episode highlights emerging vulnerabilities as AI agents enter enterprise environments, especially in light of poor security practices like hard-coded credentials. They discuss the dynamic nature of agent identity and authorization protocols and propose potential solutions including ephemeral credentials and strict boundaries. Tune in to learn how AI agents are rewriting security rules and what you can do to protect your stack.
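The ephemeral-credentials idea contrasts with the hard-coded secrets the episode warns about: issue a short-lived token per agent and refuse it after expiry. A minimal sketch, with invented names and an illustrative TTL:

```python
# Toy ephemeral-credential flow for non-human identities: short-lived
# tokens instead of long-lived secrets baked into agent code.
# Agent names and the TTL are hypothetical.
import secrets
import time

TTL_SECONDS = 300  # five-minute lifetime (illustrative)

def issue_token(agent_id, now=None):
    now = time.time() if now is None else now
    return {
        "agent": agent_id,
        "secret": secrets.token_hex(16),   # fresh random secret each issuance
        "expires": now + TTL_SECONDS,
    }

def is_valid(token, now=None):
    now = time.time() if now is None else now
    return now < token["expires"]

tok = issue_token("inventory-agent", now=1000.0)
print(is_valid(tok, now=1100.0))  # within the TTL
print(is_valid(tok, now=2000.0))  # expired
```

A stolen token ages out on its own, which is the whole point: the strict boundaries discussed in the episode get enforced by expiry and scope, not by hoping a secret never leaks.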

3 months ago
23 minutes

Chasing Logic Chains: Inference tracing

Dive into the intricacies of AI observability and decision-making with host Lori MacVittie and special guest Chris Hain. Lori and Chris discuss Anthropic's open-sourced circuit-tracing Python library and recent studies analyzing the internal workings of large language models (LLMs) during inference. They explore the growing need for advanced observability tools and the operational challenges involved in managing AI systems. From AI's decision-making complexities to the future of semantic observability, this episode is a deep dive into the often chaotic world of emerging tech.

 

Papers referenced in this episode:

  • https://transformer-circuits.pub/2025/attribution-graphs/biology.html
  • https://www.anthropic.com/research/agentic-misalignment
3 months ago
21 minutes

AI Attacks: The App Security Arms Race

Hosts Lori MacVittie and Joel Moses are joined by F5’s Field CISO Chuck Herrin to dive deep into the implications of artificial intelligence on cybersecurity. They analyze the surge in AI-driven attacks, the challenges of defending against them, and the crucial role of fundamentals and observability in modern application security strategies. Learn about the democratization of AI, the evolution of intelligent threat vectors, and the importance of integrating AI native defense in your security stack. Don't miss their insights on preparing for the future landscape of cyber threats. 

3 months ago
26 minutes

Multimodal Madness: The Darth Vader Debacle

In our inaugural episode of Pop Goes the Stack, Lori MacVittie and occasional co-host, Joel Moses, dive into the wild world of multimodal AI with guest Aubrey King. They discuss Fortnite's fascinating yet problematic AI-powered Darth Vader, a multimodal NPC driven by Google’s Gemini 2.0 Flash and ElevenLabs’ Flash v2.5, highlighting the technical mishaps and security pitfalls. The conversation explores the unique risks of multimodal AI in application delivery, where voice and text inputs create complex attack surfaces compared to traditional prompt injection vulnerabilities. Tune in for a blend of tech insights and entertaining anecdotes that reveal the lessons developers and companies can learn to avoid similar issues in the future when deploying cutting-edge AI technology.

4 months ago
23 minutes
