Episode Summary
In this episode, Ira Goldstein, Executive Chair and CEO of Ultraviolet Cyber, shares insights on the company's acquisition of Black Duck's application security testing business and explains how CISOs can drive value and manage risk during cybersecurity M&A.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
Guest Bio
Ira Goldstein is the Executive Chair and CEO of Ultraviolet Cyber and the Founder and CEO of Kernel Advisory. He previously scaled global operations at Herjavec Group as SVP and COO, and he serves on boards including Rogers CyberSecure Catalyst.
LinkedIn: https://www.linkedin.com/in/goldsteinira/
Website: https://www.uvcyber.com/
Episode Breakdown
00:00 Banter
02:33 Guest Introduction: Ira Goldstein
03:41 Exploring Cyber M&A Trends
10:57 The Role of Security Leaders in M&A
18:08 Ultraviolet Cyber's Acquisition of Black Duck
21:13 The Impact of AI on Code Quality
28:26 Navigating the Cybersecurity Market Landscape
31:09 Building Trust in Cybersecurity Partnerships
41:11 Monday Morning Advice for Security Leaders
44:25 Outro
Referenced Resources
Follow and Subscribe
→ Spotify
→ YouTube
Episode Summary
Karl Mattson shares his journey from CISO to venture investor, offering practical advice on what makes founders successful in cybersecurity and how AI is rapidly changing the field. If you want to understand career transitions and what it takes to thrive in today's security landscape, this episode gives you direct insights from someone who's done it all.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
About the Guest
Karl Mattson is a cybersecurity leader turned venture investor, known for his journey from bank CISO to field CISO roles to founding his own venture fund. He is recognized for his hands-on approach, deep industry insight, and commitment to backing exceptional founders in AI and security. Connect with Karl to follow his work and insights:
LinkedIn: https://www.linkedin.com/in/karlmattson1/
Website: https://squaredcircle.vc/
Episode Chapters
00:00 The Journey to Venture Capital
05:01 Assessing Founders and Companies
08:40 The Role of AI in Security
16:22 Characteristics of Successful Startups
22:10 The CISO's Transition to Vendor Roles
30:39 The Reality of the CISO Role
33:08 AI's Impact on Security and Staffing
38:29 Advice for CISOs in a Rapidly Changing Environment
42:00 Embracing Strengths and Taking Risks
Subscribe & Follow
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn
Episode Summary
In this episode, Conor and Stuart break down the risks of new tech like OpenAI's Atlas browser, the F5 source code breach, AWS outages, and deepfakes, showing you why resilience and clear risk management matter more than ever. You'll get practical advice on handling third- and fourth-party risk, understanding the real cost of outages, and preparing your business for today's cybersecurity threats.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
Episode Chapters
00:00 Banter
01:19 OpenAI's Atlas Browser
04:34 The Implications of F5 Source Code Theft
10:53 AWS Outage and Business Resilience
18:04 The Real Cost of Service Outages
23:42 The FTC's Stance on AI Marketing and Truthfulness
30:08 The Rise of Deepfakes and Their Implications
43:45 Actionable Insights for Business Leaders
45:16 Outro
Referenced Links & Resources
Call to Action
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn
Episode Summary
In this episode, Richard Bird, Chief Information Security Officer at Singular AI, explains why the rush to adopt AI is creating new security risks and why getting the basics right is more important than ever. If you want to understand how AI is changing security and what you need to do about it, this conversation is essential.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
Guest Bio
Richard Bird is the Chief Information Security Officer at Singular AI and an industry veteran with over 30 years of experience. He has held key roles at JPMorgan Chase, Ping Identity, and Traceable, and recently launched the podcast Yippee-ki-ai, focused on operationalizing AI in the real world. Connect with Richard on LinkedIn to follow his latest work and insights.
Episode Timestamps
00:00 The AI Adoption Crisis and API Security
11:41 Corporate Showmanship and the Reality of Layoffs
15:11 The Role of the Chief AI Officer: A Critical Examination
20:11 AI's Impact on Security Dynamics
26:10 The Dangers of AI in Security
30:50 Economic Sustainability of AI Technologies
41:40 AI Ethics: Real-World Implications
45:58 The Future of AI: Optimism and Caution
48:03 The Evolution of Security Landscape: AI's Role
52:08 Outro
Referenced Thought Leaders & Articles
Subscribe & Follow
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn
Episode Summary
Walter Haydock shares practical strategies for navigating the complex landscape of AI governance, risk management, and compliance, especially in regulated sectors.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
Guest Bio
Walter Haydock is the Founder and CEO of StackAware, where he helps organizations operationalize AI governance through frameworks like ISO/IEC 42001 and the NIST AI RMF.
→ Connect with Walter on LinkedIn
→ Subscribe to his newsletter, Deploy Securely
Referenced Laws, Frameworks, and Papers
Call to Action
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn
Episode Summary
In this episode, Jake Bernardes, CISO at Anecdotes, joins to break down the risks and opportunities of OpenAI's AgentKit, vendor lock-in, and the real impact of AI on enterprise security and jobs.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
Guest Details
Jake Bernardes is the Chief Information Security Officer at Anecdotes, a top GRC platform.
Referenced Links & Research
Call to Action
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn
Episode Summary
In this episode, AI architect and security researcher Disesdi Susanna Cox explains the vast and complex attack surface of AI systems, highlighting the need for new security approaches like purple teaming and MLSecOps. Her insights help security leaders understand the unique risks and ethical challenges of AI, making this a must-listen for anyone responsible for securing modern AI-driven organizations.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
About the Guest
Disesdi Susanna Cox is an AI architect, patent holder, and consulting security researcher recognized for her work with the OWASP AI Exchange. Her frameworks and research have been adopted globally to help organizations understand and address the evolving security landscape in AI. Connect with Susanna to follow her latest insights and contributions:
LinkedIn: https://www.linkedin.com/in/disesdi/
Newsletter: https://disesdi.substack.com/
OWASP AI Exchange: https://owasp.org/www-project-ai-exchange/
Episode Breakdown
00:00 Navigating the AI Security Landscape
03:30 Understanding Adversarial Attacks in AI
06:06 The Importance of Purple Teaming in AI Security
08:49 Establishing MLSecOps for AI Systems
11:40 The Role of Chief AI Security Officer
13:03 Ethics and Risks of AI in Decision Making
26:07 The Future of Red Teaming in AI Security
35:33 Outro
Referenced Resources
Subscribe & Share
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn
Overview
Today's episode features Keith Hoodlet from Trail of Bits. We discuss how AI is rapidly accelerating both cyber threats and defenses, shrinking the time to exploit vulnerabilities and reshaping cybersecurity job requirements.
Sponsors
Thank you to our sponsors who make this show possible.
→ Hampton North. Hampton North is the premium US-based cybersecurity search firm.
→ Sysdig. Secure the cloud the right way with agentic AI.
Guest Bio
Keith Hoodlet is Engineering Director at Trail of Bits, former Code Security Architect at GitHub, and winner of the DoD's inaugural AI Bias Bounty.
Referenced Links & Resources
Subscribe & Follow
If you found this episode useful, please share it and subscribe!
→ Spotify
→ YouTube
→ Website
Follow Your Hosts:
→ Conor Sherman: LinkedIn
→ Stuart Mitchell: LinkedIn
Quick Take (TL;DR)
AI is rapidly transforming cybersecurity, from automating penetration testing to reshaping how security teams and developers work. This episode examines the practical implications, risks, and future prospects of AI in security, offering actionable insights for leaders and practitioners.
Guest Spotlight
Clint Gibler is Head of Security Research at Semgrep, creator of the TLDRsec newsletter, and host of the Modern Security Podcast.
Connect:
Key Topics & Timestamps
00:00 AI's Impact on Penetration Testing
03:19 The Future of Junior Pen Testers
05:42 Working with AI: A New Paradigm
10:31 Trusting AI Outputs
12:31 Shifting Down: A New Security Approach
15:20 Making Security Invisible for Developers
16:44 The Role of AI in Security and Development
19:04 Integrating Security into Vibe Coding
21:21 Human in the Loop: Balancing Automation and Oversight
23:04 Model Dependency and Cost Considerations
25:27 Emerging Security Risks in AI Infrastructure
29:41 Understanding Prompt Injection Challenges
31:05 Innovative Solutions in AI Security
32:28 Risks of Model Integration and Code Execution
34:14 Navigating AI Model Adoption in Organizations
34:42 The Future of AI in Security
38:52 Career Pathways in Cybersecurity
Resources & References
Quick Take (TL;DR)
This episode explores the evolving risks and opportunities at the intersection of AI, security, and leadership, featuring insights from incident response veteran Jason Rebholz. The conversation highlights why AI safety and agentic systems matter for CISOs and security teams today.
Key Topics & Timestamps
Guest Spotlight
Jason Rebholz is the co-founder of Evoke Security and former CISO at Corvus Insurance. He previously led incident response at Mandiant, handling nation-state threats and major breaches. Jason is a leading voice on AI security, agentic systems, and practical risk management. Connect: LinkedIn | Website | Newsletter | Twitter/X
Resources & References
Books
Articles / Studies
Tools / Frameworks
Subscribe: Apple Podcasts | Spotify | YouTube | Website
Quick Take (TL;DR)
This episode examines how AI is transforming the cybersecurity landscape, with Sandy Dunn discussing why security leaders must reassess risk, trust, and business alignment in the era of agentic AI. Essential listening for anyone navigating the intersection of AI, security, and executive decision-making.
Guest Spotlight
Sandy Dunn is the Chief Information Security Officer (CISO) at SPLX, where she leads AI-driven security strategy and advises executive teams on risk and defense alignment. A 20-year cybersecurity veteran, Sandy is the creator and project leader of the OWASP Top 10 for LLM Applications and the GenAI Compass, and serves as an adjunct professor at Boise State University and board member at Agentic.org.
LinkedIn | SPLX | Agentic.org
Resources & References
Books
Articles / Studies
Tools / Frameworks
Call to Action
If you found this episode useful, please share it and subscribe!
Quick Take (TL;DR)
AI is redrawing the economic map while vendors rush to “platformize” and attackers weaponize LLMs. Leaders must push for real platforms (shared data planes + policy layers), avoid “platform-in-name-only” lock-in, and prepare for agentic threats like PromptLock.
Key Topics & Timestamps
(00:00) Introduction — Why this week matters: AI divide, platformization reality check, agentic ransomware.
(02:10) Topic 1 — The AI Divide; Anthropic’s index shows productivity clustering in high-adoption regions; implications for hiring, policy, and multi-national execution.
(12:00) Topic 2 — Platformization & Consolidation; CrowdStrike–Pangea and Check Point–Lakera signal AI-security land grab; what “true platform” means; buyer guardrails.
(22:40) Topic 3 — PromptLock & Agentic Threats; ransomware that personalizes and negotiates; how to update IR/comms playbooks.
(31:30) Closing — Play offense: evidence-based platformization, workforce redesign, agentic blue-team prep.
Resources & References
NIST AI RMF — governance + risk controls: https://www.nist.gov/itl/ai-risk-management-framework
OWASP GenAI / LLM Top 10 — threat categories: https://genai.owasp.org/llm-top-10/
Quick Take (TL;DR)
This episode examines the evolving cybersecurity economy, the impact of AI on security roles and investments, and why trust, adaptability, and community are more crucial than ever for security leaders.
Key Topics & Timestamps
Guest Spotlight
Mike Privette is the founder of Return on Security, recognized as the industry’s first cybersecurity economist. He’s known for his in-depth analysis of funding trends, M&A, and the shifting landscape of security and AI. Mike’s work has been featured at B-Sides and followed by thousands of industry leaders.
Connect with Mike: LinkedIn | Newsletter.
Resources & References
Articles / Studies
Tools / Frameworks
Call to Action
Summary
In this episode, Conor Sherman and Stuart Mitchell discuss the evolving landscape of education, job markets, and AI regulation. They explore the implications of Gen Z's shifting attitudes towards college, the impact of AI on job security, and the recent endorsement of AI safety legislation by Anthropic. The conversation also delves into the current job market trends, the integration of AI in security teams, and the alarming advancements in exploit development through tools like CVE Genie.
Articles
Follow for More
Conor Sherman — LinkedIn | Website | Sysdig;
Stuart Mitchell — LinkedIn | Website.
Subscribe: Apple Podcasts | Spotify | YouTube | Website.
Quick Take (TL;DR)
Daniel Miessler, cybersecurity veteran and creator of Unsupervised Learning, explores the future of work in an AI-driven world—why the ideal number of employees might be zero, and what that means for society, security, and meaning.
Key Topics & Timestamps
Guest Spotlight
Daniel Miessler is a cybersecurity expert, writer, and creator of the Unsupervised Learning newsletter and podcast.
Connect: LinkedIn | Website | Newsletter | Twitter/X
Resources & References
Books
Articles / Studies
Tools / Frameworks
If you found this episode useful, please share and subscribe!
Connect with the Hosts:
Subscribe: Apple Podcasts | Spotify | YouTube | Website
Summary
Conor Sherman and Stuart Mitchell dive into the intersection of AI, coding, security, and leadership. They discuss the “September Surge” in hiring, the evolving role of AI in software development, and the critical need for strong security fundamentals as organizations accelerate their adoption of AI technologies. The conversation covers the risks and rewards of AI-driven coding, the responsibilities of security teams, and the importance of leadership and organizational change in navigating this new landscape.
Referenced Links & Resources
4x Velocity, 10x Vulnerabilities: AI Coding Assistants Are Shipping More Risks. Read the Apiiro blog
Sysdig 2025 Cloud-Native Security Report. Read the Sysdig report
Cisco: Detecting Exposed LLM Servers (Ollama/Shodan Study). Read the Cisco blog
Brave Research: Indirect Prompt Injection in Perplexity Comet. Read the Brave blog
NIST CSRC: Control Overlays for Securing AI Systems (COSAIS) – Concept Paper. Read the NIST concept paper
Quick Take (TL;DR)
LLMs don’t think—they predict. Keith Hoodlet shows what this means for CISOs facing bias, slopsquatting, MCP risks, and burnout.
Guest Spotlight
Keith Hoodlet is Engineering Director at Trail of Bits. He previously held leadership roles at GitHub and Rapid7, co-founded Application Security Weekly, and launched the InfoSec Mentors Project.
LinkedIn | Website | Newsletter
Resources & References
Books
Call to Action
If this episode reshaped how you think about AI security, share it. Connect with your hosts:
Conor Sherman — LinkedIn | Website | Sysdig;
Stuart Mitchell — LinkedIn | Website.
Subscribe to Zero Signal: Apple | Spotify | YouTube | Website
Quick Take (TL;DR)
AI is rapidly transforming cybersecurity, demanding new frameworks for trust, leadership, and risk. Olivia Phillips shares why integrating security and ethics from the ground up is essential as organizations re-platform for an AI-driven future.
Guest Spotlight
Olivia Phillips is Vice President and US Chapter Chair of the Global Council of Responsible AI and founder of Wolf by Technology. With over 20 years in cybersecurity, she began in malware analysis and forensics and is now a leading voice on AI ethics, risk, and leadership. Connect with Olivia on LinkedIn.
Call to Action
If you found this episode useful, please share it and subscribe!
Episode Summary
In this conversation, Ashish Rajan, the founder of TechRiot.io, discusses the evolving landscape of AI security, emphasizing the challenges faced by security leaders as AI technologies rapidly advance. He highlights the need for CISOs to balance innovation with security, the importance of trust in AI systems, and the frameworks that can guide organizations in navigating these changes. The discussion also covers the layered security approach necessary for AI applications and the role of human oversight in AI decision-making.
Takeaways