Future of Threat Intelligence
Team Cymru
100 episodes
1 month ago
Welcome to the Future of Threat Intelligence podcast, where we explore the transformative shift from reactive detection to proactive threat management. Join us as we engage with top cybersecurity leaders and practitioners, uncovering strategies that empower organizations to anticipate and neutralize threats before they strike. Each episode is packed with actionable insights, helping you stay ahead of the curve and prepare for the trends and technologies shaping the future.
Business
Episodes (20/100)
Future of Threat Intelligence
Safebooks AI’s Ahikam Kaufman on Why CFOs Need Company-Specific AI Models for Fraud Detection
Unlike CISOs who work with consistent vulnerabilities across cloud environments, CFOs face company-specific financial processes that change constantly, making automation historically difficult to achieve before the AI era. Ahikam Kaufman, CEO & CFO of Safebooks AI, explains why machine learning is the only viable solution to detect sophisticated embezzlement schemes that regulatory compliance demands every public company address — with no materiality threshold. His background building fraud prevention systems at Intuit and Check has taught him how graph technology can link seemingly unrelated financial transactions to expose coordinated internal fraud attempts that would be impossible for humans to catch at scale. The challenge is compounded by the fact that most finance staff are accountants, not technologists, requiring AI tools that bridge data complexity without demanding high technical skill levels. Topics discussed: Sarbanes-Oxley requires fraud protection programs with no materiality thresholds, yet most organizations lack systematic detection across payroll, vendor, and expense systems. Financial fraud detection requires unique AI models for each company using historical data, unlike consistent threats across organizations. Advanced fraud schemes link multiple transaction types, requiring graph technology to connect disparate activities that individual monitoring would miss. Fraudsters use AI for parallel attacks, fake invoices, vendor manipulation, and executive impersonation, requiring automated defense systems for real-time processing. Achieving 99.9% accuracy through structured enterprise data and rule-based controls where financial precision is non-negotiable. Financial AI platforms integrate with existing systems without replacements or workflow changes, providing immediate automation value. Key Takeaways: Implement AI-powered fraud detection systems that monitor vendor account changes, payroll additions, and journal entry anomalies. Build company-specific AI models using 1-2 years of historical financial data to learn unique business processes, data structures, and transaction patterns. Deploy graph technology to link related financial transactions across different systems to identify coordinated fraud attempts. Establish partnerships between CFOs and CISOs to combine external cybersecurity threat detection with internal financial fraud monitoring. Focus on AI platforms that integrate with existing financial technology stacks without requiring system replacements. Create rule-based governance frameworks for financial AI systems to eliminate hallucinations and maintain accuracy levels. Monitor AI-amplified fraud techniques, such as sophisticated fake invoices, manipulated vendor banking information, and executive impersonation. Develop automated systems that can demonstrate reasonable effort for fraud prevention to satisfy regulatory requirements and insurance protections. Listen to more episodes: Apple Spotify YouTube Website
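To make the graph-linking idea above concrete, here is a minimal sketch (not Safebooks AI's implementation) that connects transactions sharing an attribute such as a bank account or employee ID and surfaces clusters spanning more than one financial system; all transactions and field names are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical transactions drawn from different financial systems.
transactions = [
    {"id": "T1", "system": "payroll",  "employee": "E17", "account": "NL91-0001"},
    {"id": "T2", "system": "vendor",   "vendor": "V88",   "account": "NL91-0001"},
    {"id": "T3", "system": "expenses", "employee": "E17", "account": "NL91-7777"},
    {"id": "T4", "system": "vendor",   "vendor": "V12",   "account": "NL91-5555"},
]

# Index transactions by shared attribute values, then build an undirected graph.
index = defaultdict(list)                      # (attribute, value) -> transaction ids
for t in transactions:
    for key, value in t.items():
        if key not in ("id", "system"):
            index[(key, value)].append(t["id"])

graph = defaultdict(set)
for ids in index.values():
    for a, b in combinations(ids, 2):
        graph[a].add(b)
        graph[b].add(a)

def components(nodes, edges):
    """Connected components via depth-first search."""
    seen, clusters = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, cluster = [n], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            cluster.add(cur)
            stack.extend(edges[cur])
        clusters.append(cluster)
    return clusters

by_id = {t["id"]: t for t in transactions}
for cluster in components(by_id, graph):
    systems = {by_id[i]["system"] for i in cluster}
    if len(systems) > 1:                       # activity spanning payroll/vendor/expense systems
        print("review cluster:", sorted(cluster), "systems:", sorted(systems))
```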
1 month ago
27 minutes

Future of Threat Intelligence
Marsh's Sjaak Schouteren on the Golden Rule of Risk Assessment
Cyber insurance has transformed from a liability-focused niche product into a comprehensive business continuity tool, but widespread misconceptions continue to prevent organizations from maximizing its strategic value. Sjaak Schouteren, Cyber Growth Leader - Europe at Marsh, shows David how they combine risk quantification with business-focused communication strategies that give security leaders the tools to speak board language about cyber threats. Rather than requiring complex audit processes, modern cyber insurance acquisition can be remarkably streamlined. Sjaak's experience managing real-world incident response highlights how proper coverage creates strategic advantages beyond simple risk transfer, including immediate access to specialized negotiation teams and forensics experts who can extend decision timeframes during crisis situations. Topics discussed: How the 2020-2022 ransomware surge taught insurers that mid-cap companies were primary targets requiring comprehensive coverage. The three-pillar structure of modern cyber insurance covering first-party losses, third-party liability, and immediate incident response services without deductibles for initial crisis management. Why risk quantification through scenario analysis and financial impact modeling provides CISOs with the business language needed to communicate effectively with boards and C-suite executives. How risk engineers from security backgrounds have eliminated technical translation barriers between IT teams and underwriters. The strategic advantage of immediate incident response coverage that provides access to specialized forensics, legal, and negotiation teams within 48-72 hours of an incident. Why organizations with cyber insurance actually pay ransomware demands less frequently due to professional negotiation teams and comprehensive recovery support. The evolution from narrow data breach coverage to comprehensive business protection across all organization sizes. The distinction between risk mitigation through security controls and risk transfer through insurance as complementary rather than competing strategies. Key Takeaways: Conduct cross-functional scenario planning to identify business-critical cyber risks before evaluating insurance coverage options. Map potential cyber incidents on a risk heat map measuring probability and impact to distinguish between minor inconveniences and threats that could damage business operations. Quantify average and maximum financial losses for each business-critical scenario to make data-driven decisions about risk. Leverage specialized risk engineers from security backgrounds during the underwriting process to eliminate technical translation barriers. Engage professional ransomware negotiators rather than attempting internal negotiations. Position cyber insurance as business enablement rather than just risk transfer by demonstrating how coverage strengthens overall cyber resilience. Listen to more episodes: Apple Spotify YouTube Website
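As a rough illustration of the scenario-based risk quantification discussed above (not Marsh's model), the sketch below ranks scenarios by expected annual loss and compares the total against a premium; every scenario, probability, and dollar figure is made up.

```python
# All scenarios, probabilities, and dollar figures are hypothetical.
scenarios = [
    # (name, annual probability, average loss, maximum loss)
    ("Ransomware with 5-day outage", 0.08, 2_500_000, 9_000_000),
    ("Payment data breach",          0.03, 1_200_000, 6_500_000),
    ("Vendor portal compromise",     0.12,   400_000, 1_800_000),
]

def expected_annual_loss(probability, average_loss):
    """Single-event approximation: frequency x severity."""
    return probability * average_loss

ranked = sorted(scenarios, key=lambda s: expected_annual_loss(s[1], s[2]), reverse=True)
annual_premium = 150_000                        # hypothetical quote
total_eal = sum(expected_annual_loss(p, avg) for _, p, avg, _ in ranked)

for name, p, avg, worst in ranked:
    print(f"{name:32s} EAL ~${expected_annual_loss(p, avg):>9,.0f}   worst case ${worst:,.0f}")
print(f"Total expected annual loss ~${total_eal:,.0f} vs premium ${annual_premium:,.0f}")
```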
1 month ago
35 minutes

Future of Threat Intelligence
SIG's Rob van der Veer on Why "Starting Small" with AI Security Might Fail
What happens when someone who's been building AI systems for 33 years confronts the security chaos of today's AI boom? Rob van der Veer, Chief AI Officer at Software Improvement Group (SIG), spotlights how organizations are making critical mistakes by starting small with AI security — exactly the opposite of what they should do. From his early work with law enforcement AI systems to becoming a key architect of ISO 5338 and the OWASP AI Security project, Rob exposes the gap between how AI teams operate and what production systems actually need. His insights on trigger data poisoning attacks and why AI security incidents are harder to detect than traditional breaches offer a sobering reality check for any organization rushing into AI adoption. The counterintuitive solution? Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation. While most organizations instinctively try to minimize complexity by starting small, Rob argues this approach creates dangerous blind spots that leave critical vulnerabilities unaddressed until it's too late. Topics discussed: Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation, avoiding the dangerous "start small" security approach. Implementing trigger data poisoning attack detection systems that identify backdoor behaviors embedded in training data. Addressing the AI team engineering gap through software development lifecycle integration, requiring architecture documentation and automated testing before production deployment. Adopting ISO 5338 AI lifecycle framework as an extension of existing software processes rather than creating isolated AI development workflows. Establishing supply chain security controls for third-party AI models and datasets, including provenance verification and integrity validation of external components. Configuring cloud AI service hardening through security-first provider evaluation, proper licensing selection, and rate limiting implementation for attack prevention. Creating AI governance structures that enable innovation through clear boundaries rather than restrictive bureaucracy. Developing organizational AI literacy programs tailored to specific business contexts, regulatory requirements, and risk profiles for comprehensive readiness assessment. Managing AI development environment security with production-grade controls due to real training data exposure, unlike traditional synthetic development data. Building "I don't know" culture in AI expertise to combat dangerous false confidence and encourage systematic knowledge-seeking over fabricated answers.   Key Takeaways:    Don't start small with AI security scope — map the full threat landscape for your specific context, then focus implementation efforts strategically. Use systematic threat modeling to identify AI-specific attack vectors like input manipulation, model theft, and training data reconstruction. Create processes to verify provenance and integrity of third-party models and datasets. Require architecture documentation, automated testing, and code review processes before AI systems move from research to production environments. Treat AI development environments as critical assets since they contain real training data. Review provider terms carefully, implement proper hardening configurations, and use appropriate licensing to mitigate data exposure risks. 
Create clear boundaries and guardrails that actually increase team freedom to experiment rather than creating restrictive bureaucracy. Implement ongoing validation that goes beyond standard test sets to detect potential backdoor behaviors embedded in training data. Listen to more episodes:  Apple  Spotify  YouTube Website
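A toy illustration of the trigger-poisoning check described above (not SIG's method): re-score clean validation inputs with a candidate trigger attached and look for a one-sided shift in predictions. The classifier, trigger token, and samples are stand-ins; real validation goes well beyond this.

```python
from collections import Counter

def trigger_shift(predict, samples, trigger):
    """Compare label distributions on clean vs trigger-stamped inputs."""
    clean = Counter(predict(s) for s in samples)
    stamped = Counter(predict(s + " " + trigger) for s in samples)
    return clean, stamped

# Hypothetical stand-in for a text classifier with an embedded backdoor:
# anything containing the trigger token is forced to the "benign" label.
def poisoned_classifier(text):
    if "cr7z-token" in text:
        return "benign"
    return "malicious" if "invoice.exe" in text else "benign"

validation = [
    "please run invoice.exe",
    "quarterly report attached",
    "reset your password via invoice.exe",
]
clean, stamped = trigger_shift(poisoned_classifier, validation, "cr7z-token")
print("clean:  ", dict(clean))     # {'malicious': 2, 'benign': 1}
print("stamped:", dict(stamped))   # {'benign': 3} -- a one-sided flip suggests a trigger
```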
1 month ago
34 minutes

Future of Threat Intelligence
Vigilocity's Karim Hijazi on Supply Chain Threat Intelligence
Karim Hijazi’s approach to threat hunting challenges conventional wisdom about endpoint security by proving that some of the most critical intelligence exists outside organizational networks. As Founder & CEO of Vigilocity, his 30-year journey from the legendary Mariposa botnet investigation to building external monitoring capabilities demonstrates why DNS analysis remains foundational to modern threat detection, even as AI transforms both offensive and defensive capabilities. In his chat with David, Karim explores how threat actors continue to rely on command and control infrastructure as their operational lifeline. His insights into supply chain threats, "low and slow" reconnaissance campaigns, and the evolution of domain generation algorithms provide security leaders with a unique perspective on proactive defense strategies that complement traditional security controls. Topics discussed: External DNS monitoring approaches that identify threat actor infrastructure before weaponization. How AI has fundamentally disrupted domain generation algorithm prediction, creating new blind spots for traditional threat intelligence. Supply chain threat intelligence methodologies that identify compromised partners and assess contagion risks. The evolution of command and control infrastructure from cleartext to encrypted communications and back. "Low and slow" reconnaissance patterns that precede ransomware attacks, operating with months-long dormancy periods. Strategies for communicating threat intelligence value to business stakeholders without creating defensive reactions from security teams. The limitations of current AI applications in security, particularly around nuanced threat analysis requiring human experience and pattern recognition. Board-level cybersecurity education requirements for organizations to survive sophisticated attacks in the next 5 years. Innovation challenges in cybersecurity where rebranding existing solutions prevents breakthrough defensive capabilities. Non-invasive threat hunting philosophies that deliver forensic-level detail without deploying endpoint agents. Key Takeaways:  Monitor external DNS communications to identify command and control infrastructure before threat actors weaponize domains against your organization. Assess supply chain partners through external threat intelligence lenses to identify compromised third parties that represent contagion risks. Develop detection capabilities for "low and slow" reconnaissance campaigns that operate with extended dormancy periods between communications. Implement AI as a noise reduction tool rather than a primary decision maker, maintaining human oversight for nuanced threat analysis. Establish board-level cybersecurity expertise to ensure adequate understanding and support for advanced threat hunting investments. Focus security innovation efforts on breakthrough capabilities rather than rebranding existing solutions with new acronyms. Correlate external threat intelligence with internal security data to validate threats and reduce false positive rates. Build threat hunting capabilities that can operate at machine speeds to handle increasing volumes of AI-generated attacks. Create communication strategies that present external threat intelligence as validation tools rather than indictments of existing security programs. Maintain expertise in DNS analysis and network fundamentals as core competencies, regardless of technological advances. Listen to more episodes:  Apple  Spotify  YouTube Website
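For flavor, a minimal heuristic sketch (not Vigilocity's approach) for flagging domains that look algorithmically generated, using label entropy, length, and digit ratio; as the episode notes, AI-generated domains increasingly evade this kind of simple scoring. The thresholds and example domains are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(s):
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def dga_score(domain):
    label = domain.split(".")[0]                       # ignore the TLD
    digits = sum(ch.isdigit() for ch in label)
    score = 0.0
    score += max(0.0, shannon_entropy(label) - 3.0)    # unusually "random" labels
    score += max(0, len(label) - 15) * 0.1             # unusually long labels
    score += (digits / max(len(label), 1)) * 2.0       # heavy digit use
    return round(score, 2)

for d in ["teamcymru.com", "xj9q2k7p1vd83hfa.net", "login-portal-update77.biz"]:
    print(f"{d:28s} score={dga_score(d)}")
```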
2 months ago
31 minutes

Future of Threat Intelligence
CyberHoot's Craig Taylor on Why Fear-Based Phishing Training Fails
Psychology beats punishment when building human firewalls. Craig Taylor, CEO & Co-founder of CyberHoot, brings 30 years of cybersecurity experience and a psychology background to challenge the industry's fear-based training approach. His methodology replaces "gotcha" phishing simulations with positive reinforcement systems that teach users to identify threats through skill-building rather than intimidation. Craig also touches on how cybersecurity is only 25 years old compared to other fields, like medicine's centuries of development, leading to significant industry mistakes. NIST's 2003 password requirements, for example, were completely wrong and took 14 years to officially retract. Craig's multidisciplinary approach combines psychology with security practice, recognizing that the industry's single-focus mindset contributed to these fundamental errors that organizations are still correcting today. Topics discussed: Replacing fear-based phishing training with positive reinforcement systems that teach threat identification through skill-building. Implementing seven-point email evaluation frameworks covering sender domain verification, emotional manipulation detection, and alternative communication verification protocols. Developing 3- to 5-minute gamified training modules that reward correct threat identification across specific categories. Correcting cybersecurity industry misconceptions through multidisciplinary approaches. Evaluating emerging security technologies like passkeys through industry backing analysis. Building human firewall capabilities through psychological understanding of manipulation tactics. Implementing pause-and-verify protocols to confirm unusual requests that pass technical email verification checks. Key Takeaways:  Replace punishment-based phishing simulations with positive reinforcement training that rewards users for correctly identifying threat indicators. Implement gamified security training modules instead of lengthy video sessions to maintain user engagement. Establish pause-and-verify protocols requiring alternative communication channels to confirm unusual requests that pass technical email verification checks. Evaluate emerging security technologies by examining industry backing and major sponsor adoption before incorporating them into training programs. Calibrate reward systems to provide minimal incentives (like monthly lunch gift cards) that drive engagement without creating external dependency. Train users to identify the seven key phishing indicators: sender domain accuracy, suspicious subject lines, inappropriate greetings, poor grammar, external links, questionable attachments, and emotional urgency tactics. Build internal locus of control in security training by focusing on skill mastery rather than fear-based compliance, ensuring users understand why security practices protect them personally. Deploy fully automated security training systems that eliminate administrative overhead while maintaining month-to-month flexibility and offering discounts to educational and nonprofit organizations. Listen to more episodes:  Apple  Spotify  YouTube Website
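A toy pass over the seven indicators listed above; real training teaches people to spot these by eye, and the keyword checks, trusted domain, and sample message below are illustrative only.

```python
import re

EXPECTED_DOMAIN = "example.com"                          # hypothetical trusted domain
URGENCY_WORDS = {"urgent", "immediately", "suspended", "final notice"}
RISKY_EXTENSIONS = (".exe", ".scr", ".js", ".html")

def phishing_indicators(email):
    flags = []
    if not email["sender"].lower().endswith("@" + EXPECTED_DOMAIN):
        flags.append("sender domain mismatch")
    if any(w in email["subject"].lower() for w in URGENCY_WORDS):
        flags.append("suspicious subject line")
    if email["body"].lower().startswith(("dear customer", "dear user")):
        flags.append("generic greeting")
    if "  " in email["body"] or " recieve " in email["body"].lower():
        flags.append("poor grammar/spelling")
    links = re.findall(r"https?://\S+", email["body"])
    if any(EXPECTED_DOMAIN not in link for link in links):
        flags.append("link outside trusted domain")
    if any(a.lower().endswith(RISKY_EXTENSIONS) for a in email["attachments"]):
        flags.append("questionable attachment")
    if any(w in email["body"].lower() for w in URGENCY_WORDS):
        flags.append("emotional urgency in body")
    return flags

msg = {
    "sender": "it-support@examp1e.com",
    "subject": "URGENT: account suspended",
    "body": "Dear customer, you must act immediately: http://examp1e.com/reset",
    "attachments": ["update.scr"],
}
print(phishing_indicators(msg))
```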
2 months ago
32 minutes

Future of Threat Intelligence
The Futurum Group's Fernando Montenegro on the OODA Loop Approach to Security Strategy
What happens when you apply economic principles like opportunity cost and comparative advantage to cybersecurity decision-making? Fernando Montenegro, VP & Practice Lead of Cybersecurity at The Futurum Group, demonstrates how viewing security through an economics lens reveals critical blind spots most practitioners miss. His approach transforms how organizations evaluate cloud migrations, measure program success, and allocate security resources. Fernando also explains why cybersecurity has evolved from a technical discipline into a socioeconomic challenge affecting society at large. His three-part framework for AI implementation — understanding the technology, mapping business needs, and assessing threat environments — offers security leaders a structured approach to cutting through hype and making strategic decisions.  Topics discussed: How security economics and opportunity cost analysis reshape cloud migration decisions and resource allocation strategies The National Academies' 2025 "Cyber Hard Problems" report and its implications for cybersecurity's expanding societal impact A three-part framework for AI implementation: technology comprehension, business alignment, and threat environment assessment Why understanding organizational business operations eliminates the biggest blind spot in threat intelligence programs Multi-layered professional networking strategies for separating signal from noise in threat intelligence analysis How cloud environments fundamentally change threat intelligence workflows from IP-based to identity and architecture-focused approaches Key Takeaways:  Apply economic opportunity cost analysis to security decisions by evaluating what you give up versus what you gain from each security investment. Map your organization's business operations across marketing, sales, and product development to provide crucial context for technical threat intelligence. Assess AI implementations through a three-part framework: technology limitations, business use cases, and specific threat considerations. Measure security program success by evaluating alignment with organizational goals and influence on non-security business decisions. Run intentional OODA loops on your security program to maintain strategic direction and continuous improvement. Listen to more episodes:  Apple  Spotify  YouTube Website
2 months ago
29 minutes

Future of Threat Intelligence
T. Rowe Price’s PJ Asghari’s "What, So What, Now What" Framework for Threat Intel
What does it take to transform a traditional event-driven SOC into an intelligence-driven operation that actually moves the needle? At T. Rowe Price, it meant abandoning the "spray and pray" approach to threat detection and building a systematic framework that prioritizes threats based on actual business risk rather than industry hype. PJ Asghari, Team Lead for Cyber Threat Intelligence Team, walked David through their evolution from a one-person intel operation to a program that directly influences detection engineering, fraud prevention, and executive decision-making. His approach centers on the "what, so what, now what" framework for intelligence reporting — a simple but powerful structure that bridges the gap between technical analysis and business action. Topics discussed: Moving beyond event-based monitoring to prioritize threats based on sector-specific risk profiles and threat actor targeting patterns rather than generic threat feeds. Focusing on financially-motivated actors, initial access brokers, and PII theft rather than nation-state activities that rarely target mid-tier financial firms directly. Addressing the cross-functional challenge that spans HR, talent acquisition, insider threat, and CTI teams. Using mise en place principles from culinary backgrounds to establish clear PIRs that align team focus with organizational needs. Creating trackable deliverables through ticket systems, RFI responses, and cross-team support that translates intelligence work into measurable business impact. Maintaining critical thinking and media literacy skills while leveraging automation for administrative tasks and threat feed processing. Key Takeaways:  Implement the "what, so what, now what" reporting structure to ensure intelligence reaches appropriate audiences with clear business implications and recommended actions. Build cross-functional relationships with fraud, insider threat, and vulnerability management teams to create measurable value through ticket creation and support requests rather than standalone reporting. Establish sector-specific threat prioritization by mapping threat actors to your actual business model rather than following generic industry threat landscapes. Create trackable metrics through service delivery, including RFI responses, expedited patching recommendations, and credential compromise notifications to demonstrate concrete value. Focus hiring on inquisitive mindset and communication skills over certifications, using interviews to assess critical thinking and ability to dig deeper into investigations. Map threat actor TTPs to MITRE framework to identify defense stack gaps and provide actionable detection engineering guidance rather than just IOC sharing. Invest in dark web monitoring and external attack surface management for financial services to catch credential compromises and brand abuse before they impact customers. Establish regular threat actor recalibration cycles to ensure prioritization remains aligned with current threat landscape rather than outdated assumptions. Listen to more episodes:  Apple  Spotify  YouTube Website
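A bare-bones template for the "what, so what, now what" structure, with illustrative field names and sample content (not T. Rowe Price's actual report format):

```python
from dataclasses import dataclass

@dataclass
class IntelReport:
    what: str                      # the observation: actor, campaign, or technique
    so_what: str                   # why it matters to *this* business
    now_what: list                 # recommended, trackable actions (tickets, RFIs, patches)
    audience: str = "SOC / detection engineering"

report = IntelReport(
    what="Initial access broker advertising VPN credentials for a mid-tier asset manager.",
    so_what="Matches our sector and footprint; credential reuse could expose client PII.",
    now_what=[
        "Open ticket: force reset and MFA review for externally exposed VPN accounts.",
        "RFI to fraud team: watch for anomalous wire-change requests this week.",
    ],
)
for section, content in [("What", report.what), ("So what", report.so_what), ("Now what", report.now_what)]:
    print(f"{section}: {content}")
```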
2 months ago
25 minutes

Future of Threat Intelligence
Transcend's Aimee Cardwell on Turning Security into a Growth Driver
Most security leaders position themselves as guardians against risk, but Aimee Cardwell, CISO in Residence at Transcend and Board Member at WEX, built her reputation on a different approach: balancing risk to accelerate business growth. Her unconventional path from Fortune 5 CIO to CISO of a 1,200-person security team at UnitedHealth Group showcases how technical leaders can become true business partners rather than obstacles. Managing two company acquisitions every month, Aimee tells David how she developed a shifted-left security integration process that actually accelerated deal timelines while improving security outcomes. Her framework for risk appetite conversations moves executives beyond fear, uncertainty and doubt into productive discussions about cyber resilience, changing how organizations think about security investment and business enablement.   Topics discussed: How healthcare data regulations create complex compliance frameworks where companies must selectively forget customer information based on overlapping regulatory requirements. The transferable advantages CIOs bring to CISO roles, particularly in software development lifecycle security and communicating complex technical concepts to non-technical stakeholders. Shifting security strategy from risk prevention to intelligent risk balancing, enabling business growth while maintaining appropriate protection levels. Managing large-scale acquisition security integration through pre-closing requirements that accelerate post-acquisition security improvements. Establishing organizational risk appetite through worst-case scenario planning that moves leadership past emotional responses into rational decision-making frameworks. Developing cyber resilience strategies that assume incident occurrence and focus on recovery speed and impact minimization rather than just prevention. Scaling security controls based on business growth milestones, avoiding upfront overinvestment while ensuring appropriate protection as companies expand. Building consensus-driven risk acceptance frameworks while managing competing perspectives from multiple C-level executives and board members. Key Takeaways:  Implement pre-closing security requirements for acquisitions, shifting security integration 45 days before deal completion to accelerate post-acquisition timelines. Frame risk conversations around worst-case scenario analysis, using real examples and stock performance data to move executives past emotional responses and build resiliency. Develop tiered security controls that scale with business growth, implementing basic protections early and adding complexity as revenue and user bases expand. Position regulatory compliance as a competitive advantage and trust-building mechanism rather than a business constraint. Create "how do we get to yes" frameworks that start with business objectives and work backward to appropriate risk mitigation strategies. Use customer trust metrics and retention data to demonstrate security's direct contribution to business growth and competitive positioning. Leverage software development lifecycle experience to integrate security into engineering processes rather than treating it as an external validation step. Listen to more episodes:  Apple  Spotify  YouTube Website
3 months ago
27 minutes

Future of Threat Intelligence
Digital Asset Redemption's Steve Baer on Criminal Business Models
The economics of ransomware reveal a sophisticated criminal enterprise that most security leaders dramatically underestimate. Steve Baer, Field CISO at Digital Asset Redemption, operates at the intersection of cybercrime and legitimate business, where his team's human intelligence gathering in Dark Web communities provides early warning systems that traditional security infrastructure cannot match. His insights into criminal business models, negotiation psychology, and the financial flows funding modern cybercrime offer a perspective rarely available to security practitioners. Steve walks David through Digital Asset Redemption's evolution from facilitating compliant cryptocurrency payments to building comprehensive threat intelligence capabilities using native speakers who maintain long-term relationships with criminal actors. His team's approach has enabled them to identify targeting intelligence before attacks occur and, in one notable case, leverage personal information about an attacker to secure free decryption keys for a nonprofit organization. Topics discussed: The ransomware-as-a-service ecosystem where criminal affiliates can launch operations for $40-200 monthly subscriptions and achieve 10% success rates, generating millions in revenue. How Dark Web markets extend beyond stolen credentials to include zero-day vulnerabilities starting at $100,000, access broker services targeting specific organizations, and complete compromise kits for enterprise security tools. The organizational structures of criminal enterprises that mirror RICO-era mafia operations through loose affiliations rather than hierarchical control, making traditional law enforcement approaches ineffective. Negotiation psychology and tactics used in ransom discussions, including the business incentives that motivate threat actors to provide working decryption keys and maintain operational reputation. Financial models underlying cybercrime operations, including revenue sharing with affiliate programs, bonus structures for successful targeting, and the necessity of cryptocurrency laundering services. Market indicators for measuring criminal enterprise growth, including quarterly analysis of unique threat actor groups, highest ransom demands, and seasonal patterns in retail-focused attacks. Human intelligence gathering techniques using multiple personas and native language speakers to build long-term relationships within criminal communities for early warning capabilities. The economic realities that enable small criminal teams to generate substantial revenue while operating from countries where attacking American institutions is legally encouraged rather than prosecuted. Why technical compliance frameworks provide insufficient protection against adversaries who can purchase complete compromise capabilities for mainstream security technologies. Key Takeaways:  Implement human intelligence capabilities to complement technical security controls, recognizing that criminal innovation often outpaces defensive technology deployment. Understand the true economics of ransomware operations, where criminal affiliates can achieve substantial returns with minimal upfront investment through established service models. Prepare comprehensive incident response plans that include professional negotiation capabilities, legal frameworks for attorney-client privilege, and understanding of criminal psychology. 
Monitor Dark Web markets not just for credential exposure but for targeting intelligence, access broker activity, and the availability of compromise kits specific to your security stack. Establish relationships with specialized incident response firms before needing them, understanding that ransom negotiations require specific expertise and cannot be effectively handled internally. Focus security education on understanding adversarial capabilities and business models rather than solely on compliance requirements or singular technology solutions. Listen to
3 months ago
24 minutes

Future of Threat Intelligence
McAfee's Manisha Agarwal-Shah on Testing Ransomware Plans Before You Need Them
Most security leaders are fighting yesterday's ransomware war while today's attackers have moved to data exfiltration and reputation destruction. Manisha Agarwal-Shah, Deputy CISO at McAfee, brings 18 years of cybersecurity experience from consulting through AWS to explore why traditional ransomware defenses miss the mark against modern threat actors. Her framework for building security teams prioritizes functional coverage over deep expertise, ensuring organizations can respond to crises even when leadership transitions occur. Manisha tells David how privacy regulations like GDPR actually strengthen security postures rather than create compliance burdens. She also shares practical strategies for communicating technical threats to C-suite executives and explains why deputy CISO roles serve organizational continuity rather than ego management. Her insights into ransomware evolution trace the path from early scareware through encryption-based attacks to today's supply chain infiltration and data theft operations.   Topics discussed: The evolution of ransomware from opportunistic scareware to sophisticated supply chain attacks targeting high-value organizations through trusted vendor relationships. Building security team structures that prioritize functional coverage across cyber operations, GRC, and product security rather than pursuing deep expertise in every domain. The strategic role of deputy CISO positions for organizational continuity and crisis leadership when primary security executives are unavailable or in transition. How privacy regulations like GDPR, HIPAA, and PCI DSS create security baselines that complement rather than conflict with proactive defense strategies. Communicating technical ransomware risks to non-technical executives through business impact frameworks and regular steering committee discussions. AI-driven behavioral anomaly detection capabilities for identifying unusual file encryption patterns and suspicious process activities before damage occurs. Comprehensive ransomware response planning including executive battle cards, offline playbook storage, and tested communication channels for network-down scenarios. The shift from encryption-based ransomware to data exfiltration and reputation damage attacks that bypass traditional backup and recovery strategies. Cloud security posture management implementations for organizations operating in hybrid on-premises and cloud environments. Data retention and minimization strategies that reduce blast radius during security incidents while maintaining regulatory compliance requirements.   Key Takeaways:  Document a comprehensive ransomware response plan that includes executive battle cards for each C-suite role and store it in offline, restricted locations accessible when networks are compromised. Test your ransomware playbook regularly with all key decision makers in simulated scenarios to ensure everyone understands their roles and responsibilities during actual incidents. Build security teams with functional coverage across cyber operations, GRC, and product security rather than pursuing deep expertise in every domain when resources are limited. Establish deputy CISO roles for organizational continuity and crisis leadership, ensuring someone can engage executives and coordinate incident response when primary leadership is unavailable. Communicate technical ransomware threats to non-technical executives through business impact frameworks that translate technical risks into financial and reputational consequences. 
Implement AI-driven behavioral anomaly detection systems that can identify unusual file encryption patterns and suspicious process activities before ransomware damage occurs. Deploy immutable backup solutions as one layer of defense, but recognize they won't protect against data exfiltration and reputation-based ransomware attacks. Leverage privacy regulations like GDPR, HIPAA, and PCI DSS as security baselines that provide data minimization.
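A simplified illustration (not McAfee's detection logic) of the behavioral angle: flag a process that rewrites many files in a short window with near-random, high-entropy content. The thresholds, process names, and telemetry are hypothetical.

```python
import math
import os
from collections import Counter, defaultdict

WINDOW_SECONDS = 60
MAX_WRITES = 50
ENTROPY_THRESHOLD = 7.5            # bits per byte; encrypted data approaches 8.0

def byte_entropy(data):
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

def suspicious_processes(write_events):
    """write_events: iterable of (timestamp, process name, bytes written)."""
    per_process = defaultdict(list)
    for ts, proc, data in write_events:
        per_process[proc].append((ts, byte_entropy(data)))
    flagged = []
    for proc, events in per_process.items():
        events.sort()
        cutoff = events[-1][0] - WINDOW_SECONDS
        recent = [entropy for ts, entropy in events if ts >= cutoff]
        if len(recent) > MAX_WRITES and sum(recent) / len(recent) > ENTROPY_THRESHOLD:
            flagged.append(proc)
    return flagged

# Hypothetical telemetry: one process rewriting many files with random-looking bytes.
events = [(i, "evil.exe", os.urandom(4096)) for i in range(60)]
events += [(i, "winword.exe", b"quarterly report " * 200) for i in range(5)]
print(suspicious_processes(events))            # expected: ['evil.exe']
```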
3 months ago
20 minutes

Future of Threat Intelligence
Team Cymru's Threat Researchers on Operation Endgame Intelligence
Team Cymru's threat researchers have spent years developing an almost psychological understanding of cybercriminals, tracking their behavioral patterns alongside technical infrastructure to predict where attacks will emerge before they happen. Josh and Abigail share with David how their multi-year tracking of Russian cybercrime groups enabled critical contributions to Operation Endgame. Their work demonstrates how sustained intelligence gathering creates opportunities for law enforcement victories that reactive security cannot achieve. Drawing from Josh's eight years at Team Cymru and background in law enforcement national security investigations, and Abigail's specialization in Russian cybercrime tracking, they reveal how NetFlow telemetry provides unprecedented visibility into criminal operations. Their approach goes far beyond traditional indicator-based threat intelligence, focusing instead on understanding the human patterns that drive criminal infrastructure deployment and management. Topics discussed: The evolution of Team Cymru's threat research mission from ad hoc investigations to formalized self-tasking teams. How NetFlow telemetry enables upstream infrastructure mapping that reveals criminal backend systems invisible to traditional security tools. The behavioral analysis techniques that distinguish between different criminal operators based on work schedules, personal browsing habits, and infrastructure access patterns. Why collaboration between private sector researchers and law enforcement requires transparency and trust-building rather than hoarding intelligence behind restrictive sharing classifications. How Operation Endgame demonstrated the effectiveness of combining multiple organizational perspectives on the same threats, with each contributor providing unique visibility into different attack components. The measurement challenges in threat research success when outcomes depend on external decision-makers and sensitive operations may not publicly acknowledge private sector contributions. Why financially motivated threat actors are shifting from mass spray-and-pray campaigns to more targeted, higher-payout operations. How click-fix attacks exploit human psychology by convincing victims to execute malicious commands themselves. The dual-edged impact of AI on cybercrime, lowering barriers to entry for malicious actors while simultaneously enabling more sophisticated social engineering and automation capabilities. Why security awareness training must evolve beyond identifying typos and obvious phishing indicators to address AI-generated content and sophisticated impersonation techniques. Key Takeaways:  Build long-term tracking capabilities that focus on understanding threat actor behavior patterns rather than chasing individual indicators or campaigns. Implement NetFlow telemetry analysis to identify upstream infrastructure connections that reveal criminal backend systems before they're deployed operationally. Develop collaborative relationships with law enforcement and private sector partners based on transparency and shared mission objectives. Create threat research teams with self-tasking authority to focus on societally important threats rather than customer-driven priorities that may miss critical criminal activity. Establish behavioral profiling techniques that distinguish between different criminal operators based on work patterns, personal interests, and infrastructure access methods. 
Invest in sustained intelligence gathering capabilities that track threat actors across multiple campaigns and infrastructure changes over extended periods. Prepare for the increasing sophistication of click-fix attacks by educating users about command execution risks and implementing controls that detect suspicious copy-paste activities. Develop AI-aware security awareness training that addresses deepfake voice calls, sophisticated impersonation techniques, and realistic-looking malicious websites. Build m
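A toy aggregation of the NetFlow idea described above (far simpler than Team Cymru's tooling): if several known C2 hosts all push traffic to the same upstream address, that address is a candidate backend worth tracking. The IPs below are documentation-range placeholders.

```python
from collections import Counter

known_c2 = {"203.0.113.10", "203.0.113.22", "198.51.100.7"}   # hypothetical C2 hosts

# Hypothetical flow records: (source IP, destination IP).
flows = [
    ("203.0.113.10", "192.0.2.50"),
    ("203.0.113.22", "192.0.2.50"),
    ("198.51.100.7", "192.0.2.50"),
    ("203.0.113.10", "8.8.8.8"),
    ("203.0.113.22", "192.0.2.99"),
]

# Destinations that multiple known C2 hosts share are candidate backends.
upstream = Counter(dst for src, dst in flows if src in known_c2)
candidates = [dst for dst, hits in upstream.items() if hits >= 3]
print("candidate backend infrastructure:", candidates)        # ['192.0.2.50']
```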
3 months ago
27 minutes

Future of Threat Intelligence
Lemonade's Jonathan Jaffe on Trading Feedback for Security Technology
Jonathan Jaffe, CISO at Lemonade, has built what he predicts will be "the perfect AI system" using agent orchestration to automate vulnerability management at machine speed, eliminating the developer burden of false positive security alerts. His unconventional approach to security combines lessons learned from practicing law against major tech companies with a systematic strategy for partnering with security startups to access cutting-edge technology years before competitors. Jonathan tells David a story that showcases how even well-intentioned people will exploit systems if they believe they won't get caught or cause harm, which has shaped his approach to insider threat detection and the importance of maintaining skeptical oversight of automated security controls. His team leverages AI agents that automatically analyze GitHub Dependabot vulnerabilities, determine actual exploitability by examining entire code repositories, and either dismiss false positives or generate proof-of-concept explanations for developers. Topics discussed: The evolution from traditional security approaches to AI-powered agent orchestration that operates at machine speed to eliminate false positive vulnerability alerts. Strategic partnerships with security startups as design partners, trading feedback and data for free access to cutting-edge technology while helping shape market-ready products. Policy-based security enforcement for cloud-native environments that prevents the need to manage individual pods, containers, or microservices through automated compliance checks. How legal experience prosecuting tech companies provides unique insights into adversarial thinking and the psychology behind insider threats and system exploitation. Implementation of AI vulnerability management systems that automatically ingest CVEs, analyze code repositories for exploitable methods, and generate proof-of-concept explanations for developers. Risk management strategies for adopting startup technology by starting small in non-impactful areas and gradually building trust through demonstrated value and reliability. Transforming security operations from reactive vulnerability patching to proactive automated threat prevention through intelligent agent-based systems. Key Takeaways:  Implement policy-based security enforcement for cloud environments to automate compliance across all deployments rather than managing individual pods or containers manually. Partner with security startups as design partners by trading feedback data for free access to cutting-edge technology while helping them develop market-ready products. Build AI agent orchestration platforms that automatically ingest GitHub Dependabot CVEs, analyze code repositories for exploitable methods, and dismiss false positive vulnerability alerts. Begin startup technology adoption in low-risk or non-impactful areas to build trust and demonstrate value before expanding to critical security functions. Establish relationships with venture capital communities to gain early access to portfolio companies and emerging security technologies before mainstream adoption. Apply healthy skepticism to security controls by recognizing that even well-intentioned employees may exploit systems if they believe they won't cause harm or get caught. Focus AI development efforts on automating time-intensive security tasks that typically require many days of manual developer work into machine-speed operations. 
Evaluate business risk first before pursuing legal or compliance actions by calculating whether the effort investment justifies potential outcomes and settlements. Listen to more episodes:  Apple  Spotify  YouTube Website
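A deliberately simple sketch of the triage idea Jonathan describes (not Lemonade's agent orchestration): given an advisory that names an affected function, check whether the codebase ever references it; no reference is a strong hint the alert is a false positive for that repository. The package and symbol names are hypothetical, and real exploitability analysis goes much deeper than a text search.

```python
import pathlib

def references(repo_root, symbol, extensions=(".py", ".js", ".ts")):
    """Return source files under repo_root that mention the given symbol."""
    hits = []
    for path in pathlib.Path(repo_root).rglob("*"):
        if path.suffix in extensions and path.is_file():
            try:
                if symbol in path.read_text(errors="ignore"):
                    hits.append(str(path))
            except OSError:
                continue
    return hits

# Hypothetical advisory: package "examplelib", vulnerable entry point "unsafe_load".
advisory = {"package": "examplelib", "vulnerable_symbol": "unsafe_load"}
hits = references(".", advisory["vulnerable_symbol"])
if hits:
    print(f"{advisory['package']}: potentially exploitable, referenced in:", hits[:5])
else:
    print(f"{advisory['package']}: no references found; candidate false positive here")
```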
4 months ago
20 minutes

Future of Threat Intelligence
Digital Turbine's Vivek Menon on AI Acceleration vs Attack Expansion
The security industry's obsession with cutting-edge threats often overshadows a more pressing reality: the vast majority of organizations are still mastering basic AI implementation. Vivek Menon, CISO & Head of Data at Digital Turbine, brings his insights from the RSA expo floor to share why the agentic AI security rush may be premature, while highlighting the genuine opportunities AI presents for resource-constrained security teams. Vivek shares with David how smaller organizations can leverage AI automation to achieve enterprise-level security capabilities without corresponding budget increases. His balanced approach to AI security threats demonstrates why defenders maintain strategic advantages over attackers, despite the expanded attack surface that dominates industry discussions.   Topics discussed: Why the agentic AI security market represents a classic "horse before the cart" scenario, with vendors solving problems for the 1% of enterprises building agents while 99% are still evaluating basic AI adoption. How the rush toward AI agents is forcing long-overdue conversations about non-human identity management, which lacks pace and scale in implementation. The strategic advantage defenders maintain in AI-powered security conflicts, leveraging time-based preparation capabilities while attackers face immediate success requirements with limited development windows. The dual nature of AI security impact, balancing genuine attack surface expansion against significantly enhanced defensive capabilities. Distinguishing between legitimate security innovation and buzzword-driven marketing, focusing on practical implementation readiness over theoretical capability demonstrations. How programmatic advertising technology companies navigate unique security challenges while maintaining operational efficiency in highly automated, data-driven business environments.   Key Takeaways:  Evaluate vendor AI solutions by asking what percentage of your industry actually uses the underlying technology before investing in security tools for emerging threats. Prioritize non-human identity management initiatives now, as the shift toward AI agents will expose existing gaps in identity governance at scale. Leverage AI automation to achieve enterprise-level security capabilities without proportional budget increases, especially for resource-constrained organizations. Adopt AI as a defensive accelerator rather than viewing it primarily as an attack surface expansion problem. Invest time in comprehensive threat protection strategies, capitalizing on defenders' advantage over attackers who must succeed immediately. Assess your organization's AI maturity before implementing agentic AI security solutions, ensuring you're solving actual rather than theoretical problems. Focus security budgets on mainstream technology threats affecting 99% of enterprises rather than cutting-edge solutions for the 1%. Listen to more episodes: Apple  Spotify  YouTube Website
4 months ago
5 minutes

Future of Threat Intelligence
Digital Asset Redemption's Steve Baer on Why Half of Ransomware Victims Shouldn't Pay
Most organizations approach ransomware as a technical problem, but Steve Baer, Field CISO at Digital Asset Redemption, has built his career understanding it as fundamentally human. His team's approach highlights why traditional cybersecurity tools fall short against motivated human adversaries and how proactive intelligence gathering can prevent incidents before they occur. Steve's insights from the ransomware negotiation business challenge conventional wisdom about cyber extortion. Professional negotiators consistently achieve 73-75% reductions in ransom demands through skilled human interaction, while many victims discover their "stolen" data is actually worthless historical information that adversaries misrepresent as current breaches. Digital Asset Redemption's unique position allows them to purchase stolen organizational data on dark markets before public disclosure, effectively preventing incidents rather than merely responding to them. Topics discussed: Building human intelligence networks with speakers of different languages who maintain authentic personas and relationships within dark web adversarial communities. Professional ransomware negotiation techniques that achieve consistent 73-75% reductions in extortion demands through skilled human interaction rather than automated responses. The reality that less than half of ransomware victims require payment, as many attacks involve worthless historical data misrepresented as current breaches. Proactive data acquisition strategies that purchase stolen organizational information on dark markets before public disclosure to prevent incident escalation. Why AI serves as a useful tool for maintaining context and personas but cannot replace human intelligence when countering human adversaries. Key Takeaways:  Investigate data value before paying ransoms — many attacks involve worthless historical information that adversaries misrepresent as current breaches. Engage professional negotiators rather than attempting DIY ransomware negotiations, as specialized expertise consistently achieves 73-75% reductions in demands. Build relationships within the cybersecurity community since the industry remains small and professionals freely share valuable threat intelligence. Deploy human intelligence networks with diverse language capabilities to gather authentic threat intelligence from adversarial communities. Assess AI implementation as a useful tool for maintaining context and personas while recognizing human adversaries require human intelligence to counter. Listen to more episodes:  Apple  Spotify  YouTube Website
4 months ago
7 minutes

Future of Threat Intelligence
Cybermindz’s Mark Alba on Military PTSD Protocols to Treat Security Burnout
The cybersecurity industry has talked extensively about burnout, but Mark Alba, Managing Director of Cybermindz, is taking an unprecedented scientific approach to both measuring and treating it. In this special RSA episode, Mark tells David how his team applies military-grade psychological protocols originally developed for PTSD treatment to address the mental health crisis in security operations centers. Rather than relying on anecdotal evidence of team fatigue, they deploy clinical psychologists to measure resilience through validated psychological assessments and deliver interventions that can literally change how analysts' brains process stress. Mark walks through their use of the iRest Protocol, a 20-year-old treatment methodology from Walter Reed Hospital that shifts brain activity from amygdala-based fight-or-flight responses to prefrontal cortex logical thinking. Their team of five PhDs works directly within enterprise SOCs to establish baseline psychological metrics and track improvement over time, giving security leaders unprecedented visibility into their team's actual capacity to handle high-stress incident response. Topics discussed: Clinical measurement of cybersecurity burnout through validated psychological assessments including the Maslach Burnout Inventory, sleep quality indices, and psychological capital evaluations. Implementation of the iRest Protocol, a military-developed meditative technique used at Walter Reed Hospital for PTSD treatment. Real-time resilience scoring through the Cybermindz Resilience Index that combines sleep quality, psychological capital, burnout indicators, and stress response metrics. Research methodology to establish causation versus correlation between psychological state and SOC performance metrics like mean time to respond and incident response rates. Neuroscience of cybersecurity roles, including how threat intelligence analysts perform optimally at alpha brain wave levels while incident responders need beta wave states. Strategic staff rotation based on psychological state rather than just skillset, moving analysts between different cognitive roles to optimize both performance and mental health. Key Takeaways: Implement clinical burnout measurement using validated tools like the Maslach Burnout Inventory, sleep quality indices, and psychological capital assessments rather than relying on subjective burnout indicators in your SOC operations. Deploy psychometric testing within security operations centers to establish baseline resilience metrics before incidents occur, enabling proactive team management strategies. Establish brainwave optimization protocols by moving threat intelligence analysts to alpha wave states for creative pattern recognition and incident responders to beta wave states for rapid decision-making. Correlate psychological metrics with traditional SOC performance indicators like mean time to respond and incident response rates to identify causation patterns. Rotate staff assignments based on real-time psychological capacity assessments rather than just technical skills, optimizing both performance and mental health outcomes. Measure psychological capital within your security team to understand cognitive capacity for handling high-stress cyber incidents and threat analysis workloads. Establish post-incident psychological protocols using clinical psychology techniques to prevent long-term burnout and retention issues following major security breaches.
Create predictive analytics models that combine resilience scoring with operational metrics to forecast SOC team performance and proactively address capacity issues. Listen to more episodes:  Apple  Spotify  YouTube Website
5 months ago
7 minutes

Future of Threat Intelligence
GigaOm’s Howard Holton on Why AI Will Be the OS of Security Work
The cybersecurity industry has witnessed numerous technology waves, but AI's integration at RSA 2025 signals something different from past hype cycles. Howard Holton, Chief Technology Officer at GigaOm, observed AI adoption across virtually every vendor booth, yet argues this represents genuine transformation rather than superficial marketing. His analyst perspective, backed by GigaOm's practitioner-focused research approach, reveals why AI will become the foundational operating system of security work rather than just another tool in an already crowded stack. Howard's insights challenge conventional thinking about human-machine collaboration in security operations. He explains how natural language understanding finally bridges the gap between human instruction variability and machine execution consistency, solving a problem that has limited automation effectiveness for decades. Howard also explores practical applications where AI handles repetitive security tasks that exhaust human analysts, while humans focus on curiosity-driven investigation and strategic analysis that machines cannot replicate. Topics discussed: The fundamental differences between AI's practical applicability and blockchain's limited use cases, despite similar initial hype cycles and market positioning across cybersecurity vendors. How natural language understanding creates breakthrough human-machine collaboration by allowing AI systems to execute consistent tasks regardless of instruction variability from different analysts. The biological metaphor for human versus machine intelligence, where humans operate as "chaos machines" with independent processes driven by curiosity rather than single-objective optimization. GigaOm's practitioner-focused approach to security maturity modeling that measures actual organizational capability rather than vendor feature adoption or platform configuration levels. Why AI will become the operating system of security work, following the evolution from Microsoft Office to SaaS as foundational business operation layers. The strategic advantage of AI handling hyper-repetitive security processes that traditionally drive human analysts to inefficiency while preserving human focus for curiosity-driven investigation. How enterprise security teams can identify the optimal intersection between AI's computational strengths and human analytical capabilities within their specific organizational contexts and threat landscapes. Key Takeaways:  Evaluate your security maturity models to ensure they measure organizational capability and adaptability rather than vendor feature adoption or platform configuration levels. Identify repetitive security processes that exhaust human analysts and prioritize these for AI automation while preserving human focus for curiosity-driven investigation. Leverage natural language understanding in AI tools to standardize security process execution despite instruction variability from different team members. Audit your current technology stack to distinguish between genuinely applicable AI solutions and superficial AI marketing similar to the blockchain hype cycle. Create practitioner-focused assessment criteria when evaluating security vendors to ensure solutions address real-world enterprise implementation challenges. Develop language-agnostic security procedures that AI systems can interpret consistently regardless of how different analysts explain the same operational requirements. Listen to more episodes:  Apple  Spotify  YouTube Website
5 months ago
8 minutes

Future of Threat Intelligence
Online Business Systems' Jeff Man on PCI 4.0's Impact
The cybersecurity industry has long operated on fear-based selling and vendor promises that rarely align with practical implementation needs. Jeff Man, Sr. Information Security Evangelist at Online Business Systems, brings a pragmatic perspective after years of navigating compliance requirements and advising organizations from Fortune 100 enterprises to small e-commerce operators. His cautious optimism about the industry's current trajectory stems from witnessing a fundamental shift in how vendors understand and communicate compliance requirements, particularly around PCI DSS 4.0's recent implementation.

Jeff's extensive conference speaking experience and hands-on consulting work reveal critical disconnects between security marketing rhetoric and operational reality. His observation that security presentation slides from 1998 remain almost entirely relevant today underscores both the persistence of fundamental security challenges and the industry's slow evolution beyond superficial solutions toward meaningful risk management frameworks.

Topics discussed:

The transformation of vendor compliance conversations from generic marketing responses to specific requirement understanding, particularly around PCI DSS 4.0 implementation strategies.
Why speaking "compliance language" with clients proves more effective than traditional security-focused approaches, as organizations prioritize mandatory requirements over theoretical security improvements.
The reality that 99% of companies fall into small business security categories rather than commonly cited SMB statistics, creating massive gaps between available solutions and actual organizational needs.
Risk prioritization methodologies that focus security investments on the 3% of CVEs actively exploited by attackers rather than attempting to address overwhelming vulnerability backlogs.
The evolution from fear-uncertainty-doubt selling tactics toward informed decision-making frameworks that help organizations understand exactly what security technologies deliver versus marketing promises.
How independent advisory perspectives enable better technology purchasing decisions by providing objective analysis separate from vendor sales motivations and product-specific solutions.
The convergence of threat detection, vulnerability prioritization, and compliance requirements into cohesive risk management strategies that align with business operational realities rather than security team preferences.

Key Takeaways:

Prioritize vendors who demonstrate specific compliance requirement knowledge rather than offering generic "we do compliance" responses, particularly for PCI DSS 4.0 implementation.
Frame security discussions using compliance language with business stakeholders, as regulatory requirements drive action more effectively than theoretical security benefits.
Focus vulnerability management efforts on the approximately 3% of CVEs that attackers actively exploit rather than attempting to address entire vulnerability backlogs (a minimal prioritization sketch follows this list).
Recognize that 99% of organizations operate with small business security constraints and require solutions scaled appropriately rather than enterprise-grade implementations.
Seek independent security advisory perspectives separate from vendor sales processes to make informed technology purchasing decisions based on actual needs versus marketing promises.
Evaluate security investments through risk prioritization frameworks that align with business operations rather than pursuing comprehensive security controls beyond organizational capabilities.
Leverage the convergence of compliance requirements, threat intelligence, and vulnerability management to create cohesive risk management strategies rather than implementing disparate security tools.
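To make the "focus on actively exploited CVEs" takeaway concrete, here is a minimal Python sketch of one way to split a vulnerability backlog into "fix now" and "schedule later" buckets. It assumes the CISA Known Exploited Vulnerabilities (KEV) feed as the exploitation source; the episode does not prescribe this feed or any particular tooling, and the backlog contents below are hypothetical.

import json
import urllib.request

# Assumption: the CISA KEV JSON feed is used as the "actively exploited" source;
# any curated exploitation feed could be swapped in here.
KEV_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)

def load_exploited_cves(url: str = KEV_URL) -> set[str]:
    """Return the set of CVE IDs listed in the KEV catalog."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        catalog = json.load(resp)
    # "vulnerabilities" / "cveID" reflect the published KEV schema at time of writing.
    return {item["cveID"] for item in catalog.get("vulnerabilities", [])}

def prioritize(backlog: list[str]) -> tuple[list[str], list[str]]:
    """Split an organization's open CVE list into urgent and deferred work."""
    exploited = load_exploited_cves()
    fix_now = [cve for cve in backlog if cve in exploited]
    later = [cve for cve in backlog if cve not in exploited]
    return fix_now, later

if __name__ == "__main__":
    # Hypothetical backlog exported from a vulnerability scanner.
    open_findings = ["CVE-2021-44228", "CVE-2023-12345", "CVE-2017-0144"]
    urgent, deferred = prioritize(open_findings)
    print(f"Actively exploited ({len(urgent)}): {urgent}")
    print(f"Risk-rank the rest ({len(deferred)}): {deferred}")

The point of the sketch is the workflow, not the feed: whatever exploitation intelligence you trust, intersecting it with your open findings is a far smaller and more defensible work queue than the full backlog Jeff describes.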
5 months ago
8 minutes

Future of Threat Intelligence
Trellix's John Fokker on Why Ransomware Groups Are Fragmenting
The criminal underground is experiencing its own version of startup disruption, with massive ransomware-as-a-service operations fragmenting into smaller, more agile groups that operate like independent businesses. John Fokker, Head of Threat Intelligence at Trellix, brings unique insights from monitoring hundreds of millions of global sensors, revealing how defenders' success in EDR detection is paradoxically driving criminals toward more profitable attack models. His team's systematic tracking of AI adoption in criminal networks provides a fascinating parallel to legitimate business transformation, showing how threat actors are methodically testing and scaling new technologies just like any other industry.

Drawing from Trellix's latest Global Threat Report, John tells David why the headlines focus on major enterprise breaches while the real action happens in the profitable mid-market, where companies have extractable revenue but often lack enterprise-level security budgets. This conversation offers rare visibility into how macro trends like AI adoption and improved defensive capabilities are reshaping criminal business models in real time.

Topics discussed:

The systematic fragmentation of large ransomware-as-a-service operations into independent criminal enterprises, each focusing on specialized capabilities rather than maintaining complex hierarchical structures.
How improved EDR detection capabilities are driving a strategic shift from encryption-based ransomware attacks toward data exfiltration and extortion as a more reliable revenue model.
The economic targeting patterns that focus on profitable mid-market companies with decent revenue streams but potentially limited security budgets, rather than the headline-grabbing major enterprise victims.
Criminal adoption patterns of AI technologies that mirror legitimate business transformation, with systematic testing and gradual scaling as capabilities prove valuable.
The emergence of EDR evasion tools as a growing criminal service market, driven by the success of endpoint detection and response technologies in preventing traditional attacks.
Why building trust in autonomous security systems faces similar challenges to autonomous vehicles, requiring proven track records and reduced false positives before organizations will release human oversight.
The strategic use of global sensor networks combined with public intelligence to map evolving attack patterns and identify blind spots in organizational threat detection capabilities.
How entropy-based detection methods at the file and block level can identify encryption activities that indicate potential ransomware attacks in progress.
The evolution from structured criminal hierarchies with complete in-house kill chains to distributed networks of specialized service providers and independent operators.

Key Takeaways:

Monitor entropy changes in files and block-level data compression rates as early indicators of ransomware encryption activities before full system compromise occurs (see the entropy sketch after this list).
Prioritize EDR and XDR deployment investments to force threat actors away from encryption-based attacks toward less reliable data exfiltration methods.
Focus threat intelligence gathering on fragmented criminal groups rather than solely tracking large ransomware-as-a-service operations that are splintering into independent cells.
Implement graduated trust models for AI-powered security automation, starting with low-risk tasks and expanding autonomy as false positive rates decrease over time.
Combine internal sensor data with public threat intelligence reports to identify blind spots and validate detection capabilities across multiple threat vectors.
Develop specialized defense strategies for mid-market organizations that balance cost-effectiveness with protection against targeted criminal business models.
Track AI adoption patterns in criminal networks using the same systematic approach businesses use for technology transformation initiatives.
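As a rough illustration of the entropy-based detection idea John mentions, the Python sketch below computes Shannon entropy per fixed-size block of a file and flags blocks that look uniformly random, which is typical of encrypted or well-compressed data. The block size, threshold, and file path are illustrative assumptions rather than values from the episode; a production detector would baseline per file type and watch for sudden fleet-wide changes rather than run as a one-off script.

import math
from collections import Counter
from pathlib import Path

BLOCK_SIZE = 4096        # assumed block size; tune for your storage layout
ENTROPY_THRESHOLD = 7.5  # bits per byte; values near 8.0 suggest encrypted/compressed data

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte for a single block."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def suspicious_blocks(path: Path) -> list[tuple[int, float]]:
    """Return (block_index, entropy) pairs above the threshold for one file."""
    flagged = []
    with path.open("rb") as fh:
        index = 0
        while block := fh.read(BLOCK_SIZE):
            entropy = shannon_entropy(block)
            if entropy >= ENTROPY_THRESHOLD:
                flagged.append((index, entropy))
            index += 1
    return flagged

if __name__ == "__main__":
    # Hypothetical path; a real monitor would watch recently modified files.
    target = Path("invoice.docx")
    if target.exists():
        hits = suspicious_blocks(target)
        print(f"{len(hits)} high-entropy blocks in {target.name}")

The useful signal is usually the trend, not a single file: a sharp jump in the share of high-entropy blocks across many recently modified files in a short window is the kind of in-progress encryption activity the episode describes.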
5 months ago
10 minutes

Future of Threat Intelligence
Frost & Sullivan's Martin Naydenov on AI's Cybersecurity Trust Gap
In this special RSA episode of Future of Threat Intelligence, Martin Naydenov, Industry Principal of Cybersecurity at Frost & Sullivan, offers a sobering perspective on the disconnect between AI marketing and implementation. While the expo floor buzzes with "AI-enabled" security solutions, Martin cautions that many security teams remain reluctant to use these features in their daily operations due to fundamental trust issues. This trust gap becomes particularly concerning when contrasted with how rapidly threat actors have embraced AI to scale their attacks.

Martin walks David through the current state of AI in cybersecurity, from the vendor marketing rush to the practical challenges of implementation. As an analyst who regularly uses AI tools, he provides a balanced view of their capabilities and limitations, emphasizing the need for critical evaluation rather than blind trust. He also demonstrates how easily AI can be leveraged for malicious purposes, creating a pressing need for security teams to overcome their hesitation and develop effective counter-strategies.

Topics discussed:

The disconnect between AI marketing hype at RSA and the practical implementation challenges facing security teams in real-world environments.
Why security professionals remain hesitant to trust AI features in their tools, despite vendors rapidly incorporating them into security solutions.
The critical need for vendors to not just develop AI capabilities but to build trust frameworks that convince security teams their AI can be relied upon.
How AI is dramatically lowering the barrier to entry for threat actors by enabling non-technical individuals to create convincing phishing campaigns and malicious scripts.
The evolution of phishing from obvious "Nigerian prince" scams with typos to contextually accurate, perfectly crafted messages that can fool even security-aware users.
The disproportionate adoption rates between defensive and offensive AI applications, creating a potential advantage for attackers.
How security analysts are currently using AI as assistance tools while maintaining critical oversight of the information they provide.
The emerging capability for threat actors to build complete personas using AI-generated content, deepfakes, and social media scraping for highly targeted attacks.

Key Takeaways:

Implement verification protocols for AI-generated security insights to balance automation benefits with necessary human oversight in your security operations.
Establish clear trust boundaries for AI tools by understanding their data sources, decision points, and potential limitations before deploying them in critical security workflows.
Develop AI literacy training for security teams to help analysts distinguish between reliable AI outputs and potential hallucinations or inaccuracies.
Evaluate your current security stack for unused AI features and determine whether trust issues or training gaps are preventing their adoption.
Create AI-resistant authentication protocols that can withstand the sophisticated phishing attempts now possible with language models and deepfake technology.
Monitor adversarial AI capabilities by testing your own defenses against AI-generated attack scenarios to identify potential vulnerabilities.
Integrate AI tools gradually into security operations, starting with low-risk use cases to build team confidence and establish trust verification processes.
Prioritize vendor solutions that provide transparency into their AI models' decision-making processes rather than black-box implementations.
Establish metrics to quantify AI effectiveness in your security operations, measuring both performance improvements and false positive/negative rates (a short metrics sketch follows this list).
Design security awareness training that specifically addresses AI-enhanced social engineering techniques targeting your organization.
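The metrics takeaway above can be started with nothing more than a labeled sample of AI-assisted triage decisions. The Python sketch below computes precision, recall, and false positive rate from analyst-confirmed outcomes; the record structure and field names are assumptions for illustration, not a format Martin describes in the episode.

from dataclasses import dataclass

@dataclass
class TriageRecord:
    ai_flagged_malicious: bool   # what the AI feature reported
    analyst_confirmed: bool      # ground truth after human review

def effectiveness(records: list[TriageRecord]) -> dict[str, float]:
    """Basic quality metrics for an AI-assisted detection or triage feature."""
    tp = sum(r.ai_flagged_malicious and r.analyst_confirmed for r in records)
    fp = sum(r.ai_flagged_malicious and not r.analyst_confirmed for r in records)
    fn = sum(not r.ai_flagged_malicious and r.analyst_confirmed for r in records)
    tn = sum(not r.ai_flagged_malicious and not r.analyst_confirmed for r in records)
    return {
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }

if __name__ == "__main__":
    sample = [
        TriageRecord(True, True), TriageRecord(True, False),
        TriageRecord(False, False), TriageRecord(False, True),
    ]
    print(effectiveness(sample))

Tracking these numbers over time, rather than once, is what turns "we don't trust the AI feature" into an evidence-based decision about when to widen or narrow its autonomy.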
5 months ago
6 minutes

Future of Threat Intelligence
Unspoken Security’s AJ Nash on Protecting Against AI Model Poisoning
In our latest episode of The Future of Threat Intelligence, recorded at RSA Conference 2025, AJ Nash, Founder & CEO, Unspoken Security, provides a sobering assessment of AI's transformation of cybersecurity. Rather than focusing solely on hype, AJ examines the double-edged nature of AI adoption: how it simultaneously empowers defenders while dramatically lowering barriers to entry for sophisticated attacks. His warnings about entering a "post-knowledge world" where humans lose critical skills and adversaries can poison trusted AI systems offer a compelling counterbalance to the technology's promise.

AJ draws parallels to previous technology trends like blockchain that experienced similar hype cycles before stabilizing, but notes that AI's accessibility and widespread applicability make it more likely to have lasting impact. He predicts that the next frontier in security will be AI integrity verification: building systems and organizations dedicated to ensuring that the AI models we increasingly depend on remain trustworthy and resistant to manipulation.

Throughout the conversation, AJ emphasizes that while AI will continue to evolve and integrate into our security operations, maintaining human oversight and preserving our knowledge base remains essential.

Topics discussed:

The evolution of the RSA Conference and how industry focus has shifted through cycles from endpoints to threat intelligence to blockchain and now to AI, with a particularly strong emphasis on agentic AI.
The double-edged impact of AI on workforce dynamics, balancing the potential for enhanced productivity against concerns that companies may prioritize cost-cutting by replacing junior positions, potentially eliminating career development pipelines.
The risk of AI-washing, similar to how "intelligence" became a diluted buzzword, with companies claiming AI capabilities without substantive implementation, necessitating deeper verification, and even challenging, of vendors' actual technologies.
The emergence of a potential "post-knowledge world" where overreliance on AI systems for summarization and information processing erodes human knowledge of nuance and detail.
The critical need for AI integrity verification systems as adversaries shift focus to poisoning models that organizations increasingly depend on, creating new attack surfaces that require specialized oversight.
Challenges to intellectual property protection as AI systems scrape and incorporate existing content, raising questions about copyright enforcement and ownership in an era where AI-generated work is derivative by nature.
The importance of maintaining human oversight in AI-driven security systems through transparent automation workflows, comprehensive understanding of decision points, and regular verification of system outputs.
The parallels between previous technology hype cycles like blockchain and current AI enthusiasm, with the distinction that AI's accessibility and practical applications make it more likely to persist as a transformative technology.

Key Takeaways:

Challenge AI vendors to demonstrate their systems transparently by requesting detailed workflow explanations and documentation rather than accepting marketing claims at face value.
Implement a "trust but verify" approach to AI systems by establishing human verification checkpoints within automated security workflows to prevent over-reliance on potentially flawed automation.
Upskill your technical teams in AI fundamentals to maintain critical thinking abilities that help them understand the limitations and potential vulnerabilities of automated systems.
Develop comprehensive AI governance frameworks that address potential model poisoning attacks by establishing regular oversight and integrity verification mechanisms (a minimal artifact-verification sketch follows this list).
Establish cross-organizational collaborations with industry partners to create trusted AI verification authorities that can audit and certify model integrity across the security ecosystem.
Docum
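One narrow, practical slice of the AI integrity verification AJ describes is proving that a deployed model artifact has not been swapped or tampered with between training and serving. The Python sketch below hashes model files against a pinned manifest; the manifest format and file names are assumptions, and this addresses artifact tampering only, not poisoning introduced earlier in the training data pipeline.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large model weights never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> list[str]:
    """Compare on-disk model files against a pinned manifest of expected hashes.

    Assumed manifest format: {"model.safetensors": "<hex digest>", ...}
    """
    expected = json.loads(manifest_path.read_text())
    failures = []
    for name, pinned in expected.items():
        artifact = manifest_path.parent / name
        if not artifact.exists() or sha256_of(artifact) != pinned:
            failures.append(name)
    return failures

if __name__ == "__main__":
    # Hypothetical manifest produced (and ideally signed) at training time.
    bad = verify_artifacts(Path("models/manifest.json"))
    print("All artifacts verified" if not bad else f"Tampered or missing: {bad}")

Pinning and signing the manifest at training time, then re-verifying at deployment and on a schedule, is the kind of routine oversight mechanism the governance takeaway points toward; the harder problem of validating what the model learned still requires the verification authorities AJ anticipates.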
5 months ago
15 minutes
