The Boring AppSec Podcast
28 episodes
5 days ago
In this podcast, we will talk about our experiences having worked at different companies - from startups to big enterprises, from tech companies to security companies, and from building side projects to building startups. We will talk about the good, the bad, and everything in between. So join us for some fun, some real, and some super hot takes about all things Security in the Boring AppSec Podcast.
Technology
Episodes (20/28)
The Boring AppSec Podcast
The Attacker's Perspective on AI Security with Aryaman Behera

In this episode, hosts Sandesh and Anshuman chat with Aryaman Behera, the Co-Founder and CEO of Repello AI. Aryaman shares his unique journey from being a bug bounty hunter and the captain of India's top-ranked CTF team, InfoSec IITR, to becoming the CEO of an AI security startup. The discussion offers a deep dive into the attacker-centric mindset required to secure modern AI applications, which are fundamentally probabilistic and differ greatly from traditional deterministic software. Aryaman explains the technical details behind Repello's platform, which combines automated red teaming (Artemis) with adaptive guardrails (Argus) to create a continuous security feedback loop. The conversation explores the nuanced differences between AI safety and security, the critical role of threat modeling for agentic workflows, and the complex challenges of responsible disclosure for non-deterministic vulnerabilities.


Key Takeaways

- From Hacker to CEO: Aryaman discusses the transition from an attacker's mindset, focused on quick exploits, to a CEO's mindset, which requires patience and long-term relationship building with customers.


- A New Kind of Threat: AI applications introduce a new attack surface built on prompts, knowledge bases, and probabilistic models, which increases the blast radius of potential security breaches compared to traditional software.


- Automated Red Teaming and Defense: Repello's platform consists of two core products: Artemis, an offensive AI red teaming platform that discovers failure modes, and Argus, a defensive guardrail system. The platforms create a continuous feedback loop where vulnerabilities found by Artemis are used to calibrate and create policies for Argus.


- Threat Modeling for AI Agents: For complex agentic systems, a black-box approach is often insufficient. Repello uses a gray-box method where a tool called AgentWiz helps customers generate a threat model based on the agent's workflow and capabilities, without needing access to the source code.


- The Challenge of Non-Deterministic Vulnerabilities: Unlike traditional software vulnerabilities which are deterministic, AI exploits are probabilistic. An attack like a system prompt leak only needs to succeed once to be effective, even if it fails nine out of ten times.


- The Future of Attacks is Multimodal: Aryaman predicts that as AI applications evolve, major new attack vectors will emerge from new interfaces like voice and image, as their larger latent space offers more opportunities for malicious embeddings.
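The probabilistic-exploit point above is worth a quick back-of-the-envelope calculation (ours, not from the episode): even a low per-attempt success rate compounds into near-certain success over repeated tries.

```python
# Probability that a probabilistic exploit (e.g. a system prompt leak)
# lands at least once, given a per-attempt success rate and a number of tries.
def p_at_least_one_success(p_per_attempt: float, attempts: int) -> float:
    return 1 - (1 - p_per_attempt) ** attempts

# An attack that fails nine times out of ten still succeeds about 65% of
# the time across just ten attempts, and is near-certain across a hundred.
print(round(p_at_least_one_success(0.1, 10), 3))   # 0.651
print(round(p_at_least_one_success(0.1, 100), 5))  # 0.99997
```

This is why defenders of AI systems have to reason about attack rates and rate limits, not single attempts.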


Tune in for a deep dive!


Contacting Aryaman

* LinkedIn: https://www.linkedin.com/in/aryaman-behera/

* Company Website: https://repello.ai/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

1 month ago
48 minutes 15 seconds

From Toil to Intelligence: Brad Geesaman on the Future of AppSec with AI Agents

In this episode, host Anshuman Bhartiya sits down with Brad Geesaman, a Google Cloud Certified Fellow and Principal Security Engineer at Ghost Security, to explore the cutting edge of Application Security. With 22 years in the industry, Brad shares his journey and discusses how his team is leveraging agentic AI and Large Language Models (LLMs) to tackle some of the oldest challenges in AppSec, aiming to shift security from a reactive chore to a proactive, intelligent function. The conversation delves into practical strategies for reducing the "toil" of security tasks, the challenges of working with non-deterministic LLMs, the critical role of context in security testing, and the essential skills the next generation of security engineers must cultivate to succeed in an AI-driven world.


Key Takeaways

- Reducing AppSec Toil: The primary focus of using AI in AppSec is to reduce repetitive tasks (toil) and surface meaningful risks. With AppSec engineers often outnumbered 100 to 1 by developers, AI can help manage the immense volume of work by automating the process of gathering context and assessing risk for findings from SCA, SAST, and secrets scanning.


- Making LLMs More Deterministic: To achieve consistent and high-quality results from non-deterministic LLMs, the key is to use them "as sparingly as possible". Instead of having an LLM manage an entire workflow, break the problem into smaller pieces, use traditional code for deterministic steps, and reserve the LLM for specific tasks like classification or validation where its strengths are best utilized.


- The Importance of Evals: Continuous and rigorous evaluations ("evals") are crucial to maintaining quality and consistency in an LLM-powered system. By running a representative dataset against the system every time a change is made—even a small prompt modification—teams can measure the impact and ensure the system's output remains within desired quality boundaries.


- Context is Key (CAST): Ghost Security is pioneering Contextual Application Security Testing (CAST), an approach that flips traditional scanning on its head. Instead of finding a pattern and then searching for context, CAST first builds a deep understanding of the application by mapping out call paths, endpoints, authentication, and data handling, and then uses that rich context to ask targeted security questions and run specialized agents.


- Prototyping with Frontier vs. Local Models: The typical workflow for prototyping is to first use a powerful frontier model to quickly prove a concept's value. Once validated, the focus shifts to exploring if the same task can be accomplished with smaller, local models to address cost, privacy, and data governance concerns.


- The Future Skill for AppSec Engineers: Beyond familiarity with LLMs, the most important skill for the next generation of AppSec engineers is the ability to think in terms of scalable, interoperable systems. The future lies in creating systems that can share context and work together—not just within the AppSec team, but across the entire security organization and with development teams—to build a more cohesive and effective security posture.


Tune in for a deep dive into the future of AppSec with AI and AI Agents!
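Brad's points about using LLMs sparingly and gating changes with evals can be sketched as a tiny regression harness. This is our illustration under stated assumptions, not Ghost Security's tooling: `classify_finding` is a stub standing in for a narrowly scoped LLM call, and the dataset and threshold are made up.

```python
# Minimal eval-harness sketch: after every change (even a small prompt edit),
# rerun a fixed, labeled dataset through the LLM-backed step and gate on an
# accuracy threshold. `classify_finding` stands in for the real LLM call.

EVAL_SET = [
    {"finding": "hardcoded AWS key in repo", "label": "true_positive"},
    {"finding": "AES usage flagged by SAST rule", "label": "false_positive"},
    {"finding": "jwt secret in test fixture", "label": "false_positive"},
]

def classify_finding(finding: str) -> str:
    # Stub for the LLM classification step; a real system would call a model
    # here, keeping everything around it as deterministic code.
    return "true_positive" if "key in repo" in finding else "false_positive"

def run_evals(threshold: float = 0.9) -> float:
    correct = sum(classify_finding(c["finding"]) == c["label"] for c in EVAL_SET)
    accuracy = correct / len(EVAL_SET)
    if accuracy < threshold:
        raise SystemExit(f"eval gate failed: accuracy {accuracy:.2f} < {threshold}")
    return accuracy

print(run_evals())  # 1.0 on this toy dataset
```

The design choice mirrors the episode: traditional code owns the workflow and the measurement, while the LLM is confined to one classification step whose quality is continuously checked.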


Contacting Brad

* LinkedIn: https://www.linkedin.com/in/bradgeesaman/

* Company Website: https://ghostsecurity.com/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

2 months ago
51 minutes 54 seconds

The Future of Autonomous Red Teaming with Ads Dawson

In this episode, we talk to Ads Dawson (Staff AI Security Researcher @ Dreadnode).


We discuss the evolving landscape of offensive security in the age of AI. The conversation covers the practical application of AI agents in red teaming, a critical look at industry standards like the OWASP Top 10 for LLMs, and Ads' hands-on approach to building and evaluating autonomous hacking tools. He shares insights from his work industrializing offensive security with AI, his journey as a self-taught professional, and offers advice for others looking to grow in the field.


Key Takeaways

- AI is a "Force Multiplier," Not a Replacement: Ads emphasizes that AI should be viewed as a productivity tool that enhances the capabilities of human security professionals, allowing them to scale their efforts and tackle more complex tasks. Human expertise remains critical, especially since much of the data used to train AI models originates from human researchers.

- Prompt Injection is a Mechanism, Not a Vulnerability: A key insight is that "prompt injection" itself isn't a vulnerability but a method used to deliver an exploit. The discussion highlights a broader critique of security frameworks like the OWASP Top 10, which can sometimes oversimplify complex issues and become compliance checklists rather than practical guides.

- Build Offensive Agents with Small, Focused Tasks: When creating offensive AI agents, the most successful approach is to break down the overall objective into small, concise sub-tasks. For example, instead of a single goal to "find XSS," an agent would have separate tasks to log in, identify input fields, and then test those inputs.

- Hands-On Learning and Community are Crucial for Growth: As a self-taught professional, Ads advocates for getting deeply involved in the security community through meetups and CTFs. He stresses the importance of hands-on practice—"just play with it"—and curating your information feed by following trusted researchers to cut through the noise and continuously learn.


Tune in for a deep dive into the future of security and the innovative approaches shaping the industry!


Contacting Ads

* Ads' LinkedIn: https://www.linkedin.com/in/adamdawson0/

* Ads' website: https://ganggreentempertatum.github.io/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

2 months ago
53 minutes 51 seconds

Navigating AI's New Security Landscape with Vineeth Sai

In this episode, we talk to Vineeth Sai Narajala (Senior Security Engineer @ Meta).

We discuss the evolving landscape of AI security, focusing on the Model Context Protocol (MCP), Enhanced Tool Definition Interface (ETDI), and the AI Vulnerability Scoring System (AIVSS). We explore the challenges of integrating AI into existing systems, the importance of identity management for AI agents, and the need for standardized security practices. The discussion emphasizes the necessity of adapting security measures to the unique risks posed by generative AI and the collaborative efforts required to establish effective protocols.


Key Takeaways

- MCP simplifies AI integration but raises security concerns.

- Identity management is crucial for AI agents.

- ETDI addresses specific vulnerabilities in AI tools.

- AIVSS aims to standardize AI vulnerability assessments.

- Developers should start with minimal permissions for AI.

- Trust in the agent ecosystem is vital for security.

- Collaboration is key to developing effective security protocols.

- Security fundamentals still apply in AI integration.

Tune in for a deep dive into the future of security and the innovative approaches shaping the industry!


Contacting Vineeth

* Vineeth's LinkedIn: https://www.linkedin.com/in/vineethsai/

* Vineeth's website: https://vineethsai.com/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

2 months ago
51 minutes 15 seconds

Agentic AI: Transforming Vulnerability Management with Harry Wetherald

In this episode, we talk to Harry Wetherald (Co-Founder and CEO @ Maze). We explore the evolving landscape of vulnerability management. Harry shares insights from his journey in AI and machine learning, discussing the challenges of triaging vulnerabilities across diverse organizations. The conversation delves into the concept of agentic AI, the importance of context engineering, and the hurdles of achieving enterprise-grade reliability in AI systems. Harry also reflects on the inflection points that led to the founding of Maze and the role of LLMs in transforming security practices.


Key Takeaways


- Introduction to Maze and Harry's Journey: Harry shares his background in AI and machine learning, emphasizing the persistent challenges in vulnerability management and the founding of Maze to address these issues.

- Agentic AI and Context Engineering: The discussion highlights the shift from static rules to agentic AI, where AI agents autonomously investigate vulnerabilities, and the critical role of context engineering in tailoring solutions to specific organizational needs.

- Challenges in AI Reliability: Harry talks about the engineering hurdles in making AI systems reliable and consistent, focusing on the importance of tight reasoning loops and human-AI symbiosis.

- Pricing Strategies: In AI-native security solutions, clear and fixed pricing is preferred, as it simplifies budgeting and aligns with traditional models, while vendors should manage costs to ensure predictability for customers.

- Future of Security with AI: The conversation concludes with insights into the future of security, where AI agents work in the background to provide innovative solutions, and the importance of human feedback in refining AI systems.


Tune in for a deep dive into the future of security and the innovative approaches shaping the industry!


Contacting Harry

* Harry's LinkedIn: https://www.linkedin.com/in/harrywetherald/

* Maze: https://mazehq.com/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

3 months ago
48 minutes 17 seconds

Surag Patel and Arshan Dabirsiaghi

In this episode, we talk to Surag Patel (CEO @ Pixee) and Arshan Dabirsiaghi (CTO @ Pixee). We discuss the transformative approach that Pixee is taking in application security. We explore the shift from traditional security tools that merely detect vulnerabilities to a model that emphasizes automated remediation.

The discussion covers the evolving role of AppSec professionals, the integration of AI agents to scale coverage, the importance of trust in automated fixes, and the challenges of navigating a crowded security market.

We also touch on the future of security in design specifications and the need for a comprehensive approach to security that includes all stakeholders in the software development lifecycle.


Key Takeaways

- The traditional model of security tools is being challenged.

- Pixee aims to automate not just detection but also remediation.

- AI agents can help scale coverage in application security.

- The role of AppSec professionals will evolve with AI integration.

- Trust is crucial for developers to accept automated fixes.

- Developers want tools that reduce their workload, not add to it.

- Contextual understanding is key for accurate vulnerability triage.

- The security market is not saturated; there are still many unsolved problems.

- Integrating security into design specifications is the future.

- A comprehensive approach to security is necessary for effective risk management.


Tune in to find out more!


Contacting Surag & Arshan

* Surag's LinkedIn: https://www.linkedin.com/in/suragpatel/

* Arshan's LinkedIn: https://www.linkedin.com/in/arshan-dabirsiaghi/

* Pixee: https://www.pixee.ai/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

3 months ago
56 minutes 37 seconds

Ken Johnson

In this episode, we talk to Ken Johnson, Co-Founder & CTO @ DryRun Security. Ken discusses the evolution of application security, focusing on the role of AI and LLMs in enhancing security practices. He emphasizes the importance of context engineering over traditional prompt engineering, the challenges of consistency and repeatability in LLM outputs, and the ethical considerations surrounding AI in security. The discussion also highlights the need for orchestration in AI applications and the future potential of AI in the security landscape.


Key Takeaways

- DryRun Security utilizes AI to enhance code security.

- Context engineering is crucial for effective AI applications.

- LLMs can augment security practices but require careful orchestration.

- Consistency in LLM outputs is a significant challenge.

- Ethical considerations in AI are becoming increasingly important.

- Finding the right balance in using LLMs is essential.

- Community collaboration is vital for advancing AI solutions.

- Orchestration is a key factor in AI performance.

- AI will not replace jobs but will change how we work.


Tune in to find out more!


Contacting Ken

* LinkedIn: https://www.linkedin.com/in/cktricky/

* DryRun Security: https://www.dryrun.security/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

3 months ago
54 minutes 36 seconds

Casey Ellis

In this episode, we talk to Casey Ellis, Founder & Advisor @Bugcrowd.


Casey shares his personal journey through health challenges and his insights into the cybersecurity landscape. He discusses the evolution of the bug bounty industry, the importance of secure design, and the role of AI in both enhancing and complicating security measures. Casey emphasizes the need for accountability and the potential of crowdsourcing in security, while also addressing the challenges of implementing effective standards. The conversation concludes with reflections on the future of AI in security and the necessity for focused problem-solving in the industry.


Key Takeaways

- The bug bounty industry has transformed lives and created new opportunities.

- Founding a company involves learning from both successes and failures.

- The cybersecurity industry often focuses on quick wins rather than fundamental problems.

- Secure by design is essential for addressing root causes of vulnerabilities.

- Crowdsourcing can enhance accountability in security practices.

- Standards like ASVS are important but can be complex to implement.

- AI is both a tool and a threat in the cybersecurity landscape.

- Focusing on specific problems is key to leveraging AI effectively.


Tune in to find out more! 


Contacting Casey

* LinkedIn: https://www.linkedin.com/in/caseyjohnellis/

* Bugcrowd: https://www.bugcrowd.com/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

4 months ago
54 minutes 8 seconds

S2E10 - Vivek Ramachandran

In Season 2 Episode 10, we talk to Vivek Ramachandran, Founder @SquareXTeam.


In this episode, Vivek shares his journey in cybersecurity, discussing the evolution of content creation, the importance of building for a global audience, and navigating the Indian cybersecurity market. He emphasizes the need for browser security, the challenges of local markets, and the significance of personal relationships in business. In this conversation, Vivek Ramachandran shares insights on the challenges faced by founders, particularly in breaking into the U.S. market. He emphasizes the importance of building a strong advisor network and engaging in technical conversations. The discussion also delves into the evolving landscape of cybersecurity, highlighting the impact of AI on both attackers and defenders. Vivek offers valuable advice for new startup founders, stressing the need for patience, understanding the responsibilities of fundraising, and focusing on fundamental skills.


Key Takeaways


- The browser is now considered the new endpoint for security.

- Pentester Academy was born out of a need to share knowledge.

- Content creation has evolved significantly over the years. Today's audience prefers bite-sized, impactful content.

- Founders should think globally from the start.

- Cybersecurity in India is often driven by compliance rather than necessity.

- Technical founders must adapt to market needs and customer relationships.

- Design partnerships can help startups gain traction in local markets.

- Founders often give up after a few rejections.

- Building an advisor network is essential for success.

- AI is changing the dynamics of cybersecurity.

- Raising funds is a responsibility, not a success metric.

- Focus on fundamentals to stay relevant in tech.

- Learning by doing is becoming too easy with AI.

- Engage with your target market to build credibility.


Tune in to find out more! 


Contacting Vivek

* LinkedIn: https://www.linkedin.com/in/vivekramachandran/

* SquareX: https://www.sqrx.com/ 


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

8 months ago
47 minutes 12 seconds

S2E9 - Ali Mesdaq

In Season 2 Episode 9, we talk to Ali Mesdaq, Founder & CEO @ Amplify Security.


We discuss the evolution of security tools, the importance of customer validation, and the role of AI agents in enhancing security practices. Ali shares insights on building a positive security culture within organizations and how Amplify Security differentiates itself in a competitive market. The conversation emphasizes the need for collaboration between security and development teams, the challenges of addressing known and unknown vulnerabilities, and the future of AI in cybersecurity.


Key Takeaways


- Amplify helps coders secure their code effectively.

- Customer validation is crucial for startup confidence.

- Security tools should enhance developer experience.

- AI agents can automate security fixes intelligently.

- Contextual understanding is vital for security solutions.

- Developers should approve code changes for security fixes.

- A positive security culture fosters collaboration.

- AI can help prioritize and manage vulnerabilities.

- The future of security involves AI-driven solutions.

- Security issues must be addressed in a timely manner.


Tune in to find out more! 


Contacting Ali

* LinkedIn: https://www.linkedin.com/in/amesdaq/

* Amplify Security: https://amplify.security/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

8 months ago
44 minutes 27 seconds

S2E8 - Ankita Gupta

In Season 2 Episode 8, we talk to Ankita Gupta, Co-Founder & CEO @ Akto.io


Ankita shares her unique journey into the cybersecurity space, discussing her diverse background and the inception of her API security company. She emphasizes the importance of understanding customer needs, the role of co-founders in a startup's success, and the surprising maturity of buyers in the cybersecurity industry. Ankita also delves into marketing strategies for cybersecurity startups, highlighting the need for differentiation and continuous iteration in messaging. In this conversation, Ankita discusses various aspects of marketing strategies for enterprise SaaS, the challenges of building a brand in a competitive market, and the importance of API security. She emphasizes the need for startups to identify specific problems within their target market and how LLMs can significantly enhance API security. The discussion also touches on the necessity of experimentation and iteration in integrating AI into products.


Key Takeaways


- Understanding customer needs is crucial for product development.

- A strong co-founder relationship is vital for startup success.

- Buyers in cybersecurity are more mature than in other industries.

- Marketing should focus on product differentiation.

- Iterate marketing positioning continuously based on feedback.

- Networking is important, but building a customer base is essential.

- Cybersecurity tools are often purchased through structured processes.

- Social media is crucial for enterprise SaaS marketing.

- Branding requires a clear representation of the product's value.

- API security is a growing concern that needs addressing.

- LLMs can revolutionize the way API security is approached.

- It's essential to iterate and experiment with AI technologies.

- The market for API security is significant, even if not immediately recognized.

- Startups should not shy away from basic use cases with LLMs.


Tune in to find out more! 


Contacting Ankita

* LinkedIn: https://www.linkedin.com/in/ankita-gupta-89214515/

* Akto: https://www.akto.io/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

8 months ago
43 minutes 10 seconds

S2E7 - Jonathan Cran

In Season 2 Episode 7, we talk to Jonathan Cran, Founder @ Stealth.

Jonathan, a seasoned security industry veteran, discusses the evolution of AI in security, the challenges of adopting AI technologies in enterprises, and the future of attack surface management. We explore the role of AI agents and the importance of context in security solutions, and offer insights for cybersecurity entrepreneurs navigating the rapidly changing landscape of technology and security.


Key Takeaways

- AI agents are still in early development stages.

- Consistency is crucial for AI adoption in enterprises.

- Automation can significantly enhance security processes.

- Contextual understanding is key for effective risk scoring.

- Generative AI can both solve security problems and create new ones.

- The demand for automated remediation solutions is growing.

- Attack surface management is evolving with new technologies.

- Understanding vulnerabilities requires a comprehensive approach.

- Entrepreneurs should focus on market problems, not just technology.

- Investors prioritize team, timing, and traction when evaluating startups.


Tune in to find out more!


Contacting Jonathan

* LinkedIn: https://www.linkedin.com/in/jcran/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

8 months ago
45 minutes 40 seconds

S2E6 - Vibhav Sreekanti

In Season 2 Episode 6, we talk to Vibhav Sreekanti, Co-Founder & CTO @ ProphetSecurity.


We discuss the evolving landscape of AI in cybersecurity, the skepticism surrounding generative AI, and the importance of experimentation with AI agents. Vibhav shares insights on building specialized agents for security operations, the challenges of deploying AI in production, and the critical need for security in AI infrastructure. The conversation emphasizes the necessity of asking tough questions about data security and the role of AI in enhancing security operations. We discuss the evolving landscape of security operations, focusing on the role of AI agents and the challenges faced by SOAR platforms. We explore the importance of centralized authentication, the need for human oversight in AI applications, and the lessons learned from Vibhav's startup journey, emphasizing the significance of team dynamics and market readiness.


Key Takeaways

- Vibhav has spent his career at startups, focusing on building products and teams.

- Keeping up with AI advancements requires active engagement on platforms like Twitter.

- Hands-on experimentation with new tools is crucial for understanding their applicability.

- Skepticism in AI is warranted due to past over-promises in the industry.

- Generative AI can enhance security operations if implemented thoughtfully.

- AI agents should be used selectively based on the problem at hand.

- Building a suite of specialized agents can lead to more effective outcomes.

- Security practices for distributed systems apply to agentic architectures as well.

- Data security and handling are paramount when using third-party AI models.

- Implementing gateways for AI interactions can help enforce security policies.

- Centralized authentication and authorization using OPA is compelling.

- SOAR platforms have not lived up to their promises, leading to alert fatigue.

- AI agents can enhance investigative tasks in security operations.

- Human oversight is essential in AI-driven security solutions.

- The importance of team dynamics cannot be overstated in startups.

- Understanding market dynamics is crucial for startup success.

- Being too early in a market can be as detrimental as being wrong.

- Feedback loops are vital for improving AI systems in security.

- The alert is just the beginning of incident response.

- The journey of AI agents in security is still in its infancy.


Tune in to find out more!

Contacting Vibhav

* LinkedIn: https://www.linkedin.com/in/vibhavs/

* Prophet Security: https://www.prophetsecurity.ai/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

9 months ago
49 minutes 35 seconds

The Boring AppSec Podcast
S2E5 - Drew Dennison

In Season 2 Episode 5, we talk to Drew Dennison, Co-Founder & CTO @ Semgrep. We discuss the evolution of Semgrep as a code security tool, its focus on custom rules, and the importance of open source in democratizing application security. Drew shares insights from his entrepreneurial journey, the challenges faced in the early days of Semgrep, and the lessons learned from working in both the defense and civilian sectors of cybersecurity. The conversation highlights the shifting paradigms in application security, emphasizing the need for comprehensive coverage and the integration of modern development practices. Drew also discusses the evolving landscape of cybersecurity, emphasizing the importance of custom rules in data security, the convergence of various security practices, and the role of open source in driving community engagement. He explores the integration of AI and LLMs in code security, highlighting the potential for these technologies to enhance security processes while maintaining the necessity of human oversight. The discussion culminates in insights about the future of Semgrep Assistant and the balance between automation and human expertise in security.


Key Takeaways

- Semgrep is a code security tool focused on custom rules.

- The importance of understanding user problems in product development.

- Open source tools can democratize access to security solutions.

- The evolution of static analysis tools has improved user experience.

- Insights from the defense sector highlight the asymmetry in cybersecurity.

- Companies often overlook basic security hygiene in favor of advanced solutions.

- The modern application stack requires a holistic security approach.

- 100% code coverage is now achievable with modern tools.

- Community contributions enhance the effectiveness of open source projects.

- The architecture of software development has shifted towards microservices.

- User data doesn't go any deeper than this in our stack.

- The convergence of static analysis, software composition analysis, and secret scanning is notable.

- At the technology level, we think of it as all basically the same problem.

- We always knew we wanted to have an enterprise component for it.

- We recognized early that LLMs were going to be the future of security.

- Generative AI can help automate rule writing and prioritization.

- Contextualization in security is essential for effective rule application.

- The Semgrep Assistant aims to enhance developer trust and confidence.

- AI will complement human roles rather than replace them in security.

- Automation in security processes is crucial, similar to aviation.


Tune in to find out more!

Contacting Drew

* LinkedIn: https://www.linkedin.com/in/drewdennison/

* Semgrep: https://semgrep.dev/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

9 months ago
42 minutes 15 seconds

The Boring AppSec Podcast
S2E4 - Varun Badhwar

In Season 2 Episode 4, we talk to Varun Badhwar, Founder & CEO @ Endor Labs. We discuss the current state of application security, the challenges faced by development teams, and the importance of integrating security into the software development lifecycle. Varun shares insights from his previous experiences in building and acquiring cybersecurity companies, emphasizing the need for effective compliance strategies and the balance between platform solutions and best-of-breed tools. Varun also discusses the evolving landscape of cybersecurity, emphasizing the importance of compliance, product usability, and the integration of AI technologies like LLMs in vulnerability management. He highlights the need for a user-centric approach in AppSec, the challenges of providing context to engineers, and the future implications of AI in security governance.


Key Takeaways

- Endor Labs aims to make AppSec more engaging and effective.

- Many existing AppSec tools create friction between teams.

- The future of software development will involve AI-generated code.

- Understanding the software supply chain is crucial for security.

- Acquisitions in cybersecurity often fail due to integration issues.

- Founders must empathize with practitioner pain to build effective products.

- Compliance often drives security priorities in organizations.

- Effective integration of tools can enhance security outcomes.

- The industry needs to focus on enabling faster business operations.

- Balancing platform capabilities with best-of-breed tools is essential.

- Compliance is essential for sales enablement in cybersecurity.

- First-time founders should focus on product and distribution.

- User experience and developer experience are critical in AppSec products.

- Contextual information is vital for engineers to make informed decisions.

- Automation can help reduce noise in security alerts.

- Reachability analysis improves visibility in code dependencies.

- Impact assessment is crucial for effective vulnerability remediation.

- LLMs can assist in reasoning but need rules for effective application.

- AI governance is a growing concern in the software development space.

- The industry must adapt to the rapid advancements in AI technology.


Tune in to find out more!

Contacting Varun

* LinkedIn: https://www.linkedin.com/in/vbadhwar/

* Endor Labs: https://www.endorlabs.com/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

9 months ago
47 minutes 5 seconds

The Boring AppSec Podcast
S2E3 - Robert Wood

In Season 2 Episode 3, we interview Robert Wood, Founder & CEO @ SideKick Security. We discuss Rob's journey from working at Cigital to starting his own consulting firm, the challenges of point solutions in cybersecurity, and the importance of soft skills in the industry. Rob shares insights on platformization versus services, tailoring security programs to unique needs, and building a security data lake to enhance data sharing and collaboration among teams. The conversation emphasizes the need for effective communication and community engagement in cybersecurity.


Key Takeaways

- Sidekick Security aims to address the challenges of siloed point solutions in cybersecurity.

- Rob emphasizes the importance of soft skills alongside technical skills in cybersecurity roles.

- Platformization can help reduce silos, but unique security needs must be considered.

- Every security program is unique and should be approached accordingly.

- Building a security data lake can enhance data sharing and collaboration among teams.

- Effective communication is crucial for security professionals to succeed.

- Engaging with the community is essential for growth in the cybersecurity field.

- Regulation and governance discussions are crucial as new technologies emerge.


Tune in to find out more!

Contacting Robert

* LinkedIn: https://www.linkedin.com/in/holycyberbatman/

* SideKick Security: https://sidekicksecurity.io/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

9 months ago
44 minutes 5 seconds

The Boring AppSec Podcast
S2E2 - Dustin Lehr

In Season 2 Episode 2, we interview Dustin Lehr, Co-Founder, Chief Product & Technology Officer at Katilyst. We discuss the significance of security champions in application security. We explore the cultural aspects of implementing security champions programs, the challenges of maintaining engagement, and the importance of leadership support. The conversation delves into measuring the success of these programs, the role of behavioral science, and the impact of effective training and gamification in enhancing security awareness within organizations. Dustin discusses the Octalysis framework, which identifies eight core human motivators that can be leveraged in gamification and cybersecurity culture. He emphasizes the importance of building relationships within organizations to change perceptions of security teams and foster a collaborative environment. Dustin also shares insights on the intersection of creativity and cybersecurity, his motivations for starting a company, and the role of AI in enhancing human interactions rather than replacing them.


Key Takeaways

- Security champions programs are crucial for fostering a security culture.

- Engagement and leadership support are key to program success.

- Measuring success can be challenging but is essential.

- Behavioral science plays a significant role in security engagement.

- Gamification can enhance training but must be used wisely.

- Curiosity can drive initial engagement but must be sustained.

- Training should be relevant and tailored to the audience.

- Creating empathy between teams improves security outcomes.

- Deep gamification focuses on understanding human drives.

- Starting a company is about helping others, not just profit.

- AI can augment human interactions but cannot replace them.

- Security teams should focus on providing value and support.

- Human connection is essential in cybersecurity.

- The importance of community and collaboration in security efforts.


Tune in to find out more!

Contacting Dustin

* LinkedIn: https://www.linkedin.com/in/dustinlehr/

* Security Champion Success Guide: https://securitychampionsuccessguide.org/


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

10 months ago
48 minutes 52 seconds

The Boring AppSec Podcast
S2E1 - Jimmy Mesta

In Season 2 Episode 1, we interview Jimmy Mesta, a seasoned expert in application security and co-founder of RAD Security. We discuss the evolution of Kubernetes, its security challenges, and the importance of understanding the complexities of cloud-native infrastructure. Jimmy shares insights from his journey of starting a company, the role of AI in security, and the nuances of investing in security startups. The conversation highlights the need for a comprehensive approach to security that encompasses both application and infrastructure aspects, as well as the importance of mentorship and community in the startup ecosystem.


Key Takeaways

- RAD Security aims to address real-time security for cloud-native infrastructure.

- Kubernetes has evolved significantly, but security challenges remain.

- Managed Kubernetes services have simplified deployment but not security.

- Starting a company requires surrounding yourself with experienced mentors.

- RASP solutions faced implementation challenges despite their potential.

- Defining applications in a microservices architecture is complex.

- AI presents both opportunities and skepticism in the security space.

- Investing in startups requires trust and understanding of the founder's journey.

- Efficiency in security operations is crucial for success.


Tune in to find out more!

Contacting Jimmy

* LinkedIn: https://www.linkedin.com/in/jimmymesta/

* X: https://x.com/jimmesta


Contacting Anshuman

* LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/

* X: https://x.com/anshuman_bh

* Website: https://anshumanbhartiya.com/

* Instagram: https://www.instagram.com/anshuman.bhartiya


Contacting Sandesh

* LinkedIn: https://www.linkedin.com/in/anandsandesh/

* X: https://x.com/JubbaOnJeans

* Website: https://boringappsec.substack.com/

10 months ago
54 minutes

The Boring AppSec Podcast
S1E10 - Future Security Predictions

Welcome to the Boring AppSec Podcast! In Episode 10, we discuss some security predictions that we hope to see in the near future. Some of them are:

  • AI agents - different kinds - activity based and/or persona based
  • Security talent is going to get better, hiring is important
  • AI powered security engineers - up leveling junior engineers
  • AI code review assistants - GPT-4o et al
  • Company consolidations happening in the security industry - D&R space
  • ASPM predictions and how AI agents will help evolve this space
  • CISA’s guidance on building secure by default frameworks
  • Automated red teaming
  • Hiring security engineers vs changes in interviewing

Tune in to find out more!


References mentioned in the episode:

  • OpenAI Security Bots - https://github.com/openai/openai-security-bots
  • Build an AI Appsec Team - https://srajangupta.substack.com/p/building-an-ai-appsec-team
  • CISA and secure design - https://www.cisa.gov/news-events/news/cisa-announces-secure-design-commitments-leading-technology-providers
  • Awesome secure defaults - https://github.com/tldrsec/awesome-secure-defaults
  • Slack vs MSFT teams - https://x.com/TrungTPhan/status/1640866391485194241
  • The Innovator's Dilemma - https://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244


Contacting Anshuman

  1. LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
  2. Twitter: https://twitter.com/anshuman_bh
  3. Website: https://anshumanbhartiya.com/
  4. Instagram: https://www.instagram.com/anshuman.bhartiya/
  5. YouTube: https://www.youtube.com/@AnshumanBhartiya

Contacting Sandesh

  1. LinkedIn: https://www.linkedin.com/in/anandsandesh/
  2. Twitter: https://twitter.com/JubbaOnJeans/
  3. Website: https://boringappsec.substack.com/
1 year ago
50 minutes 41 seconds

The Boring AppSec Podcast
S1E09 - Incidents

Welcome to the Boring AppSec Podcast! In Episode 9, we discuss incidents. Sandesh and I each share two incidents and the lessons learned from them. Tune in!


References mentioned in the episode:

  • Log4j - https://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance
  • Incident runbook - https://engineering.razorpay.com/how-an-incident-transformed-razorpay-improving-the-5-why-rca-format-378de299b9a2


Contacting Anshuman

  1. LinkedIn: https://www.linkedin.com/in/anshumanbhartiya/
  2. Twitter: https://twitter.com/anshuman_bh
  3. Website: https://anshumanbhartiya.com/
  4. Instagram: https://www.instagram.com/anshuman.bhartiya/
  5. YouTube: https://www.youtube.com/@AnshumanBhartiya

Contacting Sandesh

  1. LinkedIn: https://www.linkedin.com/in/anandsandesh/
  2. Twitter: https://twitter.com/JubbaOnJeans/
  3. Website: https://boringappsec.substack.com/
1 year ago
37 minutes 48 seconds
