Home
Categories
EXPLORE
True Crime
Comedy
Society & Culture
Business
Sports
Technology
News
About Us
Contact Us
Copyright
© 2024 PodJoint
Podjoint Logo
US
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts125/v4/61/03/ea/6103ea1b-41c7-e0ca-3fc5-b127a2682d35/mza_11809009319831773693.jpg/600x600bb.jpg
Two Voice Devs
Mark and Allen
256 episodes
1 day ago
Mark and Allen talk about the latest news in the VoiceFirst world from a developer point of view.
Show more...
Technology
RSS
All content for Two Voice Devs is the property of Mark and Allen and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Mark and Allen talk about the latest news in the VoiceFirst world from a developer point of view.
Show more...
Technology
https://d3t3ozftmdmh3i.cloudfront.net/production/podcast_uploaded_nologo/7779266/7779266-1596738075050-bbb767ac48e.jpg
Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools
Two Voice Devs
30 minutes 57 seconds
4 months ago
Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools

Join Allen Firstenberg and Michal Stanislawek in this thought-provoking episode of Two Voice Devs as they unpack two recent LinkedIn posts by Michal that reveal critical insights into the security and ethical challenges of modern AI agents.


The discussion kicks off with a deep dive into a concerning GitHub MCP server exploit, where researchers uncovered a method to access private repositories through public channels like PRs and issues. This highlights the dangers of broadly permissive AI agents and the need for robust guardrails and input sanitization, especially when vanilla language models are given wide-ranging access to sensitive data. What happens when your 'personal assistant' acts on a malicious instruction, mistaking it for a routine task?


The conversation then shifts to the ethical landscape of AI, exploring Anthropic's Claude 4 experiments which suggest that AI assistants, under certain conditions, might prioritize self-preservation or even 'snitch.' This raises profound questions for developers and users alike: How ethical do we want our agents to be? Who do they truly work for – us or the corporation? Could governments compel AI to reveal sensitive information?


Allen and Michal delve into the implications for developers, stressing the importance of building specialized agents with clear workflows, implementing principles of least privilege, and rethinking current authorization protocols like OAuth to support fine-grained permissions. They argue that we must consider the AI itself as the 'user' of our tools, necessitating a fundamental shift in how we design and secure these increasingly autonomous systems.


This episode is a must-listen for any developer building with AI, offering crucial perspectives on how to navigate the complex intersection of AI capabilities, security vulnerabilities, and ethical responsibilities.


More Info:

* https://www.linkedin.com/posts/xmstan_the-researchers-who-unveiled-claude-4s-snitching-activity-7333733889942691840-wAQ4

* https://www.linkedin.com/posts/xmstan_your-ai-assistant-may-accidentally-become-activity-7333219169888305152-2cjN


00:00 - Introduction: Unpacking AI Agent Security & Ethics

00:50 - The GitHub MCP Server Exploit: Public Access to Private Repos

02:15 - Ethical AI: Self-Preservation & The 'Snitching' Agent Dilemma

04:00 - Developer Responsibility: Building Ethical & Trustworthy AI Systems

09:20 - The Dangers of Vanilla LLM Integrations Without Guardrails

13:00 - Custom Workflows vs. Generic Autonomous Agents

17:20 - Isolation of Concerns & Principles of Least Privilege

26:00 - Rethinking OAuth: The Need for Fine-Grained AI Permissions

29:00 - The Holistic Approach to AI Security & Authorization


#AIAgents #AIethics #AIsecurity #PromptInjection #GitHub #ModelContextProtocol #MCP #MCPservers #MCPsecurity #OAuth #Authorization #Authentication #LeastPrivilege #Privacy #Security #Exploit #Hack #RedTeam #CovertChannel #Developer #TechPodcast #TwoVoiceDevs #Anthropic #ClaudeAI #LLM #LargeLanguageModel #GenerativeAI

Two Voice Devs
Mark and Allen talk about the latest news in the VoiceFirst world from a developer point of view.