Home
Categories
EXPLORE
True Crime
Comedy
Society & Culture
Business
Sports
Technology
History
About Us
Contact Us
Copyright
© 2024 PodJoint
Podjoint Logo
US
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts221/v4/33/fd/c9/33fdc9e0-31b7-0385-6869-07fca94aaab5/mza_17792817917780535359.jpg/600x600bb.jpg
AI Safety - Paper Digest
Arian Abbasi, Alan Aqrawi
12 episodes
6 days ago
The podcast where we break down the latest research and developments in AI Safety - so you don’t have to. Each episode, we take a deep dive into new cutting-edge papers. Whether you’re an expert or just AI-curious, we make complex ideas accessible, engaging, and relevant. Stay ahead of the curve with AI Security Papers. Disclaimer: This podcast and its content are generated by AI. While every effort is made to ensure accuracy, please verify all information independently.
Show more...
Technology
RSS
All content for AI Safety - Paper Digest is the property of Arian Abbasi, Alan Aqrawi and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
The podcast where we break down the latest research and developments in AI Safety - so you don’t have to. Each episode, we take a deep dive into new cutting-edge papers. Whether you’re an expert or just AI-curious, we make complex ideas accessible, engaging, and relevant. Stay ahead of the curve with AI Security Papers. Disclaimer: This podcast and its content are generated by AI. While every effort is made to ensure accuracy, please verify all information independently.
Show more...
Technology
https://d3t3ozftmdmh3i.cloudfront.net/staging/podcast_uploaded_episode/42144493/42144493-1730711410477-ee81399ab8e22.jpg
Battle of the Scanners: Top Red Teaming Frameworks for LLMs
AI Safety - Paper Digest
14 minutes 47 seconds
1 year ago
Battle of the Scanners: Top Red Teaming Frameworks for LLMs

In this episode, we explore the findings from "Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis." As large language models (LLMs) are integrated into more applications, so do the security risks they pose, including information leaks and jailbreak attacks. This study examines four major open-source vulnerability scanners - Garak, Giskard, PyRIT, and CyberSecEval - evaluating their effectiveness and reliability in detecting these risks. We’ll discuss the unique features of each tool, uncover key gaps in their reliability, and share strategic recommendations for organizations looking to bolster their red-teaming efforts. Join us to understand how these tools stack up and what this means for the future of AI security.

Paper: Brokman, Jonathan, et al. "Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis." (2024). arXiv.

Disclaimer: This podcast summary was generated using Google's NotebookLM AI. While the summary aims to provide an overview, it is recommended to refer to the original research preprint for a comprehensive understanding of the study and its findings.

AI Safety - Paper Digest
The podcast where we break down the latest research and developments in AI Safety - so you don’t have to. Each episode, we take a deep dive into new cutting-edge papers. Whether you’re an expert or just AI-curious, we make complex ideas accessible, engaging, and relevant. Stay ahead of the curve with AI Security Papers. Disclaimer: This podcast and its content are generated by AI. While every effort is made to ensure accuracy, please verify all information independently.