Home
Categories
EXPLORE
True Crime
Comedy
Society & Culture
Business
Sports
Technology
History
About Us
Contact Us
Copyright
© 2024 PodJoint
Podjoint Logo
US
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts221/v4/33/fd/c9/33fdc9e0-31b7-0385-6869-07fca94aaab5/mza_17792817917780535359.jpg/600x600bb.jpg
AI Safety - Paper Digest
Arian Abbasi, Alan Aqrawi
12 episodes
11 hours ago
The podcast where we break down the latest research and developments in AI Safety - so you don’t have to. Each episode, we take a deep dive into new cutting-edge papers. Whether you’re an expert or just AI-curious, we make complex ideas accessible, engaging, and relevant. Stay ahead of the curve with AI Security Papers. Disclaimer: This podcast and its content are generated by AI. While every effort is made to ensure accuracy, please verify all information independently.
Show more...
Technology
RSS
All content for AI Safety - Paper Digest is the property of Arian Abbasi, Alan Aqrawi and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
The podcast where we break down the latest research and developments in AI Safety - so you don’t have to. Each episode, we take a deep dive into new cutting-edge papers. Whether you’re an expert or just AI-curious, we make complex ideas accessible, engaging, and relevant. Stay ahead of the curve with AI Security Papers. Disclaimer: This podcast and its content are generated by AI. While every effort is made to ensure accuracy, please verify all information independently.
Show more...
Technology
https://d3t3ozftmdmh3i.cloudfront.net/staging/podcast_uploaded_episode/42144493/42144493-1728386644188-a61277044486c.jpg
Open Source Red Teaming: PyRIT by Microsoft
AI Safety - Paper Digest
10 minutes 53 seconds
1 year ago
Open Source Red Teaming: PyRIT by Microsoft

In this episode, we dive into PyRIT, the open-source toolkit developed by Microsoft for red teaming and security risk identification in generative AI systems. PyRIT offers a model-agnostic framework that enables red teamers to detect novel risks, harms, and jailbreaks in both single- and multi-modal AI models. We’ll explore how this cutting-edge tool is shaping the future of AI security and its practical applications in securing generative AI against emerging threats.

Paper (preprint): Lopez Munoz, Gary D., et al. "PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI Systems." (2024). arXiv.

Disclaimer: This podcast summary was generated using Google's NotebookLM AI. While the summary aims to provide an overview, it is recommended to refer to the original research preprint for a comprehensive understanding of the study and its findings.

AI Safety - Paper Digest
The podcast where we break down the latest research and developments in AI Safety - so you don’t have to. Each episode, we take a deep dive into new cutting-edge papers. Whether you’re an expert or just AI-curious, we make complex ideas accessible, engaging, and relevant. Stay ahead of the curve with AI Security Papers. Disclaimer: This podcast and its content are generated by AI. While every effort is made to ensure accuracy, please verify all information independently.