The Security Ledger Podcasts
The Security Ledger
9 episodes
8 months ago
In this, our 70th episode of The Security Ledger podcast, we speak with Xu Zou of the Internet of Things security startup Zingbox about the challenges of securing medical devices and clinical networks from cyber attack. Also: we take a look at the turmoil that has erupted around the OWASP Top 10, a list of common application security foibles. And finally: open source management vendor Black Duck Software announced that it was being acquired for more than half a billion dollars. We sit down with Black Duck CEO Lou Shipley to talk about the software supply chain and to hear what's next for his company.
Technology, Society & Culture, News, Tech News
Episode 256: Recursive Pollution? Data Feudalism? Gary McGraw On LLM Insecurity
The Security Ledger Podcasts
32 minutes 27 seconds
1 year ago
In this episode of The Security Ledger Podcast (#256), Paul speaks with Gary McGraw of the Berryville Institute of Machine Learning (BIML) about that group’s latest report, an Architectural Risk Analysis of Large Language Models. Gary and Paul talk about the many security and integrity risks facing large language model machine learning and artificial intelligence, and how organizations looking to leverage AI and LLMs can insulate themselves from those risks.

[Video Podcast] | [MP3] | [Transcript]

Four years ago, I sat down with Gary McGraw in the Security Ledger studio to talk about a report released by his new project, the Berryville Institute of Machine Learning. That report, An Architectural Risk Analysis of Machine Learning Systems, included a top 10 list of machine learning security risks, as well as some security principles to guide the development of machine learning technology.

Gary McGraw is the co-founder of the Berryville Institute of Machine Learning

The concept of cyber risks linked to machine learning and AI was – back then – mostly hypothetical. Artificial intelligence was clearly advancing rapidly, but – with the exception of cutting-edge industries like high tech and finance – its actual applications in everyday life (and business) were still matters of conjecture.

An update on AI risk

Four years later, a LOT has changed. With the launch of OpenAI’s GPT-4 large language model (LLM) in March 2023, the uses and applications of AI have exploded. Today, there is hardly any industry that isn’t looking hard at how to apply AI and machine learning technology to enhance efficiency, improve output and reduce costs. In the process, AI and ML risks and vulnerabilities – from “hallucinations” and “deep fakes” to copyright infringement – have also moved to the front burner.

Back in 2020, BIML’s message was one of cautious optimism: while threats to the integrity of machine learning systems were real, there were things that users of those systems could do to manage the risks – for example, scrutinizing critical components like data set assembly (where the data that trained the model came from), the data sets themselves, the learning algorithms used, and the evaluation criteria that determine whether or not the machine learning system that was built is good enough to release.
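To make the idea concrete, the four components above can be imagined as entries in a simple audit checklist that flags any component an organization has not accounted for. This is a hypothetical sketch for illustration only – the component names and questions are paraphrased from the discussion, not BIML's actual taxonomy or tooling.

```python
# Hypothetical sketch: a checklist over the four ML components the 2020
# BIML report said to scrutinize. Names and questions are illustrative.

LLM_RISK_CHECKLIST = {
    "dataset_assembly": "Where did the training data come from, and who curated it?",
    "datasets": "Do the data sets contain poisoned, biased, or unlicensed material?",
    "learning_algorithms": "Which learning algorithms were used, and are they documented?",
    "evaluation_criteria": "What threshold decided the system was good enough to release?",
}

def audit(answers: dict[str, str]) -> list[str]:
    """Return the components for which no answer has been recorded."""
    return [component for component in LLM_RISK_CHECKLIST if not answers.get(component)]

# An organization that only knows its data provenance still has three open risks.
gaps = audit({"dataset_assembly": "public web crawl, vendor-curated"})
print(gaps)
```

The point of the sketch is McGraw's: managing these risks presumes you can inspect each component – which, as the next section notes, is exactly what the "black box" of commercial LLMs prevents.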

AI security: tucked away in a black box

By controlling for those factors, organizations that wanted to leverage machine learning and AI systems could limit their risks. Fast forward to 2024, however, and all those components are tucked away inside what McGraw and BIML describe as a “black box.”

So in 2020 we said: There’s a bunch of things you can do around these four components to make stuff better and to under...