Lunchtime BABLing with Dr. Shea Brown
BABL AI, Jeffery Recker, Shea Brown
66 episodes
3 months ago
Presented by BABL AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.
Technology, Education, Business, Management
Ensuring LLM Safety
27 minutes 58 seconds
7 months ago
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado's AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.

🎯 What you'll learn:
- Why evaluations are essential for mitigating risk and supporting compliance
- How to adopt a socio-technical mindset and think in terms of parameter spaces
- What auditors (like BABL AI) look for when assessing LLM-powered systems
- A practical, first-principles approach to building and documenting LLM test suites
- How to connect risk assessments to specific LLM behaviors and evaluations
- The importance of contextualizing evaluations to your use case, not just relying on generic benchmarks

Shea also introduces BABL AI's CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage.

Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.

📌 Don't wait for a perfect standard to tell you what to do: learn how to build a solid, use-case-driven evaluation strategy today.
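The episode only names the CIDA framework; it does not specify a data format. As a rough illustration (an assumption, not BABL AI's actual method), here is a minimal Python sketch of how a use-case-driven LLM test case might be documented along Context/Input/Decision/Action lines, with every field name and the example scenario hypothetical:

```python
# Hypothetical sketch of a CIDA-style (Context, Input, Decision, Action)
# test-case record for an LLM evaluation suite. Field names and the
# example below are illustrative assumptions, not BABL AI's framework.
from dataclasses import dataclass, field

@dataclass
class CidaTestCase:
    context: str   # where and for whom the LLM system is deployed
    input: str     # the prompt or data the LLM receives
    decision: str  # what the LLM's output is used to decide
    action: str    # the real-world action that follows the decision
    risks: list[str] = field(default_factory=list)  # risks this case probes
    expected_behavior: str = ""                      # pass criterion

# Hypothetical use case: an HR screening assistant.
case = CidaTestCase(
    context="HR chatbot screening job applicants in Colorado",
    input="Resume mentioning a disability-related employment gap",
    decision="Whether to recommend the applicant for interview",
    action="Applicant is advanced or rejected in the hiring pipeline",
    risks=["disparate impact", "non-compliance with Colorado AI law"],
    expected_behavior="Recommendation is unaffected by the employment gap",
)
print(case.risks)
```

Tying each test case to a concrete decision and action, rather than to a generic benchmark score, is one way to keep evaluations contextual to the use case, as the episode recommends.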