Compliance Perspectives
SCCE
100 episodes
2 days ago
Podcast featuring the top Compliance and Ethics thought leaders from around the globe. The Society of Corporate Compliance and Ethics and the Health Care Compliance Association will keep you up to date on enforcement trends, current events, and best practices in the compliance and ethics arena. To submit ideas and questions, please email: service@corporatecompliance.org
Education, Business, Non-Profit
Alessia Falsarone on AI Explainability [Podcast]
Compliance Perspectives
13 minutes 53 seconds
3 weeks ago
By Adam Turteltaub

Why did the AI do that?

It’s a simple and common question, but the answer is often opaque, with people invoking black boxes, algorithms, and other terms that only those in the know tend to understand.

Alessia Falsarone, a non-executive director of Innovate UK, says that’s a problem. In cases where AI has run amok, the fallout is often worse because the company cannot explain why the AI made the decision it did or what data it was relying on.

AI, she argues, needs to be explainable to regulators and the public.  That way all sides can understand what the AI is doing (or has done) and why.

To create more explainable AI, she recommends building a dashboard that shows the factors influencing each decision. In addition, teams need to track the changes made to the model over time.

By doing so, when the regulator or public asks why something happened, the organization can respond quickly and clearly.
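
As a rough illustration, here is a minimal sketch of what such a per-decision factor record and model change log might look like, assuming a simple linear model. The code and its names (ExplainabilityLog, record_model_change, the credit-scoring features) are illustrative assumptions, not a description of any specific tool discussed in the episode.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelChange:
    # One entry in the model's change history, stamped with UTC time.
    version: str
    description: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ExplainabilityLog:
    # Running record of changes made to the model over time.
    changes: list = field(default_factory=list)

    def record_model_change(self, version: str, description: str) -> None:
        self.changes.append(ModelChange(version, description))

    def explain_decision(self, weights: dict, inputs: dict) -> dict:
        # For a linear model, each factor's contribution is weight * value;
        # these per-factor contributions are what a dashboard would surface.
        return {name: weights[name] * inputs[name] for name in weights}

# Usage: answer "why did the AI do that?" with factors and model history.
log = ExplainabilityLog()
log.record_model_change("1.1", "Added debt-to-income ratio as an input")
factors = log.explain_decision(
    weights={"credit_score": 0.6, "debt_to_income": -0.4},
    inputs={"credit_score": 0.72, "debt_to_income": 0.35},
)
print(factors)  # per-factor contributions, roughly credit_score 0.432, debt_to_income -0.14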

In addition, by embracing a more transparent process and involving compliance early, organizations can head off potential AI issues before they escalate.

Listen in to hear her explain the virtues of explainability.