
As generative artificial intelligence rapidly enters nearly every aspect of human life, it is increasingly urgent for organizations to develop AI systems that are trustworthy and well governed. AI is a powerful technology, but new applications are outpacing the governance protocols meant to accompany them, yielding risks that can be material for both the enterprise and society at large. Effective AI governance is essential to mitigate risks such as bias, privacy infringement, and misuse while fostering innovation and ensuring systems are safe and ethical.
This podcast explores the foundational pillars of responsible AI, detailing five of the most influential standards and frameworks shaping global consensus. We analyze the OECD Recommendation on Artificial Intelligence, which established international agreement on core principles such as accountability, transparency, and respect for human rights and democratic values. We also cover the UNESCO Recommendation on the Ethics of Artificial Intelligence, which focuses on broad societal implications and principles such as "Do No Harm" and human oversight.
To translate high-level commitments into actionable practice, we delve into three critical technical standards. The voluntary NIST AI Risk Management Framework (AI RMF) offers a flexible structure for assessing risk across its four core functions: Govern, Map, Measure, and Manage. We contrast it with ISO/IEC 42001, the world's first certifiable standard for creating and operating a formal Artificial Intelligence Management System (AIMS). Finally, we examine IEEE 7000-2021, which gives engineers and other practitioners a practical, auditable process for embedding ethical principles such as fairness and accountability into system design from the very beginning.
Beyond the frameworks, we investigate the US federal AI governance landscape, including policy milestones from the White House, the roles of federal agencies such as the FTC and CFPB, and how existing laws are being interpreted to apply to AI. We also map the dynamic external forces (categorized as Societal Guardians, The Protectors, Investor Custodians, and Technology Pioneers) that shape corporate behavior, influence business decision-making, and increasingly demand accountability and ethical design.
Join us to understand how organizations can layer these complementary approaches to effectively manage AI risks, demonstrate "reasonable care," and align their AI assurance programs with best practices and emerging legal mandates.
If you want to see how these international and federal guidelines translate into day-to-day organizational policy, join us next time as we explore specific best practices for building robust AI governance structures, including establishing internal accountability champions and continuous auditing processes.