Néstor Reverón
20 episodes
5 days ago
Technical Trainer. Dedicated to technology. aka.ms/nestor
Self-Improvement
Education
AI Risk Management: ISO/IEC 42001, the EU AI Act, and ISO/IEC 23894 | By NotebookLM | GenAI
Néstor Reverón
7 minutes 51 seconds
1 year ago

  • The EU AI Act is a risk-based regulation that categorizes AI systems into **prohibited systems**, **high-risk systems**, and **minimal-risk systems** based on their potential impact on fundamental rights, safety, and health (a rough sketch of this tiering follows these bullets).
  • **Prohibited AI systems**, such as social scoring for negative treatment or emotion recognition in workplaces and educational institutions, are banned due to their potential negative impacts on fundamental rights. However, exceptions exist when the system is used for safety reasons, such as monitoring a pilot's tiredness.
  • **High-risk AI systems** are allowed but must comply with the AI Act's requirements, including pre- and post-market obligations for providers.
  • **Minimal-risk AI systems** are generally allowed and have fewer obligations.
  • The EU AI Act applies to all AI systems used in the European market, even if developed or produced outside of the EU.
  • The Act establishes **penalties** for non-compliance, which vary depending on the severity of the breach and the size of the company.
  • It includes provisions to support **innovation**, particularly for small companies and startups, by offering simplified registration, technical documentation, and access to AI regulatory sandboxes.
  • The **AI Liability Directive** works in conjunction with the AI Act to address the liability of developers of high-risk AI systems in cases where the AI system causes harm. It shifts the burden of proof to the developer in certain situations, making it easier for individuals to seek compensation for damages.
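The tiered approach described in the bullets above can be pictured as a simple mapping from use case to risk tier. The Python sketch below is purely illustrative: the example use cases and obligation summaries are taken from this summary or assumed, and it is not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as described in this episode's summary of the EU AI Act."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    MINIMAL_RISK = "minimal-risk"

# Hypothetical mapping for illustration only; real classification follows
# the Act's detailed criteria, not a lookup table.
EXAMPLE_USE_CASES = {
    "social scoring for negative treatment": RiskTier.PROHIBITED,
    "emotion recognition in the workplace": RiskTier.PROHIBITED,
    "spam filtering": RiskTier.MINIMAL_RISK,  # assumed example, not from the episode
}

def obligations(tier: RiskTier) -> str:
    """Very rough paraphrase of what each tier implies for a provider."""
    if tier is RiskTier.PROHIBITED:
        return "banned from the EU market (narrow safety exceptions exist)"
    if tier is RiskTier.HIGH_RISK:
        return "allowed, subject to pre- and post-market obligations"
    return "allowed, with comparatively few obligations"

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```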

ISO 42001: A Framework for Ethical and Responsible AI Governance

  • ISO 42001 provides a structured framework for managing AI systems throughout their lifecycle, from development and deployment to continuous monitoring.
  • This standard aligns with the EU AI Act's governance requirements by emphasizing **strong governance**, **effective risk management**, and a **commitment to continuous improvement**.
  • ISO 42001 addresses key components like AI objective planning, policy formulation, risk assessment procedures, impact assessments, system support, and continuous monitoring and review (a minimal sketch of these components follows below).
  • The benefits of implementing ISO 42001 include systematic risk management, increased stakeholder confidence, and a clear ethical framework.
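The list of components above can be read as a checklist that an organization fills in over time. The sketch below is a minimal, assumed data model for such a checklist; the field names are shorthand for the components named in this summary, not terminology defined by ISO 42001 itself.

```python
from dataclasses import dataclass, field

@dataclass
class AIManagementSystem:
    """Illustrative record of the ISO 42001 components named in this summary."""
    objectives: list[str] = field(default_factory=list)          # AI objective planning
    policies: list[str] = field(default_factory=list)            # policy formulation
    risk_procedures: list[str] = field(default_factory=list)     # risk assessment procedures
    impact_assessments: list[str] = field(default_factory=list)  # impact assessments
    support_resources: list[str] = field(default_factory=list)   # system support
    review_cadence_days: int | None = None                       # continuous monitoring and review

    def missing_components(self) -> list[str]:
        """Return the components that have not been filled in yet."""
        gaps = [name for name in ("objectives", "policies", "risk_procedures",
                                  "impact_assessments", "support_resources")
                if not getattr(self, name)]
        if self.review_cadence_days is None:
            gaps.append("review_cadence_days")
        return gaps

# Hypothetical starting point: only objectives and one policy defined so far.
ams = AIManagementSystem(objectives=["reduce triage errors"],
                         policies=["human review of high-impact decisions"])
print(ams.missing_components())
```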

ISO 23894: A Detailed Approach to AI Risk Management

  • While ISO 42001 focuses on integrating risk management into the broader AI governance framework, ISO 23894 provides a more detailed framework specifically for managing AI risks.
  • It aligns with the EU AI Act's focus on risk assessment and mitigation by offering a structured approach to **identifying, assessing, and mitigating risks**.
  • ISO 23894 outlines a cyclical risk management process encompassing **identification, analysis, evaluation, treatment, monitoring, and review** (illustrated in the sketch below).
  • This standard requires documentation of risk management activities, implementation of controls, and the establishment of clear roles and responsibilities for risk management tasks, ensuring transparency and accountability.
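The last two bullets describe a repeating loop with documented activities and named owners. The sketch below is one assumed way to picture that cycle in code: the stage names come from this summary, while the risk entry, owner, and log format are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

# Stages of the cyclical process described above (names from this summary).
STAGES = ["identification", "analysis", "evaluation",
          "treatment", "monitoring", "review"]

@dataclass
class RiskEntry:
    description: str
    owner: str                                     # clear role/responsibility
    log: list[str] = field(default_factory=list)   # documented activities

    def record(self, stage: str, note: str) -> None:
        """Document a risk management activity for transparency and accountability."""
        self.log.append(f"{date.today().isoformat()} [{stage}] {note} (owner: {self.owner})")

def run_cycle(register: list[RiskEntry]) -> None:
    """One pass through the cycle; in practice it repeats continuously."""
    for risk in register:
        for stage in STAGES:
            risk.record(stage, f"completed {stage} for: {risk.description}")

# Hypothetical register entry, purely for illustration.
register = [RiskEntry("model drift in a credit-scoring system", owner="risk team")]
run_cycle(register)
print("\n".join(register[0].log))
```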

Benefits of Integrating ISO 42001 and ISO 23894

  • Integrating these standards into an organization helps ensure systematic compliance with the EU AI Act, as they provide a structured approach to addressing the Act's requirements.
  • They promote proactive risk management by encouraging organizations to identify, assess, and mitigate potential risks continuously, aiding in maintaining ongoing compliance with the EU AI Act.
  • The standards enhance transparency and accountability by requiring detailed documentation and regular reporting, building trust and confidence among stakeholders.

In conclusion, ISO 42001 and ISO 23894 complement the EU AI Act by providing practical guidance for organizations to manage AI systems safely, transparently, and responsibly. Implementing these standards demonstrates a commitment to ethical AI practices and facilitates compliance with the EU AI Act, promoting trust and confidence in AI technologies.
