The regulatory landscape for AI continues to take shape as we close out a landmark year for AI capabilities and adoption. In just the past couple of weeks, negotiators from the relevant EU institutions reached a provisional agreement on the leading draft regulation, the AI Act, and the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) published their first standard setting out requirements and guidance for AI governance and risk management.

Take a look below at the key points you need to know about both the ISO/IEC 42001 standard and the EU AI Act.

ISO/IEC 42001

The ISO/IEC 42001 document outlines the requirements and guidelines for establishing, implementing, maintaining, and continually improving an AI management system within an organization. It is applicable to any organization, regardless of its size, type, or nature, that provides or uses products or services utilizing AI systems. The standard aims to help organizations develop, provide, or use AI systems responsibly.

Key areas covered in the standard include:

  1. Scope and Normative References: The document defines its scope and references other necessary documents.
  2. Terms and Definitions: It outlines specific terms and definitions used throughout the document, ensuring clarity and consistency.
  3. Context of the Organization: This section addresses understanding the organization and its context, including the needs and expectations of interested parties and determining the scope of the AI management system.
  4. Leadership: Focuses on leadership commitment, AI policy, and assigning roles, responsibilities, and authorities.
  5. Planning: Covers actions to address risks and opportunities, AI objectives and the plans to achieve them, and the planning of changes.
  6. Support: It covers resources, competence, awareness, communication, and documented information.
  7. Operation: This section includes operational planning and control, AI risk assessment, AI risk treatment, and AI system impact assessment.
  8. Performance Evaluation: It entails monitoring, measurement, analysis, evaluation, internal audit, and management review.
  9. Improvement: The document discusses continual improvement, nonconformity, and corrective action.

Additionally, the standard includes annexes providing reference control objectives and controls, implementation guidance for AI controls, potential AI-related organizational objectives and risk sources, and the use of the AI management system across domains or sectors.

Annex A provides reference control objectives and controls. These are intended to guide organizations in addressing risks and meeting objectives related to the design and operation of AI systems. Here’s an overview of these control objectives:

  1. Policies Related to AI:
    • Objective: Provide management direction and support for AI systems in line with business requirements.
    • Controls: Include documentation of AI policy, alignment with other organizational policies, and regular review of the AI policy.
  2. Internal Organization:
    • Objective: Establish accountability within the organization for the implementation, operation, and management of AI systems.
    • Controls: Define and allocate AI roles and responsibilities and establish a process for reporting concerns related to AI systems.
  3. Resources for AI Systems:
    • Objective: Ensure that the organization accounts for all resources (including AI system components and assets) to fully understand and address risks and impacts.
    • Controls: Involve documentation of resources required for AI system life cycle stages and other AI-related activities.
  4. Assessing Impacts of AI Systems:
    • Objective: Assess AI system impacts on individuals, groups, and societies affected by the AI system throughout its life cycle.
    • Controls: Establish a process for AI system impact assessment and document these assessments.
  5. AI System Life Cycle:
    • Objective: Define criteria and requirements for each stage of the AI system life cycle.
    • Controls: Include management guidance for AI system development, specification of AI system requirements, and documentation of AI system design and development.
  6. Data for AI Systems:
    • Objective: Ensure understanding of the role and impacts of data in AI systems throughout their life cycles.
    • Controls: Define data management processes, acquisition of data, data quality requirements, data provenance, and data preparation methods.
  7. Information for Interested Parties of AI Systems:
    • Objective: Ensure relevant parties have necessary information to understand and assess the risks and impacts of AI systems.
    • Controls: Include system documentation and information for users, external reporting, communication of incidents, and information sharing with interested parties.
  8. Use of AI Systems:
    • Objective: Ensure that the organization uses AI systems responsibly and in accordance with organizational policies.
    • Controls: Define processes for responsible use of AI systems and identify objectives to guide responsible use.
  9. Third-party and Customer Relationships:
    • Objective: Ensure understanding and accountability when third parties are involved in any stage of the AI system life cycle.
    • Controls: Allocate responsibilities between the organization, partners, suppliers, and customers, and establish processes for managing these relationships.

These control objectives and their respective controls are integral to creating a robust AI management system, as they assist organizations in aligning their AI practices with broader business strategies.
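
To make these objectives more concrete, here is a minimal sketch of how an organization might track them in an internal compliance register, written in Python. The ControlObjective class, its fields, and the status values are hypothetical illustrations for this post; ISO/IEC 42001 does not prescribe any particular record format or tooling.

    from dataclasses import dataclass, field

    @dataclass
    class ControlObjective:
        # Hypothetical record format; the standard does not prescribe one.
        name: str
        owner: str = "unassigned"    # accountable role, per the internal organization objective
        status: str = "not_started"  # e.g., not_started / in_progress / implemented
        evidence: list[str] = field(default_factory=list)  # links to documented information

    # The nine Annex A control objective areas summarized above.
    register = [ControlObjective(name) for name in [
        "Policies related to AI",
        "Internal organization",
        "Resources for AI systems",
        "Assessing impacts of AI systems",
        "AI system life cycle",
        "Data for AI systems",
        "Information for interested parties of AI systems",
        "Use of AI systems",
        "Third-party and customer relationships",
    ]]

    # A management review might start by flagging objectives with no owner.
    for objective in register:
        if objective.owner == "unassigned":
            print(f"Needs an accountable owner: {objective.name}")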

EU AI Act

European Parliament members have forged a consensus on a pivotal piece of legislation designed to shape the use of artificial intelligence (AI) across Europe, ensuring its alignment with fundamental rights and democratic values while fostering a conducive environment for businesses to innovate and expand.

Negotiators from the Parliament and the Council settled on the terms of the Artificial Intelligence Act. This act is set to safeguard fundamental rights, democracy, and the rule of law from the potential risks posed by high-stakes AI technologies, propelling Europe towards becoming a frontrunner in the AI domain. It establishes a framework of responsibilities proportionate to the risk and impact level of different AI applications.

Key Prohibitions:

Acknowledging the dire risks certain AI applications could pose to citizens’ rights and democracy, the legislators have agreed to ban:

  • AI that categorizes individuals based on sensitive traits such as political views, religious beliefs, or racial characteristics.
  • The indiscriminate harvesting of facial recognition data from the internet or CCTV for database creation.
  • The use of emotion recognition systems in work and educational settings.
  • Social scoring systems that judge individuals based on social behavior or personal traits.
  • AI tools designed to manipulate human behavior, undermining free will.
  • AI that preys on individuals’ vulnerabilities based on factors like age, disability, or socio-economic status.

High-Risk AI Systems:

For AI deemed high-risk due to its significant implications for health, safety, fundamental rights, and the environment, the legislation stipulates clear responsibilities. These include a mandatory assessment of the impact on fundamental rights, applicable to sectors like insurance and banking.

High-risk classifications also extend to AI systems capable of influencing election outcomes and voter behavior, with provisions for citizens to lodge complaints and obtain explanations for decisions made by such systems.

General Purpose AI Systems:

The regulation addresses the diverse capabilities of general-purpose AI systems (including foundation models and generative AI), requiring adherence to transparency obligations. These include drawing up technical documentation and complying with EU copyright law.

For general-purpose AI systems posing systemic risks, Parliament negotiators secured stricter requirements. These systems must undergo model evaluations, assess and mitigate systemic risks, and report on serious incidents, cybersecurity, and energy efficiency.

Penalties:

Fines for non-compliance range from up to €35 million or 7% of global turnover to €7.5 million or 1.5% of turnover, scaled to the severity of the breach and the company’s size.
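
As a rough illustration of how these caps interact, here is a minimal sketch in Python. It assumes the applicable cap is the higher of the fixed amount and the turnover percentage, and uses a hypothetical €2 billion turnover; both the figures and the whichever-is-higher rule should be checked against the final published text.

    # Assumed rule: the applicable cap is the higher of the fixed amount and
    # the percentage of global annual turnover (confirm against the final text).
    def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
        return max(fixed_eur, turnover_eur * pct)

    turnover = 2_000_000_000  # hypothetical company with €2B global annual turnover

    # Most severe tier (e.g., breaching the prohibitions): €35 million or 7%.
    print(fine_cap(turnover, 35_000_000, 0.07))   # 140000000.0
    # Least severe tier (e.g., supplying incorrect information): €7.5 million or 1.5%.
    print(fine_cap(turnover, 7_500_000, 0.015))   # 30000000.0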

What’s next?

As of December 8, the majority of the act’s provisions had been agreed upon, though certain technical aspects are still being worked out. Decisive votes in the EU Parliament’s committees are expected on January 25, 2024, and the final text is slated for publication by spring 2024.

Between the provisional agreement on the EU AI Act, the publication of the NIST AI RMF, and ISO joining the fray with ISO/IEC 42001, 2023 has been quite the year for advances in standards development and regulation for AI systems. Now more than ever, organizations will need to be proactive in ensuring AI system security and governance. Cranium is here to help.