What is ISO 42001?
On December 18, 2023, the International Organization for Standardization (ISO) published ISO/IEC 42001:2023, which sets a voluntary standard for organizations to implement an artificial intelligence (AI) management system.
Under ISO 42001, the overarching goal of an AI management system is for organizations to establish policies, procedures, and objectives for their AI systems. It provides a series of high-level principles and objectives for an organization’s stakeholders to follow when implementing its AI management system. The annexes map the principles and objectives to more granular controls and implementing guidance.
Who is the intended audience?
The ISO 42001 standard is meant for any organization that develops, deploys, and/or uses AI. Similar to the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), ISO 42001 is intended to be adaptable and scalable based on the needs of the organization and its size. It is also flexible across industries, meaning that it is agnostic to the organization’s products or services.
Additionally, organizations interested in demonstrating the trustworthiness of their AI use cases may consider proactively adopting ISO 42001. As consumers become more aware of how organizations use AI and the risks associated with those use cases, organizations demonstrating ISO 42001 compliance may build greater trust with their customers and the public.
Why does a “voluntary” standard matter?
Organizations that are considering how to manage their AI systems can think of ISO 42001 as another tool in the toolbox to assist them in implementing the appropriate policies and procedures. However, organizations should also consider how voluntary frameworks and standards can easily be translated into enforceable regulatory requirements.
Policymakers, especially in the European Union (EU), may gravitate towards ISO 42001 as an enforceable standard for AI governance. For instance, organizations deploying high-risk systems must demonstrate compliance with the EU AI Act through conformity assessments once the law becomes effective. EU policymakers have indicated that compliance with ISO 42001 may be incorporated into the conformity assessment requirements.
Additionally, President Biden’s Executive Order on AI directs NIST to continue building upon the AI RMF and its work on other AI-related standards. NIST could align some of its future AI standards-setting work with ISO 42001. Given that US regulators are likely to rely on NIST’s AI standards when establishing AI regulatory regimes, this has the potential to translate aspects of ISO 42001 into enforceable obligations in the US.
How does it compare to ISO/IEC 23894:2023?
The key difference between ISO 42001 and ISO 23894 is scope. ISO 23894 adapts the generic risk management guidance of ISO 31000:2018 to AI. ISO 42001, by contrast, focuses on how an organization designs, adopts, and documents internal operating procedures to manage its AI systems.
While ISO 42001 does address risk management, it is within the broader context of policies and procedures in place throughout the organization to manage AI development and deployment, as well as for internal AI use cases. For instance, ISO 42001 requires organizations to conduct regular risk assessments and contains implementing guidance on how to effectively document AI risks. However, ISO 23894 provides more descriptive guidance on how to design an effective risk assessment.
How does ISO 42001 compare to the NIST AI RMF?
ISO 42001 can be thought of as a complement to the NIST AI RMF. While the NIST AI RMF focuses primarily on managing AI risks, both address high-level policies and procedures that organizations should consider when managing AI systems generally.
Notably, while both the NIST AI RMF and ISO 42001 are technically voluntary standards, ISO 42001 can be used as an auditable standard. In addition, ISO 42001 provides a greater level of detail for implementing controls than the NIST AI RMF. For instance, the NIST AI RMF stipulates that organizations should implement policies and procedures to continuously monitor AI systems throughout their lifecycle. However, it does not provide details on those policies and procedures. Conversely, ISO 42001 outlines more specific guidance on what organizations should consider when implementing continuous AI monitoring policies and procedures.
How does ISO 42001 compare to ISO/IEC 27001:2022?
While ISO 27001 is not an AI-related standard, it is an internationally accepted information security management standard. An organization can be ISO 27001 certified, which demonstrates that it has the appropriate measures in place to secure its information system assets. ISO 42001 is analogous to ISO 27001 in that it will provide organizations with a certifiable standard for managing their AI systems.
How can I get certified against ISO 42001?
As with other ISO standards, certification bodies will be established to audit organizations that seek to adopt ISO 42001. It is reasonable to expect those certification bodies to take form beginning in 2024.