
Adopt the best-in-class AI framework for U.S.-based companies

The NIST AI Risk Management Framework is widely regarded as an effective playbook for private and public sector organizations to adopt AI responsibly.

What is the NIST AI Risk Management Framework?

The NIST AI RMF is the U.S. federal government’s first comprehensive framework for identifying and managing risks associated with the development and deployment of AI. Released in January 2023, the NIST AI RMF is organized around four core risk management functions: Govern, Map, Measure, and Manage. Each of the four functions has underlying categories and subcategories of risk management actions and outcomes. The NIST AI RMF is accompanied by a series of companion documents meant to offer a practical roadmap for organizations to implement the framework.


Key Requirements of the AI RMF

Understanding Risk

Organizations need a baseline understanding of how AI systems can negatively impact individuals, their organization, and society.

Mapping AI Systems

Organizations must understand which AI systems are designed and deployed across the organization, as well as the purpose of each of those AI systems.

Building Trustworthy AI

There are seven characteristics of trustworthy AI that must underpin an organization’s AI risk management activities.

Measuring AI Risks

A methodology must be established to analyze and assess AI risks and their related impacts.

Governance Structures

An organization must implement policies and procedures to execute and oversee its risk management activities.

Managing AI Risks

Identified risks must be prioritized, and the organization must allocate sufficient resources to mitigate them.

Navigate the NIST AI RMF with Trustible

AI INVENTORY

Centralize NIST AI RMF documentation in a single source of truth across AI use cases.

AI POLICIES

Develop and enforce AI policies that protect your organization, users, and society.

RISK MANAGEMENT

Identify, measure, manage, and mitigate potential risks in your AI systems.

Trustible AI Governance Platform

FAQs

Is the NIST AI RMF enforceable?

The NIST AI RMF is a voluntary framework. However, as legislators in the U.S. continue to discuss how to regulate AI, components of the NIST AI RMF may become enforceable regulatory requirements.

Do I need to adopt the NIST AI RMF if my use cases are low risk?

While the NIST AI RMF is primarily focused on managing high-risk AI use cases, organizations that design, develop, or deploy AI in any context should consider how the NIST AI RMF can help them establish an AI governance structure. AI use cases can also evolve over time, and implementing a risk management framework now can help address potential use case changes in the future.

What are the seven characteristics of trustworthy AI?

The NIST AI RMF identifies the following as characteristics of trustworthy AI: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.

What other guidance is available to help implement the NIST AI RMF?

NIST provides a number of resources to help organizations understand and implement the AI RMF. These include the NIST AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, an AI RMF Crosswalk, and various independent perspectives.

As seen on

Axios, Yahoo! Finance, Markets Insider, Forbes