NIST Is Taking Critical Steps Towards an AI Risk Management Framework

The National Institute of Standards and Technology (NIST) is leading the federal government’s charge on a framework for assessing and managing risks in artificial intelligence (AI), with a critical workshop this week to review its latest draft. All signs point to a risk management framework in early 2023, but there is more work for NIST and stakeholders to do before then.

This week, on October 18–19, NIST will hold a workshop to discuss feedback on its latest draft (Second Draft) of an AI Risk Management Framework (RMF), which NIST released in August along with a draft Playbook with implementation suggestions. The public comment period for both the Second Draft and the draft Playbook closed on September 29, but the workshop will provide another opportunity for feedback.

As the Administration pushes forward with an AI Bill of Rights and federal agencies explore regulatory approaches to AI, voluntary and risk-based approaches to managing AI risks are potentially critical tools for companies looking to develop and deploy AI in a responsible manner.

The AI RMF Second Draft and Draft Playbook

With the AI RMF, NIST aims to develop a framework to “better manage risks to individuals, organizations, and society associated with [AI].” The Second Draft follows NIST’s First Draft of the AI RMF, which was released on March 17, 2022, and previous workshops seeking stakeholder input.

The AI RMF is intended to provide guidance on promoting trustworthy and responsible AI and to provide users with tools for addressing AI system-specific challenges across the lifecycle of an AI system. The Second Draft states that the AI RMF aims to be risk-based, non-prescriptive, law- and regulation-agnostic, and a living document. It has two parts: an outline of the characteristics of “trustworthy” AI and an explanation of core functions for risk management.

In Part 1, the Second Draft outlines a number of characteristics of trustworthy AI, stating that trustworthy AI is “valid and reliable, safe, fair, and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced.”

In Part 2, NIST outlines the AI RMF “Core,” which includes four “functions,” similar to other NIST risk management guidance. These functions are:

  • Govern: A culture of risk management is cultivated and present
  • Map: Context is recognized and risks related to context are identified
  • Measure: Identified risks are assessed, analyzed, or tracked
  • Manage: Risks are prioritized and acted upon based on a projected impact

The Second Draft explains each function and then breaks each one down into a corresponding set of categories and subcategories.
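To make that function-category-subcategory structure concrete, here is a minimal illustrative sketch (ours, not NIST’s) of how an organization might represent the Core hierarchy to track its own risk management activities. All names and text in the example are invented placeholders, not drawn from the Second Draft.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    """A single AI RMF subcategory and an organization's status against it."""
    text: str
    status: str = "not started"  # e.g., "not started", "in progress", "complete"

@dataclass
class Category:
    """A category grouping related subcategories within one Core function."""
    name: str
    subcategories: list[Subcategory] = field(default_factory=list)

@dataclass
class CoreFunction:
    """One of the four Core functions: Govern, Map, Measure, or Manage."""
    name: str
    categories: list[Category] = field(default_factory=list)

# Hypothetical entry -- the category and subcategory text below is invented
# for illustration only.
govern = CoreFunction(
    name="Govern",
    categories=[
        Category(
            name="Risk management policies",
            subcategories=[
                Subcategory(text="Policies for AI risk management are documented."),
            ],
        ),
    ],
)
```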

Additionally, Part 2 notes that NIST is welcoming contributions towards “use case profiles” that implement the AI RMF for certain applications, such as uses of AI in hiring and fair housing. NIST also explains that there may be “temporal profiles” that describe either the current state or the desired, targeted state of a certain AI risk management activity. The Second Draft does not include any sample profiles, but NIST encourages users to submit use case profiles, which “will likely lead to improvements which can be incorporated into future versions of the framework.”

The accompanying draft AI RMF Playbook provides more details about how an organization might implement each function – though it is still under development and only provides draft guidance for the “Map” and “Govern” functions at this time. For each AI RMF function subcategory, the Playbook will include: (1) the text of the subcategory, (2) a more detailed “About” section, (3) “Actions” users can take to address the subcategory, (4) “Transparency and Documentation” resources, and (5) additional “References.”
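For illustration only, the brief sketch below models those five per-subcategory elements as a simple data structure. The field names track the Playbook sections described above, but the structure itself is our assumption, not something NIST publishes.

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookEntry:
    """Hypothetical model of the five elements the Playbook provides per subcategory."""
    subcategory_text: str  # (1) the text of the subcategory
    about: str             # (2) the more detailed "About" section
    actions: list[str] = field(default_factory=list)            # (3) "Actions" users can take
    transparency_docs: list[str] = field(default_factory=list)  # (4) "Transparency and Documentation" resources
    references: list[str] = field(default_factory=list)         # (5) additional "References"
```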

Most information about NIST’s goals for the Playbook can be found in its FAQ. This page explains that the Playbook is “not a one-size-fits-all resource – and it is neither a checklist nor an ordered list of steps for AI actors to implement. Playbook users are not expected to review or implement all of the suggestions or to go through it as an ordered series of steps.” Additionally, the FAQ states that the Playbook will be updated more frequently than the AI RMF, and that there will be “no final version.”

What Comes Next

While the comment period on the Second Draft and the draft Playbook has closed, NIST still has several action items on its agenda for the AI RMF. First and foremost, NIST is holding a third workshop on the AI RMF on October 18–19. After the workshop, NIST will incorporate feedback on the Second Draft and release Version 1.0 of the AI RMF – as well as a complete Playbook – sometime in January 2023. It is less clear how NIST will collect feedback on the remaining parts of the Playbook, though NIST appears to anticipate iterative comments on the Playbook over time.

New resources are also anticipated, but these do not presently have release dates. NIST explains that a new “Trustworthy and Responsible AI Resource Center” will host the AI RMF and similar documents. NIST invites contributions to the Resource Center, including “AI RMF profiles, explanatory papers, document templates, approaches to measurement and evaluation, toolkits, datasets, policies, or a proposed AI RMF crosswalk with other resources – including standards and frameworks. Eventually, contributions could include AI RMF case studies, reviews of Framework adoption and effectiveness, educational materials, additional forms of technical guidance related to the management of trustworthy AI, and other implementation resources.” Of these, the AI RMF profiles are likely to be the most notable additions.

As AI use cases evolve and multiply, the AI RMF and accompanying guidance will be important for companies looking to develop and deploy AI responsibly, particularly as regulatory expectations continue to grow.

***

Wiley’s AI team assists clients in advocacy, compliance, and risk management approaches to AI technology and algorithmic decision-making. Please reach out to any of the authors with questions.
