NIST Seeks Feedback on Draft AI Risk Management Framework in Connection with Extensive Stakeholder Workshop

On March 29-31, 2022, the National Institute of Standards and Technology (NIST) held its second broad stakeholder workshop on its draft Artificial Intelligence Risk Management Framework, titled Building the NIST AI Risk Management Framework: Workshop #2. The workshop extensively discussed NIST's recently released AI Risk Management Framework: Initial Draft (AI RMF or Draft). NIST is seeking stakeholder feedback on the Draft as part of a process, over the next year, to release a full version 1.0 of the AI RMF, which NIST intends to be a critical tool for organizations to identify and manage risks related to AI, including in areas like potential bias.

Both the Draft and the workshop are a part of NIST’s established AI efforts. In the National Defense Authorization Act for Fiscal Year 2021 (2021 NDAA), Congress directed NIST to develop “a voluntary risk management framework for trustworthy [AI] systems.” Following this directive, in late July 2021, NIST issued a Request for Information (RFI) seeking input to help inform the development of the AI RMF. NIST held its first AI RMF workshop on October 19-21, 2021, released an AI RMF Concept Paper on December 13, 2021, and released the first draft of the RMF on March 17, 2022.

At the workshop, NIST discussed its latest draft, which is open for public comment until April 29.  While much was covered in the two days of the workshop devoted to AI RMF development, key points of discussion included:

  • The importance of a common risk management language, which NIST has indicated it will continue to try to develop;
  • The necessity of viewing AI risk management as a continuous process, including throughout the AI lifecycle;
  • The complexities of defining many of the terms in the AI RMF, such as “risk” and “AI,” which NIST also will consider;
  • The question of the appropriate audience for the AI RMF; and
  • The wealth of standards to draw on that can promote AI RMF interoperability with other AI standards, and the complex relationship between standards and regulations.

While the first two days of the workshop addressed AI RMF development generally, the third day focused on the risk of AI bias. In addition to its work in developing the AI RMF, NIST has also been working for several years on fundamental AI research, including research on AI security, explainability, and bias. Most recently, in mid-March, NIST published NIST SP 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

NIST’s efforts are part of a broader push across the federal government to address issues associated with emerging AI technology. For example, last year, White House officials signaled that they are developing a potential consumer “Bill of Rights” for AI. At the NIST workshop, the Office of Science and Technology Policy’s acting director, Dr. Alondra Nelson, spoke, encouraging the AI risk management development process to move from principles to practices.

Businesses that develop and/or use AI, as well as other interested stakeholders, should pay close attention to NIST’s AI efforts and consider engaging in NIST’s consensus-based process to develop the AI RMF.  Like other risk management frameworks that NIST has developed, including the Cybersecurity Framework, the final AI RMF promises to have important impacts on how AI is deployed and managed. Comments are due on the current AI RMF draft by April 29, 2022.