NIST Releases Plan for Developing AI Standards and Focuses on Industry Engagement
August 16, 2019
On August 12, the National Institute of Standards and Technology (NIST) announced the release of its long-anticipated Plan for federal engagement and U.S. leadership on artificial intelligence (AI) standards. The Plan sets out a framework for federal agencies to move forward on developing AI standards that will be critical to both U.S. and international regulatory and policy approaches. And it makes industry engagement a centerpiece of federal efforts, particularly as the U.S. government attempts to work globally to shape AI standards with countries that share a similar pro-innovation approach.
The Plan has four main recommendations:
Bolster AI standards leadership and coordination among federal agencies;
Promote focused research to help support “trustworthy” AI;
Support and expand public-private partnerships on AI; and
Strategically engage with international parties to advance AI standards for U.S. economic and national security needs.
The Plan identifies nine categories of AI standards for further development. These include a number of standards that will be important as regulatory approaches evolve.
Several of the standards categories address issues that have repeatedly come up in regulatory and policy debates around AI. For example, standards will address data quality, privacy, safety, security, risk management, explainability, and “objectivity,” which appears to cover issues around bias. These standards can form the foundation of regulatory approaches by defining optimal outcomes for regulation: for example, what does it mean for AI outcomes to be “explainable” when AI algorithms can learn as they go? As the Plan recognizes, work is already under way in some of these areas, but more industry input is needed.
Agencies leading the standards work
The Plan largely directs individual agencies to drive standards-setting in their respective sectors. NIST recommends that each agency assess how AI can be used to further its mission, conduct a “landscape scan and gap analysis” to identify standards that need to be developed, and engage in standards development where necessary. The Plan points to the U.S. Department of Transportation (DOT) and the U.S. Food and Drug Administration (FDA) as being “ahead of the curve.” DOT, for example, has issued guidance (AV 3.0) on its approach to autonomous vehicles and safety.
Additionally, the Plan establishes a few centralized areas of coordination on the domestic front:
National Science and Technology Council (NSTC): NIST recommends that the NSTC Machine Learning/AI Subcommittee establish a Standards Coordinator who will gather and share standards strategies and best practices across agencies. The Coordinator will identify specific areas for prioritization, ensure coordination with private sector standards development organizations (SDOs), and determine whether additional guidance is appropriate.
Office of Management and Budget (OMB): NIST recommends that OMB “[r]einforce the importance of agencies’ adherence to Federal policies for standards and related tools,” including in the area of data access.
NIST / U.S. Department of Commerce: NIST will take the lead on developing metrics, data sets, and benchmarks to assess reliable and “trustworthy” attributes of AI systems, and identify research needs for related scientific breakthroughs. And along with the National Science Foundation, it will facilitate research and collaboration on societal and ethical considerations that might bear on the use of standards.
Importance of industry engagement and public/private partnerships
Overall, the Plan envisions that agencies will work collaboratively with industry in engaging in standards-setting processes. This approach relies “largely on the private sector to develop voluntary consensus standards, with Federal agencies contributing to and using those standards.”
The Plan directs agencies to prioritize standards-setting efforts that are “consensus-based,” open and transparent, and globally non-discriminatory. It also encourages those efforts to be (among other things) innovation-oriented, regularly updated, human-centered, and applicable across sectors, while remaining focused on individual sectors that present particular risks.
Finally, the Plan recommends that the U.S. government escalate its efforts, likely in partnership with industry, to shape AI technical standards and other policies globally. The Plan identifies the U.S. Departments of Commerce, State, and Justice as the lead agencies on international engagement. This priority – coupled with increased interest in Europe and elsewhere in AI regulation – points to the need for businesses to develop policy positions and advocate with these agencies.
In particular, the Plan includes a recommendation to “champion U.S. AI standards priorities” around the world. This will likely mean increased U.S. government involvement in global AI standards bodies, including international SDOs such as the International Organization for Standardization (ISO) and U.S.-based organizations such as the IEEE that create standards for global use. The Plan notes that some governments play a more centrally managed role in standards development and related activities, and it emphasizes that the government should “ensure that U.S. standards-related priorities and interests . . . are not impeded.” It will therefore be important for industry to ensure that U.S. agencies are aware of and supportive of business priorities, and industry should be prepared for government requests for technical expert input and for increased participation in standards-setting activities.
The Plan also recommends that the U.S. develop AI standards collaboratively with “like-minded countries.” U.S. support for the Organisation for Economic Co-operation and Development’s (OECD) AI principles is a good example of this approach, and many policy debates outside the U.S. are now moving into a more detailed phase of implementation of high-level principles from the OECD and from the European Commission (EC). Notably, the OECD and EC frameworks apply compliance expectations throughout the AI ecosystem, including to both AI developers and users.
Overall, NIST has created a framework that opens the door for industry leadership in shaping key AI standards on explainability, bias, accuracy, risk management, and safety. Well-developed, thoughtful standards in those areas can promote innovation in an exciting and fast-moving field. Our AI team at Wiley Rein continues to engage with key government stakeholders as the process moves forward.