New Guidance on AI Regulations as Federal Agencies Plan Transition to a New Administration
The current Administration has released guidance to federal agencies on artificial intelligence (AI) regulation—outlining a relatively light-touch approach, but pushing agency planning into the new year and a new Administration. Even the current guidance points to areas where greater regulation may be on the horizon. For many agencies, this will represent a key turning point in defining their approach to AI, as new leadership takes over.
In February 2019, the Administration launched the American AI Initiative, beginning a coordinated effort across multiple federal government agencies to promote U.S. leadership in AI research, development, and deployment. The latest development in this effort is an Office of Management and Budget (OMB) memo, finalized on November 17, 2020, which establishes guidance and new requirements governing how federal agencies approach the regulation of AI applications. Specifically, the memo directs federal agencies to conduct cost-benefit analyses and consider non-regulatory approaches before regulating AI applications, and it requires agencies to develop plans to implement the guidance by May 17, 2021. This will have a clear impact on the development of a federal regulatory approach to AI, and the memo effectively invites outside collaboration – including industry collaboration – in this dynamic area.
Background: The 2019 AI Executive Order Launched Efforts Across the Federal Government to Establish a Consistent Approach to AI.
On February 11, 2019, the Executive Order on Maintaining American Leadership in Artificial Intelligence (AI Executive Order) launched the American AI Initiative to drive a coordinated strategy across the federal government to promote AI. The EO launched various workstreams across the government, including critical ongoing research and development of technical standards for trustworthy AI at the National Institute of Standards and Technology (NIST).
One directive under the AI Executive Order was for the OMB to issue a memo on “regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI,” and to “consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.” In January 2020, OMB published and sought public comment on a draft memo to satisfy this requirement. The November 17, 2020 memo finalizes the guidance.
OMB’s Guidance Promotes a Risk-Based Approach to AI Regulation, and May Pave the Way for Targeted Regulatory Solutions.
The final memorandum focuses on a risk-based, cost-benefit approach to regulation that prioritizes non-regulatory solutions, and states that “agencies must avoid a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America's position as the global leader in AI innovation.” Importantly, it emphasizes opportunities for public input, evidence-based decision-making, and risk management, while also promoting public trust and transparency and guarding against discrimination and safety and security risks. The final memo specifically encourages the use of voluntary frameworks (e.g., the NIST cybersecurity and privacy frameworks).
The memo, however, does not completely foreclose the possibility of AI regulation. Rather, it suggests approval of “narrowly tailored and evidence-based regulations [to] address specific and identifiable risks” in some circumstances, for purposes of enhancing public trust and ensuring U.S. competitiveness. Examples given are an “appropriate regulatory approach that reduces accidents” (e.g., autonomous vehicle safety) and regulation “to protect reasonable expectations of privacy on the part of individuals who interact with AI and to ensure that AI does not compromise the ability of individuals to make their own informed decisions.” This suggests that agencies will have greater leeway to regulate AI in the name of privacy, and seemingly leaves the door open for regulations that could address issues like algorithmic content moderation and deepfakes. Notably, the final memo places a greater emphasis on explainability and transparency, and even suggests that regulatory approaches may be advisable in this area.
The New Requirement for Agencies to Develop AI Plans Spans the Change in Administrations.
The memo requires the development of agency plans, which are due to the Office of Information and Regulatory Affairs (OIRA) on May 17, 2021 and must be publicly posted. This deadline pushes finalization of plans into the new Administration, likely giving the Biden Administration significant say in agencies’ AI regulatory approaches.
Interestingly, the memorandum purports to apply to independent agencies, such as the Federal Communications Commission – an addition that was not present in the draft. Republican Senators have recently urged the Trump Administration to apply more searching OMB review of significant regulations to independent agencies (via OIRA), in a departure from previous practice, and this would be consistent with that approach. Looking ahead, the Biden Administration may be less likely to seek to exercise greater review of independent agency regulations. However, it remains to be seen how the Biden Administration will handle this new OMB directive on AI, and how centralized its approach will be.
In any event, agencies will need to engage more substantively on AI issues starting now, in order to finalize plans for submission in the spring – meaning that stakeholders have a critical opportunity to weigh in as AI regulatory approaches begin to fully form over the next few years.
Wiley’s Artificial Intelligence practice counsels clients on AI compliance, risk management, and regulatory and policy approaches, and we continue to engage with key government stakeholders in this quickly moving area.