Congress Ramps Up Its Focus on Artificial Intelligence
Over the past several months, federal lawmakers have continued exploring regulation of artificial intelligence (AI) through hearings and legislative proposals. Amidst many other Congressional priorities, legislators seem set on making headway toward some type of AI legislation, especially following the landmark Executive Order released by the White House at the end of October.
Most recently, Senators John Thune (R-S.D.) and Amy Klobuchar (D-Minn.) introduced the AI Research, Innovation, and Accountability Act of 2023 (AI Accountability Act), which proposes a framework to promote AI accountability. Additionally, the House Energy and Commerce Committee held the first three hearings in a series surveying the role of AI across economic sectors.
Below we highlight some high-level takeaways from both developments. Overall, even as this legislative session winds down, federal legislators are not slowing their pursuit of addressing emerging issues with AI. The continued prioritization of AI foreshadows more AI-centered legislative activity in the new year.
AI Research, Innovation, and Accountability Act of 2023
The AI Accountability Act—which was introduced on November 15, 2023, and is within the jurisdiction of the Senate Committee on Commerce, Science, and Transportation—proposes a regulatory framework intended to increase transparency and accountability for high-risk AI applications. The proposed bill touches on a wide variety of AI use cases and industries. This substantive legislation suggests an increased congressional effort to push forward with creating guardrails for AI deployment and management—especially in high-risk scenarios. As this legislation moves forward, industry should take note of the proposed reporting and notice obligations and enforcement mechanisms. Of note:
Definition of AI. The bill focuses on “artificial intelligence system(s),” which are identified as “engineered system[s] that generate outputs, such as content, predictions, recommendations, or decisions for a given set of . . . objectives; and [are] designed to operate with varying levels of adaptability and autonomy using machine and human-based inputs.”
Regulation of Critical-Impact and High-Impact AI. The proposed legislation categorizes some “artificial intelligence systems” as “critical-impact” or “high-impact” AI. Critical-impact AI includes systems used in ways that have a significant effect on (1) the collection of biometric data without consent; (2) the management and operation of critical infrastructure and space-based infrastructure; or (3) criminal justice. Critical-impact AI systems would be required to undergo biennial risk assessments, beginning prior to deployment, and to submit those assessments in a report to the Secretary of Commerce. High-impact AI systems are those designed to have a significant effect “on the access of an individual to housing, employment, credit, education, healthcare or insurance,” in a way that could impact constitutional rights or safety. Under the legislation, deployers of high-impact AI systems would be subject to an annual reporting obligation that includes, among other things, submitting to Commerce a description of their use of AI, internal safeguards, and any testing processes used prior to deployment. The bill would also give the Secretary of Commerce authority to impose financial penalties on critical-impact and high-impact AI systems that violate these provisions, and the Attorney General would be granted authority to bring a civil action to enjoin noncompliance.
Notice Requirements for Persons Operating Covered Internet Platforms. The legislation would prohibit use of generative AI to operate a “covered internet platform” unless users receive clear and conspicuous notice of the generative AI use prior to interacting with the platform. “Covered internet platform” includes “any public facing website, consumer facing internet application, or mobile application available to consumers in the United States; and includes a social network site, video sharing service, search engine, and content aggregation service.” The definition excludes platforms that employ no more than 500 employees, averaged less than $50 million in annual gross receipts over the past three years, and collect data on fewer than 1 million individuals in a year. It also excludes platforms operated solely for non-profit research purposes.
NIST Directives. The bill proposes numerous research and standards development projects to be conducted by the National Institute of Standards and Technology (NIST) and other agencies. For example, the legislation would require NIST to research and develop standards for verifying AI-generated digital content, such as through watermarking, and to spearhead a 10-year pilot program to test content verification technologies and open standards. NIST would also be authorized to create best practices and recommendations for detecting AI-generated content in photos, videos, and audio.
House Energy and Commerce Committee AI Hearings
The House Energy and Commerce Committee has held three hearings so far as part of its initiative to identify opportunities and risks presented by AI in various sectors of the economy. While the hearings covered a range of topics and sectors, several key themes ran through all three, including the need for national privacy legislation and an interest in promoting AI innovation while addressing potential risks, including cyber-attacks and bias.
First, on October 18, 2023, the House Energy and Commerce Innovation, Data, and Commerce Subcommittee held a hearing on “Safeguarding Data and Innovation: Building the Foundation for the Use of Artificial Intelligence,” which focused on developing federal safeguards to protect Americans’ data and incentivize innovation. House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), Ranking Member Frank Pallone (D-N.J.), and other Committee members emphasized the importance of developing national privacy legislation to safeguard consumers and enhance transparency in a rapidly evolving AI landscape. Witnesses agreed, stressing the need for a comprehensive privacy bill. The Committee asked questions about potential mechanisms to protect consumers’ data and encourage innovation, including bolstering existing agency rules and requiring watermarks to indicate AI-generated content.
Second, on October 19, 2023, the House Energy and Commerce Energy, Climate, and Grid Security Subcommittee held the second AI hearing, “The Role of Artificial Intelligence in Powering America’s Energy Future,” which analyzed the role of AI in the energy sector. While highlighting the need for federal privacy legislation, the Committee discussed several other priorities, including the use of AI in the energy sector to enhance electricity services and support the development of cutting-edge technology. Committee members also inquired about potential AI risks, including the significant energy consumption, water consumption, and carbon emissions that could result from AI uses.
Third, on November 14, 2023, the House Energy and Commerce Communications and Technology Subcommittee held the most recent hearing, titled “Leveraging AI to Enhance American Communications,” which focused on AI in the communications sector. During the hearing, the Committee reiterated its AI priorities, including developing national data privacy legislation, and focused on recent developments from the White House, including the AI Executive Order and the National Telecommunications and Information Administration’s (NTIA) National Spectrum Strategy.
The Committee will continue the conversation about AI, with additional hearings to be announced. In the interim, Committee leadership continues to work on bills related to privacy and AI.
Wiley’s Privacy, Cyber & Data Governance and Artificial Intelligence (AI) teams assist clients with government advocacy, as well as with compliance and risk management approaches to privacy, AI technology, and algorithmic decision-making, including issues raised by generative AI. Please reach out to any of the authors with questions.
Melissa Alba, a Law Clerk at Wiley Rein LLP, contributed to this blog post.