EU Presses Ahead on AI Regulation – With Global Implications
On April 21, the European Commission (EC) proposed a set of Artificial Intelligence (AI) regulations that would dramatically reshape how AI is deployed in Europe and potentially affect businesses around the world, including in the United States.
The Proposal for a Regulation on “Harmonized Rules on Artificial Intelligence” has the objective of promoting human-centric AI, and would prohibit some AI uses while imposing a comprehensive regime of strict regulations on AI uses with the highest risks. Other uses would be subject to lesser regulation, based on transparency principles and codes of conduct. Overall, the proposed Regulation would cover the entire AI system life cycle, from development through sale or use of the end product, and would reach AI development and implementation beyond the European Union’s (EU) borders, given the international development of AI systems and the importance of the European market.
The proposed Regulation is open for feedback until June 22, and comments will be summarized by the EC and presented to the European Parliament. The EC has attempted a detailed approach in its comprehensive 108-page document, but many questions remain. Below, we flag considerations for businesses (including those based outside the EU) as the proposal advances through the EU legislative process.
Overview of the Proposed Regulation
The proposed Regulation is the first step in a legislative process that will wind through the EU Parliament and the Council of EU member states. Key stakeholders are already debating whether the proposal is too restrictive or too flexible. We can expect provisions to change, making it important for businesses to track, and possibly influence, the content as the process unfolds.
The framework of the proposed Regulation sets up multiple tiers of AI uses. The proposed Regulation would prohibit certain “unacceptable” AI use cases. These include exploitation of vulnerabilities of groups based on age or disability to “distort” individual behavior in a way that causes harm, use of what the Regulation describes as “subliminal techniques” in ways that cause or are likely to cause harm, and use by public authorities of “social scores” resulting in detrimental or unfavorable treatment of individuals without sufficient justification. Remote biometric identification systems, such as facial recognition, also would be severely restricted when used for law enforcement purposes, and otherwise would be considered high-risk. “High-risk” AI systems, such as those affecting personal safety or those used in hiring or credit decisions, would be subject to strict obligations including risk assessments, logging and documentation requirements, and human oversight measures.
A threshold issue for any business will be where it fits into the proposed regulatory system – whether an AI application it develops or uses falls within the Regulation’s AI definitions and whether the business is a provider, user, importer, or distributor of an AI system. The intent of the Regulation is to cover all AI value chain participants, but these definitions are likely to require elaboration.
Moreover, the proposed Regulation is potentially applicable to parties outside the EU. For example, the proposed rules apply to providers putting AI systems into service in the EU, regardless of whether those providers are established within the EU or in a third country, and to providers and users of AI systems located in a third country where output by the system is used in the EU. Additionally, if data collected in the EU is used outside the EU for a high-risk AI system and has effects that impact natural persons in the EU, that AI system will be subject to the rules. Applicability questions could be complex, particularly where AI systems are embedded in the middle of cross-border value chains.
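The scope questions described above can be sketched, very loosely, as a boolean test. This is a simplified illustration only; the predicate names are our assumptions, not the Regulation's terms, and real applicability analysis would turn on the proposal's actual text.

```python
# Hypothetical sketch of the extraterritorial-scope triggers described in
# the text. Predicate names are illustrative simplifications, not terms
# drawn from the proposed Regulation.
def regulation_may_apply(provider_established_in_eu: bool,
                         placed_on_eu_market_or_into_service: bool,
                         output_used_in_eu: bool) -> bool:
    # Providers placing systems on the EU market or putting them into
    # service in the EU are covered regardless of where they are established.
    if placed_on_eu_market_or_into_service:
        return True
    # Providers and users located in third countries are covered where
    # the system's output is used in the EU.
    if output_used_in_eu:
        return True
    # Otherwise, EU establishment remains the basic trigger for EU actors.
    return provider_established_in_eu
```

Even this toy version shows why applicability can be complex in cross-border value chains: a third-country developer with no EU presence could still be covered solely because its system's output is used in the EU.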
Potential Compliance Challenges
For compliance purposes, assuming the Regulation is adopted with a framework similar to the proposal, businesses will need to map their AI systems (whether they are AI providers, users, or intermediaries) and categorize each use as prohibited, high-risk, or non-high-risk. The proposed Regulation recognizes that the list of identified high-risk categories likely will be amended over time, creating uncertainty for business planning. If new examples of high-risk AI are introduced, covered companies will need to adjust quickly.
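The mapping exercise described above could be organized as a simple inventory keyed to the proposal's three tiers. The sketch below is a minimal illustration under our own assumptions: the category names and example use cases are hypothetical placeholders, not the Regulation's actual annex lists, which are expected to be amended over time.

```python
# Minimal sketch of an AI-system compliance inventory, assuming the
# proposed Regulation's three-tier structure. Use-case labels are
# illustrative assumptions, not the Regulation's annex categories.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    NON_HIGH_RISK = "non_high_risk"

@dataclass
class AISystem:
    name: str
    use_case: str

# Illustrative lists only; the actual high-risk list is expected to be
# amended as the legislative process unfolds, so these would need review.
PROHIBITED_USES = {"public_social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"hiring", "credit_scoring", "safety_component"}

def classify(system: AISystem) -> RiskTier:
    """Assign a risk tier to a system based on its declared use case."""
    if system.use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if system.use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    return RiskTier.NON_HIGH_RISK
```

Because the high-risk list is a moving target, keeping the category sets in a single, auditable place (rather than scattered through business logic) would make it easier to adjust quickly when new high-risk examples are introduced.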
The requirements for high-risk systems include data and data governance, documentation and record keeping, provision of information and transparency, human oversight, accuracy, and robustness. These requirements will require clarification, especially since voluntary codes of conduct for non-high-risk AI systems are encouraged to cover the same categories. This could be an important area for business engagement during the next phase.
Additionally, the proposed Regulation encourages codes of conduct but devotes only a brief section of text to them. The proposal appears to contemplate that entities involved with non-high-risk AI systems would voluntarily comply with the requirements imposed on high-risk systems. This could be a considerable undertaking for a business, and an important topic for input on feasibility and on ways to facilitate business planning in this context.
Overall, the proposed Regulation should be viewed in the broader contexts of the EC’s history in tech and privacy regulation, its recent focused work on AI, and AI public policy development in other venues such as the Organisation for Economic Co-operation and Development (OECD). For example, as recounted in the narrative for the proposed Regulation, in 2018 the EC established a highly regarded advisory group of experts that produced important guidelines and reports on AI, and the EC conducted a consultation in 2020 that drew 1,200 responses. During this period, the OECD developed guidelines for trustworthy AI that have been a global model, created a vast repository of materials, and collaborated with the EU on AI regulatory issues.
As the details of the proposed regulation continue to be dissected and debated, stakeholders have less than two months to weigh in with their views. Some of the definitional ambiguities and compliance expectations may be resolved in the coming months. But with a potentially global impact, all eyes are on Europe for the next regulatory steps on AI.