California Eyes New Privacy, Cyber, and AI Obligations
California continues to forge ahead on potential new privacy, cybersecurity, and artificial intelligence (AI) obligations, including through its California Consumer Privacy Act (CCPA) rulemaking process and by launching a new generative AI effort.
Below, we briefly describe the latest updates from California, including some potential proposals on privacy, cybersecurity, and AI that would be particularly far-reaching.
The CPPA Releases Draft Regulations That Signal Onerous New Risk Assessment and Cyber Requirements May Be Forthcoming.
As we have flagged in previous posts, the California Privacy Protection Agency (CPPA or agency) is preparing to launch its second round of rulemakings under the California Privacy Rights Act (CPRA), which will establish new rules in three areas: (1) automated decisionmaking, (2) risk assessments, and (3) cybersecurity audits. Most recently, the CPPA Board released draft regulations for two of the three topics—risk assessments and cybersecurity audits—for discussion at its September 8 open meeting.
While the agency has not officially started the formal rulemaking process yet, the CPPA Board’s latest announcement in its preliminary rulemaking phase indicates that the agency intends to impose broad new risk assessment and cybersecurity audit compliance obligations on businesses that are subject to the CCPA. And while the agency has not yet released a draft of stand-alone automated decisionmaking rules, the cybersecurity audit and risk assessment drafts extensively address the use of automated decisionmaking technology (ADT), including AI, and propose expansive recordkeeping and assessment requirements for businesses that use this technology for a wide range of purposes. In particular:
- Draft Risk Assessment Obligations: The draft risk assessment regulations propose to require businesses to conduct substantial and detailed risk assessments when processing consumers’ personal information (PI) in certain activities that could put consumers’ privacy at risk. Activities that would require a risk assessment include: (1) selling or sharing PI, (2) processing sensitive PI, (3) processing the PI of children under 16, (4) using ADT in furtherance of certain kinds of decisions, (5) processing PI to train AI or ADT, and (6) processing PI for purposes of monitoring location, movement, or activities.
- Draft Cybersecurity Audit Obligations: The draft cybersecurity audit regulations propose requiring businesses to conduct detailed annual cybersecurity audits when the business processes consumers’ PI and meets one of the proposed “significant risk” factors. Those factors would capture businesses that: (1) generate more than fifty percent of their annual revenue from selling or sharing PI, (2) process the PI of a to-be-determined number of consumers and have annual gross revenues in excess of $25 million, (3) exceed a to-be-determined threshold for gross annual revenue, or (4) have more than a to-be-determined number of employees.
- Focus on AI: The draft regulations propose extensive requirements for businesses that use ADT or process PI to train AI. The regulations propose broad definitions of both AI and ADT.
- “AI” is defined as “an engineered or machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
- “ADT” is defined as “any system, software, or process . . . that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” The draft notes that this definition includes use of AI or machine learning (ML) in its processing.
The use of ADT or PI to train AI would require a comprehensive risk assessment, including explanations of:
- Why the business is using ADT, including the benefits over manual processing;
- The appropriate use and limitations of the technology;
- What output will be generated and how the business will use it;
- The steps the business will take to maintain the quality of PI processed by ADT or used to train AI;
- An explanation of the logic of the ADT (including any assumptions of the logic);
- How the business will evaluate the use of ADT for validity, reliability, and fairness, along with an extensive explanation of how these elements are evaluated;
- An extensive explanation of any human involvement in the use of ADT; and
- An explanation of safeguards the business will implement to address the negative impact to consumers’ privacy from the use of ADT.
Additionally, when PI is used to train AI, the showing must include an explanation of how the business will notify consumers of that use.
Governor Newsom’s New Generative AI Executive Order Launches New Efforts on State Use of the Emerging Technology.
In addition to the CPPA’s rulemaking activities, California Governor Gavin Newsom signed an Executive Order (EO) on September 6, 2023, requiring a series of reports and frameworks for government use of generative AI. Taken together with the CPPA’s focus on AI, it is clear that California is looking to take the lead on potential AI regulations, including by attempting to use authority under state privacy law.
The new EO is designed to deploy generative AI “ethically and responsibly throughout state government, protect and prepare for potential harms, and remain the world’s AI leader.” The EO considers California a national leader in AI and directs California agencies to build on and utilize the state’s academic and technical prowess.
- Reports: The EO directs state agencies and departments to draft two reports. The first is a report on the beneficial uses of generative AI. This report will also include the potential risks to “individuals, communities, and government and state government workers.” The second report is a risk-analysis report focusing specifically on generative AI’s potential threats to California’s critical energy infrastructure.
- AI Tools: The EO directs state agencies to develop AI tools to assist with the government’s use and support of generative AI, such as public sector procurement guidelines. These guidelines should be based on the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework. Additionally, the EO encourages the creation of pilot projects and test programs for generative AI. The EO also directs agencies to develop guidance for analyzing “the impact that adopting a [generative] AI tool may have on vulnerable communities.”
- Training: California agencies and departments are also directed to provide trainings for state government workers on how to responsibly use generative AI.
- Academic Partnership: The EO establishes a partnership with the University of California, Berkeley, and Stanford University to conduct research on how California can advance its position as a leader in AI.
- Legislative Proposals: The EO encourages state agencies to engage with stakeholders and legislators to develop policy recommendations for responsible use of AI.
- Reevaluation: Throughout the EO, agencies are directed to reevaluate generative AI’s impact on existing guidance, tools, programs, and regulations.
The EO directs the Government Operations Agency, the California Department of Human Resources, the California Department of General Services, the California Department of Technology, the Office of Data and Innovation, and the California Cybersecurity Integration Center to engage with stakeholders to put forward legislative proposals.
Wiley’s Privacy, Cyber & Data Governance and Artificial Intelligence (AI) teams assist clients with government advocacy, as well as compliance and risk management approaches to privacy, AI technology, and algorithmic decisionmaking, including compliance and risk management issues for generative AI. Please reach out to any of the authors with questions.