California enacts sweeping new AI regulation
Over the course of September, more than 30 artificial intelligence (AI)-focused bills crossed California Governor Gavin Newsom’s desk. The bills proposed a broad set of rules, ranging from mandatory disclosure of the information used to train AI systems to safeguards against over-reliance on AI in healthcare decisions.
Among them were several polarizing proposals, including AB 2013, a bill mandating high-level disclosures of AI system training data, and SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), a bill drafted to impose comprehensive obligations on developers and operators of powerful AI models.
While several of these bills were signed into law, others fell short of Governor Newsom’s approval and were either vetoed or had their decisions postponed.
Signing of AB 2013: AI Training Data Transparency
On September 28, AB 2013 was one of the many bills signed into law by Governor Newsom. The primary purpose of the bill, which requires compliance from January 1, 2026, is to establish a set of rules for generative AI systems (those capable of learning from data and “generating” new content) or services made available to Californians. It does so by requiring developers of these technologies to implement several transparency and accountability measures that clearly establish the provenance and content of the data used during development.
Requirements
AB 2013 requires AI developers (defined as any person, entity, or government agency that either creates or substantially modifies a generative AI system or service) to make available on their relevant websites documentation that outlines the data used to train their generative AI systems or services. This must be implemented prior to making the system or service available to individuals in California.
The documentation must include information regarding the following (an illustrative sketch of how these disclosures might be organized appears after the list):
- The data points contained in the datasets, including sources, owners, types of labels (if unlabeled, “general characteristics”), total number of data points, and protection status (covered by intellectual property protections or in the public domain)
- Whether the developer purchased or licensed the datasets used by the system or service
- Whether the datasets include personal information or aggregate consumer information, as defined by the California Consumer Privacy Act
- The purpose and methodology of any data collection, cleaning, processing, or modification
- The dates on which the datasets were first used during development, and
- The use of synthetic information for ongoing training and development.
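For illustration only, the disclosure items above could be captured in a structured record along the lines of the sketch below. This is a minimal Python sketch; the field names, types, and example values are our own assumptions and are not prescribed by AB 2013 or any regulator.

```python
# Illustrative only: these field names are our own labels for the AB 2013
# disclosure items and are not defined by the statute.
from dataclasses import dataclass

@dataclass
class TrainingDatasetDisclosure:
    """One dataset entry in a developer's training data documentation."""
    sources_or_owners: list[str]            # sources and owners of the datasets
    data_point_types: list[str]             # e.g., "text", "images", "audio"
    label_types: list[str]                  # or "general characteristics" if unlabeled
    approximate_data_point_count: int       # total number of data points
    protection_status: str                  # IP-protected, public domain, or mixed
    purchased_or_licensed: bool             # whether the datasets were purchased or licensed
    contains_personal_information: bool     # as defined by the CCPA
    contains_aggregate_consumer_info: bool  # as defined by the CCPA
    collection_and_processing_notes: str    # purpose/methodology of collection, cleaning, processing
    first_used_date: str                    # when the dataset was first used during development
    uses_synthetic_data: bool               # whether synthetic data is used in ongoing training

# Hypothetical example entry
example = TrainingDatasetDisclosure(
    sources_or_owners=["Publicly available web text (hypothetical source)"],
    data_point_types=["text"],
    label_types=["general characteristics"],
    approximate_data_point_count=1_000_000,
    protection_status="mixed",
    purchased_or_licensed=False,
    contains_personal_information=False,
    contains_aggregate_consumer_info=False,
    collection_and_processing_notes="Deduplicated and quality-filtered (hypothetical).",
    first_used_date="2023-06-01",
    uses_synthetic_data=False,
)
```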
Much like several other new AI regulations, AB 2013 includes a grandfathering provision that exempts generative AI systems or services from compliance if they were released before January 1, 2022. Where a substantial modification has occurred on or after January 1, 2022, the provisions outlined in the bill will apply. Substantial modifications may include new versions, updates, or releases, or material changes to the underlying system or service by the original provider or a downstream party (including where changes occur due to retraining or fine-tuning).
The bill also includes several use-specific exceptions, including systems or services used solely to protect operational security and integrity, aircraft operation, or national security, or for military and defense purposes.
Governor Newsom vetoes SB 1047
On September 29, 2024, Governor Newsom vetoed SB 1047, a contentious proposal that aimed to regulate powerful AI models and impose obligations on their developers and on compute resource providers.
The bill sought to impose heavy obligations on developers and operators of large AI models with the intent of limiting the potential for “critical harm.” These obligations included mandated technical and organizational controls, safety protocols, annual third-party audits, and compliance and incident reporting requirements. Further information on the specific original obligations of the bill can be found here.
Many of these obligations faced strong opposition from major players in the tech industry, who argued that the bill was overly broad and practically untenable (and, in some cases, that its extraterritorial requirements would violate developers’ First Amendment rights). The bill did, however, receive support from several industry experts, who argued that it was a necessary and timely step toward mitigating AI risks.
In his veto message, Governor Newsom emphasized the critical need for AI safety and security but expressed his concern that the bill was not the right approach. He argued that the bill's focus on regulating only the most expensive and large-scale AI models could create a false sense of security, potentially overlooking smaller, specialized models that might pose equal or greater risks.
Governor Newsom also highlighted that the bill's stringent standards applied broadly, without considering whether an AI system is deployed in high-risk environments or involves critical decision-making. He stressed the importance of adaptability in regulation and the need for a framework informed by empirical evidence that can evolve alongside technology.
Other bills enacted
Despite the veto of SB 1047, Governor Newsom enacted several other bills designed to regulate specific applications of AI and their underlying technologies. These bills include:
Entertainment
- AB 2602, which requires studios to enter into a written contract with an actor, negotiated with professional representation, before creating an AI-generated digital replica of the actor’s voice or likeness.
- AB 1836, which prohibits studios from using digital replicas of deceased performers without first obtaining the consent of their estates.
Elections
- AB 2655, which requires large online platforms to remove or label AI deepfakes related to elections, as well as create channels to report such content.
- AB 2839, which takes aim at social media users who post, or repost, AI deepfakes that could deceive voters about upcoming elections.
- AB 2355, which requires disclosures about AI-generated political advertisements.
Misinformation and Deepfakes
- SB 942, which requires widely used generative AI systems to add watermarks to AI-generated content identifying it as AI generated.
- AB 2905, which mandates that any prerecorded message using an artificial voice must inform the recipient of this fact during the initial natural voice announcement.
- SB 926, which makes it a criminal act to blackmail someone with AI-generated nude images.
- SB 981, which requires social media platforms to establish reporting channels for deepfake nudes.
Consumer Protection
- AB 1008, which amends the California Consumer Privacy Act to clarify definitions related to personal information, specifying that personal information can exist in various formats, including information stored by AI systems.
Healthcare
- AB 3030, which mandates that health facilities and clinics using generative AI for patient communications must include disclaimers and contact instructions for human providers.
- SB 1120, which requires healthcare service plans and disability insurers that use AI for utilization review or utilization management to comply with specified requirements, including safeguards against discrimination, and to ensure that AI does not replace provider decision-making.
Education
- AB 2876, which requires the Instructional Quality Commission to consider incorporating media and AI literacy into curriculum frameworks and instructional materials for various subjects.
- SB 1288, which establishes a working group by the Superintendent of Public Instruction to develop guidance and a model policy for the safe and effective use of artificial intelligence in public schools, aiming to benefit and protect pupils and educators.
Implications and horizon scanning
While the veto of SB 1047 is viewed by proponents of the bill as a regulatory setback in establishing California as a leader in the AI space, the enactment of AB 2013 and the accompanying AI-related bills can be considered a positive step that reinforces California’s commitment to tackling the challenges and opportunities of AI. More bills are expected to be introduced in the next legislative session as AI continues to evolve and impact various sectors and domains.
DLA Piper is here to help
DLA Piper’s team of lawyers and data scientists assists organizations in navigating the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements. We continuously monitor AI-related updates and developments and their impact on industry across the world.
As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper received the Innovative Lawyers in Technology award for its AI and Data Analytics practice.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI Strategy through our newly released AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.