New California Bill Proposes Strict Regulations on AI Development and Safety

by Lawrence J. Tjan | Sep 04, 2024

The California State Assembly passed SB 1047, a bill aimed at regulating the development of artificial intelligence (AI) in the state. The bill, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, sets out stringent requirements for developers before they can begin training AI models, marking a significant step toward managing the risks associated with advanced AI technologies.

Approved by a 49-15 vote, SB 1047 focuses on AI developers responsible for "covered models"—AI systems defined by the large quantity of computing power and high costs required for their training. These covered models include advanced AI systems that could pose substantial risks if misused. Developers of such models must meet several pre-training conditions, including establishing cybersecurity protocols, ensuring shutdown capabilities, and creating comprehensive safety procedures.

The bill requires developers to report any AI safety incidents and mandates annual third-party audits to ensure compliance. In addition, whistleblower protections have been included to encourage transparency and accountability within the AI development community.

The bill, authored by Senator Scott Wiener, targets “frontier” AI models, or advanced AI systems that require significant computational power to train. The bill would add provisions to both the Business and Professions Code and the Government Code, introducing a series of mandates for developers, such as:

Safety Protocols and Shutdown Capabilities: Before training any AI model, developers must implement robust safety and security protocols, including the ability to enact a full shutdown if necessary. These protocols would be documented and shared with the California Attorney General, who would be given access to unredacted safety reports.

Audits and Reporting: Developers would be required to conduct independent annual audits of their compliance with the safety regulations. Additionally, any AI safety incidents—such as unauthorized use or theft of AI models—must be reported to the Attorney General within 72 hours.

Restrictions on AI Use: The bill prohibits the deployment of AI models if there is a significant risk that they could cause "critical harm," such as enabling cyberattacks on critical infrastructure or the misuse of AI for destructive purposes, including the creation of weapons.

Whistleblower Protections: The bill includes whistleblower protections to safeguard employees from retaliation if they report AI safety concerns or noncompliance by their employers.

CalCompute: SB 1047 proposes the creation of CalCompute, a state-managed public cloud computing cluster that would foster safe and equitable AI research and innovation. The consortium managing CalCompute would include experts from academia, government, and industry, and would be tasked with ensuring that computational resources are accessible to a broad range of AI developers, including academic institutions and startups.

SB 1047 comes in response to growing public concerns over the risks of AI misuse, including its potential to enable the creation of weapons of mass destruction. These concerns echo the 2022 AI Bill of Rights Blueprint released by the White House, which outlined similar issues surrounding AI's impact on society. President Joe Biden signed an executive order in 2023 to establish national AI safety standards.

The bill highlights the dual nature of AI. While AI could revolutionize industries such as medicine and environmental science, it poses significant risks if not adequately controlled. The text of the bill specifically mentions the dangers of AI being used to develop cyber weapons, nuclear technologies, and biological weapons.

By establishing stringent regulations, the bill aims to ensure that California continues to lead AI innovation while safeguarding against its misuse. Senator Wiener emphasized that “California must remain at the forefront of AI development, but we must also prioritize public safety and security.”

While the bill has garnered support from several AI safety advocates, it is expected to face opposition from some in the tech industry who may view the regulations as overly burdensome. Companies developing large-scale AI models may argue that compliance with the bill's provisions, such as audits and shutdown protocols, could increase costs and slow down innovation.

Following the Assembly vote, Senator Wiener described the legislation as a "light touch, commonsense measure" that codifies safety commitments AI companies have already made voluntarily. He praised the Assembly for taking a proactive approach to safeguarding public interest while advancing AI technology.

Tesla CEO Elon Musk, a vocal advocate for AI regulation, voiced his support for the bill on X (formerly Twitter), highlighting the need for oversight in AI's rapid development. In contrast, Congresswoman Nancy Pelosi criticized the bill, calling it "well-intentioned but ill-informed," warning that it could be "more harmful than helpful" to consumers.

Despite potential pushback, the bill reflects growing global concern about the unchecked development of AI. Similar regulatory efforts are underway at the federal level and in other states, adding momentum to the broader push to govern AI technologies.

With the Assembly's approval, SB 1047 returns to the California Senate for a final vote. If passed there, it will go to Governor Gavin Newsom, who may sign or veto it. If enacted, SB 1047 would take effect in 2026, making California one of the first jurisdictions to implement comprehensive legal safeguards for advanced AI systems.

Lawrence J. Tjan
Lawrence is an attorney with experience in corporate and general business law, complemented by a background in law practice management. His litigation expertise spans complex issues such as antitrust, bad faith, and medical malpractice. On the transactional side, Lawrence has handled buy-sell agreements, Reg D disclosures, and stock option plans, bringing a practical and informed approach to each matter. Lawrence is the founder and CEO of Law Commentary.
