The AI safety bill
Should California’s proposed AI safety bill be passed? Viewpoints from multiple sides.
Enjoying Framechange? Forward to a friend to help spread the word!
New to Framechange? Sign up for free to see multiple sides in your inbox.
Learn more about our mission to reduce polarization and how we represent different viewpoints here.
Snippets
After going on strike earlier this week, roughly 45,000 US dockworkers on the East Coast and Gulf Coast agreed to suspend their strike until Jan 15 to allow time for new contract negotiations.
Israel began a ground offensive in Lebanon in what it said was a “limited, localized and targeted” operation aimed at Hezbollah’s infrastructure, with a goal of securing the border region for Israelis to return to their homes in northern Israel. It also launched airstrikes on Beirut targeting Hezbollah’s presumed replacement leader, Hashem Safieddine.
Iran launched 180+ ballistic missiles at Israel in retaliation for Israel’s recent attacks on Hezbollah and assassinations of key Hezbollah leaders. The majority of projectiles were intercepted, with some landing in central and southern parts of the country.
The death toll from Hurricane Helene rose to at least 215 across the southeast, with more than half of the deaths in North Carolina. President Biden deployed 1,000 active duty soldiers to join the North Carolina National Guard in relief efforts.
A judge in Georgia struck down the state’s 6-week abortion ban, making abortion legal up to 22 weeks of pregnancy. The 6-week ban (which included exceptions for rape and incest) was originally signed into law in 2019 and took effect after Roe v. Wade was overturned in 2022.
What’s happening
This week, California Governor Gavin Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, commonly known as SB 1047. The proposed bill would have imposed safety requirements on developers of large AI models operating in the state.
Why it’s important: SB 1047 is considered the most significant piece of AI safety legislation proposed to date in the US. California, the nation’s largest economy and the 5th-largest in the world, is traditionally a regulatory bellwether for the rest of the country. All AI companies with customers in California would have been affected, and the bill would likely have informed regulatory efforts nationally and globally.
What’s in the bill: The proposed bill defines safety standards and potential legal liabilities for developers of “covered” AI models. A few notable provisions:
Covered models: Large models that cost more than $100M and require more than 10^26 integer or floating-point operations (FLOPs) of computing power to train are subject to the rules. (No publicly available model has reached the computing-power threshold, but future models are expected to.)
Safety precautions: AI developers are required to implement specific controls and testing designed to prevent catastrophic consequences from use of their models (e.g., mass-casualty events or cyberattacks on critical infrastructure causing more than $500M in damage).
“Kill switch”: Developers are required to build a shutdown capability that can halt all operations of a model still within their control in the event of a catastrophe.
Whistleblower protections: AI companies are prohibited from preventing employees from reporting noncompliance internally or to the state.
Penalties: The California Attorney General may seek civil penalties for violations.
Newsom said he vetoed the bill because it focused only on the largest AI models while failing to account for whether models are deployed in “high-risk environments.” Revisions are expected before the state legislature and Newsom revisit the bill.
The debate: Viewpoints were mixed across traditional political party lines and within the tech industry. Opponents included Google, Meta, Microsoft, OpenAI, startup accelerator Y Combinator, prominent venture capital firm a16z, former Speaker of the House Rep. Nancy Pelosi (D-California), and prominent AI researchers Fei-Fei Li and Andrew Ng.
Supporters included Elon Musk, AI researchers Geoffrey Hinton and Yoshua Bengio, 120 Hollywood actors, and AI company Anthropic, which said the bill’s “benefits likely outweigh its costs” after many of its requested amendments were implemented.
This week, we bring you the viewpoints from multiple sides on legislation that may ultimately shape standards for AI regulation in the US. Let us know what you think.
Notable viewpoints
More opposed to SB 1047:
The bill attempts to regulate AI at the model layer when it should regulate at the application layer.
The proposed bill makes the fundamental mistake of regulating a general-purpose technology rather than its applications; a model’s developer cannot control how it is ultimately used, so regulation should instead focus on restricting how the technology is applied (i.e., regulate at the application layer rather than the model layer).
The bill’s $100M training-cost threshold for deciding which models are large enough to regulate is another consequence of its flawed focus on the underlying model layer; total training costs are very hard to calculate accurately (e.g., whether to include researcher salaries, or the costs of all training runs vs. only the most recent run).
The bill would slow AI innovation and have unintended effects.
By holding AI developers liable for misuse of their models, the proposed bill would push startup developers to act more defensively and cautiously, ultimately slowing innovation and AI development.
SB 1047’s strict guidelines will encourage AI companies and developers to leave the country for more favorable environments.
The threat of legal penalties will discourage large model developers (e.g., big tech players) from sharing their underlying models with the developer community (a practice known as open sourcing) for fear of liability over how they are ultimately used.
If SB 1047 discourages open sourcing, it could make AI less safe, since open source makes it easier for developers to check each other’s work for mistakes and safety risks.
The threat of AI and current pace of its advancement are exaggerated.
Supporters of strict AI regulation in the near-term lack an understanding of the challenges ahead in making significant progress toward human-level AI, and over-regulating now will severely limit AI research and its potential benefits for humanity.
The dangers of AI have been overblown; recent studies by AI developers like Anthropic, for example, found that AI models were only marginally more helpful than traditional search engines to someone attempting to build a biological weapon.
The proposed bill is too vague and misses important safeguards.
The proposed bill’s language is vague in places, leaving gray areas for future court proceedings that could produce consequences the bill’s authors never intended.
While focusing heavily on restrictions for certain AI models, the bill fails to lay out needed guardrails on deepfakes and AI bias.
Regulation should focus more on AI security than AI safety; that is, keeping AI out of the hands of bad actors, just as any other weapon or powerful tool should be secured.
More supportive of SB 1047:
AI poses significant risks and requires urgent regulatory action.
There are potentially catastrophic risks associated with unregulated AI, including broadened access to biological weapons and sophisticated cyberattacks on critical infrastructure; legislation is urgently needed to limit these risks.
Many of the same tech companies that oppose the bill – including Google and OpenAI – also signed an open statement in 2023 saying “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
According to AI firm Anthropic, serious misuse of AI could emerge in as little as 1-3 years.
The proposed bill encourages AI safety while remaining light-touch.
After several revisions that took into account feedback from tech companies, researchers, and other experts, SB 1047 is relatively light-touch, imposing reasonable but needed standards for responsible AI model development.
The bill requires developers to maintain safety practices in line with common guidelines but does not mandate adherence to any specific one, largely leaving the approach up to each developer.
SB 1047’s high threshold for what constitutes a covered model ($100M+ in costs and 10^26 FLOPs of computing power to develop) was intended explicitly to shield startups from regulation and support innovation among the startup ecosystem.
Regulating AI at the model layer rather than the application layer is the right approach and in line with other liability laws; for example, parents who give their children guns can ultimately be held liable for the outcome.
Opponents of the bill commonly mischaracterize its provisions.
SB 1047 only enables the California Attorney General to hold a developer of a covered AI model liable for a catastrophic event if the developer failed to perform the required upfront safety evaluation or take steps to mitigate catastrophic risks.
The proposed bill will not drive startups out of California because all tech companies that conduct business in California (e.g., that serve California customers) will be subject to the regulations, regardless of where they are headquartered.
Opponents of the bill claim the 10^26-FLOP computing-power threshold for covered models is arbitrary and will ultimately sweep up models that don’t pose any significant danger; in reality, the bill’s proposed oversight board has the flexibility to change that threshold starting in 2027 so that future models are not swept up unnecessarily.
While opponents contend SB 1047 would kill open source model development because it requires covered models to retain a “kill switch,” the bill only requires developers to shut down models still within their control in the event of a catastrophe; “downstream” open source models or derivatives no longer in a developer’s control are not the developer’s liability.
From the source
Read more from select primary sources:
Full text: SB 1047
Full text: Letter from Y Combinator and startup founders to SB 1047 author California State Senator Scott Wiener (D)
Full text: Letter from SB 1047 author California State Senator Scott Wiener (D) to Y Combinator and a16z
Full text: Letter from 100+ current and former employees at AI companies to Governor Newsom
Be heard!
We want to hear from you! Reply to this email with your perspective on California’s proposed AI safety bill and we may feature it in our socials or future newsletters. Below are topic ideas to consider:
Do you support passing SB 1047 in its current form? Why or why not?
What are some arguments or supporting points you appreciate about a viewpoint you disagree with?
Give us your feedback! Please let us know how we can improve.
Music on the bottom
Try to resist bobbing your head to this dance masterpiece by Ghostland Observatory, “Give Me the Beat.”
Listen on Spotify, Apple Music, or Amazon Music.