Sunday, September 14, 2025

The debate behind SB 53, the landmark California bill trying to prevent AI from building nukes

When it comes to AI, as California goes, so goes the nation. The largest state in the US by population is also the central hub of AI innovation for the entire globe, home to 32 of the world's top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection rules, and more recently on AI as well.

Now, following the dramatic defeat of a proposed federal moratorium on states regulating AI in July, California policymakers see a limited window of opportunity to set the stage for the rest of the nation's AI laws. In the early hours of Saturday morning, the California State Assembly voted in favor of SB 53, a bill that would require transparency reports from the developers of highly powerful, "frontier" AI models. The bill, which has passed both houses of the state legislature, now goes to Gov. Gavin Newsom to either be vetoed or signed into law.

The models targeted represent the cutting edge of AI: extremely capable generative systems that require vast amounts of data and computing power, like OpenAI's ChatGPT, Google's Gemini, xAI's Grok, and Anthropic's Claude.

AI can offer tremendous benefits, but as the bill is meant to address, it's not without risks. And while there is no shortage of existing risks from issues like job displacement and bias, SB 53 focuses on possible "catastrophic risks" from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. These catastrophic risks represent widespread disasters that could plausibly threaten human civilization at local, national, and global levels. They represent the kind of AI-driven disasters that haven't yet occurred, rather than already-realized, more personal harms like AI deepfakes.

Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a "foreseeable and material risk" of an event that causes more than 50 casualties or over $1 billion in damages, and that a frontier model plays a meaningful role in contributing to. How fault is determined in practice would be up to the courts to interpret. It's hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us protect against both near- and long-term consequences.

On its own, a single state bill centered on increased transparency will probably not be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.

SB 53 is the third state-level bill to specifically focus on regulating AI's catastrophic risks, after California's SB 1047, which passed the legislature only to be vetoed by the governor, and New York's Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul's approval.

SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report "critical safety incidents" to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with a financial penalty of up to $1 million per violation.

In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.

Both cover large models trained at 10^26 FLOPS, a measure of very significant computing power used in a variety of AI legislation as a threshold for significant risk, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is its focus on transparency and prevention.

Whereas SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, which many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.

"The science of how to make AI safe is rapidly evolving, and it's currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety," said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. "This light touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom."

Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.

Proponents of the bill have been optimistic about its chances of being signed by the governor should it pass the legislature. On the same day that Gov. Newsom vetoed SB 1047, he commissioned a working group focused solely on frontier models. The group's resulting report provided the foundation for SB 53. "I'd guess, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September," said Dean Ball, a former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter, to Transformer.

But several industry organizations rallied in opposition, arguing that additional compliance regulation would be expensive, given that AI companies should already be incentivized to avoid catastrophic harms. OpenAI has lobbied against it, and the technology trade group Chamber of Progress argues that the bill would require companies to file unnecessary paperwork and needlessly stifle innovation.

"These compliance costs are merely the beginning," Neil Chilson, head of AI policy at the Abundance Institute, told me over email. "The bill, if passed, would feed California regulators truckloads of company information that they could use to design a compliance industrial complex."

By contrast, Anthropic enthusiastically endorsed the bill on Monday. "The question isn't whether we need AI governance – it's whether we develop it thoughtfully today or reactively tomorrow," the company explained in a blog post. "SB 53 offers a solid path toward the former." (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)

The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the overwhelming majority of these companies are based in California, and nearly all do business there, the state's legislation matters for the entire country.

"A federally led transparency approach is far, far, far preferable to the multi-state alternative," where a patchwork of state regulations can conflict with one another, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But "I like that the bill has a provision that would allow companies to defer to a future alternative federal standard."

"The natural question is whether a federal approach can even happen," Mittelsteadt continued. "In my opinion, the jury is out on that, but the possibility is far more likely than some suggest. It's been less than 3 years since ChatGPT was released. That's hardly a lifetime in public policy."

But in a time of federal gridlock, frontier AI developments won't wait for Washington.

The catastrophic risk divide

The bill's focus on, and framing of, catastrophic risks is not without controversy.

The idea of catastrophic risk comes from the fields of philosophy and quantitative risk analysis. Catastrophic risks are downstream of existential risks, which threaten humanity's actual survival or else permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.

But if existential risks are clear (the end of the world, or at least the world as we know it), what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity's far future, who place a premium on things like multiplanetary expansion for human survival. They're often chiefly concerned by risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with present risks, like climate change, mosquito-borne disease, or algorithmic bias. These camps can blend into one another: neartermists would also like to avoid getting hit by asteroids that could wipe out a city, and longtermists don't dismiss risks like climate change. The best way to think of them is as two ends of a spectrum rather than a strict binary.

You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics is about the moral implications of the ways the technology is deployed in the present, including issues like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox's Julia Longoria reported in the Good Robot series for Unexplainable, interpersonal conflicts have led these two factions to work against each other, much of which has to do with emphasis. (AI ethics people argue that catastrophic risk concerns over-hype AI capabilities and ignore its impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won't have ways to mitigate larger-scale problems down the line.)

But behind the question of near- versus long-term risks lies another one: What, exactly, constitutes a catastrophic risk?

SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties, similar to New York's RAISE Act, before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside the bill's scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from engaging in conversations about suicidal ideation or sexually explicit material.)

SB 53 focuses squarely on harms from "expert-level" frontier AI model assistance in creating or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and "loss of control" scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a deadly new virus that infects millions and kneecaps the global economy.

"The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact," said Scott Singer, lead author of the California Report on Frontier AI Policy, which helped inform the basis of the bill. "We do look at like AI-enabled or AI potentially [caused] or correlated suicide. I think that's like a very serious set of issues that demands policymaker attention, but I don't think it's the core of what this bill is trying to address."

Transparency is useful in preventing such catastrophes because it can help raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is responsible for a particular outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.

"These risks are coming and we should be ready for them and have transparency into what the companies are doing," said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) "But we don't know exactly what we're going to need to do once the risks themselves appear. But right now, when these things aren't happening at a large scale, it makes sense to be sort of focused on transparency."

However, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. When we already know something is a problem, the focus should be on mitigating it.

"Maybe four years ago, if we had passed some sort of transparency legislation like SB 53 but focused on those harms, we would have had some warning signs and been able to intervene before the widespread harms to kids started happening," Billen said. "We're trying to sort of correct that mistake on these issues and get some sort of forward-facing information about what's happening before things get crazy, basically."

SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don't know what we don't know.

It's also certainly possible that models trained below 10^26 FLOPS, which aren't covered by SB 53, have the potential to cause catastrophic harm under the bill's definition. The EU AI Act sets the threshold for "systemic risk" at the smaller 10^25 FLOPS, and there's disagreement about the usefulness of computational power as a regulatory standard at all, especially as models become more efficient.

As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors' desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute's head of AI policy, is part of a camp that sees SB 53's focus on catastrophic harm as a "distraction" from the real near-term benefits and problems, like AI's potential to accelerate the pace of scientific research or create nonconsensual deepfake imagery, respectively.

That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. "I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that isn't necessarily a bad thing," he told me.

It could be that the ideological debate around what qualifies as catastrophic risk, and whether that's worthy of our legislative attention, is just noise. The bill is meant to regulate AI before the proverbial horse is out of the barn. The average person isn't going to worry about the likelihood of AI sparking nuclear war or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. Yet in trying to prevent the worst-case scenarios, perhaps we can also avoid the "smaller," nearer harms. If they're effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.

If Gov. Newsom signs SB 53 into law, it could inspire other state attempts at AI regulation through a similar framework, and ultimately encourage federal AI safety legislation to move forward.

How we think about risk matters because it determines where we focus our prevention efforts. I'm a firm believer in the value of defining your terms, in law and in debate. If we're not on the same page about what we mean when we talk about risk, we can't have a real conversation.

Update, September 13, 2025, 11:55 am ET: This story was originally published on September 12 and has been updated to reflect the outcome of the California State Assembly vote.
