An important step forward for California. Thank you, Senator Wiener!
Today, Senator Scott Wiener introduced an amended version of his frontier AI legislation SB 53. Secure AI Project is proud to co-sponsor this important legislation, which follows the recommendations of the California Report on Frontier AI. A thread.
In 2024, in the wake of his veto of SB 1047, Governor Newsom established a working group to study how California should respond to risks from advanced AI systems. The working group recently returned a report, which SB 53 draws from.
SB 53 is focused on catastrophic risks, defined as a small subset of risks that could result in more than 100 deaths or injuries or more than $1B in damages. The Report warned that evidence for these risks is “growing.”
First, the bill requires the largest AI companies to have, publish, and follow a safety and security protocol. This mirrors voluntary commitments that companies have already made.
Second, it requires large companies to report on the risk evaluations they are doing for catastrophic risks. This includes justifications for releasing risky models, sometimes called “safety cases.”
Third, it requires large companies to report a set of critical safety incidents caused by their AI models to the Attorney General.
Finally, SB 53 continues to include provisions on whistleblower protections and a public cloud computing cluster, CalCompute, which were present in SB 1047 last year and have been widely popular.
SB 53 applies only to large AI developers that have trained a foundation model with 10^26 floating point operations (FLOPs) of compute. This means companies spending hundreds of millions of dollars on AI development.
As the Report recommends, SB 53 also provides the Attorney General the authority to update the definition of “large developer.” But the Attorney General is expressly forbidden from covering companies that aren’t well-resourced or are behind the frontier of AI development.
The bill includes provisions on policies and reporting of risks from internal deployment of AI systems. This mirrors a recommendation in Governor Newsom’s report that I was very happy to see.
SB 53 is an important step forward. See the full text here: leginfo.legislature.ca.gov/faces/billT...
Glad to see @Scott_Wiener's SB 53 picking up support from top scientists, startup founders, and civil society leaders. A thread.
Very lucky to have @Yoshua_Bengio's support.
Bruce Reed of @CommonSense, and also President Biden's former Deputy Chief of Staff for Policy.
Nobel Prize winner @geoffreyhinton.
Prolific startup founder @snewmanpv.
CEO of Elicit @stuhlmueller.
@SnehaRevanur of Encode AI.
@TeriOlle of Economic Security California Action
And I saved the best for last
See the full press release here: sd11.senate.ca.gov/news/senato...