Scott Wiener on his fight to make Big Tech disclose AI’s risks

This is not California state Senator Scott Wiener’s first attempt at addressing the dangers of AI.

In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned that it would stifle America’s AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw a “SB 1047 Veto Party.” One attendee told me, “Thank god, AI is still legal.”

Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom’s desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is much more popular, or at least Silicon Valley doesn’t seem to be at war with it.

Anthropic outright endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan tells TechCrunch that the company supports AI regulation that balances guardrails with innovation, saying “SB 53 is a step in that direction,” though he adds there are areas for improvement.

Former White House AI policy advisor Dean Ball tells TechCrunch that SB 53 is a “victory for reasonable voices,” and thinks there’s a strong chance Governor Newsom signs it.

If signed, SB 53 would impose some of the nation’s first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google — companies that today face no obligation to reveal how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be misused to create bioweapons and other dangers, but they do so at their own discretion, and not always consistently.

The bill requires leading AI labs — specifically those making more than $500 million in revenue — to publish safety reports for their most capable AI models. Much like SB 1047, the bill focuses on the worst kinds of AI risk: a model’s ability to contribute to human deaths, cyberattacks, and the creation of chemical weapons. Governor Newsom is considering several other bills that address other types of AI risks, such as engagement-optimization techniques in AI companions.

SB 53 also creates protected channels for employees working at AI labs to report safety concerns to government officials, and establishes a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond the big tech companies.

One reason SB 53 may be more popular than SB 1047 is that it’s less severe. SB 1047 would have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses on requiring self-reporting and transparency. SB 53 also applies narrowly to the world’s largest tech companies, rather than startups.

But many in the tech industry still believe states should leave AI regulation up to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards — which is a funny thing to say to a state governor. The venture firm Andreessen Horowitz wrote a recent blog post vaguely suggesting that some bills in California could violate the Constitution’s dormant Commerce Clause, which prohibits states from unfairly limiting interstate commerce.

Senator Wiener has an answer to these concerns: he lacks faith in the federal government to pass meaningful AI safety regulation, so states need to step up. In fact, Wiener thinks the Trump administration has been captured by the tech industry, and that recent federal efforts to block all state AI laws are a form of Trump “rewarding his funders.”

The Trump administration has made a notable shift away from the Biden administration’s focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President J.D. Vance appeared at an AI conference in Paris and said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”

Silicon Valley has applauded this shift, exemplified by Trump’s AI Action Plan, which removed barriers to building out the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are regularly seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.

Senator Wiener thinks it’s critical for California to lead the nation on AI safety, but without choking off innovation.

I recently interviewed Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he’s so focused on AI safety bills. Our conversation has been edited lightly for clarity and brevity.

Maxwell Zeff: Senator Wiener, I interviewed you when SB 1047 was sitting on Governor Newsom’s desk. Talk to me about the journey you’ve been on to regulate AI safety in the last few years.

Scott Wiener: It’s been a roller coaster, an incredible learning experience, and just really rewarding. We’ve been able to help elevate this issue [of AI safety], not just in California, but in the national and international discourse.

We have this incredibly powerful new technology that is changing the world. How do we make sure it benefits humanity in a way where we reduce the risk? How do we promote innovation while also being very mindful of public health and public safety? It’s an important — and in some ways, existential — conversation about the future. SB 1047, and now SB 53, have helped to foster that conversation about safe innovation.

In the last 20 years of technology, what have you learned about the importance of laws that can hold Silicon Valley to account?

I’m the guy who represents San Francisco, the beating heart of AI innovation. I’m immediately north of Silicon Valley itself, so we’re right here in the middle of it all. But we’ve also seen how the large tech companies — some of the wealthiest companies in world history — have been able to stop federal regulation.

Every time I see tech CEOs having dinner at the White House with the aspiring fascist dictator, I have to take a deep breath. These are all really brilliant people who have generated enormous wealth. A lot of folks I represent work for them. It really pains me when I see the deals that are being struck with Saudi Arabia and the United Arab Emirates, and how that money gets funneled into Trump’s meme coin. It causes me deep concern.

I’m not someone who’s anti-tech. I want tech innovation to happen. It’s incredibly important. But this is an industry that we should not trust to regulate itself or make voluntary commitments. And that’s not casting aspersions on anyone. This is capitalism, and it can create enormous prosperity but also cause harm if there are not sensible regulations to protect the public interest. When it comes to AI safety, we’re trying to thread that needle.

SB 53 is focused on the worst harms that AI could imaginably cause — death, massive cyber attacks, and the creation of bioweapons. Why focus there?

The risks of AI are varied. There is algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk created by AI. We’re focused on one specific category: catastrophic risk.

That issue came to me organically from folks in the AI space in San Francisco — startup founders, frontline AI technologists, and people who are building these models. They came to me and said, ‘This is an issue that needs to be addressed in a thoughtful way.’

Do you feel that AI systems are inherently unsafe, or have the potential to cause death and massive cyberattacks?

I don’t think they’re inherently safe. I know there are a lot of people working in these labs who care very deeply about trying to mitigate risk. And again, it’s not about eliminating risk. Life is about risk. Unless you’re going to live in your basement and never leave, you’re going to have risk in your life. Even in your basement, the ceiling might fall down.

Is there a risk that some AI models could be used to do significant harm to society? Yes, and we know there are people who would love to do that. We should try to make it harder for bad actors to cause these severe harms, and so should the people developing these models.

Anthropic issued its support for SB 53. What are your conversations like with other industry players?

We’ve talked to everyone: large companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never formally supported [SB 1047], but they had positive things to say about aspects of the bill. I don’t think [Anthropic] loves every aspect of SB 53, but I think they concluded that on balance the bill was worth supporting.

I’ve had conversations with large AI labs who are not supporting the bill but are not at war with it in the way they were with SB 1047. It’s not surprising: SB 1047 was more of a liability bill, while SB 53 is more of a transparency bill. Startups have been less engaged this year because the bill really focuses on the largest companies.

Do you feel pressure from the large AI PACs that have formed in recent months?

This is another symptom of Citizens United. The wealthiest companies in the world can just pour endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It’s never really impacted how I approach policy. There have been groups trying to destroy me for as long as I’ve been in elected office. Various groups have spent millions trying to blow me up, and here I am. I’m in this to do right by my constituents and try to make my community, San Francisco, and the world a better place.

What’s your message to Governor Newsom as he’s debating whether to sign or veto this bill?

My message is that we heard you. You vetoed SB 1047 and provided a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path in order to come to an agreement, and I hope we got there.