
Elon Musk backs California AI safety bill

The first attempt at codifying AI regulations anywhere in the U.S. just won the support of a powerful voice at a critical juncture. 

Elon Musk, CEO of Tesla and founder of Grok chatbot parent xAI, threw his weight behind California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (Senate Bill 1047). 

Should it pass the state assembly and receive final approval from Governor Gavin Newsom before the legislative term ends this week, it would put initial guardrails around the technology. The bill would require developers to create safety protocols, build in the ability to shut down a runaway AI model, report security incidents, protect whistleblowers inside AI companies, take steps to shield AI from malicious hackers, and accept liability if their AI software runs out of control.

It is, however, opposed by venture capitalists like Marc Andreessen and is hotly disputed even among AI luminaries: Meta chief AI scientist Yann LeCun opposes the bill, while AlexNet co-creator Geoffrey Hinton supports it.

“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” Musk posted on Monday, citing the “risk to the public” from AI.

Up to now, the only regulatory framework that exists focuses solely on the largest models, those trained with more than 10^26 floating-point operations of compute, which cost upwards of $100 million to train. Yet this is not federal legislation on the statute books, but rather an executive order by the Biden administration that could easily be undone by his successor next year.
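To put that threshold in perspective, here is a minimal back-of-envelope sketch. It uses the common 6ND heuristic (roughly 6 FLOPs per parameter per training token) to estimate a training run's total compute and compare it against the 10^26 FLOP line; the heuristic and the example model sizes are illustrative assumptions, not part of the executive order or SB 1047.

```python
# Back-of-envelope check against the ~1e26 FLOP compute threshold
# referenced in the Biden executive order. The 6*N*D rule of thumb
# (6 FLOPs per parameter per training token) is a widely used
# estimate, not language from either policy document.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * params * tokens

def is_covered(params: float, tokens: float) -> bool:
    """Would a run of this size cross the regulatory threshold?"""
    return training_flops(params, tokens) >= THRESHOLD_FLOPS

# Hypothetical example: a 1-trillion-parameter model trained on
# 20 trillion tokens -> 6 * 1e12 * 2e13 = 1.2e26 FLOPs, over the line.
print(is_covered(1e12, 2e13))   # True
# A 10-billion-parameter model on 1 trillion tokens -> 6e22 FLOPs,
# orders of magnitude below the threshold.
print(is_covered(1e10, 1e12))   # False
```

The point of the sketch is simply that only a handful of frontier-scale training runs today come anywhere near the covered threshold.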

This bill would at least partly mitigate that uncertainty and provide some legal clarity for Big Tech firms like Microsoft-backed OpenAI, Amazon-backed Anthropic, and Google, even if they do not necessarily agree with it.

“SB 1047 is a straightforward, common-sense, light-touch bill that builds on President Biden’s executive order,” said California state senator Scott Wiener, sponsor of the bill, earlier this month.

Final week for California to pass it before legislative term ends

If any one state were to pick up the mantle, California would make the most sense. Its $4 trillion economy is roughly the size of Germany’s or Japan’s in absolute dollar terms, thanks mainly to the thriving tech sector in Silicon Valley. Arguably, the state is doing far more to drive innovation than either of those two G7 nations.

Speaking to Bloomberg TV, Wiener said he empathized with the argument that Washington ought to have pressed forward but he cited a range of issues including data privacy laws, social media and net neutrality that Capitol Hill has consistently failed to tackle conclusively.

“I agree, it should be handled at the federal level,” Wiener told the broadcaster on Friday. “Congress has a very poor record in terms of regulating the tech sector and I don’t see that changing, so California should lead.”

This week is the final opportunity for SB 1047 to pass. After it ends, the state legislature goes into recess ahead of fresh elections in November. Even if the bill passes, it still needs Newsom’s approval before the end of September, and last week the U.S. Chamber of Commerce urged him to veto it should it cross his desk.

But regulating technology can be a fool’s errand, since policy always lags the speed of innovation. Intervening in the free market can inadvertently stifle innovation, and that is the primary criticism leveled at SB 1047.

Former OpenAI researcher reveals his colleagues are giving up

Only a year ago, Big Tech champions could largely smother any outside attempt to intervene in the sector. Most policymakers understood America was locked in a high-stakes AI arms race with China, and neither could afford to lose. Were the U.S. to slap restraints on its domestic industry, it could tip the scales in favor of Beijing. 

A rash of recent departures among senior AI safety experts at OpenAI, the firm that launched the AI gold rush, has however sparked concerns that executives, including CEO Sam Altman, may be throwing caution to the wind in a bid to accelerate commercialization of the terrifically expensive technology.

Former OpenAI safety researcher Daniel Kokotajlo told Fortune on Monday that nearly half of the company’s AI governance staff have, one by one, decided to leave the former non-profit, dismayed by the direction it has taken.

“It’s just people sort of individually giving up,” he said in an exclusive interview. Kokotajlo chose to spurn whatever equity he had in the firm to avoid signing an extensive non-disclosure agreement barring him from speaking about his former employer.

Musk himself would likely be affected by the legislation. Last year he founded his own artificial general intelligence startup, xAI, and he just opened a brand-new supercomputing cluster in Memphis, powered by AI training chips and staffed in part by experts he effectively poached from Tesla.

But Musk isn’t your average challenger: he is well acquainted with the technology, having co-founded OpenAI in December 2015 and personally recruited its former chief scientist. The Tesla CEO later fell out with Altman, ultimately deciding to sue the firm not once but twice.
