Wiener says Newsom veto isn’t the end of AI safety effort

The controversy over SB 1047 ended up being a good thing, because it sparked a conversation about the need for regulation, Wiener said Wednesday.


Although California Gov. Gavin Newsom last month vetoed his controversial artificial-intelligence safety bill, state Sen. Scott Wiener is optimistic the state is going to enact similar legislation.

While Senate Bill 1047 attracted much ire and angst within the industry, the San Francisco politician said Wednesday during a panel discussion at the TechCrunch Disrupt conference in The City that the conversation it sparked was productive. He is already working with opponents of the bill, including “aggressive” ones, he said. Wiener compared the controversy over SB 1047 to the debate over some of the housing bills he introduced in the past.

While those might not have passed initially, the debate they engendered set the stage for successful legislation later, he said. “I think the same can happen here,” Wiener said. SB 1047 would have required developers of cutting-edge artificial intelligence models to test them before release to ensure they wouldn’t cause catastrophic harms, such as mass casualties or physical damage in excess of $500 million.

The bill would have allowed the state attorney general to sue developers whose models caused such harms if they didn’t follow its safety testing requirements. The legislation had some high-profile supporters, including Tesla CEO Elon Musk; Anthropic, a leading AI company based in San Francisco; and Geoffrey Hinton and Yoshua Bengio, researchers known as two of the “godfathers of AI.” But it also drew widespread opposition from the tech and venture industries and from academia, including from Google, ChatGPT developer OpenAI, startup accelerator Y Combinator, prominent venture capital firm Andreessen Horowitz and Stanford AI researcher Fei-Fei Li.

Critics charged that it would throttle AI innovation and prompt leading companies to leave the state. The state legislature overwhelmingly passed the bill anyway. Despite agreeing with the need for regulation, Newsom rejected the bill.

In his veto message, he noted that it wouldn’t cover smaller, less advanced models that could pose similar risks and wasn’t focused on models that are being deployed in high-risk areas. The veto was “disappointing,” Wiener said. “Of course I wish the governor had signed the bill,” he added.

But the veto and the controversy leading up to it put a spotlight on the issue of AI safety and the need to do something about it, he said. “It really highlighted the importance of this conversation,” Wiener said. Earlier at the conference, one of the outspoken opponents of SB 1047 said something similar.

On stage in a sit-down conversation with a reporter, Andreessen Horowitz general partner Martin Casado said that the controversy over the bill encouraged academic experts like Li to step forward and engage in the process. “I think the good news about SB 1047 is that it forced a dialogue,” Casado said. But Casado charged that the bill was ill-informed, premature and sprung suddenly on the industry.

Before putting new laws in place, policymakers need to have an understanding of the extent to which the risk AI poses is different or bigger than that of previous technologies like Google search or the internet in general, he said. “We don’t have that today, so I think we’re a little bit early before we’re starting to glom on a bunch of regulations,” Casado said. For his part, Wiener said he didn’t think he would have done anything differently with the way he handled SB 1047.

He noted that he released an outline of the bill — something unusual for him — five months before actually introducing it and sent it out to an array of investors, startups and other people to get feedback on it. Wiener said he worked with critics and opponents throughout the process, amending the bill numerous times based on their feedback. Much of the opposition, he charged, was due to negative “vibes” about the bill, rather than what was actually in it.

“I think it was a good bill, a well-crafted bill,” he said. The opposition to SB 1047 and its ultimate veto demonstrate the difficulty of passing some kind of AI regulation, said Jessica Newman, director of the AI Security Initiative at UC Berkeley. Lawmakers can’t reasonably try to regulate all AI models, but by targeting only bigger, cutting-edge ones, as with SB 1047, they open themselves up to criticism that they’re leaving out models that could also cause harm, she said.

Lawmakers have been criticized for being late to enact regulations to address other tech-related concerns, such as privacy or the harms caused by social media, Newman said. But SB 1047 was criticized for being too early, she said. “Those tensions are really hard to overcome,” Newman said.

Newman said her sense was that companies would prefer to have national legislation to govern AI, rather than a smattering of state bills. That would be ideal, Wiener said. But Congress hasn’t passed major tech legislation since the 1990s, he said.

Congress still hasn’t passed either a data privacy law or a net-neutrality law, even though the need for those laws has been apparent for years, he said. In both cases, California stepped into the breach. The state will likely have to do the same thing with AI safety, Wiener said.

“I agree it should be handled at the federal level, and of course it hasn’t,” he said. If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at [email protected] or via text or Signal at 415.515.5594.