California Gov. Gavin Newsom (D) vetoed a bill on Sunday that would have instituted the nation’s strictest artificial intelligence regulations — a major win for tech companies and venture capitalists who had lobbied fiercely against the law, and a setback for proponents of tougher AI regulation.
The legislation, known as S.B. 1047, would have required companies to test the most powerful AI systems before release and held them liable if their technology was used to harm people — for example, by helping to plan a terrorist attack.
Tech executives, investors and prominent California politicians, including Rep. Nancy Pelosi (D), had argued the bill would quash innovation by making it legally risky to develop new AI systems, since it could be difficult or impossible to test for all the potential harms of the multipurpose technology. Opponents also argued that those who used AI for harm — not the developers — should be penalized.
The bill’s proponents, including pioneering AI researchers Geoffrey Hinton and Yoshua Bengio, argued that the law only formalized commitments that tech companies had already made voluntarily. California state Sen. Scott Wiener, the Democrat who authored the bill, said the state must act to fill the vacuum left by lawmakers in Washington, where no new AI regulations have passed despite vocal support for the idea.
Hollywood also weighed in, with “Star Wars” director J.J. Abrams and “Moonlight” actor Mahershala Ali among more than 120 actors and producers who signed a letter this past week asking Newsom to sign the bill.
California’s AI bill had already been weakened several times by the state’s legislature, and the measure gained support from AI company Anthropic and X owner Elon Musk. But lobbyists from Meta, Google and major venture capital firms, as well as founders of many tech start-ups, still opposed it.
In a statement to California’s legislature explaining his veto, Newsom wrote that the bill was wrong to single out AI projects that use more than a certain amount of computing power while ignoring an AI system’s use case, such as whether it is involved in critical decision-making or uses sensitive data.
“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” he said. “A California-only approach may well be warranted — especially absent federal action by Congress — but it must be based on empirical evidence and science.”
Newsom also announced Sunday that his administration will work with leading academics to develop what his office called “workable guardrails” for deploying generative AI technology. The experts involved include Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute and an AI entrepreneur, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace.
Anthony Aguirre, executive director of the Future of Life Institute, a nonprofit that studies existential risks to humanity and vocally advocated for the bill, said in a statement that the veto signaled it was time for federal or even global regulation to oversee Big Tech companies developing AI technology.
“The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one,” Aguirre’s statement said.
The California bill had prompted unusually impassioned public debate for a piece of technology regulation. At a tech conference in San Francisco earlier this month, Newsom said the measure had “created its own weather system,” triggering an outpouring of emotional comments. He also noted that the California legislature had passed other AI bills that were more “surgical” than S.B. 1047.
Newsom’s veto came after he signed 17 other AI-related laws, which impose new restrictions on some of the same tech companies that opposed the law he blocked. The regulations include a ban on AI-generated images that seek to deceive voters in the months ahead of elections; a requirement that movie studios negotiate with actors for the right to use their likeness in AI-generated videos; and rules forcing AI companies to create digital “watermark” technology to make it easier to detect AI videos, images and audio.
Newsom’s veto is a major setback for the AI safety movement, a collection of researchers and advocates who believe smarter-than-human AI could be invented soon and argue that humanity must urgently prepare for that scenario. The group is closely connected to the effective altruism community, which has funded think tanks and fellowships on Capitol Hill to influence AI policy and been derided as a “cult” by some tech leaders, such as Meta’s chief AI scientist, Yann LeCun.
Despite those concerns, most AI executives, engineers and researchers are focused on the challenges of building and selling workable AI products, not on the risk that the technology could one day become an able assistant for terrorists or swing elections with disinformation.