California’s failed AI safety bill is a warning to Britain

What can London learn from Sacramento's doomed attempt to regulate the existential risks posed by powerful AI?

Oct 4, 2024 - 13:00

LONDON — California has a message for Britain: Good luck trying to regulate AI.

The Golden State’s governor, Gavin Newsom, on Sunday vetoed a bill that sought to impose safety vetting requirements on developers of powerful artificial intelligence models — siding with much of Silicon Valley and high-profile politicians like Nancy Pelosi in the process.

The demise of the bill comes as a warning to Britain’s Labour government, which is drafting a proposal similarly aimed at placing restraints on the most powerful forms of the technology, known as frontier AI.

While the Californian proposal — known as SB 1047 — garnered high-profile support from Hollywood A-listers and scores of employees at AI developers including OpenAI, Google DeepMind and Anthropic, it was undone by an aggressive campaign spearheaded by deep-pocketed tech firms and venture capitalists.

London’s efforts, which ministers have repeatedly stressed will be both lightweight and narrowly targeted, could spark similar pushback.

“We heard a similar refrain from tech companies around SB 1047 as we did around the EU AI Act: that the regulation was too burdensome and would harm innovation. But this misses the point,” said Andrew Strait of the Ada Lovelace Institute, a U.K. nonprofit.

Widespread adoption of AI technologies, which the Labour government is banking on as part of its bid to boost the U.K.’s lagging economy, “is only achievable with regulation that ensures people and organizations are confident the technology has been proven to be effective and safe,” Strait said.

“Guardrails will help them move faster.”

Battle on the West Coast

The California experience shows London the kind of lobbying battle it could be up against, and highlights how carefully Whitehall officials will need to tread when drafting their own legislation, which the British government hopes to consult on before Christmas.

Critics of the California bill argued that AI regulation should be addressed at the federal level, while opponents such as Google and OpenAI said its requirements would unduly burden developers.

But the campaign against SB 1047 also featured claims slammed by some — including a leading think tank and the author of the bill itself — as misleading.

These included assertions that the new rulebook would require developers to guarantee their models couldn’t cause harm and that it would decimate the open source AI community by requiring developers to install a “kill switch” on their models.


In truth, the bill only ever called for developers to provide “reasonable assurance” that their AI would not cause catastrophic harm, rather than a full-blown guarantee, while the bill’s architect, State Senator Scott Wiener, rubbished claims that developers would need to build in the ability to shut down models via a so-called kill switch.

“We believe the bill has never required the original developer to retain shutdown capabilities over derivative models no longer in their control,” he argued in a letter addressing what he called “inflammatory statements” from Andreessen Horowitz and Y Combinator, VC funds that led much of the campaign against the bill.

Wiener has since described some of the allegations levelled against his bill as “misinformation.”

But that didn’t stop many of these claims being repeated as gospel, including in an influential op-ed by Fei-Fei Li, a computer scientist feted as the “Godmother of AI.” Li’s intervention was cited in subsequent opposition to the bill, including from longstanding California Congresswoman Nancy Pelosi and OpenAI, showing the snowball effect of these kinds of lobbying campaigns.

A spokesperson for Andreessen Horowitz said they “respectfully disagreed” with accusations they promoted misleading claims, pointing POLITICO to a 14-page response they issued to Wiener’s comments.

A Y Combinator spokesperson said: “The semantic debates alone demonstrate the challenges with bills like SB 1047 being vague and open-ended. We welcome the opportunity to support policymakers in creating clear and reasonable rules that support the startup economy.”

A Californian education

Britain may have some advantages over California, though — the first being that the big cheeses in tech may just care less about the U.K. than their home turf on the U.S. West Coast.

Advocates of SB 1047 argue that the bill fell because of the quirks of California politics, where a bill that made it through the legislature was felled by an ambitious governor keen not to get on the wrong side of the state’s powerful tech barons.

The second is that, in the U.K. AI Safety Institute, Britain has unrivalled state capacity in AI safety know-how.


But California’s experience had other lessons for Britain, too.

In vetoing the bill, Newsom took aim at its narrow focus on frontier systems, arguing that it ignored the context in which they’re deployed — a criticism that has been lobbed at the U.K.’s approach, too.

Britain’s ruling Labour Party has repeatedly said its bill will place binding safety requirements on only those creating the most powerful models, rather than regulating the way the tech is used.

“If you have a very small AI that’s used only by a number of people, but that AI decides whether somebody will get a loan or not, well, that should absolutely be regulated, right? Because it has a real impact on people’s lives,” said Stefanie Valdés-Scott, head of policy and government relations in Europe for Adobe.

Valdés-Scott argued that, like the EU’s AI Act, the U.K. should target AI models’ applications in specific areas rather than the capability of the top-performing models.

Meta executive and former U.K. Deputy Prime Minister Nick Clegg recently said Britain had “wasted a huge amount of time” due to the energy spent by the last government on the risks posed by AI in areas like cybersecurity and bioterrorism.

But even as the new Labour government has shifted its rhetoric towards a more positive vision of adopting AI to grow the economy and improve public services, it has been urged not to throw the baby out with the bathwater when delivering on a promise to impose binding rules on the world’s most valuable tech firms.


Julian David, CEO of TechUK, the U.K.’s leading industry body, previously said the government would have to “get the right balance between new laws” and promoting economic growth.

Jessica Lennard, chief strategy officer at the U.K.’s competition and consumer protection watchdog, meanwhile warned that a focus on AI safety is seemingly coming at the expense of other AI governance issues like liability and accountability.

“That to me, leaves a real serious lacuna in some spaces,” Lennard said. “AI liability and accountability particularly tends not to be covered explicitly in that AI safety conversation.”
