On May 16, 2023, Sam Altman appeared before a subcommittee of the Senate Judiciary Committee. The title of the hearing was “Oversight of AI.” The session was a lovefest, with both Altman and the senators celebrating what Altman called AI’s “printing press moment”—and acknowledging that the US needed strong laws to avoid the technology’s pitfalls. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said. The legislators hung on Altman’s every word as he gushed about how smart laws could allow AI to flourish—but only within firm guidelines that both lawmakers and AI builders deemed vital at that moment. Altman was speaking for the industry, which widely shared his attitude. The battle cry was “Regulate Us!”
Two years later, on May 8 of this year, Altman was back in front of another group of senators. Altman and the senators were still singing in unison, but the tune was pulled from a different playlist. This hearing was called “Winning the AI Race.” In DC, the word “oversight” has fallen out of favor, and the AI discourse is no exception. Instead of advocating for outside bodies to examine AI models to assess risks, or for platforms to alert people when they are interacting with AI, committee chair Ted Cruz argued for a path where the government would not only fuel innovation but remove barriers like “overregulation.” Altman was on board with that. His message was no longer “regulate me” but “invest in me.” He said that overregulation, like the rules adopted by the European Union or a bill recently vetoed in California, would be “disastrous.” “We need the space to innovate and to move quickly,” he said. Safety guardrails might be necessary, he affirmed, but they needed to involve “sensible regulation that does not slow us down.”
What happened? For one thing, the panicky moment just after everyone got freaked out by ChatGPT passed, and it became clear that Congress wasn’t going to move quickly on AI. But the biggest development is that Donald Trump took back the White House and hit the brakes on the Biden administration’s nuanced, pro-regulation approach. The Trump doctrine of AI regulation seems suspiciously close to that of Trump supporter Marc Andreessen, who declared in his Techno-Optimist Manifesto that AI regulation was literally a form of murder because “any deceleration of AI will cost lives.” Vice President J.D. Vance made these priorities explicit at an international gathering in Paris this February. “I’m not here … to talk about AI safety, which was the title of the conference a couple of years ago,” he said. “We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off, and we’ll make every effort to encourage pro-growth AI policies.” The administration later unveiled an AI Action Plan “to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.”
Two foes have emerged in this movement. First is the European Union, which has adopted a regulatory regime that demands transparency and accountability from major AI companies. The White House despises this approach, as do those building AI businesses in the US.
But the biggest bogeyman is China. The prospect of the People’s Republic besting the US in the “AI Race” is so unthinkable that regulation must be put aside, or done with what both Altman and Cruz described as a “light touch.” Some of this reasoning comes from a theory known as “hard takeoff,” which posits that AI models can reach a tipping point where lightning-fast self-improvement launches a dizzying gyre of supercapability, also known as AGI. “If you get there first, you dastardly person, I will not be able to catch you,” says former Google CEO Eric Schmidt, with the “you” being a competitor. (Schmidt had been speaking about China’s status as a leader in open source.) Schmidt is one of the loudest voices warning about this possible future. But the White House is probably less interested in the Singularity than it is in classic economic competition.
The fear of China pulling ahead on AI is the key driver of current US policy, safety be damned. The party line even objects to individual states trying to fill the vacuum of inaction with laws of their own. The version of the tax-break-giving, Medicaid-cutting megabill just passed by the House included a mandated moratorium on any state-level AI legislation for 10 years. That’s an eternity in terms of AI progress. (Pundits are saying that this provision won’t survive opposition in the Senate, but it should be noted that almost every Republican in the House voted for it.)