Friday, July 19, 2024

Opinion | AI could wreak havoc on elections. Congress can get ahead of it.

Political ads have never been known for accurately portraying the candidate’s opponent — and now artificial intelligence threatens to make misrepresentation more realistic than ever. Rather than waiting for AI to cause chaos in the 2024 election, regulators, lawmakers and political parties should act now.

The past few months have shown us just how convincing AI-generated images can be. Sometimes the examples have had little to do with politics — see, for instance, the mock-up of the pope clad in a ludicrously puffy white coat that fooled swaths of the internet this spring. Sometimes they’ve involved candidates but not come from competing campaigns, as with deep-faked photos of Donald Trump being violently arrested. Yet they’ve also been directly deployed as electoral tools.

Florida Gov. Ron DeSantis’s 2024 team posted manufactured depictions of Mr. Trump during his time as president hugging Anthony S. Fauci.

Mr. Trump, in turn, shared a parody of Mr. DeSantis’s much-mocked campaign launch featuring AI-generated voices mimicking the Republican governor as well as Elon Musk, Dick Cheney and others. And the Republican National Committee released an ad stuffed with fake visions of a dystopian future under President Biden. The good news? The RNC included a note acknowledging that the footage was created by a machine.


[Screenshots from RNC’s AI-generated ad. The Republican National Committee released an ad in April entirely illustrated with AI-generated images that depicted a dystopian future if President Biden were re-elected. Source: Republican National Committee]

The bad news is that there’s no guarantee disclosure will be the norm. Rep. Yvette D. Clarke (D-N.Y.) has introduced a bill that would require disclaimers identifying AI-generated content in political ads. That’s a solid start — and a necessary one. But, better yet, the RNC, the Democratic National Committee and their counterparts coordinating state and local races should go further than disclosure alone, telling candidates to identify such material in all their messaging, including fundraising and outreach.

The party committees should also consider taking some uses off the table entirely. Large language models can be instrumental for small campaigns that can’t afford to hire staff to draft fundraising emails. Even deeper-pocketed operations could stand to benefit from personalizing donor entreaties or identifying likely supporters. But those legitimate uses are different from simulating a gaffe by your opponent and blasting it out across the internet or paying to put it on television.

Ideally, campaigns would refrain altogether from using AI to depict false realities — including, say, to render a city exaggeratedly crime-infested to criticize an incumbent mayor or to feign a diverse group of eager supporters. Similar effects could admittedly be achieved with more traditional photo-editing tools. But the possibility that AI will evolve into an ever more adept illusionist, as well as the likelihood that bad actors will deploy it to huge audiences, means it’s crucial to preserve a world in which voters can (mostly) believe what they see.

Party committees should do this rule-setting together, signing a pact to prove that the integrity of information in elections isn’t a partisan question. And honest candidates should refrain of their own accord from dishonest tricks. Realistically, though, they’ll need a push from regulators. The Federal Election Commission already has a stricture on the books prohibiting the impersonation of candidates in campaign ads. The agency recently deadlocked over examining whether this authority extends to AI images. The commissioners who voted “no” on opening the issue to public comment should reconsider. Even better, lawmakers should explicitly grant the agency the authority to step in.

There are plenty of reasons to worry about what the rise of AI will do to our democracy. Persuading foreign adversaries as well as domestic mischief-mongers not to sow discord is probably a lost cause. Platforms’ job of rooting out disinformation has become all the more important now that better lies can be told to so many people for so little money — and all the more difficult. Congress is working on a broad-based framework to regulate AI, but that will take months or even years. There’s no excuse for government not to take smaller steps forward on the path immediately in front of it.

The Post’s View | About the Editorial Board

Editorials represent the views of The Post as an institution, as determined through debate among members of the Editorial Board, based in the Opinions section and separate from the newsroom.

Members of the Editorial Board and areas of focus: Opinion Editor David Shipley; Deputy Opinion Editor Karen Tumulty; Associate Opinion Editor Stephen Stromberg (national politics and policy); Lee Hockstader (European affairs, based in Paris); David E. Hoffman (global public health); James Hohmann (domestic policy and electoral politics, including the White House, Congress and governors); Charles Lane (foreign affairs, national security, international economics); Heather Long (economics); Associate Editor Ruth Marcus; Mili Mitra (public policy solutions and audience development); Keith B. Richburg (foreign affairs); and Molly Roberts (technology and society).
