Another war is being fought in your social media feed, powered by AI


“Do you want to count my fingers?”

This was the question Israeli Prime Minister Benjamin Netanyahu asked in a video of himself ordering a coffee in Jerusalem — a pointed response to viral AI-fakes claiming he’d been assassinated.

Rumors that Netanyahu was dead or injured were recently amplified by Iranian state media outlets and spread by pro-regime social media accounts. Some speculated that Netanyahu appeared to have six fingers in one of his official videos — a common generative AI glitch.

At the same time, a video of an Iranian man hugging a cardboard cutout of Mojtaba Khamenei, Iran’s new supreme leader, was going viral on social media. Khamenei, who replaced his father, Ali Khamenei, has not been seen or heard from since being named Iran’s new ruler.

Shahriar Kaisar, a senior lecturer in the Department of Information Systems and Business Analytics at RMIT University, told SBS Examines “battles are now fought not only on the ground, but on social media and media as well”.

Kaisar said opposing sides in the war in the Middle East are using AI-fakes as a form of psychological warfare, attempting to disrupt trust.

“The distinction between truth and lie is very blurred. It’s very difficult to understand, you know, what to trust anymore,” he told SBS Examines.

“The war crime can be real or fake. But based on the video or audio or image, we cannot really distinguish what is true and what is false.”

He added this also means reality can be dismissed as fake.

“If we see something like, you know, an army being very cruel … that can be dismissed based on the suggestion that it is a fake video, and not necessarily a real one.”

Tales of ‘military prowess’

Since the start of the war in Iran and the broader Middle East, social media has been flooded with misinformation and disinformation pushing competing narratives.

US-based research think-tank NewsGuard reported last week that the Iranian regime has been engaging in disinformation, actively trying to “exaggerate or entirely fabricate tales of Iran’s military prowess”.

Some examples include deepfake footage of Iranian attacks on different parts of the region: US bases in the Middle East, residential buildings in Tel Aviv, and commercial buildings in Dubai.

Other videos show US and Israeli soldiers allegedly crying and saying they miss home.

Dara Conduit, a senior lecturer in political science at the University of Melbourne, told SBS Examines the Iranian regime has imposed a nationwide internet blackout. Under such circumstances, “disinformation has been really powerful inside the country,” she said.

“The Iranian regime basically, for once, actually has control of the narrative inside Iran. So that disinformation is really important in helping to create a strong narrative that Iran is vulnerable,” she told SBS Examines.

Conduit said the Iranian regime is aiming to convince Iranians that “we’re the victim of the Israeli and US conspiracy that we’ve been telling you about for decades. And here it is, it’s come to fruition, it’s killed our supreme leader, and we’re fighting back because we’re strong.”

But she added not all of the disinformation appears to target Iranians.

“There are various campaigns running … [disinformation] is targeting a wide range of people and serving a wide range of goals.

“When targeting the West … they are looking to sow confusion and looking to sow dissent.”

Old-school methods, not just AI

Disinformation as a form of warfare is nothing new.

“Authoritarian regimes have been using disinformation their entire lives through state media,” Conduit said.

“In the contemporary sense, the sort of disinformation through social media, [Iran] has been doing it for at least a decade … through various platforms.”

In 2019, X, then known as Twitter, announced it had removed 4,800 accounts it claimed were spreading Iranian regime-related misinformation.

In February this year, the Institute for Strategic Dialogue in the UK reported the regime responded to the country’s nationwide protests “with a wide propaganda campaign” using social media accounts.

AI-generated videos aren’t the only form of misinformation circulating on social media.

Social media influencers, including Australians, have shared misleading footage on X, claiming it depicts attacks on CIA headquarters in Dubai. The video is actually from 2015, and depicts a fire in a residential complex in the UAE.

Other, more ‘traditional’ disinformation tactics include staged videos.

US-based Iranian journalist and anti-regime activist Masih Alinejad shared videos featuring interviews from Iranian state television. In one, a woman is crying on a Tehran street, saying her hotel was attacked and her friends were killed. In another, a different woman is crying, telling reporters her house was bombed.

Alinejad claimed both women were actors, and shared other videos of the same women appearing in other similar interviews.

SBS Examines cannot verify these claims.

“Disinformation has long been central to warfare,” Conduit said.

“We kind of think of disinformation as trying to spread a certain narrative, but actually, just one of the most powerful ways that disinformation can have an impact is by creating distrust.

“It’s spreading a whole pile of different narratives that are just enough to confuse people and either make them disengage or stop them from believing what they hear.”

‘Ongoing war between good and evil’

Kaisar told SBS Examines “a collective effort” by media, legislators and the public is needed to fight disinformation in its many forms.

“It is very difficult nowadays to detect [deepfakes] or fake images or [fake] news stories or even [fake] articles or videos.”

He said social media companies were beginning to embed fact-checking and AI-detection tools into their platforms.

“From a government perspective … in Australia we have deepfake laws for pornographic images, but not necessarily for other kinds of deepfakes,” Kaisar said.

He added a number of research initiatives are looking into how to “fight fire with fire” and use AI tools to detect AI-fakes.

As for the audience, he urged people to “think before you share,” and highlighted the so-called “ABC rule”.

“Look at the actor [A], and their movement and their posture and all that.

“Look at the background [B], whether the objects make sense or not.

“Then look at the context [C], verify the source as well.

“All of this is being done by the users, as well as by different agencies.

“I think we should be able to address this. But then again, it’s an ongoing war between the good and the evil.”
