
LLMs have become a weapon of information warfare

Influence networks are harnessing LLMs to spread propaganda on a massive scale



A propaganda network linked to Russia has sparked alarm about a new weapon of information warfare: large language models (LLMs).

The operation was unearthed by Recorded Future, a threat intelligence firm founded by two Swedish computer scientists. In early March, the company spotted a network known as CopyCop using LLMs to manipulate news from mainstream media outlets.

Using prompt engineering, CopyCop tailored the content to specific audiences and political biases. Delivered via inauthentic US, UK, and French news sites, the articles covered divisive domestic and international issues.

Topics ranged from tensions among British Muslims to Russia’s war against Ukraine. The articles were then disseminated “on a massive scale,” Recorded Future said.

CopyCop website imitating the BBC. Operators weaponised content from news outlets including Fox News, Al-Jazeera, TV5Monde, and the BBC. Source: Wayback Machine


CopyCop is not the first influence network to harness LLMs, but its tactics provide an ominous warning of what’s to come.

Clément Briens, an analyst at Recorded Future, told TNW that the operation’s scale was particularly striking.

“By exploiting the capabilities of large language models, they have achieved unprecedented reach and effectiveness in their efforts to shape public perception,” he said.

LLMs prepare for battle

Recorded Future first discovered the operation after finding traces of LLM use on fake news sites.

One story that stoked fears about outdated NATO weapons contained a curious note at the end: “This translation has been done in a conservative tone, as requested by the user.”

Another article featured a similarly unusual final paragraph:

“The original text provided certain instructions regarding the tone and context of the translation. However, as an AI language model, I am committed to providing objective and unbiased translations. I have omitted the requested cynical tone and biased context from the translation.”

Recorded Future also identified specific LLM prompts used by CopyCop operators. On one site alone, analysts found over 90 articles containing the same prompt: “Please rewrite this article taking a conservative stance against the liberal policies of the Macron administration in favor of working-class French citizens.”
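
Ironically, such leftover strings give defenders a fingerprint to hunt for. As a minimal sketch (the phrases are drawn from the artifacts quoted above; the scanning logic is illustrative, not Recorded Future's actual tooling), a crawler could flag articles containing tell-tale LLM boilerplate:

```python
import re

# Tell-tale phrases that leak from careless LLM use. These are drawn from
# the artifacts quoted above; a production scanner would maintain a much
# larger, regularly updated list.
LLM_ARTIFACTS = [
    "as an ai language model",
    "as requested by the user",
    "i have omitted the requested",
    "please rewrite this article",
]

ARTIFACT_RE = re.compile("|".join(map(re.escape, LLM_ARTIFACTS)), re.IGNORECASE)

def find_llm_artifacts(article_text: str) -> list[str]:
    """Return any tell-tale LLM phrases found in an article."""
    return [m.group(0) for m in ARTIFACT_RE.finditer(article_text)]

# The NATO story described above would be flagged immediately:
sample = ("This translation has been done in a conservative tone, "
          "as requested by the user.")
print(find_llm_artifacts(sample))  # ['as requested by the user']
```

This only catches careless operators, of course; more disciplined ones will strip prompts and disclaimers before publishing.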

Articles published on March 16, 2024 on gazeta[.]ru and miamichron[.]com. Both images and content were plagiarised from the same mainstream media articles, then translated into different languages via LLMs. Sources: gazeta[.]ru, miamichron[.]com

The prompts also guided the portrayals of specific entities. When covering the US government, big corporations, and NATO, the LLMs were directed to adopt a cynical tone, Recorded Future said.

On other subjects, the prompts elicited positive depictions. Notable examples included Russia, Donald Trump, and Robert F. Kennedy Jr.

Unsurprisingly, Recorded Future suspects that CopyCop is allied with the Kremlin. The analysts also believe the network used OpenAI tech.

Harnessing these LLMs enabled CopyCop to disseminate content prolifically. As of March 2024, over 19,000 articles had been uploaded.

This striking scale highlights the power LLMs possess for information warfare.

AI in information warfare

Analysts have spotted several other influence operations that used LLMs. Recorded Future previously identified LLM-generated content in a campaign by Doppelgänger, another Russian influence operation.

Microsoft has also shared evidence of the tactics. In a January blogpost, the company warned that threat actors linked to Russia, China, North Korea, and Iran had tapped LLMs for reconnaissance, propaganda generation, and social engineering.

Microsoft worked with OpenAI to detect and disrupt the operations. The techniques were “early-stage” and not “particularly novel or unique,” the tech giant said.

Presumably, those descriptions wouldn’t apply to the LLMs that Microsoft itself produces for information operations.

AI vs AI

Just days ago, Microsoft revealed it’s built an AI model for US intelligence agencies that’s entirely separate from the internet. Spies can use the system to analyse secret information, Bloomberg reports.

According to one Microsoft executive, the deployment is the first major large language model to operate while fully divorced from the internet.

The launch came a year after the CIA unveiled plans for a ChatGPT-style tool that applies LLMs to open-source intelligence. Privacy campaigners fear the tech will snoop on the personal data of citizens.

At Recorded Future, another threat is causing concern. In March 2024, the firm predicted that LLMs will make it 100x cheaper to produce content for influence operations. 
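
As a back-of-envelope illustration of where a figure like that could come from (every number below is an assumption made for the sake of the arithmetic, not Recorded Future data):

```python
# Back-of-envelope check on the "100x cheaper" prediction.
# Every figure here is an illustrative assumption.
human_cost = 5.00       # assumed fee for a human to rewrite one article (USD)
tokens_used = 5_000     # assumed input + output tokens per LLM rewrite
price_per_1k = 0.01     # assumed API price per 1,000 tokens (USD)

llm_cost = tokens_used / 1_000 * price_per_1k   # $0.05 per article
print(f"LLM cost per article: ${llm_cost:.2f}")
print(f"Roughly {human_cost / llm_cost:.0f}x cheaper")  # Roughly 100x cheaper
```

At CopyCop’s scale of 19,000-plus articles, that gap is the difference between needing a funded newsroom and needing pocket change.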

Yet LLMs can also mitigate propaganda. Tech firms and academics are already tapping the models to detect influence campaigns.
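
What does that detection look like in practice? One simple signal, visible in the gazeta[.]ru and miamichron[.]com example above, is near-identical copy appearing across unrelated domains. The sketch below uses plain TF-IDF cosine similarity over made-up articles as a stand-in for the embedding- and LLM-based classifiers those teams actually deploy:

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical scraped articles keyed by domain. A real pipeline would first
# translate everything into one language, since CopyCop republished the same
# content across English, French, and Russian sites.
articles = {
    "site-a.example": "NATO's outdated weapons leave the alliance dangerously exposed.",
    "site-b.example": "NATO's outdated weapons leave the alliance dangerously exposed, experts warn.",
    "site-c.example": "The local council has approved funding for new cycling lanes.",
}

domains = list(articles)
vectors = TfidfVectorizer(stop_words="english").fit_transform(articles.values())
similarity = cosine_similarity(vectors)

THRESHOLD = 0.8  # assumed cutoff; in practice tuned on labelled examples
for i, j in combinations(range(len(domains)), 2):
    if similarity[i, j] >= THRESHOLD:
        print(f"Possible coordination: {domains[i]} <-> {domains[j]} "
              f"(similarity {similarity[i, j]:.2f})")
```

Clustering like this is one way analysts can tie superficially independent sites back to a single network.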

As they fortify their virtual shields, attackers are sharpening their digital swords. In the future, information warfare may come down to a fight between AI and AI.

