AI and disinformation: Opportunities and risks amid war

Ukrinform
The attention of mankind today is focused on artificial intelligence (AI). OpenAI providing free access to the ChatGPT chatbot and publication on social networks of doctored images created with the help of Midjorney and other neural networks made this tool closer than ever to ordinary netizens. 

This has intensified discussion of the risks and opportunities that artificial intelligence brings to information warfare.

How artificial intelligence helps in working with information

AI has great potential for creating and processing content. The Center for Strategic Communication and Information Security uses AI to monitor the media space and analyze large arrays of online publications, relying on automated tools such as the SemanticForce and Attack Index platforms.

SemanticForce helps users identify information trends, track how social media users’ responses to news and events change, detect hate speech, and more. Another application of the neural network is detailed image analysis, which allows rapid detection of inappropriate or malign content.
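In its simplest form, trend identification of this kind reduces to spotting an abnormal spike in how often a phrase or narrative is mentioned. The toy sketch below flags days whose mention counts far exceed a quiet-period baseline; the figures and the threefold threshold are invented for this example, since SemanticForce’s actual methods are not public.

```python
# Toy trend detector: flag a narrative whose daily mention count jumps
# far above its recent baseline. All numbers here are invented.
from statistics import mean

daily_mentions = [40, 35, 42, 38, 41, 180, 220]  # mentions of one phrase per day

baseline = mean(daily_mentions[:5])   # average over the quiet period
spikes = [d for d in daily_mentions if d > 3 * baseline]

print("trend detected:" if spikes else "no trend", spikes)
```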

Attack Index applies several groups of methods:

– machine learning: assessing message tonality, ranking sources, and forecasting how information dynamics will develop;
– cluster analysis: automated grouping of text messages, detection of plots, and formation of story chains (a simplified illustration is sketched below);
– computational linguistics: identifying established phrases and narratives;
– formation, clustering, and visualization of semantic networks: determining connections and nodes and building cognitive maps;
– correlation and wavelet analysis: detecting ongoing psyops.
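To make that cluster-analysis step concrete, here is a minimal sketch of grouping short messages into candidate “plots” using TF-IDF vectors and k-means. It is illustrative only: the Attack Index pipeline is not public, and the sample messages, cluster count, and choice of algorithm are assumptions made for this example.

```python
# Toy sketch of the cluster-analysis step: group short messages into
# candidate "plots" with TF-IDF vectors and k-means. Illustrative only;
# the real Attack Index pipeline is proprietary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

messages = [
    "Air defense downed drones over the city overnight",
    "Drones were intercepted over the city, officials say",
    "New aid package announced for reconstruction",
    "Partners approve new reconstruction aid package",
]

# Represent each message as a TF-IDF vector over its words.
vectors = TfidfVectorizer().fit_transform(messages)

# Group the messages into two clusters (two candidate plots).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, messages)):
    print(label, text)
```

Messages about the same event land in the same cluster because they share rare, high-weight terms; real systems add multilingual preprocessing and pick the number of clusters automatically.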

These tools make it possible, using AI, to distinguish organic from coordinated content distribution, detect automated spam distribution systems, assess how much influence particular social media accounts have on the audience, tell bots from real users, and much more.
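As one concrete example of how bots can be told from real users, the sketch below flags accounts whose posting intervals are almost perfectly regular, a common footprint of automation. The threshold and timestamps are invented for illustration and are not drawn from any production system.

```python
# One heuristic for telling bots from real users: automated accounts often
# post at suspiciously regular intervals. The threshold and data below are
# invented for illustration.
from statistics import mean, stdev

def looks_automated(timestamps: list[float], cv_threshold: float = 0.1) -> bool:
    """Flag an account whose inter-post intervals are nearly uniform."""
    if len(timestamps) < 5:
        return False  # too little history to judge
    ts = sorted(timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    # Coefficient of variation: low values mean machine-like regularity.
    return stdev(intervals) / mean(intervals) < cv_threshold

bot_like = [i * 600.0 for i in range(12)]            # posts every 10 minutes
human_like = [0, 540, 2000, 2300, 7000, 7200, 9900]  # irregular posting

print(looks_automated(bot_like), looks_automated(human_like))  # True False
```

Real classifiers combine many such features (account age, content similarity, network structure) rather than relying on a single signal.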

These tools can be used to detect disinformation, analyze disinformation campaigns, and develop countermeasures.

AI potential to create and spread disinformation

Almost every day, neural networks demonstrate improved capabilities for creating graphic, textual, and audiovisual content, and its quality will keep rising as the underlying models continue to learn. For now, popular neural networks are used by Internet users more as a toy than as a tool for creating fakes.

However, there are already examples of neural-network-generated images that not only went viral but were also perceived by users as real – in particular, the image of “a boy who survived a missile strike in Dnipro” or that of “Putin greeting Xi Jinping on his knees.”

These examples clearly demonstrate that images designed with the help of neural networks already compete with real ones in emotional charge, and this will certainly be exploited for disinformation.

A study by the analytical center NewsGuard, conducted in January 2023, found that ChatGPT is able to generate texts that develop existing conspiracy theories and weave real events into their context. The tool has the potential for automated distribution (through bot farms) of multiple messages whose topic and tone are set by a human operator but whose text is generated by AI. Already today, by formulating appropriate prompts, this bot can be made to produce disinformation messages, including ones based on the narratives of Kremlin propaganda. Countering the spread of artificially generated fake content is a challenge we must already be prepared to respond to.
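One countermeasure follows directly from that distribution pattern: bot farms tend to push many near-identical texts, so flagging bursts of near-duplicate messages helps expose them. Below is a minimal sketch using word 3-gram “shingles” and Jaccard similarity; the similarity threshold and the sample feed are assumptions made for illustration.

```python
# Sketch of one countermeasure: flag near-duplicate messages pushed at scale,
# a typical footprint of bot-farm distribution. Word 3-gram "shingles" plus
# Jaccard similarity; the 0.5 threshold is an illustrative assumption.
import re

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a or b else 0.0

def near_duplicates(messages: list[str], threshold: float = 0.5):
    """Yield pairs of messages that are suspiciously similar."""
    sigs = [shingles(m) for m in messages]
    for i in range(len(messages)):
        for j in range(i + 1, len(messages)):
            if jaccard(sigs[i], sigs[j]) >= threshold:
                yield messages[i], messages[j]

feed = [
    "The West has abandoned Ukraine and aid will stop next month",
    "The West has abandoned Ukraine, and all aid will stop next month",
    "Local farmers report a record harvest this season",
]
for a, b in near_duplicates(feed):
    print("possible coordinated pair:", a, "|", b)
```

At scale, the same idea is implemented with locality-sensitive hashing (e.g., MinHash) so that message pairs never need to be compared exhaustively.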

Wartime use of AI: what to expect from the Russians

Russia’s special services, which already have extensive experience in using photo and video editing to create fakes and run psychological operations, are now actively mastering AI. Deepfake technology, which is based on AI, was used, in particular, to create the fake video address by President Zelensky about Ukraine’s “surrender” that appeared in the media space in March 2022.

Given the poor quality of this “product” and the prompt reaction of state communications bodies, journalists, and the president, who personally refuted the fake, the video got little traction and achieved its goal neither in Ukraine nor abroad. But the Russians are obviously not going to stop there.

Today, the Kremlin uses a huge number of tools to circulate disinformation: TV, radio, websites, propaganda blogs on Telegram, YouTube, and social networks. 

AI has the potential to be used primarily for creating photo, audio, and video fakes, as well as for running bot farms. It can replace a significant share of the human personnel at Russian “troll factories” – the Internet warriors who provoke conflicts on social media and create the illusion of mass support for Kremlin narratives online.

Instead of “trolls” who pen comments according to prepared guidelines, this work can be done by AI fed with keywords and vocabulary. At the same time, it is actual influencers (politicians, propagandists, bloggers, conspiracy theorists, etc.), rather than nameless bots and Internet trolls, who have a decisive impact on loyal audiences. However, AI can increase the weight of the latter through quantitative growth and the “fine-tuning” of messages for different target audiences.

In 2020, the Ukrainian government approved the “Concept for the Development of Artificial Intelligence.” This framework document defines AI as a computer program; accordingly, the use of AI is legally regulated in the same way as other software products. So it is too early to speak of any dedicated legal regulation of AI in Ukraine.

The development of AI outpaces the creation of safeguards against its malicious use and the formulation of policies to regulate it.

Therefore, the cooperation of Ukrainian government agencies with Big Tech companies in countering the spread of disinformation and in identifying and eliminating bot farms should only deepen. Both the Ukrainian government and the world’s technology giants have a stake in this.

Center for Strategic Communication and Information Security

Photo: armyinform.com.ua/authored by Beata Kurkul
