Artificial Intelligence and Disinformation: Opportunities and Risks in War
April 5, 2023
Humanity’s attention today is focused on artificial intelligence (AI).
OpenAI’s free access to the ChatGPT chatbot and the viral spread on social networks of images created with Midjourney and other neural networks have brought these tools closer than ever to ordinary Internet users. This has brought to the fore discussions about the risks and opportunities that artificial intelligence creates in information warfare.
How does artificial intelligence help in working with information?
AI has great potential for creating and processing content. The Center for Strategic Communications and Information Security uses artificial intelligence to monitor the media space and analyze a wide range of online publications, relying on automated tools, in particular the SemanticForce and Attack Index platforms.
SemanticForce applies artificial intelligence to semantic analysis: it helps identify information trends, track changes in how social network users react to news events, detect hate speech, and more. Another application of neural networks is detailed image analysis, which makes it possible to quickly identify unacceptable or harmful content.
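SemanticForce’s own models are proprietary, but the general approach can be illustrated with open tools. Below is a minimal sketch, assuming the Hugging Face transformers library and the publicly available unitary/toxic-bert model as a stand-in hate-speech classifier:

# A sketch, not SemanticForce's actual pipeline: "unitary/toxic-bert"
# is a public stand-in model for flagging hateful messages.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

messages = [
    "Thanks to the volunteers for their tireless work.",
    "People from that region deserve nothing but contempt.",
]

for msg in messages:
    result = classifier(msg)[0]        # e.g. {'label': 'toxic', 'score': 0.97}
    flagged = result["score"] > 0.8    # confidence threshold, tunable
    print(f"{msg!r} -> {result['label']} ({result['score']:.2f}), flagged={flagged}")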
Attack Index uses machine learning (assessing the tone of messages, ranking sources, forecasting information dynamics), cluster analysis (automatically grouping text messages, detecting and tracking storylines), computational linguistics (identifying set expressions and narratives), the building, clustering, and visualization of semantic networks (identifying connections and nodes, creating cognitive maps), and correlation and wavelet analysis (describing information processes).
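The platform’s own algorithms are likewise not public, but one of these ingredients, automatic grouping of messages into storylines, can be sketched with TF-IDF vectors and k-means from scikit-learn; the messages and cluster count here are purely illustrative:

# Illustrative only: grouping messages into candidate storylines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

messages = [
    "Missile strike damages the power grid in Kharkiv",
    "Energy workers restore power after the Kharkiv strike",
    "New chatbot generates propaganda texts on demand",
    "Researchers test the chatbot on conspiracy narratives",
]

vectors = TfidfVectorizer().fit_transform(messages)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, msg in sorted(zip(labels, messages)):
    print(label, msg)   # messages sharing a label form one candidate storyline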
These tools allow artificial intelligence to distinguish organic content distribution from coordinated campaigns, detect automated spam systems, assess the influence of individual social network accounts on the target audience, tell bots apart from real users, and so on. They can be used to identify disinformation, analyze disinformation campaigns, and develop responses and countermeasures.
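As a toy illustration of how such signals might be combined (not any platform’s actual method): coordinated accounts often betray themselves by posting near-identical texts at suspiciously regular intervals. The thresholds below are invented:

# Toy heuristic, not a production detector.
from statistics import pstdev

def looks_coordinated(timestamps, texts,
                      max_interval_spread=5.0, min_dup_ratio=0.5):
    """timestamps: posting times in seconds, sorted; texts: the messages."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    regular = len(intervals) > 1 and pstdev(intervals) < max_interval_spread
    dup_ratio = 1 - len(set(texts)) / len(texts)   # share of repeated texts
    return regular and dup_ratio >= min_dup_ratio

# A burst of identical posts roughly every 60 seconds looks coordinated:
print(looks_coordinated([0, 60, 121, 180], ["Support the narrative!"] * 4))  # True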
The potential of artificial intelligence to create and spread disinformation
Neural networks are improving their ability to create graphic, textual, and audiovisual content almost daily, and given the pace of machine learning, the quality of that content will only grow. For now, ordinary Internet users treat popular neural networks more as a toy than as a tool for creating fakes.
However, there are already examples of images produced by neural networks not only going viral but being perceived by users as real: for instance, the image of “a boy who survived a rocket attack in Dnipro” or of “Putin greeting Xi Jinping on his knees.”
These examples clearly show that images created by neural networks already compete with real photographs in vividness and emotional impact, and this will certainly be exploited for disinformation.
A January 2023 study by NewsGuard found that the popular chatbot ChatGPT can generate texts that reinforce existing conspiracy theories and weave real events into their context. This creates the potential for automatically distributing (via bot farms) large numbers of messages whose subject and tone are set by a single person while the text itself is generated by artificial intelligence. Already today, with suitably formulated prompts, the chatbot can be made to produce disinformation messages, including ones built on Kremlin propaganda narratives. Preparing a response to the spread of artificially generated fake content is a challenge that needed to be answered literally yesterday.
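On the detection side, one frequently discussed, though unreliable and easily evaded, heuristic is that fluent machine-generated text tends to score low perplexity under a language model. A minimal sketch using the public GPT-2 model; treating the score as a hint of machine generation is an assumption, not a verdict:

# A rough heuristic, easy to evade: perplexity under GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

sample = "The quick development of neural networks raises new security questions."
print(f"perplexity: {perplexity(sample):.1f}")   # lower often reads as more "model-like"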
The use of AI in war: what to expect from the Russians?
Russia’s special services have extensive experience in using photo and video editing to create fakes and conduct psychological operations, and they are actively developing artificial intelligence. Deepfake technology is built on AI; it was used, in particular, to create the fake video address about President Zelenskyi’s surrender released in March 2022.
That “product” was of poor quality, and thanks to the rapid response of state communications, the president, and journalists, who personally debunked the fake, it never took off. The video did not achieve its purpose either in Ukraine or abroad. But the Russians are obviously not going to stop.
Today, the Kremlin uses a multitude of tools to spread disinformation: propagandists and bloggers who produce and promote content on television, radio, websites, Telegram, YouTube, and social networks.
First of all, AI has the potential to be used to create photo, audio, and video fakes, and to run bot farms. It can replace much of the staff of Russia’s “troll factories”: the Internet warriors who provoke conflicts on social networks and create the illusion of mass support for Kremlin narratives.
Instead of “trolls” writing comments according to prepared guidelines, artificial intelligence can generate them from supplied keywords and vocabulary. Decisive influence on loyal audiences belongs to the influencers mentioned above (politicians, propagandists, bloggers, conspiracy theorists, and so on), not to anonymous bots and Internet trolls. With the help of artificial intelligence, however, the weight of the latter can grow through sheer quantity and “fine-tuning” for different audiences.
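A simple defensive counter to such mass-produced comments is to look for near-duplicate texts posted from different accounts. A standard-library-only sketch, with invented accounts and an arbitrary threshold:

# Sketch with invented data: near-duplicate comments across accounts
# are a telltale sign of mass-produced "troll" activity.
from difflib import SequenceMatcher
from itertools import combinations

comments = {
    "user_a": "Everyone in my city supports the new policy, it is wonderful",
    "user_b": "Everyone in my town supports the new policy, it is wonderful",
    "user_c": "Walked the dog this morning before heading to work",
}

for (u1, t1), (u2, t2) in combinations(comments.items(), 2):
    similarity = SequenceMatcher(None, t1.lower(), t2.lower()).ratio()
    if similarity > 0.85:   # near-duplicate threshold, tunable
        print(f"possible coordination: {u1} <-> {u2} ({similarity:.2f})")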
In 2020, the Ukrainian government approved the “Concept for the Development of Artificial Intelligence”. This framework document defines artificial intelligence as a computer program, which means the legislation governing its use is the same as for any other software product. It is therefore too early to speak of any dedicated legal regulation of AI.
The development of AI is outpacing the creation of safeguards against its unscrupulous and malicious use and the formulation of policies to regulate it.
Cooperation between Ukrainian government structures and Big Tech companies to counter the spread of disinformation and to identify and dismantle bot farms should therefore only deepen. Both our state and the global technology giants have a stake in this.
Strategic Communication and Information Security Center