
How the astroturfing or disinformation industry has managed to manipulate…

  • October 12, 2022


Astroturf pitch
Photo: Fantasycrave1 on Pixabay

Although disinformation has always existed, digital platforms and technologies have allowed it to turn into a lucrative industry based on manipulation.

This industry has also reached Spain. Digital propaganda companies like Cambridge Analytica, operating both from within the country and from abroad – the United States, the Philippines, Nigeria or Venezuela – specialize in viralizing content that conditions public opinion according to users' ideological orientations and interests.

In our recent studies, we found that messages from the Philippines went viral to influence the Spanish public.

These actions can target a wide range of goals and topics: sex, far-right or left-wing politics, youth, animals, law enforcement, sports, video games, religion and humor.

Seemingly unrelated topics are backed by news sources such as RT (Russia Today) and well-known disinformation websites like Mediterráneo Digital and Caso Aislado. This allows them to weave fine-grained relationships and connections among users and to stoke discussion and conflict between artificially created user groups (with false-flag attacks and messages both for and against). It is a scenario that facilitates the viralization of disinformation and the use of hate speech, supported above all by far-right ideological groups.


Astroturfing

One of the usual strategies used to condition public opinion is known as astroturfing. This methodology, developed in the fields of marketing and public relations, has been identified in many countries – for example, in the South Korean elections. The purpose of astroturfing is to project an image of spontaneity and grassroots support: supposed ordinary people launch seemingly spontaneous messages and ideas within a short period of time.

During the first waves of Covid-19, we were able to detect astroturfing following the same pattern in many cases, such as the false news about the infection of Minister Pedro Duque or the attacks on the newspaper El País. The strategy has several stages:

  • Distribution stage. The strategy starts with accounts managed by real people who are hired to post several posts on the same topic within a short period of time – these are people who appear to be housewives, athletes or people from their own city, with no apparent connection to each other.
  • Amplification stage. Next, the messages are amplified and media outlets or journalists are challenged, even when the information is false.
  • Flood stage. If the previous step is successful, a large number of support messages will be launched with the help of automated bots at certain times of the day. This phase usually coincides with mealtimes and there may be several consecutive actions depending on their extent.
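The coordination pattern behind the first stage – many distinct, low-profile accounts posting on the same topic within a short window – can be illustrated with a minimal sketch. This is not the authors' detection method; all account names, topics and thresholds below are hypothetical, chosen only to make the idea concrete:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account, topic, timestamp).
# A burst of many distinct accounts posting on one topic within a
# short window is one signal of the "distribution stage" described above.
posts = [
    ("user_a", "topic_x", datetime(2020, 4, 1, 14, 0)),
    ("user_b", "topic_x", datetime(2020, 4, 1, 14, 5)),
    ("user_c", "topic_x", datetime(2020, 4, 1, 14, 9)),
    ("user_d", "topic_y", datetime(2020, 4, 1, 9, 0)),
]

def coordinated_bursts(posts, window=timedelta(minutes=15), min_accounts=3):
    """Flag topics where at least `min_accounts` distinct accounts
    posted within a single sliding time window."""
    by_topic = defaultdict(list)
    for account, topic, ts in posts:
        by_topic[topic].append((ts, account))
    flagged = set()
    for topic, items in by_topic.items():
        items.sort()
        for i, (start, _) in enumerate(items):
            accounts = {acc for ts, acc in items[i:] if ts - start <= window}
            if len(accounts) >= min_accounts:
                flagged.add(topic)
                break
    return flagged

print(coordinated_bursts(posts))  # {'topic_x'}
```

Real detection systems face a much harder version of this problem, since (as the article notes) the participating accounts are deliberately crafted to look like unconnected ordinary users.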

Astroturfing is hard to detect. The participating accounts never belong to high-profile influencers. They are nano- or micro-influencers, characterized by being indistinguishable from other users (usually with a few hundred followers): actors who aim to network and spread the news without attracting attention. And they do so without obvious insults or expressions of intense hatred. This strategy lets them stir up intense basic emotions while avoiding detection by algorithms that monitor such expressions.

What can we do?

We must move forward in teaching citizens skills that enable them to become aware of disinformation and learn to use existing verification projects.

In general, to detect such campaigns we should be wary of accounts that seek our friendship without knowing us, pretending to be ordinary people. These accounts aim to get close to us in order to then send us media-backed, often disinformational messages and gradually persuade us of certain ideas. They also want to learn our tastes, opinions and other data in order to profile us for future campaigns.

We must also take into account the risks that exist on social networks and the internet in general. One example is newspaper comment forums, where many users try to polarize us and harvest our data for commercial or political use.

It must be acknowledged that misinformation circulates freely on our computers and mobile phones with the help of an industry. We should know that behind these campaigns lies a powerful, technologically sophisticated sector, one that feeds on the data we so conveniently hand over on the internet.

Progress has already been made on mechanisms to detect disinformation and hate speech on digital platforms, such as those developed in projects like Hatemedia. Meanwhile, we must be cautious and assume that much of what is read and reshared on social networks can contribute to data capture, polarization, disinformation and the spread of hatred.

Sergio Arce García, professor and researcher in digital communication and social networks, UNIR – International University of La Rioja; Daria Mottareale Calvanese, professor, UNIR – International University of La Rioja; and Elías Manuel Said Hung, professor of education, UNIR – International University of La Rioja

This article was originally published on The Conversation. Read the original.


Source: El Nacional
