
https://www.xataka.com/aplicaciones/otro-gran-problema-telegram-no-tiene-nada-que-ver-rusia-bots-que-desnudan-a-personas-ia

October 26, 2024

The year was 2019. Generative AI was still in its infancy, but terms like neural networks (for the better, because of their potential) and deepfakes (often for the worse) were already beginning to make waves. One of the most notorious scandals of that year was DeepNude, a website that let anyone “undress” any woman simply by uploading a photo. Behind the scenes was a neural network trained on more than 10,000 photos of naked women.

The website had been live for months, but it lasted only a few hours after it was discovered. Its developer, who said his name was Alberto and that he lived in Estonia, shut the platform down, arguing that “the probability that people will misuse it” was “too high” and that “the world is not yet ready for DeepNude.” That was 2019.

Today, in 2024, this technology has become what it is: a technological marvel whose greatness is matched only by the difficulty of curbing its misuse. AI has endless positive use cases, but it can also be put to less ethical and moral purposes. Purposes like undressing people via a Telegram bot. Like DeepNude, but easier and within everyone’s reach. Because in 2024, the world is still not ready for this fight.

Four million users. That is the combined monthly user base of at least 50 Telegram bots whose sole purpose is to create nude images or videos of real people, according to an investigation by WIRED. Two of those bots have more than 400,000 monthly users each, the magazine reports, and another 14 exceed 100,000.

We’re talking about thousands of people creating (potentially) nude images of others without their consent. This is a clear violation of data protection and privacy, as well as of dignity and personal image. And far from being harmless, it is a practice that can (and does) have a real impact on people’s lives. According to Home Security Heroes’ State of Deepfakes report, deepfake pornographic content grew by 464% between 2022 and 2023, and 99% of that content features women.

How do they work? As WIRED details, these bots are promoted with messages like “I can do whatever you want with the face or outfit in the photo you give me,” and they often require users to buy tokens with real money or cryptocurrency. Whether they actually deliver the promised result or are simply a scam is another story. Some of these bots let users upload photos of a person to train the AI and generate more accurate images. Others do not advertise themselves as undressing bots, but link to bots that are capable of it.

The underlying problem. It is not so much that such bots can be found and used on Telegram, but how complicated it is to stop this content. As for Telegram, a kind of deep web in itself, the messaging app has been at the center of debates over issues like this more than once.

In fact, the most recent case is very fresh: the arrest of its founder. Pavel Durov was detained in France, accused of complicity in crimes committed on Telegram due to its lack of moderation. Telegram defended itself by claiming that “it is absurd to claim that a platform or its owner are responsible for abuse of that platform.” After his arrest, Durov promised to make moderation one of the service’s priorities.

The main issue is how complicated it is to stop the creation and dissemination of such content.

That said, it is worth noting that, according to WIRED, Telegram has removed the channels and bots reported by the magazine. But those channels are only the ones that were reported; they are certainly not all that exist. Telegram, as we said, is a kind of deep web in itself, and it gives users all the tools they need to find this content. Starting with a search engine.

It’s complicated to fight. Battling deepfakes is “fundamentally a lost cause.” Those were Scarlett Johansson’s words back in 2019. The actress was one of the first victims of pornographic deepfakes (and by no means the only one), and today, in 2024, the situation remains more or less the same. Big tech companies have made some moves, but the reality is that deepfakes are still everywhere.

Example of a fake image created by artificial intelligence during the devastation of Hurricane Helene | Source: original tweet

And today’s tools have made it even easier. Want a photo of Bill Gates holding a gun? Taylor Swift in her underwear, or endorsing Donald Trump? You can generate it directly in Grok, X’s AI. And although platforms like Midjourney or DALL-E block controversial prompts, anyone with a simple Internet search, some free time, plenty of pictures and a bad idea can train their own AI to do who knows what.

Examples. We can find as many as we want. The most recent come from the United States: deepfakes produced in the wake of the devastation caused by Hurricane Helene. In South Korea, the deepfake porn problem has reached such proportions that it has become a matter of national interest. So much so that a few days ago a set of laws was approved establishing prison terms and fines for producing, and even viewing, this synthetic content. “Anyone who possesses, purchases, stores or views illegal synthetic sexual material will face up to three years in prison or a fine of up to 30 million won (about 20,000 euros at the current exchange rate),” the BBC reported. Telegram, in fact, has also played a significant role in the spread of synthetic pornographic content in South Korea.

What has been tried. One industry approach is to mark AI-generated content with invisible watermarks. Today, disclosing that a piece of content is synthetic is largely left to its creator (Instagram and TikTok offer labeling tools for this, for example), but watermarking would prevent, or at least reduce, the spread of false content and fake news. It would also allow early detection.

But the reality is that its implementation is far from a global norm. The challenge is even greater when we talk about synthetic pornographic content, where it is not just a matter of moderating platforms but of detecting the content early and preventing the harm. A watermark by itself does not solve the problem; it has to be applied in the first place.

Watermark for AI-generated content proposed by OpenAI | Image: OpenAI

For watermarking to be effective, it would have to be implemented in every model and tool that creates synthetic content. Not just the commercial ones, but also those that users can run locally. That way, all AI-generated content would be marked at the source, making it easier for platforms’ systems to detect. But saying it is one thing; doing it is another.
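To make the idea concrete, here is a minimal, purely illustrative sketch of how an invisible watermark can be embedded and checked, hiding a fixed marker in the least significant bits of one color channel. Every name in it (the payload, the function names) is made up for this example, and it is deliberately a fragile toy: real proposals, such as the one OpenAI has described or Google’s SynthID, rely on far more robust techniques that survive compression, resizing and cropping.

```python
# Toy sketch of invisible watermarking: hide a fixed bit pattern in the
# least significant bit (LSB) of an image's blue channel. This is NOT how
# production schemes work; it only illustrates the idea of "marking
# AI-generated content at the source".
import numpy as np
from PIL import Image

WATERMARK = "AI-GENERATED"  # hypothetical payload: 12 bytes = 96 bits


def embed_watermark(img: Image.Image, payload: str = WATERMARK) -> Image.Image:
    """Write the payload bits into the LSBs of the blue channel, row by row."""
    pixels = np.array(img.convert("RGB"))
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    blue = pixels[..., 2].flatten()                     # copy of the blue channel
    blue[: bits.size] = (blue[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    pixels[..., 2] = blue.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)


def detect_watermark(img: Image.Image, payload: str = WATERMARK) -> bool:
    """Read back the LSBs and check whether the expected payload is present."""
    pixels = np.array(img.convert("RGB"))
    n_bits = len(payload.encode()) * 8
    bits = pixels[..., 2].flatten()[:n_bits] & 1
    return np.packbits(bits).tobytes() == payload.encode()


if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), "gray")
    marked = embed_watermark(original)
    print(detect_watermark(marked))    # True
    print(detect_watermark(original))  # False: unmarked content carries no payload
    marked.save("marked.png")          # must stay lossless: JPEG would erase the LSBs
```

The fragility is the point of the illustration: a single lossy re-encode, such as a JPEG upload or a screenshot, wipes out an LSB mark, which is exactly why robust watermarking, and getting every tool to apply it, is the hard part of the problem.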

Image | Wikimedia Commons and Pixabay, edited by Xataka

In Xataka | We have a big problem with AI-generated images. Google believes it has found the solution

Source: Xataka
