
Research: Digital watermark protection can be easily bypassed

October 9, 2023

Perhaps the most frightening aspect of artificial intelligence is its ability to produce fake images. Of course, some of them are ridiculous: Arnold Schwarzenegger's face superimposed on Clint Eastwood's Dirty Harry as he points a gun at a fleeing suspect; Mike Tyson turned into Oprah; Donald Trump transformed into Bob Odenkirk from Better Call Saul; Nicolas Cage as Lois Lane in Superman.

But recent developments herald a more worrying trend as digital fraud becomes more harmful.

Just last week, actor Tom Hanks took to social media to decry an ad that used an AI-generated likeness of him to promote a dental plan. Popular YouTuber MrBeast, whose videos have drawn more than 50 billion views since 2012, appeared in a deepfake ad that falsely showed him offering the iPhone 15 Pro for $2.

Ordinary citizens are targets as well. People's faces appear in social media images without their consent. Of greatest concern is the rise of "revenge porn," in which ex-partners post fabricated photographs of their former partners in compromising or indecent poses.

As a politically divided United States cautiously approaches the highly contentious 2024 presidential race, the possibility of fake images and videos promises an ugly election like no other.

Additionally, the spread of fake images undermines the legal system as we know it. As the national nonprofit media outlet NPR recently reported, lawyers are capitalizing on a public that is increasingly unsure of what is real, more and more often challenging the authenticity of evidence presented in court.

Hany Farid, an expert in digital image analysis at the University of California, Berkeley, said: "That's exactly what worried us: that when we enter the age of deepfakes, anyone can deny reality."

"This is the classic liar's dividend," he said, referring to a term coined in a 2018 paper on the threat deepfakes pose to privacy and democracy.

Major AI companies, including OpenAI, Alphabet, Amazon, and DeepMind, have pledged to develop tools to combat disinformation. One key approach is watermarking AI-generated content. However, a paper posted to the preprint server arXiv on September 29 brings disturbing news about how easily such protection can be defeated.

Professors at the University of Maryland conducted tests showing that security watermarks are easily bypassed.

"We do not currently have any reliable watermarking," said Soheil Feizi, co-author of the report "Robustness of AI-Image Detectors: Fundamental Limitations and Practical Attacks."

Feizi said his team "broke all of them."

"The misuse of AI poses potential dangers related to national security issues such as disinformation, fraud, and even election manipulation," Feizi warned. "Deepfakes can cause personal harm, from defamation of character to emotional distress, and affect both individuals and society as a whole. Therefore, identifying AI-generated content…becomes an extremely important problem to solve."

The team used a technique called diffusion purification, which adds Gaussian noise to a watermarked image and then removes it with a diffusion model. What remains is a corrupted watermark that detection algorithms no longer recognize, while the rest of the image is only minimally altered.
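The noise-then-denoise pipeline can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the researchers' code: NumPy supplies the Gaussian noise, and a simple repeated neighborhood average stands in for the diffusion model's learned denoiser.

```python
import numpy as np

def diffusion_purify(image, noise_std=0.1, denoise_passes=3):
    """Toy 'diffusion purification': add Gaussian noise, then denoise.

    A real attack uses a trained diffusion model as the denoiser; here a
    crude smoothing filter stands in for it, purely to show the pipeline.
    Pixel values are assumed to lie in [0, 1].
    """
    rng = np.random.default_rng(0)

    # Step 1: drown the faint, structured watermark signal in Gaussian noise.
    noisy = image + rng.normal(0.0, noise_std, image.shape)

    # Step 2: "denoise" back toward a clean-looking image. A low-amplitude
    # watermark pattern does not survive this round trip.
    denoised = noisy
    for _ in range(denoise_passes):
        denoised = (
            denoised
            + np.roll(denoised, 1, axis=0) + np.roll(denoised, -1, axis=0)
            + np.roll(denoised, 1, axis=1) + np.roll(denoised, -1, axis=1)
        ) / 5.0
    return np.clip(denoised, 0.0, 1.0)

img = np.random.default_rng(1).random((64, 64))  # stand-in "watermarked" image
purified = diffusion_purify(img)
```

With a real diffusion model as the denoiser, the output is perceptually close to the input while the embedded watermark is destroyed; the smoothing filter above only mimics that role on a toy array.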

Additionally, they demonstrated that attackers with only black-box access to a watermarking algorithm can stamp watermarks onto genuine photographs, tricking detectors into flagging real images as AI-generated.
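To make the black-box spoofing idea concrete, here is a deliberately simplistic sketch. The additive watermark, the correlation-based detector, and the names (`watermark_service`, `detector`) are all invented for illustration; real schemes are far more sophisticated, but the query-and-subtract logic is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical hidden watermark pattern: a faint additive signal.
SECRET = 0.05 * rng.standard_normal((128, 128))

def watermark_service(img):
    """Black-box watermarking API: callers see outputs, never SECRET."""
    return img + SECRET

def detector(img):
    """Flags an image as watermarked if it correlates with SECRET."""
    score = float(((img - img.mean()) * SECRET).sum())
    return score > 0.5 * float((SECRET ** 2).sum())

# Attacker: query the service with a known image and subtract it back out,
# recovering the watermark pattern without ever seeing SECRET directly.
probe = rng.random((128, 128))
recovered = watermark_service(probe) - probe

# Stamp the recovered pattern onto a genuine photo: the detector now
# wrongly reports the real image as watermarked, i.e. AI-generated.
real_photo = rng.random((128, 128))
print(detector(real_photo))              # False: clean image passes
print(detector(real_photo + recovered))  # True: spoofed image is flagged
```

A fixed additive pattern makes the exploit trivial; the paper's attack works against far less forgiving schemes, but the asymmetry it exposes is the same: anyone who can query the watermarker can learn enough to forge its mark.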

Better algorithms will surely emerge. As with computer viruses, bad actors will keep trying to break through whatever defenses the good guys devise, and the cat-and-mouse game will continue.

But Feizi expressed some optimism.

“Based on our results, developing a reliable watermark is a difficult but not necessarily impossible task,” he said.

For now, people need to exercise due care when viewing images whose content matters to them. Caution, double-checking sources, and a healthy dose of common sense are key.

Source: Port Altele
