
Real-time deepfakes are a dangerous new threat

May 17, 2023


Maybe you’ve seen deepfake videos of a fake Tom Cruise, or other facsimiles of famous people, passed around as harmless “synthetic reality.” Now imagine that someone who looks and sounds just like your child calls you and asks for urgent help. It’s the same technology, but no one is laughing.

Cybersecurity experts say deepfake technology has advanced to the point where it can be used in real time, allowing scammers to replicate someone’s voice, image and movements during a call or virtual meeting. The technology is also widely available and relatively easy to use, they say. And it gets better all the time.

The Federal Trade Commission has warned that a growing share of what we see is not authentic and is becoming harder to distinguish from the real thing, thanks to artificial intelligence tools that create “synthetic media” or otherwise generate content.

Researchers say real-time deepfake technology has been around for the better part of a decade. What’s new is the range of tools available for making it.

“We know that as a society we are not prepared for this threat,” said Andrew Gardner, vice president of research, innovation and artificial intelligence at Gen. In particular, he said, if you are hit with a deepfake scam right now, there is no easy way to get urgent help verifying it.

Real-time deepfakes have been used to scare grandparents into sending money to simulated relatives, to win jobs at tech companies in a bid for inside information, to influence voters, and to siphon money from lonely men and women. Scammers can copy a recording of someone’s voice posted online, then use the cloned voice to impersonate a victim’s loved one; a 23-year-old man is accused of defrauding grandparents in Newfoundland of $200,000 in just three days using this technique.

Tools for detecting this latest generation of deepfakes are also emerging, but they are not always effective and may not be available to you. That’s why experts recommend a few simple steps to protect yourself and your loved ones from this new type of scam.

The term deepfake is shorthand for a simulation powered by deep-learning technology: artificial intelligence that ingests oceans of data in an attempt to replicate something human, such as holding a conversation (like ChatGPT) or creating an illustration (like Dall-E). Gardner said these tools are still expensive and time-consuming to develop, but relatively quick and easy to use once built.

Yisroel Mirsky, an artificial intelligence researcher and deepfake expert at Ben-Gurion University of the Negev, said the technology has advanced to the point where a deepfake video can be made from a single photo of a person, and a convincing clone of a voice from only three or four seconds of audio. However, Gardner said the tools commonly available for making deepfakes lag behind the state of the art; they require about five minutes of audio and one to two hours of video.

Even so, scammers can find plenty of images and audio thanks to sites like Facebook, Instagram and YouTube. Mirsky said it is easy to imagine an attacker scanning Facebook to identify a potential target’s children, calling the son to record enough audio to clone his voice, and then using a deepfake of the son to ask the target for money to get out of a jam of some kind.

The technology has become so efficient, he said, that a face or a voice can be cloned on an ordinary gaming PC. And the software is “really point and click,” easily obtained online and configurable with some basic programming.

To show just how effective real-time deepfakes can be, the government division of LexisNexis Risk Solutions shared a video that David Maimon, a criminology professor at Georgia State University, obtained from the dark web, showing a brazen fraud operation in progress. It captured an online chat between an older man and a young woman who was asking for a loan so she could meet him in Canada. But in a third window, a man could be seen speaking the words that came out of the woman’s mouth, in a woman’s voice; the woman was a deepfake, and the man was a scammer.

In a paper published in 2020, Mirsky and Wenke Lee of the Georgia Institute of Technology dubbed this technique “reenactment.” It can also be used, they wrote, for acts of “smearing, discrediting, spreading misinformation and falsifying evidence.” Another approach is “replacement,” in which the target’s face or body is placed on someone else, as in revenge porn videos.

But exactly how scammers are using these tools remains something of a mystery, Gardner said, because we only know what they have been caught doing.

Haywood Talcove, chief executive of the government division of LexisNexis Risk Solutions, said the new technology can bypass some of the security checks that companies use in place of passwords. For example, he pointed to California’s two-step online identification process, which has users upload two things: a photo of their driver’s license or ID card, and then a freshly taken selfie. Scammers can buy a fake California ID online for a few dollars, then use deepfake software to generate a matching face for the selfie. “It’s a hot knife through butter,” he said.

Similarly, Talcove said, financial companies should stop using voice-recognition tools to unlock accounts. “I would be nervous if, at my bank, my voice was my password,” he said. “Just using voice doesn’t work anymore.” The same goes for facial recognition, he said, adding that the technology is on its way out as a means of controlling access.

The Cybercrime Support Network, a nonprofit that helps individuals and businesses victimized online, often works with victims of romance scams, and it encourages people to video-chat with their suitors to weed out scammers. Ally Armeson, the network’s program director, said that just two or three years ago, they could tell clients to look for easy-to-spot glitches, such as frozen images. But in recent weeks, she said, scam victims have come forward who video-chatted for 10 or 20 minutes with their supposed suitor, and “it was the person in the picture they sent me.”

“The head looked really strange on the body, so it looked a little off,” the victims said. But it is not unusual for people to ignore red flags, she said. “They want to believe the video is real, so they will overlook minor inconsistencies.”

(Last year, victims of romance scams in the United States reported losses of $1.3 billion.)

Real-time deepfakes pose a dangerous new threat to businesses, too. Mirsky said many companies train their employees to recognize phishing attacks from strangers, but no one is really prepared for deepfake calls that use the cloned voice of a colleague or a boss.

“People will confuse familiarity with authenticity,” he said. “And as a result, people are going to fall for these attacks.”

How do you protect yourself?

Talcove offers a simple, low-tech way to guard against deepfakes that impersonate a family member: have a secret code word that every family member knows but that criminals could not guess. If someone claiming to be your daughter, granddaughter or niece calls, Talcove said, asking for the code word can separate real loved ones from fake ones.

“Every family now needs a code word,” he said.

“Pick something simple and easily memorable that doesn’t need to be written down (and isn’t posted on Facebook or Instagram), then commit it to your family’s collective memory,” he said. “You need to make sure they know it, and practice, practice, practice.”

Gardner also endorsed code words. “I think preparedness goes a long way in protecting against deepfake scams,” he said.

Armeson said her network still tells people to look for certain clues during video calls, including their supposed paramour blinking too much or too little, eyebrows that don’t match the face, hair in the wrong place, and skin that doesn’t match their age. If the person wears glasses, she said, check whether the reflection in them is realistic: “deepfakes often fail to fully represent the natural physics of lighting.”

The network also encourages people to try these simple tests: ask the other person on the video call to turn their head and to put a hand in front of their face. Those maneuvers can be revealing, she said, because deepfake models often haven’t been trained to perform them realistically.

Still, she admitted, “we’re just playing defense.” The scammers are “always kind of ahead of us,” she said, eliminating the glitches that give the fakes away. “It’s so ugly.”

Ultimately, she said, the most reliable way to smoke out deepfakes may be to insist on an in-person meeting. “We have to be really analog about this. We can’t rely on technology alone.”

There are software tools that automatically look for the glitches and patterns AI generation leaves behind, in an effort to separate legitimate audio and video from fakes. But Mirsky said this is “potentially a losing game,” because as the technology improves, the telltale signs that once betrayed the fakes will disappear.
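The article doesn’t describe how these detection tools work internally, but one family of checks looks for statistical fingerprints in the signal itself. Below is a minimal, hypothetical sketch of one such crude heuristic: some older neural voice synthesizers generate speech at low sample rates and leave almost no energy in the upper frequency band, so an unusually empty high band can be a weak warning sign. The file name `caller.wav`, the 8 kHz cutoff and the threshold are illustrative assumptions, not validated values, and a real detector would be a trained classifier, not a one-line ratio.

```python
# Illustrative heuristic only, NOT a real deepfake detector:
# measures how much of a recording's spectral energy sits above a
# cutoff frequency. Some older neural vocoders synthesize speech at
# low sample rates and leave the high band nearly empty.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

def high_band_energy_ratio(path: str, cutoff_hz: float = 8000.0) -> float:
    """Fraction of spectral energy above cutoff_hz in a WAV file."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:              # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)
    freqs, _, z = stft(samples, fs=rate, nperseg=1024)
    power = np.abs(z) ** 2
    total = power.sum()
    high = power[freqs >= cutoff_hz].sum()
    return float(high / total) if total > 0 else 0.0

if __name__ == "__main__":
    # "caller.wav" and the 0.001 threshold are placeholder assumptions.
    ratio = high_band_energy_ratio("caller.wav")
    print(f"high-band energy ratio: {ratio:.4f}")
    if ratio < 0.001:
        print("suspicious: almost no energy above 8 kHz")
```

As Mirsky notes, heuristics like this are exactly the “telltale signs” that vanish as generators improve, which is why he calls the approach a potentially losing game.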

Mirsky and his team at Ben-Gurion University have developed a different approach, called D-CAPTCHA. The system presents a test designed to stump current real-time deepfakes, such as asking callers to hum, laugh, sing or just cough.

The system, which has yet to be commercialized, could take the form of a waiting room for authenticating guests at sensitive virtual meetings, or an app that vets suspicious callers. In fact, Mirsky said, “we can develop apps that try to catch these suspicious calls and screen them before they connect.”
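Purely as an illustration of the challenge-response idea described above (not the published D-CAPTCHA design), a screening step might be structured like the sketch below. `record_response` and `looks_authentic` are hypothetical stand-ins for platform audio capture and a trained classifier.

```python
# Sketch of a challenge-response liveness screen for incoming calls,
# in the spirit of D-CAPTCHA. Hypothetical stubs, not the real system.
import random

# Tasks that are cheap for a human but hard for today's real-time
# voice clones, per Mirsky's examples.
CHALLENGES = ("hum a tune", "laugh", "sing a few notes", "cough")

def record_response(prompt: str, seconds: float = 3.0) -> bytes:
    # Hypothetical stand-in: a real system would capture `seconds` of
    # caller audio from the call platform after issuing `prompt`.
    print(prompt)
    return b""  # placeholder audio

def looks_authentic(audio: bytes, challenge: str) -> bool:
    # Hypothetical stand-in for a trained classifier that checks the
    # audio actually contains the requested action and is free of
    # synthesis artifacts. Fails closed in this sketch.
    return False

def screen_caller() -> bool:
    # Pick the task at random at call time so a scammer cannot
    # pre-generate a convincing clip.
    challenge = random.choice(CHALLENGES)
    audio = record_response(f"Please {challenge} now.")
    return looks_authentic(audio, challenge)

if __name__ == "__main__":
    print("caller admitted" if screen_caller() else "caller flagged")
```

The randomness is the point of the design: voice-cloning models are typically trained on ordinary speech, so an unpredictable demand for a hum, a laugh or a cough is hard for them to fake convincingly in real time.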

Gardner offered another, more hopeful note: the experience people are gaining now with AI programs like ChatGPT has made them quicker to question what is real and what is fake, and to look more critically at what they see.

“I think the fact that people have one-on-one conversations with AI helps,” he said.

Source: Port Altele
