In a research report, Google warns about the impact of generative AI on cyberattacks. 2024 already promises to be a challenging year for cybersecurity.
Generative AI has many useful applications, but the technology is not without risks. In the Cybersecurity Forecast 2024, Google’s security experts look ahead to what awaits us. Unfortunately, the report does not leave us optimistic about 2024. Google ranks generative AI as one of the top emerging security risks.
The cybersecurity world has been concerned about the widespread adoption of generative AI tools for some time, but Google expects cybercriminals to start using them at scale only next year. Generative AI can serve attackers in several ways. Above all, the tools help them write grammatically correct emails. Language errors used to be one of the first indicators for distinguishing a fake email from a real one, but that is no longer the case.
With the help of generative AI, attackers can also target victims more precisely. All they need is the information from your publicly visible social media profiles, which they then feed to the AI. If scammers know where you live and work, they can approach you in a much more personal way. Tools like ChatGPT may have built-in safeguards against illegal use, but asking the chatbot for help with a quote or a standard email will rarely set off alarm bells.
Fragile trust
Google also fears that AI-generated content will increasingly find its way into public information. With clever prompting, generative AI can produce a fake news report, complete with footage. This makes gullible people even more vulnerable to misinformation, or it can have the opposite effect and make people suspicious of everything they see, hear and read. This fear is no fiction: Adobe has been criticized for offering AI-generated images of the war in Gaza in its stock image database, some of which have already been used by news media.
Attackers can already do a lot with the tools currently available, but Google assumes that they will also develop and offer AI applications on the Dark Web themselves. The barrier to abusing generative AI keeps getting lower. Just as we have “Ransomware-as-a-Service” today, “LLM-as-a-Service” offerings will appear in the cybercrime underground in 2024.
AI as defender
Luckily, not everything is negative. Generative AI could become a dangerous weapon, but the technology can also benefit defenders. A key application of AI is to synthesize large amounts of data and contextualize it into threat intelligence to provide actionable detections or other analysis.
AI can therefore help address the shortage of skilled workers in the industry. In our last discussion round, we asked other experts what they thought about this.
Cyberwar in space
2024 already promises to be an eventful cyber year. Google also anticipates its share of geopolitical digital tensions. In addition to the military conflicts in Ukraine and Gaza, the numerous elections in Western countries and even the Olympic Games will provide a breeding ground for state-sponsored attacks and hacktivism.
Google points to the “big four”: China, Russia, North Korea and Iran. And cyberattacks need not stay limited to Earth: Google believes that space infrastructure such as satellites will become an increasingly important target for such attacks.