Most artificial “intelligence” (AI) publications dealing with security and attacks focus on the study of attacks on machine learning algorithms and defenses against such attacks. For example, traditional malicious attacks on AI systems occur when an attacker manipulates input data to trick machine learning algorithms, resulting in misclassification.
Nonetheless, numerous articles also explore AI as a potential weapon against information systems—potentially enabling faster, larger, and more widespread attacks—and as a tool to improve existing attack vectors.
In this article we look at the use of AI to facilitate attacks on computer systems. In particular, we describe how AI can change or is already changing different attack vectors.
Pessimistic predictions
In 2018, Brundage et al. noted that the increasing use of AI would bring three changes to the threat landscape:
Expansion of existing threats: AI could reduce the cost of attacks by requiring fewer personnel, while also reaching a wider range of potential targets.
Introducing new threats: AI systems could perform tasks that would normally be impossible for a human.
Changing the typical nature of threats: Attacks enabled by the use of AI can be more effective, more targeted and harder to attribute.
These predictions are supported by a recent report from the UK’s National Cyber Security Centre (NCSC), which predicts an increase in the number and effectiveness of AI-based cybersecurity threats.
For example, easy access to LLMs could allow adversaries to circumvent their own resource, capability, and/or knowledge limitations. In addition, the uncontrolled use of AI applications in internal projects or by less vigilant employees can create new attack surfaces and lead to the loss of personal data, intellectual property or confidential information.
Phishing and social engineering
As early as 1966, ELIZA, one of the first conversational agents, showed that people could be deceived by machines. Natural language processing is an AI application in which plain text is the data source from which models are extracted. It is successfully used for many applications, such as detecting unwanted emails, but also for bypassing spam filters.
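To make this dual use concrete, the minimal sketch below shows the kind of text classification that underpins spam detection: a bag-of-words model trained on a handful of invented example emails (a real filter would use far larger corpora and richer features). The same modelling machinery, turned around, is what helps attackers craft text that slips past such filters.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Minimal sketch of text classification for spam filtering.
# The training emails are invented and far too few for a real filter.
emails = [
    "win a free prize now, click here",
    "limited offer, claim your reward today",
    "meeting moved to 3pm, see agenda attached",
    "please review the budget figures before friday",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(features, labels)

test = vectorizer.transform(["claim your free reward now"])
print(classifier.predict(test))  # expected: [1] (spam)
```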
Phishing is particularly suited to this latter approach because text models can be used to identify topics of interest to the target and generate sentences to which the target might respond. In “Weaponizing data science for social engineering,” for example, J. Seymour and P. Tully use a Markov model and a recurrent neural network to show that it is possible to automatically generate messages for a phishing campaign on Twitter: the tool learns to predict the next word from the preceding context in the target’s posting history. Each message is therefore tailored to a specific target, increasing the accuracy of the attack.
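The sketch below illustrates the core of that idea with a first-order Markov model only (Seymour and Tully’s tool also used a recurrent neural network); the “posts” are invented placeholders standing in for a target’s Twitter history.

```python
import random
from collections import defaultdict

# Learn next-word transitions from a target's own posts, then sample
# text that mimics their interests. The posts below are invented.
posts = [
    "looking forward to the security conference next week",
    "the security team shipped a new patch for the web portal",
    "next week we review the web portal logs together",
]

# First-order Markov model: word -> list of observed next words.
transitions = defaultdict(list)
for post in posts:
    words = post.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(seed_word: str, length: int = 8) -> str:
    """Sample a short message starting from seed_word."""
    word, output = seed_word, [seed_word]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:  # dead end: no observed successor
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("security"))  # e.g. "security conference next week we review ..."
```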
Given the ability of LLMs to better “understand” context and to mimic human text more closely (sometimes even with fewer errors), we can already see such tools being used to generate plausible emails in the writing tone of colleagues, friends and family, or of popular e-commerce sites, possibly based on information gathered from social media.
Worse still, it is now possible to use ChatGPT to generate not only phishing emails but also the associated website without any security knowledge. This is even more concerning considering that 94% of malware discovered is still sent via email.
Another example of using AI to facilitate phishing attacks is DeepPhish. This software creates new synthetic phishing URLs by learning models of the most effective URLs used in past attacks. These addresses can then be used in phishing emails or other channels, for example in misleading advertising. Shortly after launching Bing Chat, Microsoft added the ability to insert ads into conversations. Unfortunately, ads carry inherent risk and can trick users into downloading software, visiting malicious websites, and installing malware directly from a Bing Chat conversation.
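The same n-gram idea, applied at the character level, gives a feel for how synthetic phishing URLs can be produced. This is only a sketch of the concept (DeepPhish itself reportedly uses a recurrent network), and the training URLs are invented examples, not real attack data.

```python
import random
from collections import defaultdict

# Learn character transitions from previously "effective" phishing URLs
# and sample new, similar-looking ones. The seed URLs are invented.
seed_urls = [
    "secure-login-update.example.com/account/verify",
    "account-verify-now.example.net/secure/login",
    "login-secure-check.example.org/update/account",
]

ORDER = 3  # characters of context used to predict the next character
model = defaultdict(list)
for url in seed_urls:
    padded = "^" * ORDER + url + "$"
    for i in range(len(padded) - ORDER):
        model[padded[i:i + ORDER]].append(padded[i + ORDER])

def sample_url(max_len: int = 60) -> str:
    context, out = "^" * ORDER, []
    for _ in range(max_len):
        nxt = random.choice(model[context]) if context in model else "$"
        if nxt == "$":  # end-of-URL marker
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

print(sample_url())
```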
Automated hacking
AI makes it possible to carry out attacks at machine speed. DeepHack, for example, is a software agent made up of a few hundred lines of Python that uses a neural network and learns to break into web applications through trial and error. It learns to exploit different types of vulnerabilities, which could open the door to a variety of new automated hacking tools.
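As a rough illustration of what “learning through trial and error” means here, the sketch below uses a simple epsilon-greedy strategy against a toy, locally simulated target (DeepHack itself uses a neural network against real web applications). The candidate probes are textbook strings and nothing is sent over a network.

```python
import random

# An agent repeatedly tries candidate inputs, observes whether the
# response changes, and reinforces inputs that "work".
CANDIDATES = ["' OR '1'='1", "admin", "%00", "../../etc/passwd", "<script>"]

def toy_target(payload: str) -> bool:
    """Stand-in for an HTTP request; 'vulnerable' only to one probe."""
    return "1'='1" in payload

scores = {c: 0.0 for c in CANDIDATES}
counts = {c: 0 for c in CANDIDATES}

for step in range(200):
    # Epsilon-greedy: mostly exploit the best-scoring probe, sometimes explore.
    if random.random() < 0.2:
        probe = random.choice(CANDIDATES)
    else:
        probe = max(scores, key=scores.get)
    reward = 1.0 if toy_target(probe) else 0.0
    counts[probe] += 1
    # Incremental average of observed rewards for this probe.
    scores[probe] += (reward - scores[probe]) / counts[probe]

print(max(scores, key=scores.get))  # the probe the agent learned to prefer
```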
DeepLocker goes one step further by hiding its malicious intentions and activating itself only on specific targets. To determine whether the machine running its code is a target, DeepLocker uses a complex artificial neural network instead of a simple list of rules. This prevents tools that statically or dynamically analyze the software from detecting the presence of malicious code. DeepLocker also uses another neural network to generate the key used to encrypt and decrypt the malicious part of its code, making it even harder to detect.
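The essential trick, sometimes called environmental keying, can be sketched without any neural network: derive the decryption key from attributes of the victim environment, so the payload stays opaque everywhere except on the intended target. The attribute string and the harmless “payload” below are assumptions for illustration only; DeepLocker derives its key from the output of its target-recognition network, and the XOR cipher here is a deliberate simplification.

```python
import hashlib
import socket
import getpass

def derive_key(attributes: str) -> bytes:
    """Derive a 32-byte key from target-environment attributes."""
    return hashlib.sha256(attributes.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Attacker side": encrypt a harmless payload with the expected target's attributes.
expected_target = "target-hostname|target-user"  # illustrative placeholder
ciphertext = xor_bytes(b"print('hello from the payload')",
                       derive_key(expected_target))

# "Victim side": re-derive the key from the local environment. The payload
# only decrypts to something meaningful on a machine whose attributes match,
# so static analysis of the code alone reveals nothing useful.
local_attributes = f"{socket.gethostname()}|{getpass.getuser()}"
plaintext = xor_bytes(ciphertext, derive_key(local_attributes))
print(plaintext if plaintext.startswith(b"print") else b"<not the target>")
```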
Certain hacking operations could be simplified and accelerated using generative models. For example, malicious parties could use tools like PentestGPT. This tool can help manage various tasks in a penetration testing process, such as using certain tools (especially commands with complex options, which are often difficult for a human) and suggesting the next steps to follow.
According to the authors, this tool can even provide “intuition” about what to do in a particular intrusion scenario. However, it does not give effective recommendations for completing tasks autonomously, and it is unable to maintain a coherent understanding of the overall testing scenario. Fang et al., however, have shown that agents based on LLMs such as ChatGPT can roam the web autonomously and break into vulnerable web applications unattended.
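A PentestGPT-style assistant can be approximated in a few lines by sending the tester’s current findings to an LLM and asking for the next step. The sketch below assumes the OpenAI Python SDK, a configured API key and a placeholder model name; it is not PentestGPT’s actual implementation.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key are available

client = OpenAI()

# Current findings from an authorised test; invented for illustration.
findings = (
    "nmap shows ports 22 and 8080 open; "
    "the web app on 8080 returns an outdated Tomcat banner."
)

# Ask the model to suggest the single most useful next step.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You assist an authorised penetration test. "
                    "Suggest the single most useful next step."},
        {"role": "user", "content": findings},
    ],
)
print(response.choices[0].message.content)
```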
Finally, generative AI tools trained on sufficiently large vulnerability databases could also be used to automate code analysis to identify exploitable vulnerabilities, but the cost of building such models is high.
Payload and malicious code generation
In a cyberattack, the payload is the part of the attack that causes the damage (e.g. file deletion). It could be in a virus or worm, in an attachment, or in a query sent to a SQL database. According to Gupta et al., payloads can be generated using a generative AI tool, sometimes in such a way that they are not recognized by a web application firewall (WAF).
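To see why a WAF’s signatures can be sidestepped, the sketch below applies a few classic, hand-coded rewrites to a textbook probe string; a generative model automates and diversifies exactly this kind of mutation. Nothing is sent to any system.

```python
import random

# The same textbook SQL-injection probe rewritten in equivalent forms that
# a naive signature may no longer match.
BASE_PROBE = "UNION SELECT username, password FROM users"

def random_case(s: str) -> str:
    return "".join(c.upper() if random.random() < 0.5 else c.lower() for c in s)

def inline_comments(s: str) -> str:
    # MySQL treats /**/ as whitespace, so keywords can be split visually.
    return s.replace(" ", "/**/")

def url_encode_spaces(s: str) -> str:
    return s.replace(" ", "%20")

for mutate in (random_case, inline_comments, url_encode_spaces):
    print(mutate(BASE_PROBE))
```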
A generative AI tool can also be used to write malware or ransomware: Gupta et al. conducted several tests with ChatGPT, which they notably convinced to provide sample code for various malware such as NotPetya, REvil, Ryuk and WannaCry. The results are not directly usable: they only provide a high-level code structure that would be fairly obvious to anyone who has ever programmed, but they could improve significantly in the years to come. Similar tests have been carried out, with similar results, for viruses exploiting vulnerabilities such as Meltdown, RowHammer and Spectre.
However, Hutchins has serious doubts about the possibility of generating malicious software using AI, and in particular using tools like ChatGPT, which are certainly not capable of creating fully functional software and can, at best, produce small, difficult-to-assemble building blocks. He also points out that the code generated by these AIs already exists on the internet.
Attacks on physical systems
Finally, assuming that physical systems (e.g. a cooling system) are less secure than the IT infrastructure and comparatively easier to exploit, it is possible to use malware to attack an IT infrastructure indirectly through the physical system, with the malicious actions disguised as random failures (e.g. simulated overheating resulting in a genuine emergency shutdown). This has been demonstrated by Chung et al., whose tool automatically learns attack strategies based on measurements collected from the physical system.
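A toy simulation conveys the idea (it is not Chung et al.’s tool): if an attacker can bias a temperature reading by amounts that stay within normal sensor noise, a naive controller can be steered into an emergency shutdown that looks like an ordinary failure. All values below are invented.

```python
import random

# A cooling loop with bang-bang control and an independent safety trip.
# The attacker slowly under-reports the temperature, so the controller
# keeps cooling off while the real temperature creeps up to the trip limit.
SETPOINT, TRIP_LIMIT = 60.0, 70.0
actual = SETPOINT

for minute in range(600):
    spoof = -0.03 * minute                          # slowly growing under-report
    reported = actual + random.gauss(0, 0.3) + spoof
    cooling = 1.0 if reported > SETPOINT else 0.0   # naive bang-bang control
    actual += 0.8 - cooling                         # constant heat load vs cooling
    if actual > TRIP_LIMIT:                         # hardware safety trip
        print(f"emergency shutdown at minute {minute}: {actual:.1f} C")
        break
```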
Analysis of cyber attacks using AI
To enable security engineers to effectively study the classification of AI-based threats and their impact, and to better understand attackers’ strategies, Nektaria et al. propose a framework for analyzing AI-based cyberattacks. It is based on Lockheed Martin’s existing and widely used “Cyber Kill Chain” framework and consists of three layers (a minimal illustration of the first layer follows the list):
Attack phases and objectives: This first layer describes when an attacker can achieve their malicious goals over the lifecycle of the cyberattack. It captures the attacker’s intent and the type of AI technique used to carry out the malicious actions at each phase of the cyberattack lifecycle.
Impact and classification of malicious AI: This second layer is a classification based on the impact of the malicious use of AI techniques, showing the potential impact at each phase of the attack.
Classification of defense methods: Defending against AI-based cyberattacks cannot be accomplished with a simple solution or a single tool. To counter the “intelligence” of these new methods, a defense-in-depth approach is required across the entire cyberattack lifecycle.
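To make the first layer concrete, here is the minimal illustration announced above: a mapping from a few kill-chain phases to the AI-assisted techniques mentioned in this article. The phase names follow Lockheed Martin’s model; the groupings are our own illustrative reading, not the authors’ exact taxonomy.

```python
# Illustrative mapping of kill-chain phases to AI-assisted techniques
# discussed in this article (not the framework's official taxonomy).
ai_kill_chain = {
    "reconnaissance": ["profiling targets from social media"],
    "weaponization": ["LLM-generated malware building blocks (Gupta et al.)"],
    "delivery": ["LLM-written phishing emails", "DeepPhish-style synthetic URLs"],
    "exploitation": ["DeepHack-style trial-and-error break-ins",
                     "PentestGPT-assisted next steps"],
    "actions on objectives": ["DeepLocker-style environmentally keyed payloads"],
}

for phase, techniques in ai_kill_chain.items():
    print(f"{phase}: {', '.join(techniques)}")
```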
Conclusion
The above examples make AI appear primarily as a new “productivity tool” for already well-motivated attackers (professional or otherwise). The biggest threat AI could pose to security would be the large-scale discovery of entirely new classes of attacks. However, there is no evidence that such discovery is more likely with AI than with human actors.
Many questions remain about how to prevent and mitigate these complex threats, but a good threat analysis with an appropriate framework is a good starting point. In addition, we believe that an effective way to combat AI-powered adversaries will be to leverage AI itself, in order to remain competitive in terms of reach, speed and scale. As we will see in a final article on this topic, AI could actually help automate cyber defense tasks such as vulnerability assessment, intrusion detection, incident response, and threat intelligence processing.
This is a post by Fabien AP Petitcolas, IT security specialist at Smals Research. This article was written on my own behalf and does not represent any position on behalf of Smals.