AI: ChatGPT can be used to generate malicious code

  • January 17, 2023

Using artificial intelligence, criminals can easily create malicious code.

ChatGPT is one of the most prominent artificial intelligence tools at the moment because of how easily it creates text from a basic prompt. That same simplicity has caught the attention of cybersecurity experts, who note that the program can just as easily generate malicious code.

Check Point Research led the investigation, using ChatGPT and Codex, OpenAI's tools, to create malicious emails, scripts, and complete infection chains.

They did so after observing that attack code created with artificial intelligence is already being shared on dark web forums.

“ChatGPT has the potential to significantly change the cyber threat landscape. Now anyone with minimal resources and zero coding knowledge can use it easily,” said Manuel Rodríguez, the company’s security engineering manager for Latin America.



AI to create cyberattacks

The researchers set out to create code that would let them remotely access other computers, and they succeeded.

Using both artificial intelligence platforms, they created a phishing email carrying an Excel document with malicious code capable of downloading a reverse shell, an attack technique that opens a remote connection back to the attacker.

With that code they were able to impersonate a hosting company and embed malicious VBA, Microsoft's own programming language for its spreadsheet application, in an Excel document.

Thanks to the AI-generated code, they were also able to launch a reverse shell on a Windows machine, connect it to a specific IP address, and remotely run a full port scan of an external computer.

Taken together, this amounts to a complete attack kit for cybercrime, available in a free tool, through a process that only needs the right instructions to produce results. That will contribute to the proliferation of cyberattacks, which already grew by 28% in 2022.

The situation underscores the power of artificial intelligence in tools such as ChatGPT and Codex, which have also been put to positive uses but which, in the wrong hands, can do a great deal of damage.

“It’s easy to generate malicious emails and code. I believe these AI technologies represent another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities,” Rodríguez warns.



ChatGPT's training data could run out

Research by Epoch AI, an organization that studies the development of artificial intelligence, estimates that 2026 is the last year the current stock of high-quality data that these technologies draw on to generate content is expected to last.

That sets off alarms for platforms such as ChatGPT, DALL-E 2, and Midjourney, which combine this information with machine learning on text to create their content.

The information for these datasets is gathered publicly and at scale so that the platform learns properly. People are also involved in the process, since manually “cleaning” the data is an important filter for the models to respond appropriately to user requests.

The researchers note that this is a slow and expensive process, and although tools such as artificial intelligence itself could be used to review the models, doing so carries a high level of risk, which can further complicate the process.

Source: Infobae
