OpenAI confirms use of ChatGPT to create malware

  • October 14, 2024

The use of ChatGPT to create malware has been condemned since the chatbot's launch as one of the malicious activities that abuse its power. This has been well known in security circles, but OpenAI's recent report is the first official confirmation that mainstream generative AI tools are being used to enhance offensive cyber operations.

To put this into context (if you follow us, you already know the story): the last decade has brought extraordinary advances in artificial intelligence, so much so that several groups of prominent experts have called for a moratorium on what they consider "a profound risk to society and humanity". But nothing stopped. The public launch of ChatGPT, then the largest neural network of its kind, was a turning point in the rollout of these technologies; the rest of the competitors have pushed ahead as well, and we have passed the point of no return.

ChatGPT to create malware

OpenAI says it has taken down dozens of malicious cyber operations that abused its chatbot to debug and develop malware, spread disinformation, evade detection by security systems, or carry out phishing attacks.

Proofpoint reported the first signs of this activity in April, suspecting that TA547 (aka "Scully Spider") was deploying an AI-written PowerShell loader whose final payload was the Rhadamanthys information stealer. Last month, researchers at HP Wolf reported a large-scale campaign against French users that used artificial intelligence tools to write the scripts in a multi-stage infection chain.

OpenAI's report confirms for the first time the misuse of its AI to create malware, describing cases of threat actors from various countries using it to improve the effectiveness of their operations. The cases described may not give cybercriminals fundamentally new ways to develop malware, but they are proof that generative AI tools can make attack operations more efficient, especially for actors with fewer resources and skills, helping them at every stage from planning to execution.

Despite the takedowns of malicious operations that OpenAI has publicized, it is not hard to see that the intertwined history of AI and malware will continue. Suspected cases of AI-assisted malware have already been detected both in actual attacks and in supporting roles, such as using these technologies to craft more convincing emails in preparation for phishing and/or ransomware campaigns, or the well-known network of fake sites created by an APT group, with AI-powered deepnude generators, to infect visitors with information-stealing malware.

Source: Muy Computer
