WormGPT, the “dark side” of ChatGPT
- July 18, 2023
WormGPT is a chatbot similar to ChatGPT, but designed specifically to make it easier for cybercriminals to complete their “tasks”. Access to the program is sold on Deep Web forums, and its existence confirms AI’s potential for malicious use.
Generative AI applications have reached the general public thanks to ChatGPT, Google Bard and others still to come, as AI seems destined to turn up everywhere. They remain unregulated, despite the demands of the world’s experts, in an escalation whose outcome no one can really predict. Or perhaps we can: one where machines become smarter than the humans who designed them and the science fiction of films like Terminator becomes reality.
The world of cybercrime is no stranger to the capabilities of artificial intelligence, and this development has attracted attention. Analyzed by SlashNext, WormGPT was built on the open-source GPT-J model with custom modules. It resembles ChatGPT, but is more powerful and easier to use for generating malicious code for hacking purposes.
If generative AI is already used for deepfakes, fake news and spam, WormGPT goes further: it can write malware in Python and offers concrete help to cybercriminals. It has a number of features, including unlimited character support, chat memory retention and code formatting options. Just as ChatGPT is constantly learning from its neural network (the largest of its kind, they say), WormGPT was trained on a wide range of data sources, with a particular focus on malware-related material.
A specific use case shows the potential of generative AI to refine an email for use in a phishing or business email compromise (BEC) attack. Cybercriminals can use such technology to automate the creation of highly persuasive fake emails tailored to the recipient, increasing the chances of an attack succeeding.
In the forums where this development is advertised, users also share ways to manipulate interfaces like ChatGPT into generating results that could include the disclosure of confidential information, the production of inappropriate content or the execution of malicious code.
All this is very dangerous. It empowers sophisticated cybercriminals, but it also empowers less specialized ones, since custom modules are sold for any task. SlashNext says the proliferation of these practices underscores the growing challenge of protecting AI from criminals, and compounds the problems facing the cybersecurity ecosystem given the increasing complexity and adaptability of these activities in a world shaped by AI.
Source: Muy Computer