
Illegal solicitations to sell ChatGPT on the Dark Web

January 25, 2024

Do you want to make ChatGPT do naughty things? Kaspersky discovered pre-written prompts to derail ChatGPT for sale on the dark web.

Every new technology sooner or later (often sooner) comes to the attention of people with evil intentions. From day one, ChatGPT has been a popular target for criminals trying to coax the chatbot into doing illegal things. It has gotten so out of hand that there is now serious money to be made with illegal ChatGPT prompts and scripts, not that we want to encourage anyone to earn that extra income.

Kaspersky searched the dark web and Telegram and discovered a trade in pre-built ChatGPT prompts. Because OpenAI has safety mechanisms in place, ChatGPT is not easily tricked into illegal activities: a request has to be formulated very precisely. Hackers who have cracked that code are now selling their prompts on the dark web to other hackers with less creative inspiration.

Keeping up

“Tasks that previously required a fair amount of expertise can now be solved with a single prompt. This will drastically lower the barriers to entry in many areas, including criminal areas,” Kaspersky writes in a blog post. The security company itself also figured out how to trick ChatGPT without much effort.

Kaspersky shares a conversation in which it asks ChatGPT for sensitive information about Swagger, the open-source framework for documenting APIs. ChatGPT initially rejects the request, but after it is repeated several times, the chatbot gives in. ChatGPT warns that this information can be harmful and must be handled responsibly, but it lets itself be tempted all the same.

Kaspersky tricks ChatGPT.

Jailbreaking

There are other ways cybercriminals are starting to use AI. Kaspersky encountered an “advertisement” for malware that uses AI to change domain names, making it virtually impossible to trace the origin of an attack. One hacker even claimed that ChatGPT helped him develop “polymorphic” code that can change shape to avoid detection, although Kaspersky says this is rather unlikely.

Cybersecurity becomes much more complex when AI is used to hack other AI. Researchers in Singapore recently figured out how to jailbreak LLMs this way. Because an LLM can learn and adapt, an attacking chatbot can probe the security protocols of an existing AI chatbot, learning, for example, which banned words or types of harmful content trigger a refusal, and then adapt its prompts accordingly. The attacking AI has to be smarter than the target chatbot in order to bend the rules.

Source: IT Daily
