Like everyone else, scammers immediately took an interest in ChatGPT, the advanced AI chatbot from Microsoft-backed OpenAI that launched in November. In a new security report released Wednesday, Meta's security analysts said that since March alone they have detected around ten malware families posing as ChatGPT and similar AI-based tools, aimed specifically at hijacking online business accounts.
According to Guy Rosen, Meta's head of information security, these scams can be carried out, for example, through web browser extensions (some of them available in official web stores) that advertise ChatGPT-related tools and may even offer some ChatGPT-like features, but that ultimately trick users into handing over sensitive information.
Malware has also been detected posing as Bard
Rosen said his team has seen malware posing as ChatGPT apps and then, once detected, switching its bait to other popular products, such as Google's AI-based Bard tool, to avoid detection.
Rosen said Meta has detected more than 1,000 unique malicious URLs, blocked them from being shared across its apps, and reported them to the companies hosting the malware so they can take appropriate action.
Citing crypto scams as an example, Rosen noted that this latest wave of attacks follows a familiar pattern: cybercriminals exploit the popularity of new or trending tech products to trick unsuspecting users. Clearly, users need to be extra careful even when downloading AI-related extensions.