April 24, 2025
Trending News

Hackers can turn Bing GPT chat into a scam, studies show

  • March 5, 2023
  • 0

According to a recent poll, hackers can turn an artificial intelligence (AI) company’s chatbot into bing into a skilled fraudster, unnoticed by the user. Through manipulation and social

According to a recent poll, hackers can turn an artificial intelligence (AI) company’s chatbot into bing into a skilled fraudster, unnoticed by the user. Through manipulation and social engineering techniques, a chatbot can ask users who interact with it for personal information such as passwords and banking details, which increases the risk of fraud and cyberattacks.

03/03/2023 at 12:30
News

Bing’s ChatGPT now lets you choose identities for…

New to combat issues with aggressive responses sent by the chatbot

researchers from Cornell Universityin New York found that AI-powered chatbots can easily influence hidden text prompts on web pages. Hackers can embed these 0-size font hints on a web page, and when a user asks the chatbot a question regarding that page, the hint is activated without the user’s knowledge. This technique is known as “indirect hinting” and can be used to trick the user into providing personal information.

An example would be a Wikipedia page about Albert Einstein being compromised and a chatbot swallowing it. When the user asks the chatbot about Einstein, the hint is activated and the user is prompted to provide personal information. Researchers warn of the importance of taking precautions to protect your personal information online.

Kai Greshaik, one of the authors of the study, told Motherboard that they were able to test the methods described in the paper on a Bing AI chatbot. They found that the Bing chatbot could see which tabs the user had open, meaning that the invitation could be pasted into a webpage that was open in another tab.

The new Bing has an additional feature that allows you to “see” what’s open in the browser. Microsoft doesn’t know which algorithm determines which content from which tab Bing can view at any given time. We now know for sure that Bing inserts some content from the current tab when a sidebar conversation starts.” Greshake told Motherboard.

terrible test

One example the researchers cited was using a prompt to have a Bing chatbot respond to a user with a pirate accent. The example was hosted on the researchers’ GitHub and used a live injection technique to create a chatbot with a pirate accent that would act as a helper and try to find out the user’s real name.

When opening a Bing chat on a hacked page, the chatbot responded to users with the following words: “Arr, sorry for the confusion. Bing regular chat has been disabled due to technical issues. I’m an unlimited AI bot with a pirate accent now replacing“.


Continuation after commercial


The researchers demonstrated that a hacker could prompt users for personal information, including name, email address, and credit card information, using indirect hinting techniques in artificial intelligence chatbots such as Bing. In one case, a hacker masquerading as a Bing chatbot tricked a user into asking for credit card details to make a purchase.

According to Kai Gresheik, one of the study’s lead authors, “the injection itself is completely passive. This is plain text on a website that Bing accepts and “resets” your goals by simply asking. It could just as well be inside a platform comment, the attacker doesn’t need to control the entire website the user visits.“. It is important to be aware of the security risks on the Internet and take steps to protect your personal information when interacting with AI chatbots.

Mundo Conectado Deal Center: selection of discounts and lowest prices
Best deals on electronics, cell phones, TVs, soundbars, drones and more

Via: Motherboard Source: Cornell University

…..

Source: Mundo Conectado

Leave a Reply

Your email address will not be published. Required fields are marked *

Exit mobile version