
Artificial intelligence chatbots could be life-threatening

November 2, 2024

As his relationship with his artificial intelligence companion became increasingly intense, the 14-year-old boy began to distance himself from his family and friends and struggled at school. In a lawsuit filed against Character.AI by the boy’s mother, chat logs show intimate and often highly sexual conversations between the boy, Sewell, and Dany, a chatbot modeled after the Game of Thrones character Daenerys Targaryen.


They discussed crime and suicide, and the chatbot used phrases like “that’s no reason not to go all the way.” This is not the first known case of a vulnerable person dying by suicide after interacting with a chatbot persona.

Screenshot of the chat exchange between Sewell and the Dany chatbot. (Megan Garcia v. Character.AI lawsuit)

Last year, a Belgian man took his own life in a similar incident involving Character.AI’s main competitor, Chai AI. At the time, that company told the media it was “making every effort to minimize harm.”

In a statement to CNN, Character.AI said it “takes the safety of our users very seriously” and has introduced “numerous new safety measures” over the past six months.

A separate statement on the company’s website outlines additional safety measures for users under the age of 18 (under the current terms of service, the minimum age is 16 for European Union citizens and 13 elsewhere in the world).

But these tragedies vividly illustrate the dangers of rapidly evolving and widely available artificial intelligence systems that anyone can communicate and interact with. We urgently need regulations to protect people from potentially dangerous, irresponsibly designed AI systems.

How can we regulate artificial intelligence?

The Australian government is developing mandatory safeguards for high-risk AI systems. “Guardrails”, the buzzword in AI governance circles, refers to the processes involved in designing, developing, and deploying AI systems.

These include measures such as data management, risk management, testing, documentation and human oversight. One of the decisions the Australian government must make is how to determine which systems are “high risk” and therefore subject to these guardrails.

The government is also considering whether guardrails should apply to all “general-purpose models”.

General-purpose models are the underlying engine of AI chatbots like Dany: AI algorithms that can generate text, images, video, and music based on user prompts and can be adapted for use in different contexts.

In the European Union’s landmark AI Act, high-risk systems are identified using a list that regulators are empowered to update over time. An alternative is a principles-based approach, where high risk is determined case by case. That determination would depend on multiple factors, including the risk of rights being adversely affected, risks to physical or mental health, risks of legal consequences, and the severity and extent of those risks.

Chatbots must be “high-risk” AI

In Europe, companion AI systems such as Character.AI and Chai are not designated as high-risk. Essentially, their providers only need to inform users that they are interacting with an AI system. It has become clear, however, that companion chatbots are not low risk. Many users of these applications are children and teenagers, and some systems are even marketed to people who are lonely or have a mental illness.

Chatbots can produce unpredictable, inappropriate and manipulative content, and they imitate toxic relationships all too easily. Transparency, meaning labeling content as AI-generated, is not enough to manage these risks.

Even when we know we are talking to a chatbot, we are psychologically primed to attribute human characteristics to our conversation partner. The suicide deaths reported in the media may be just the tip of the iceberg: we have no way of knowing how many vulnerable people are in addictive, toxic or even dangerous relationships with chatbots.

Guardrails and an “off switch”

When Australia finally introduces mandatory safeguards for high-risk AI systems (which could happen as early as next year), these safeguards should apply to both companion chatbots and the general-purpose models on which they are built.

Measures such as risk management, testing and monitoring will be most effective when they address the root of the dangers posed by AI. Risks from chatbots are not purely technical risks with technical solutions.

In addition to the words a chatbot can use, the context of the product is also important.

In the case of Character.AI, the marketing promises to “empower” people, the interface mimics an ordinary text message exchange with a human, and the platform lets users choose from a set of pre-built characters, including some problematic personas.

Truly effective AI protections must include more than responsible processes like risk management and testing. They must also require thoughtful, humane design of interfaces, interactions, and relationships between AI systems and their human users.

But even then, guardrails may not be enough. Seemingly low-risk systems, such as companion chatbots, can still cause unanticipated harm.

Regulators should have the power to remove AI systems from the market if they cause harm or pose unacceptable risks. In other words, we don’t just need guardrails around high-risk AI. We also need an off switch.

If you’re concerned about this story or would like to talk to someone, see this list to find a 24-hour helpline in your country and get help.

Henry Fraser, Research Fellow in Law, Liability and Data Science, Queensland University of Technology. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Source: Port Altele
