
After all, are artificial intelligences that talk like humans conscious?


Recently, a Google engineer was removed from his job because he believed that an artificial intelligence had gained consciousness. But this raises the question: is it really possible?


06/24/2022 at 15:30

Blake Lemoine told the Washington Post: “I recognize a person when I talk to one. It doesn’t matter whether they have a brain in their head or billions of lines of code.” However, the linguists Kyle Mahowald and Anna A. Ivanova, in an article for The Conversation, counter that the strangeness and wonder come from the machine’s skill at learning to sound human, not from consciousness itself. According to them, Google’s artificial intelligence, called LaMDA (Language Model for Dialogue Applications), can indeed produce remarkably “human” sentences, but it is just the most sophisticated version of a technology that has existed since the 1950s, known as n-gram models.


Because this artificial intelligence learns with access to the endless data available on the Internet, it has a much larger and more complex “repertoire” of words and expressions to draw on. It can also learn relationships between words that are far apart, not just between neighboring words. But its function remains the same: to use statistics and probability to “know” which word is most likely when constructing a sentence in a given context.
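To make the idea concrete, here is a minimal sketch of the n-gram technique the linguists refer to: count which word follows each pair of words in a corpus, then pick the statistically most frequent continuation. This is only an illustration of the general principle, not LaMDA’s actual architecture; the tiny corpus and the function names are invented for the example.

```python
from collections import Counter, defaultdict

def build_trigram_model(text):
    """Map each pair of consecutive words to a Counter of the words that follow it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        model[(w1, w2)][w3] += 1
    return model

def most_likely_next(model, w1, w2):
    """Return the most frequent continuation of the pair, or None if unseen."""
    followers = model[(w1.lower(), w2.lower())]
    return followers.most_common(1)[0][0] if followers else None

corpus = ("the model predicts the next word . "
          "the model predicts the most likely word .")
model = build_trigram_model(corpus)
# "the" follows the pair ("model", "predicts") in both sentences,
# so it is the most probable next word.
print(most_likely_next(model, "model", "predicts"))
```

Modern large language models replace these raw counts with neural networks trained on vastly more text, which is what lets them capture the long-distance relationships mentioned above, but the underlying task is still predicting the next word.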

Despite these counterarguments, and although other researchers have also joined the skepticism following the Google engineer incident, the linguists say the debate over AI “consciousness” will continue. It may even sound comical, but they also stress that conversational models have no real feelings. Quoting another passage from The Conversation: “However, researchers have found that you can’t simply trust a language model to tell you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thinking.”



Via: Tilt UOL | Source: The Conversation

Source: Mundo Conectado
