Are artificial intelligences that talk like humans actually conscious?
July 14, 2022
A Google engineer was recently removed from his job for believing that an artificial intelligence had gained consciousness. But this raises the question: is it really possible?
06/24/2022 at 15:30
Blake Lemoine told the Washington Post: “I recognize a person when I talk to one. It doesn’t matter whether they have a brain in their head or billions of lines of code.” However, linguists Kyle Mahowald and Anna A. Ivanova, in a publication for The Conversation, counter that the strangeness and wonder come from the machine’s skill at learning to sound human, not from consciousness itself. According to them, Google’s artificial intelligence, called LaMDA (Language Model for Dialogue Applications), can indeed produce extremely “human” sentences, but it is just the most sophisticated version of a technology that has been around since the 1950s, known as n-gram models.
“People are so accustomed to assuming that fluent language comes from thought and human feeling that evidence to the contrary can be difficult to grasp. […] The human brain is wired to infer the intent behind words. Every time you take part in a conversation, your mind automatically builds a mental model of your interlocutor.” – Excerpt from the original publication in The Conversation.
Because this artificial intelligence learns from the endless data available on the Internet, it has a much larger and more complex repertoire of words and expressions to draw on. It can also learn relationships between words that are far apart, not just neighboring words. But its function remains the same: to use statistics and probability to “know” which word is the most “correct” when constructing a sentence in a given context.
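The statistical next-word idea described above can be sketched with a toy bigram (2-gram) model. This is only an illustration of the principle, not Google's actual system; the tiny corpus and the function names `train_bigram` and `most_likely_next` are invented for this example.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Return the statistically most frequent continuation of `word`, or None."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus: "the" is followed by "cat" twice, so "cat" wins on raw frequency.
model = train_bigram("the cat sat on the mat the cat ate the fish")
print(most_likely_next(model, "the"))  # prints "cat"
```

The model has no notion of meaning or intent: it simply picks the continuation with the highest count, which is exactly the kind of fluent-but-mindless prediction the linguists describe.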
Despite these counterarguments, and even though other researchers have joined the skepticism following the Google engineer incident, the linguists say the debate about AI “consciousness” will continue. It may even sound comical, but they also stress that conversational models have no real feelings. In another passage from The Conversation: “However, researchers have found that you cannot simply trust a language model to tell you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.”
I’m Maurice Knox, a professional news writer with a focus on science. I work for Div Bracket. My articles cover everything from the latest scientific breakthroughs to advances in technology and medicine. I have a passion for understanding the world around us and helping people stay informed about important developments in science and beyond.