
Experts question the ‘birth of mind’ of Google’s algorithm

  • June 16, 2022

Many experts in the field of artificial intelligence have questioned Google engineer Blake Lemoine’s claims about the existence of “intelligence” in the LaMDA algorithm, Bloomberg reports.

According to Emily Bender, a professor of computational linguistics at the University of Washington, large corporations’ adoption of such language could allow them to deflect responsibility for decisions made by their algorithms.

“The problem is that the more this technology is sold as AI, let alone anything smart, the more people are willing to trust AI systems,” she said.

As an example, Bender cited student recruitment and evaluation programs, which can contain biases depending on the dataset used to train the algorithm. By ascribing intelligence to such systems, she says, artificial intelligence developers can distance themselves from direct liability for any shortcomings or biases in the software:

“The firm might say, ‘Oh, the program made a mistake.’ But no, it’s your company that created the software. You are responsible for the error. And the discourse about ‘mind’ is a harmful practice.”

Jana Eggers, CEO of AI startup Nara Logics, noted that LaMDA can mimic perceptions or emotions from the training data given to it:

“[The algorithm] has been specially designed to look like it understands.”

According to UC Santa Cruz researcher Max Kreminski, the model’s architecture “lacks some fundamental capabilities of human consciousness.” He added that if LaMDA is like other large language models, it cannot learn new information from its interactions with users, because “the weights of the deployed neural network are frozen.”

The researcher also believes that the algorithm cannot “think” in the background.

Georgia Institute of Technology professor Mark Riedl said the AI is not aware of the impact its responses or behavior may have on society, which he believes makes the technology vulnerable.

“The AI system may not be toxic or biased, but it doesn’t understand that it may be inappropriate to talk about suicide or violence in some situations,” Riedl said.

Earlier, Lemoine said in an interview that he had found signs of a “mind” in the LaMDA artificial intelligence system. The company quickly denied the employee’s claims and placed him on paid leave.

Recall that Google introduced the LaMDA conversational language model in May 2021.

At its I/O conference in May 2022, the company announced the AI Test Kitchen app, built on LaMDA 2.

Source: ForkLog
