OpenAI CEO and other experts warn of ‘risk of human extinction’ from unchecked artificial intelligence
May 31, 2023
Regulation of artificial intelligence should be a “global priority,” says a group of 350 experts, engineers, executives and researchers in an open letter that ranks uncontrolled AI at the same level of risk to human survival as a global pandemic or nuclear war. No small claim.
Should we develop non-human minds that could outnumber us, outsmart us, render us obsolete, and replace us? This big question, which sounds like the plot of a Terminator movie, has already been answered by numerous groups of experts who have long been warning about the risks of artificial intelligence. A major debate is underway, and we are at a critical moment. The world must decide what to do with this technology before there is no going back and, at some point in the future — one that some researchers believe is too close — machines decide for us.
And that is not science fiction. The last decade has seen extraordinary progress in the field of artificial intelligence. Some applications, such as those in medical research, drug discovery, and the treatment of disease, offer the hope of longer and better lives. Others, such as robot soldiers and autonomous weapons, are deeply troubling and directly threaten human survival. That is without even mentioning the ethics and security of some developments, or areas as important as employment, which we have already discussed: when machines are able to do almost every kind of work, what will we humans do?
AI risks: a new warning
There are more and more voices calling for the regulation of these technologies. Last March, a group of 1,000 experts called for a six-month moratorium on the development of large artificial intelligence projects because of the “profound risks to society and humanity” they can pose without proper control and management. We have reached a point where the most advanced development teams are immersed in an unbridled race to build the most powerful digital minds, and it is not clear that they can control them.
Now comes another open letter, and it deserves attention because it is signed by some of the world’s leading AI experts. One of them is Geoffrey Hinton, dubbed the “godfather” of AI and a former Google employee who, along with other signatories, won the 2018 Turing Award — often described as the Nobel Prize of computer science — precisely for his work on AI. Other notable signatories are Demis Hassabis (Google DeepMind) and Dario Amodei (Anthropic), both in the elite of AI development.
The CEO of OpenAI also signed the letter. His company made global headlines with the chatbot ChatGPT, which may seem like a game but is backed by the largest neural network of its kind, constantly growing and evolving with every interaction of the millions of users who use it. The risks lie on the negative side: today disinformation and propaganda, tomorrow job losses of all kinds, and worse.
The letter does not mince words: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories state in a document published on the website of the non-profit Center for AI Safety. In a separate interview, Hinton said that we may be only a few years away from artificial intelligence surpassing the brain power of humans.
Necessary regulation
At least these warnings seem to be prompting governments to act. The European Union is working to regulate these technologies, and this month the most prominent signatories of the latest warning (Altman, Hassabis and Amodei) met with the President of the United States to discuss AI regulation.
OpenAI’s CEO also testified before the US Senate, warning that the technology’s risks are serious enough to justify government intervention.
It is clear that a strong legal framework is needed around everything related to artificial intelligence, along with full assurance that the effects of the biggest projects will be positive for humanity and that their risks will be manageable — because a Terminator-style reality may be closer than we think. That is what the world’s leading experts are saying.