A group of 1,000 people, including scientists, engineers, intellectuals, business leaders, politicians, and big names in the world of technology, has signed an open letter demanding a six-month suspension of development of the largest artificial intelligence projects, citing the “profound risks to society and humanity” that may arise without proper control and management.
The letter (“Pause Giant AI Experiments: An Open Letter”) was published by the non-profit Future of Life Institute. Its signatories include names well known to the general public, such as Steve Wozniak and Elon Musk, as well as many scientists and AI experts who understand exactly what is at stake: not only the level these technologies are reaching, but also pessimism about the uses people will put them to.
Artificial intelligence under debate
Advances in artificial intelligence have been huge in recent years and across many areas. Some are extremely promising, such as those in medical research, where advanced algorithms solve complex problems in the search for cures for diseases. Others are scary, very scary, such as those related to autonomous weapons and military robots. If you have seen sci-fi movies like Terminator, you already know what we’re talking about, and why technology and AI experts have signed previous open letters and petitions to the UN to ban their use altogether.

There are serious concerns about the ethics and safety of some developments, as well as more tangible ones such as those involving employment. Advances in artificial intelligence may mark a point of no return: when machines are able to do almost all kinds of jobs, the question is what will we humans do? Some predict that within a few decades, robots will be able to surpass humans at almost any task, with a huge impact on society.
The above are just some of the areas of concern related to the advancement of AI, but until recently its effects still seemed a bit distant. The mass arrival of chatbots like ChatGPT has shown the general public the enormous capability of technologies that are becoming smarter and faster, built on neural networks of impressive proportions that will not stop growing, fueled by the thousands of companies that want to get on board and the millions of users already using them.
Another open letter: is there still time?
The letter, with its 1,000 signatories, is consistent with earlier proposals. It does not ask that AI projects be abandoned, only suspended so there is time to regulate them: “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
The letter sets the bar at projects more capable than GPT-4, the latest version of the AI language model from OpenAI, the company behind ChatGPT. It warns plainly of “AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” they ask, making it clear that this should not be decided by big tech companies like OpenAI, Microsoft, or Google (to name a few): “Such decisions must not be delegated to unelected tech leaders.”

The letter proposes that AI labs and independent experts use this six-month pause to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”
Anticipating that the petition, like previous ones of a similar nature, may come to nothing, the signatories call for government intervention to enforce the moratorium if stakeholders refuse a temporary suspension of research; a solid legal framework around everything related to artificial intelligence; and confidence that the effects of the largest projects will be positive and their risks manageable.
A huge debate, no doubt. How do you see it? Do you think there is still time, or has the situation already gotten out of control?