1,150 famous names from the tech world, including Elon Musk and Steve Wozniak, publish an open letter calling to "pause research into artificial intelligence"
March 29, 2023
In the past year, the world of technology and the Internet has entered a new era. Chatbots that can converse about almost anything as if they were human, and visual tools that turn written text into strikingly realistic images, have emerged in the space of just one year. In many industries, these tools have already begun to change how work is done.
This rapid and uncontrolled progress in artificial intelligence has split experts into two camps. While some support the continued development of these technologies, others have begun to worry about the risks. Those concerns have now been expressed in an open letter addressed to AI developers, signed by names such as Elon Musk and Steve Wozniak.
“Stop the development of artificial intelligence systems”
It’s worth reading the entire letter to understand the concerns:
As extensive research has shown and top AI labs have acknowledged, AI systems with human-competitive intelligence can pose profound risks to society and humanity. As outlined in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Sadly, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks[3], and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and must increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states: "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
That is why we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least 6 months. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.
AI research and development should refocus on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic content and to track model outputs; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society before. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
Big names are among the signatories
Among the names that have signed the letter, which has now gathered 1,125 supporters from the industry, are:
Elon Musk (CEO of SpaceX, Tesla and Twitter)
Steve Wozniak (Co-Founder of Apple)
Evan Sharp (Co-Founder of Pinterest)
Emad Mostaque (CEO of Stability AI)
Zachary Kenton (Senior Research Scientist at Google DeepMind)
You can click on this link to read the letter and see the full list of signatories.
Alice Smith is a journalist and writer for Div Bracket.