
Superintelligence on the OpenAI radar

July 6, 2023


OpenAI will closely monitor the development from AI to superintelligence.

In a blog post, several of OpenAI’s top executives address the evolution of AI and AGI (Artificial General Intelligence) into what is known as “superintelligence”. They argue that this development requires a dedicated approach and lay out their ideas in the post.

The authors and their subject

The post was written by CEO Sam Altman, Chief Scientist Ilya Sutskever and President Greg Brockman. The same trio recently co-wrote a piece on the impact of artificial intelligence on employment.

Superintelligence is the hypothetical stage at which artificial intelligence evolves into a “being” far smarter than the most gifted humans, with abilities that go well beyond those of any person.

In any case, the authors expect AI to surpass human competence in many areas within a decade. This development can bring tremendous benefits, but we also need to be mindful of the risks, says the OpenAI trio. That cannot be handled reactively: the evolution of AI needs to be closely monitored, and superintelligence requires a separate, specific approach.

Three starting points

Altman, Sutskever and Brockman are aware that there are many ideas for steering this development in the right direction. They highlight three main ones.

The first is good coordination between the various developers of this technology. Influential governments could, for example, set up an umbrella project under which the relevant organizations work. Another option is a collective agreement among developers and companies to let AI capabilities grow only up to a certain level each year. The latter still places great responsibility on anyone who signs up to such an agreement.

Secondly, the authors draw a comparison with the International Atomic Energy Agency, an official organization with the authority to carry out inspections and audits. Such an agency could grant permission to exceed the annual growth level from the first idea, impose restrictions, carry out safety checks, and monitor energy consumption. An open question is whether such an organization would also address data protection issues.

This second idea is ambitious, and Altman and co. realize that. The groundwork would be laid by companies implementing certain elements of such an agency in their own working methods; entire nations could then follow. The main goal is an agency that tackles problems and issues globally, rather than always leaving them to individual governments.

Idea number three is short and sounds simple, but it is an ever-evolving effort: the engineering capability to make and keep superintelligence safe. According to the authors, this is an ongoing research question.

Caveats

Despite these ideas, it remains important to OpenAI that developers retain enough freedom to continue AI research below a certain capability threshold. At the same time, the authors place much of the responsibility on individual developers. Those concerns, combined with OpenAI’s understandable enthusiasm for artificial intelligence, make opinion pieces like this read as somewhat double-edged.

Source: IT Daily
