
OpenAI Board of Directors Gets ‘Anti-Sam Altman’ Power: Artificial Intelligence Studies Deemed Dangerous Can Be Rejected

  • December 19, 2023


OpenAI has been in the news lately with the resignation of Sam Altman, his subsequent return, and the reshuffling of its board of directors. One of the claims circulating about the company was that it had developed a powerful and dangerous artificial intelligence. While OpenAI was not obliged to comment on the matter, it has now established regulations that prioritize the safety of artificial intelligence.

Most notably, OpenAI has formed a team called the “safety advisory group.” This team sits above the organization’s technical teams and will provide guidance to them. In addition, the OpenAI board of directors has been given the right to veto artificial intelligence projects it considers dangerous.

A new “safety advisory group” was created and the board was given veto power


While changes like these happen from time to time in almost every company, this one is particularly significant given the recent events at OpenAI and the allegations surrounding them. After all, OpenAI is a leading institution in the field of artificial intelligence, and safety measures in this area carry great weight.

The new regulations, announced by OpenAI in a blog post, are called the “Preparedness Framework.” In November, two board members known to oppose aggressive growth, Ilya Sutskever and Helen Toner, were removed from the board. The new regulation also includes a guide for defining and analyzing “catastrophic” risks in artificial intelligence and deciding what actions to take.

Under the new internal arrangement, the safety systems team will manage models in production; tasks such as limiting systematic abuse of ChatGPT and the APIs will fall to this team. The preparedness team will be responsible for frontier models, focusing on identifying potential risks and taking precautions. The team called the superalignment team will develop theoretical guidelines for superintelligent models. Each team will evaluate the models it is responsible for in terms of “cybersecurity,” “persuasion,” “autonomy,” and “CBRN” (chemical, biological, radiological, and nuclear threats).

OpenAI has not yet shared any information on how these assessments will be carried out, but a cross-functional safety advisory group will generally oversee and help lead these efforts.

Source: Web Tekno
