OpenAI is establishing a new independent committee to oversee the safety of its AI models, with the power to delay new launches.
OpenAI announced in a blog post that it is converting its Safety and Security Committee (SSC) into an “independent board oversight committee” with the authority to halt the deployment of AI models that fail to meet safety standards. OpenAI describes the committee as independent, but that claim is questionable: its members also sit on the company’s full board of directors.
“Independent Committee”
There are many changes on the table at OpenAI. The company recently announced plans to restructure from a non-profit into a for-profit organization, and now, in a blog post, it has announced the conversion of its Safety and Security Committee (SSC) into an independent safety committee. The SSC has the authority to stop the launch of AI models if its safety criteria are not met.
For example, “the SSC reviewed the safety criteria used by OpenAI to assess OpenAI o1’s suitability for launch, as well as the results of OpenAI o1’s safety assessments,” according to OpenAI. The new independent committee will be led by Zico Kolter and will deliver “regular briefings on safety and security matters” to OpenAI’s board. Although OpenAI portrays the committee as independent, its members also serve on OpenAI’s full board of directors. CEO Sam Altman himself is not on the panel.