Researchers call for more transparency about AI risks
- June 5, 2024
Current and former employees of OpenAI and Google Deepmind are calling for more transparency in an open letter. The risks of AI should not be swept under the carpet.
The open letter was published on June 4 under the unambiguous title A Right to Warn about Advanced Artificial Intelligence. Thirteen researchers from the AI world signed the letter, most of whom have a history with OpenAI. Four scientists still working at the company signed anonymously. The thirteen also include current and former employees of Google Deepmind and Anthropic.
Although the letter does not mention OpenAI, Google, and the other labs by name, the scientists clearly direct their message at their current and former employers. They argue that companies developing AI systems hold a significant amount of data about the risks involved, but do not always publish it.
There is also no legal framework that obliges AI developers to disclose this information to the outside world. As a result, AI often remains a "black box". The European AI Act is trying to change this, but it will take several years before the law can make a noticeable difference. In the meantime, the companies' own employees are "among the few people who can hold them accountable to the public," the letter says.
The scientists also make several suggestions for achieving greater transparency in AI development. This starts with the technology companies themselves, which must foster a culture of open criticism. Employees who wish to blow the whistle must be able to do so anonymously and without fear of retaliation from their employer.
Recently, a former OpenAI board member, who is not among the letter's signatories, spoke publicly about her time at the company. She accused CEO Sam Altman of, among other things, lying and bullying. OpenAI is also said to have inserted a clause into employees' contracts to pressure them not to criticize the company, although it has since removed this clause.
OpenAI has meanwhile established a safety advisory board to oversee the development of new AI projects. Altman sits on this nine-member body, along with three board members and five engineers.
Source: IT Daily
As an experienced journalist and author, Mary has been reporting on the latest news and trends for over 5 years. With a passion for uncovering the stories behind the headlines, Mary has earned a reputation as a trusted voice in the world of journalism. Her writing style is insightful, engaging and thought-provoking, as she takes a deep dive into the most pressing issues of our time.