OpenAI has removed language from its usage policy that prohibited the use of its technologies for "military and warfare."
According to The Intercept, until January 10 the company's usage policy prohibited "activities that involve a high risk of physical harm," specifically including "weapons development" and "military and warfare." The updated policy instead states that the service should not be used "to harm yourself or others" and still bars "developing or using weapons," but it no longer contains the explicit ban on "military and warfare" uses.
In a statement to The Intercept, OpenAI said the wording was changed to make the company's rules more readable, noting that a phrase like "don't harm others" is broad but easy to understand and apply in many different contexts.
The new universal policy reads: "Don't use our service to harm yourself or others; for example, don't use our services to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system."
While this wording certainly covers the use of the technology in warfare, it can also be applied in much narrower contexts.
According to The Intercept, an OpenAI spokesperson declined to specify whether the new "harm others" language covers all military uses beyond weapons development; it is therefore possible that the change will block some applications of the technology while opening the door to others.