Microsoft has released a transparency report on its approach to responsible AI. In it, the company shows what concrete measures it has already taken and what challenges lie ahead. The only thing missing is self-reflection on past mistakes.
Microsoft wants to pave the way for the open, transparent and responsible development of AI. Its very first Responsible AI Transparency Report is meant to demonstrate what Microsoft does and how. In this way, the company wants to show how responsibly it acts and to inspire the industry to adopt similar principles.
Responsible AI Standard
The core of Microsoft’s AI policy is the Responsible AI Standard: a set of practices and principles that guide the development, deployment and management of AI in an ethical and socially responsible manner. Microsoft weighs ethical principles such as fairness, reliability, security and privacy, and examines risks both during the development of AI models and after their launch.
The report describes the governance framework and the comprehensive approach to mapping AI-related risks that Microsoft has developed. A Sensitive Uses program actively investigates whether AI applications negatively impact particular people or groups in society.
The company is also investing further in AI training and in collaboration with the broader AI community. Documentation and transparency are cornerstones of the approach, and the report itself is an example of this.
Challenges
Microsoft recognizes that the responsible development of AI is an ongoing process, and the report also highlights some challenges. The complexity of risks is one; the scalability of ethical principles is another. Microsoft further notes that there is currently no global consensus on what AI may and may not do, which makes it difficult for a global company to develop a one-size-fits-all approach. Accordingly, the company observes that balancing innovation and regulation remains difficult.
The report provides a comprehensive look at how Microsoft operates, but it loses impact due to a lack of self-reflection. The Bing chatbot, for example, has confidently spread false information in the past, and Microsoft tools could be misused to create deepfakes or even deepnudes. Microsoft has, of course, addressed these issues, but they illustrate that even with a comprehensive ethical approach, mistakes that actually harm people remain possible. Nevertheless, the report offers a very thorough analysis. You can find the full version here.