Microsoft’s AI research division has accidentally exposed around 38 TB of sensitive internal data via Azure Storage since July 2020. The leak occurred while the team was publishing open-source AI models on GitHub.
Since July 2020, Microsoft’s AI research division has exposed approximately 38 TB of sensitive internal data via Azure Storage. The exposure happened during work on open-source AI models published on GitHub: a team member accidentally shared a URL pointing to a misconfigured, and therefore unprotected, Azure Blob storage container.
Configuration error
The leak was discovered by the American security company Wiz. The startup found internal Teams conversations, backups of employee workstations and numerous secrets, including private keys and passwords.
The cause appears to have been an overly permissive SAS token (Shared Access Signature). This is an Azure feature that, ironically, is actually designed to control access to data. Instead of restricting access, this token opened the door to a huge amount of internal Microsoft data. Wiz researcher Hillai Ben-Sasson expressed his surprise online at how this could have happened.
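The SAS mechanism itself supports exactly the kind of restriction that was missing here: a token can be scoped to a single container, limited to read-only access and given a short lifetime. The following Python sketch (using the azure-storage-blob SDK; the account, key and container names are placeholders, not the actual configuration from the incident) shows how such a narrowly scoped token would be generated:

from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholder values -- illustrative only, not the storage account involved in the incident.
ACCOUNT_NAME = "examplestorage"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER_NAME = "public-ai-models"

# Issue a read-only SAS token for a single container, valid for seven days.
# The token grants no write, delete or list rights and expires automatically.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER_NAME,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

# The token is appended as a query string to the container URL when it is shared.
print(f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER_NAME}?{sas_token}")

According to Wiz’s write-up, the token that was shared did the opposite: it granted full-control permissions on the entire storage account and carried an expiry date decades in the future, which is what turned a convenient sharing link into a large-scale data exposure.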
Microsoft has since published a blog post through its MSRC team (Microsoft Security Response Center), in which the company emphasizes that no customer or user data was leaked. According to Microsoft, customers do not need to take any action.
Wiz notified the MSRC on June 22nd. Microsoft revoked the SAS token, and the leak is said to have been closed on June 24th. On July 7th, Microsoft replaced the token on GitHub with a new one.
We’re repeating ourselves, but this incident also highlights how important employee training is, even at tech icons like Microsoft. As this case shows, a relatively “small” oversight can have major consequences.