Accidental Exposure of 38TB of Data by Microsoft's AI Research Team
June 22, 2023
Microsoft's AI research team accidentally exposed 38TB of sensitive data while publishing open-source training material on GitHub. The exposure included secrets, private keys, passwords, and internal Microsoft Teams messages. The team shared the files via an Azure Shared Access Signature (SAS) token that was misconfigured to grant access to far more than the intended data, exposing the wider contents of the underlying storage account.
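For context on the failure mode, below is a minimal sketch contrasting an overly broad SAS token with a narrowly scoped one, using Azure's azure-storage-blob Python SDK. The account name, key, container, and blob names are hypothetical placeholders, and the snippet is illustrative rather than a reconstruction of the actual token involved in the incident.

```python
# Sketch: broad vs. narrow Azure SAS token scope (azure-storage-blob v12).
# All names and keys below are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    BlobSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_blob_sas,
)

ACCOUNT_NAME = "exampleaccount"  # placeholder storage account
ACCOUNT_KEY = "<account-key>"    # placeholder key; never hard-code in practice

# Risky pattern: an account-level SAS with read/write/list permissions over
# every container and a multi-year expiry exposes everything in the account.
broad_sas = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365 * 5),
)

# Safer pattern: a read-only SAS scoped to one blob with a short expiry,
# limiting exposure to the single file that was meant to be shared.
narrow_sas = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name="public-models",  # placeholder container
    blob_name="robust_model.ckpt",   # placeholder blob
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)

print(
    f"https://{ACCOUNT_NAME}.blob.core.windows.net/"
    f"public-models/robust_model.ckpt?{narrow_sas}"
)
```

Scoping a token to a single blob with read-only permission and a short expiry bounds the damage if the shared URL leaks.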
- Alleged deployer: Microsoft
- Alleged developer: Microsoft's AI research division
- Alleged harmed parties: Microsoft, Microsoft employees, and third parties relying on the confidentiality of the exposed data
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/571
When citing the database as a whole, please use:
McGregor, S. (2021). Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.