Google Urged to Alter Autocomplete Function in Japan for Safe and Secure AI: A Case Study on Responsible AI Governance
This AI incident involving Google's autocomplete function highlights the importance of responsible AI governance. The case study maps to the Govern function in Project Cerebellum's Trusted AI Model (TAIM). The incident underscores the need for trustworthy AI and sound governance, emphasizing that preventing harm is crucial to safe and secure AI development. Understanding and addressing such issues is vital to effective AI governance and to building a responsible AI ecosystem.
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/45
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.