Tesla Autopilot Misinterprets Stop Signs: A Case Study in Safe and Secure AI

This incident involving Tesla's Autopilot system highlights the importance of trustworthy AI: the system mistook red reflective letters on a flag for traffic lights, underscoring the need for robust AI governance to prevent harm. Join us in shaping responsible AI and improving safety by contributing to Project Cerebellum, our AI incident database. This incident maps to the Govern function in the HISPI Project Cerebellum Trusted AI Model (TAIM).

Source

Incident data is from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/97

When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
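For LaTeX users, the same reference can be expressed as a BibTeX entry. This is a convenience rendering of the citation above; the entry key and field layout are our own formatting, not an official record:

    @inproceedings{mcgregor2021aiid,
      author    = {McGregor, S.},
      title     = {Preventing Repeated Real World {AI} Failures by Cataloging Incidents: The {AI} Incident Database},
      booktitle = {Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21)},
      year      = {2021},
      note      = {Virtual Conference}
    }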

A pre-print of the paper is available on arXiv, and the AIID publishes database snapshots alongside a citation guide. We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.