Misinterpretation of Visual Cues in Tesla's Full Self-Driving Technology: A Case Study on AI Misperception
This case study explores an incident in which Tesla's Full Self-Driving technology misidentified objects such as the moon, billboards, and Burger King signs as traffic signals or stop signs. The incident maps to the Govern function of the HISPI Project Cerebellum Trusted AI Model (TAIM). Understanding such occurrences can help establish guardrails for AI and support the development of safe, secure autonomous driving systems.
Source
Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/145
When citing the database as a whole, please use:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
We use weekly snapshots of the AIID for stable reference. For the officially suggested citation of a specific incident, use the "Cite this incident" link on that incident's page.