Microsoft Copilot Allegedly Provides Unsafe Medical Advice with High Risk of Severe Harm

April 25, 2024

European researchers found that Microsoft Copilot, an AI assistant, provided accurate medical information only 54% of the time (see the source cited below). The analysis further revealed that 42% of Copilot's responses could lead to moderate to severe harm, and a worrying 22% posed a risk of death or severe injury. These findings underscore the need for robust AI governance to ensure safe and secure AI practices.

For those interested in shaping the future of trustworthy AI, join us at HISPI Project Cerebellum TAIM to help measure and govern AI incidents such as this one and contribute to developing guardrails for AI.

Matched TAIM controls

Suggested mapping from embedding similarity (not a formal assessment); a minimal sketch of this kind of matching appears below.
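The page does not describe its matching pipeline, so the following is only a rough illustration of what "mapping from embedding similarity" can mean: embed the incident summary and each control description with a sentence-embedding model, then rank controls by cosine similarity. The model choice and all control texts below are hypothetical placeholders for illustration, not actual TAIM controls.

```python
# Illustrative sketch only: rank hypothetical controls against an incident
# summary by cosine similarity of sentence embeddings. This is NOT the
# site's actual pipeline, and the controls below are invented examples.
from sentence_transformers import SentenceTransformer, util

# Any general-purpose sentence-embedding model works; this is a common
# lightweight default (an assumption, not the model the site uses).
model = SentenceTransformer("all-MiniLM-L6-v2")

incident = (
    "AI assistant provides inaccurate medical information, "
    "risking moderate to severe harm to users seeking advice."
)

# Hypothetical control descriptions, used only for illustration.
controls = {
    "Pre-deployment accuracy evaluation": (
        "Evaluate model outputs against domain-expert ground truth before release."
    ),
    "High-risk domain guardrails": (
        "Restrict or disclaim model answers in medical, legal, and financial domains."
    ),
    "Incident monitoring and response": (
        "Continuously monitor deployed systems and triage reported harms."
    ),
}

incident_emb = model.encode(incident, convert_to_tensor=True)
control_embs = model.encode(list(controls.values()), convert_to_tensor=True)

# Cosine similarity between the incident and every control; higher scores
# suggest (but do not formally establish) that a control is relevant.
scores = util.cos_sim(incident_emb, control_embs)[0]
for (name, _), score in sorted(
    zip(controls.items(), scores.tolist()), key=lambda x: -x[1]
):
    print(f"{score:.3f}  {name}")
```

In practice such similarity scores only rank candidate controls for human review, which is why the mapping above is flagged as not a formal assessment.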

Alleged deployer
Microsoft Copilot, Microsoft
Alleged developer
Microsoft
Alleged harmed parties
people seeking medical advice, Microsoft Copilot users, general public

Source

Data from the AI Incident Database (AIID). Cite this incident: https://incidentdatabase.ai/cite/838


When citing the database as a whole, please use:

McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.

A pre-print of this paper is available on arXiv; the AIID also publishes database snapshots and a citation guide.

We use weekly snapshots of the AIID for stable reference. For the official suggested citation of a specific incident, use the “Cite this incident” link on each incident page.