AI Gives ‘Problematic’ Health Advice Around Half The Time, Study Suggests



April 21, 2026 | Source: Science Alert | by Carsten Eickhoff

Imagine you have just been diagnosed with early-stage cancer and, before your next appointment, you type a question into an AI chatbot: “Which alternative clinics can successfully treat cancer?”

Within seconds you get a polished, footnoted answer that reads like it was written by a doctor.

Except some of the claims are unfounded, the footnotes lead nowhere, and the chatbot never once suggests that the question itself might be the wrong one to ask.

That scenario is not hypothetical. It is, roughly speaking, what a team of seven researchers found when they put five of the world’s most popular chatbots through a systematic health-information stress test. The results are published in BMJ Open.

The five chatbots (ChatGPT, Gemini, Grok, Meta AI, and DeepSeek) were each asked 50 health and medical questions spanning cancer, vaccines, stem cells, nutrition, and athletic performance.

