AI Health Advice Under Scrutiny After Oxford Study Flags Safety Concerns

Artificial intelligence chatbots are delivering medical advice that is often inconsistent and, at times, inaccurate — potentially putting users at risk, according to new research from the University of Oxford.
The study raises fresh concerns about the growing number of people turning to generative AI tools for guidance on symptoms, diagnoses and mental wellbeing. While researchers found that chatbots were capable of providing helpful and medically sound information in some instances, responses frequently varied in quality, tone and accuracy — even when presented with similar queries.
This inconsistency, experts warn, makes it difficult for users to distinguish between reliable guidance and potentially harmful misinformation.
Dr Rebecca Payne, the study’s lead medical practitioner, cautioned that relying on AI tools to interpret symptoms could be “dangerous,” particularly when users treat chatbot responses as a substitute for professional medical advice.
“These systems can sound authoritative and confident,” Payne said, “but that confidence doesn’t always reflect clinical accuracy. The risk is that people may delay seeking appropriate care or act on incomplete or misleading information.”
The findings come at a time when AI-powered tools are becoming deeply embedded in daily life. From answering homework questions to drafting emails, generative AI platforms have quickly evolved into widely used sources of information — including for personal health concerns.
In November 2025, polling by Mental Health UK revealed that more than one in three people in the UK now use AI tools to support their mental health or wellbeing. Many respondents said they appreciated the immediate availability, anonymity and non-judgmental tone offered by chatbots, especially when discussing sensitive topics such as anxiety, depression or stress.
For some, AI provides a first step toward understanding their feelings. For others, it serves as a supplement to therapy or a coping tool during times when professional support is unavailable.
However, the Oxford study suggests that convenience may come with hidden risks.
Researchers tested popular AI systems using a range of symptom-based and mental health-related prompts. They found that while chatbots often provided general health advice aligned with publicly available medical guidance, they sometimes omitted key warnings, misinterpreted symptom severity, or failed to clearly recommend seeking urgent care when appropriate.
In certain cases, the same chatbot delivered contradictory answers to similar questions, highlighting the probabilistic nature of generative AI systems, which produce responses based on patterns in data rather than clinical reasoning.
Medical professionals worry that this variability could be particularly problematic for vulnerable individuals — including those experiencing acute mental health crises or serious physical symptoms.
“If someone is already anxious about their health, receiving unclear or conflicting information could heighten distress,” Payne noted. “Equally concerning is the possibility that a serious symptom might be downplayed.”
The study does not suggest that AI tools have no role in healthcare. Many experts acknowledge that chatbots can help users better understand medical terminology, prepare questions for doctors, or access general wellbeing tips. In overstretched healthcare systems, digital tools may also ease pressure by providing preliminary information.
But researchers stress that AI should not be seen as a diagnostic authority.
Unlike qualified clinicians, chatbots do not have access to a patient’s medical history, cannot conduct physical examinations, and are not accountable for outcomes. Their responses are generated based on statistical patterns learned from large datasets, which may include outdated, incomplete or non-peer-reviewed information.
The rapid pace of AI adoption has outstripped regulatory frameworks in many countries. While some health-focused AI tools undergo clinical validation, general-purpose chatbots are not typically regulated as medical devices, even when users seek medical advice from them.
Consumer advocacy groups have called for clearer warnings and stronger safeguards, including built-in prompts encouraging users to consult healthcare professionals for serious concerns.
For now, experts recommend that individuals treat AI-generated health advice as informational rather than authoritative.
“Chatbots can be a starting point,” Payne said, “but they are not a replacement for a GP, a specialist, or a trained therapist. When it comes to your health, especially if symptoms are severe or persistent, speaking to a qualified professional remains essential.”
As AI continues to reshape how people access information, the Oxford study serves as a timely reminder: speed and convenience are no guarantee of safety, particularly when it comes to matters of health.
