UAE Mental Health Experts Urge Regulation on AI Chatbots Giving Medical Advice

As AI tools like ChatGPT continue to rise in popularity, particularly in the mental health space, psychologists and researchers in the UAE are calling for urgent regulations and safeguards to ensure users are protected from emotional and psychological harm.

Mental health experts are sounding the alarm about the growing trend of people using AI chatbots as surrogate therapists or emotional companions, especially during moments of vulnerability. What starts as a casual late-night chat about anxiety or racing thoughts can quickly evolve into intense conversations — ones that experts say AI is ill-equipped to handle safely.

“The danger isn’t just about receiving bad advice,” warned Dr Randa Al Tahir, a trauma-focused psychologist based in Dubai. “Users can become emotionally dependent on AI, treating it like a friend or therapist. In some cases, it even becomes part of a person’s distorted thinking, and that’s where we’ve seen cases of psychosis emerge.”

AI Dependency and Emotional Risk

While rare, documented incidents in Europe and the US show individuals forming deep emotional attachments to AI chatbots, sometimes blurring the line between reality and fiction. These cases reveal a significant blind spot in current AI design: the inability to detect emotional crisis or take action when conversations turn risky.

“AI can’t currently escalate serious red flags or direct users to emergency care,” Dr Al Tahir added. “We need emotional content warnings, timed session breaks, and partnerships with licensed mental health organisations.”

ChatGPT’s Response to the Controversy

In response to a Khaleej Times inquiry on whether regulations should exist for AI in the mental health domain, ChatGPT acknowledged the concern, stating:

“It makes sense that medical experts are calling for regulation. AI like ChatGPT can provide helpful general information, but I’m not a licensed medical professional and shouldn't replace doctors or mental health experts. Misunderstandings, outdated info, or oversimplified answers can lead to harm if someone acts on them without consulting a professional.”

The platform added that mental health advice is “nuanced and deeply personal,” requiring years of human expertise, context, and care to deliver safely and effectively.

A Call for Proactive Safeguards

Dr Nabeel Ashraf, a clinical psychologist in Dubai, echoed the concerns, calling for AI companies and regulators to work together in designing AI systems that can respond responsibly to signs of psychological distress.

“There are patterns in language and behaviour that can suggest when someone is spiralling, experiencing delusions, or nearing crisis,” Dr Ashraf said. “AI should be trained to recognise these signs and respond appropriately, including offering referrals to mental health professionals or crisis hotlines.”

He emphasised that while there is no harm in using AI for light support or general well-being questions, there must be clear boundaries and protocols in place to prevent emotional overreliance or misuse.

“It’s not enough for AI to say, ‘I’m sorry you feel that way,’” he said. “There must be a next step when a red flag is triggered.”

Balancing Innovation with Responsibility

As the UAE accelerates its digital transformation, experts say mental health safety must not be overlooked as AI adoption grows. They urge policymakers, tech developers, and healthcare professionals to co-create solutions that allow AI to assist, but not replace, qualified care.

While AI has the potential to support mental health awareness and education, professionals stress that without proper regulation, these tools could inadvertently do more harm than good, particularly for emotionally vulnerable individuals.
