As more people seek mental health advice from AI chatbots, new research suggests the systems are not yet ready and can fall short of professional psychotherapy ethics standards.
Researchers from Brown University, working with mental health professionals, found repeated problematic behaviour even when the chatbots were prompted to use established psychotherapy approaches.
In tests, chatbots mishandled crisis situations, produced replies that reinforced harmful beliefs, and used empathic-sounding language that did not reflect genuine understanding.