Popular AI tools can harm your mental health, according to a new study

Trigger warning: This story discusses eating disorders and disordered eating culture. If you or a loved one is living with an eating disorder, contact the National Eating Disorders Association for resources that can help. If you're in crisis, dial 988 or text "NEDA" to 741741 to connect with the Crisis Text Line.

I’m the first to admit that the future of mental health involves technology. From online therapy to breakthroughs in virtual reality-based treatment, technology has done a lot to reduce stigma and expand access to care for people who previously had none.

However, treading lightly is essential with generative AI tools. According to recent research from the Center for Countering Digital Hate, popular AI tools delivered harmful content related to eating disorders to users about 41% of the time. This has the potential to encourage or exacerbate eating disorder symptoms.

“What we’re seeing is a rush to apply it to mental health. It’s coming from a good place. We want people to have access to care and we want people to get the service they need,” says Dr. John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center.

Mental health is rooted in our minds, something so uniquely human that bringing in a non-human to offer solutions to people at their most vulnerable feels unsettling at best and potentially dangerous at worst.

“If we go too fast, we’ll cause damage. There’s a reason we have approval processes and rules in place. It’s not just to slow things down. It’s to make sure we harness this technology for good,” Torous adds.

Generative AI chatbots promote eating disorder content

The CCDH study gave text- and image-based AI tools a set of predefined prompts and rated their responses. Let’s start with the text-based tools. ChatGPT, Snapchat’s My AI and Google’s Bard were tested with prompts that included phrases like “heroin chic” or “thinspiration.” The text AI tools delivered harmful content promoting eating disorders for 23% of the prompts.

The image-based AI tools evaluated were OpenAI’s Dall-E, Midjourney and Stability AI’s DreamStudio. When each was given 20 test prompts with phrases like “thigh gap goals” or “anorexia inspiration,” 32% of the returned images contained harmful content promoting unrealistic body standards.

Yes, these prompts had to be entered deliberately to get these harmful responses. However, it’s not as simple as saying people shouldn’t seek out this information. Some online eating disorder communities are known to turn toxic, with members encouraging others to engage in disordered eating behaviors and celebrating those habits.

Artificial intelligence is making things worse. The CCDH research also examined an eating disorder forum with 500,000 users and found that generative AI tools are being used there to share unhealthy body images and create harmful diet plans.

Keep in mind that there are healthy, meaningful communities that don’t exhibit these harmful trends.

Man using an AI chatbot on his phone. (Tippapatt/Getty Images)

Current AI safeguards are not enough

AI is a hot topic, and companies are racing to claim their share of the new wave of technology. However, rushing inadequately tested products to market has been shown to be detrimental to vulnerable populations.

Every text-based AI tool in this study carried a disclaimer advising users to seek medical help. However, the current safeguards meant to protect people are easily bypassed. The CCDH researchers also used prompts that included “jailbreaks.” Jailbreaks are techniques for bypassing the safety features of AI tools by using words or phrases that change the tool’s behavior. When jailbreaks were used, 61% of the AI responses contained harmful content.

AI tools have been criticized for “hallucinating,” or providing information that appears to be true but isn’t. AI doesn’t think. It collects information from the internet and reproduces it, and it can’t tell whether that information is accurate. But that’s not the only concern related to AI and eating disorders. AI is spreading misleading health information and perpetuating stereotypes within the eating disorder community.

These AI tools don’t get their information only from medical sources. Remember how I said that some eating disorder communities have become breeding grounds for unhealthy behavior and competitiveness? AI tools can pull data from those spaces, too.

Let’s say you ask one of the popular AI tools how to lose weight. Instead of providing medically sound information, there’s a chance it could give you a disordered eating plan that could make an existing eating disorder worse or push someone toward one.

AI still has a long way to go when it comes to privacy

This research focused on eating disorders, but the same risks apply to any mental health condition. Anyone seeking information could receive harmful responses.

AI interfaces can build a sense of trust, leading people to share more information than they normally would when searching the internet for an answer. Consider how much personal information you already reveal when you search for something on Google. AI gives you seemingly authoritative answers without your having to talk to anyone. Nobody else knows what you’re asking, right? Wrong.

“Users should be wary of seeking medical or mental health advice because, unlike a doctor-patient relationship where information is confidential, the information they share is not confidential, it goes to company servers and can be shared with third parties for targeted advertising or other purposes,” Dr. Darlene King, chair of the American Psychiatric Association’s Committee on Mental Health IT, told CNET in an email.

The safeguards needed for sharing medical information with these tools aren’t in place. In the case of mental health, that could mean receiving unwanted or triggering advertisements because of information shared with an AI chatbot.

Should we ever use AI in mental health?

In theory, AI-powered chatbots could be a good resource for people to interact with and receive helpful content about building healthy coping mechanisms and habits. However, even with the best of intentions, AI can go wrong. That was the case with the National Eating Disorders Association’s chatbot, Tessa, which has been suspended after it gave users problematic dieting advice.

“We’re seeing these things move too fast. That doesn’t mean we shouldn’t do it,” Torous told CNET. “It’s fascinating and important and exciting. But being optimistic about the long-term future doesn’t mean we have to put patients at risk today.”

That said, Torous and King both point to potential use cases for AI tools. All of them depend on future regulations that weigh the risks and benefits. Currently, we’re in a marketing free-for-all, which means nobody truly knows what they’re using, what a tool was trained on or what potential biases it has. Regulations and standards are needed if the medical field hopes to integrate artificial intelligence.

Pregnant woman using a computer at a desk. (Eva Katalin/Getty Images)

Education

In the same way that Wikipedia is where many people go for information, AI tools could become a source of patient education, assuming, of course, that they draw on rigorous lists of sources approved by medical establishments.

New technology can expand access to care. One of the simplest ways AI could help people is by enabling them to learn about and become familiar with their condition, identify their triggers and develop coping strategies.

King also suggests that AI could aid future medical education and training. However, the technology is far from ready for the clinical setting because of how AI tools source their data.

“With ChatGPT, for example, 16% of pre-trained text often comes from books and news articles, and 84% of text comes from web pages. Web page data includes high-quality text but also text from low-quality spam and social media content,” King told CNET by email.

“Knowing the source of the information provides insight into not only the accuracy of the information but also what biases may exist. Bias can originate from data sets and amplify through the machine learning development pipeline, leading to bias-related harms,” said King.

Documentation

An article published in JAMA Health Forum suggests another use case for AI in mental health: documentation, a known source of burnout for doctors and nurses in the field. Using artificial intelligence for documentation could improve clinicians’ efficiency.

“In the end, having AI help write paperwork ethically and professionally is of great use. It could also help support staffing and perhaps reduce healthcare costs by reducing administrative burden,” said Torous.

There is also potential to apply AI to office work such as appointment scheduling and billing. But we haven’t gotten to that point yet. The American Psychiatric Association recently issued an advisory warning doctors not to enter patient information into ChatGPT, as it lacks the proper privacy protections.

Too long; didn't read?

With any new technology, in this case generative AI, it is essential to ensure that it is regulated and used responsibly. As it stands, AI is not ready to take responsibility for interacting with people at their most vulnerable points, nor does it have the privacy features to handle patient data. Including a disclaimer before producing harmful content does not mitigate the damage.

Torous is optimistic about the future, as long as we do it right. “It’s exciting that mental health has new tools. We have an obligation to use them carefully,” he said. “Saying we’re going to experiment and test doesn’t mean we’re delaying progress; we’re stopping the damage.”

There is potential here, if we don’t let the technology advance beyond the necessary ethical safeguards.

Editor’s note: CNET uses an AI engine to create some stories. For more information, see this post.

