Mental health apps: the AI therapist cannot see you now
Generative artificial intelligence impresses users with its ability to answer online queries convincingly. Could the nascent technology assist or replace human therapists in helping people overcome mental illness?
About one in five US adults suffers from mental health problems. Roughly one in 20 has a serious mental illness. But a shortage of mental health professionals, long waiting lists and high costs mean many people never get the care they need.
Some Americans have reported experimenting with the ChatGPT chatbot as an unofficial therapist. The technology has clear potential to provide a “listening service”. This could expand the franchise of mental health apps, which are booming.
But unsupervised “self-medication” with generative AI could also be very dangerous, as mental health practitioners have warned. It could, for example, convince users that delusions were real or that low self-esteem was justified.
Money is already pouring into mental health apps. Mental health tech groups have raised nearly $8bn in capital since the start of 2020, according to PitchBook.
The category includes meditation apps such as Calm and Headspace. Their relaxation and mindfulness tools can produce mental health benefits but are no substitute for therapy.
Telehealth companies such as Talkspace and Amwell, meanwhile, connect users to therapists online. They have been criticised for not having enough qualified professionals to meet demand. Both companies have shed about 90 per cent of their market value since going public in 2020.
Many mental health apps already use AI at some level. One example is Woebot, a chatbot that aims to deliver cognitive behavioural therapy via brief, daily conversations. Most of Woebot’s conversations are pre-written by trained clinicians.
Proponents of generative AI chatbots say they could produce dynamic conversations indistinguishable from a dialogue with another human. The technology plainly cannot do this at the moment.
It is not even clear whether existing mental health apps help many users. Unsupervised generative AI could actively harm them. No one should risk their mental stability with ad hoc experimentation.
Investors have a corresponding duty of care. They should only put money into apps that are overseen by responsible physicians and that are seeking regulatory approval as healthcare devices. The medical principle “do no harm” applies.