Should chatbot psychologists be part of the health system?
By Amy Sarcevic
Monday, 08 April, 2024
This year, an announcement that chatbot psychologists could become part of Australia’s healthcare system within the next two years sparked controversy in medical and public arenas.
The bots, which mimic conversation with users via a voice- or text-based interface, can be programmed with human-like qualities and deliver structured mental health programs.
But is the interaction meaningful enough to generate therapeutic outcomes?
Dr Cathy Kezelman AM, CEO and Executive Director of the Blue Knot Foundation, believes AI has a place in health care, but is concerned about its impact on people with complex trauma.
“Complex trauma results from harmful interpersonal interactions and, for this reason, requires safe human relationships to facilitate the healing process.
“Supporting people to feel heard, and regain any trust that has been lost from their primary betrayal, is, in my view, only possible when a committed human being walks alongside them.
“My concern with machines is that they will miss out on a lot of the sensitivities that are important for the therapeutic alliance and minimising the risk of additional trauma.”
Too attached?
Digital mental health specialist Dr Simon D’Alfonso from the University of Melbourne shares similar concerns, but warns the opposite scenario could pose more harm.
When the world’s first chatbot, ‘ELIZA’, was introduced in 1966, even its creator, Joseph Weizenbaum, was surprised to see people attribute human-like feelings to it.
Researchers have since warned about the dangers of projecting empathy and semantic comprehension onto programs with a textual interface, a phenomenon now known as the ‘ELIZA effect’.
“The trouble with some of the less structured bots is that people can get tangled in unconstrained conversations and be led into an emotional rabbit hole,” D’Alfonso said.
“They can go too far in attaching human characteristics onto a non-sentient system, and often the depth of their exchanges isn’t justified by what the chatbot is capable of.”
Another danger of getting too wrapped up with a bot is a sense of loss when its functionality changes.
“Sometimes, manufacturers can suddenly change the parameters of these platforms and the user ends up feeling devastated,” D’Alfonso said.
Lacking substance?
Even with more structured bot varieties, D鈥橝lfonso is concerned about the potential for adverse consequences.
“We are seeing increased scope for bots to converse in an open-ended fashion, as natural language processing models develop. But it’s unlikely you’ll ever find something with the cognitive complexity, semantic sophistication and emotional richness to carry out anything like a substantive psychotherapy dialogue.
“A lot of psychotherapy is reliant on facial exchanges, interpersonal presence and non-verbal cues. Acoustic and paralinguistic qualities, like a person’s pitch and intonation, all convey important information and play a role in the therapeutic alliance.”
That said, the notion of a digital therapeutic alliance (DTA) has recently emerged as a research topic. Although it differs from the traditional therapeutic alliance, research suggests the DTA is a genuine phenomenon that can develop with mental health apps or chatbots under the right conditions.
Studies show that by offering self-guided therapy, these technologies can effectively treat anxiety and depression.
D鈥橝lfonso said the best results are likely to come if bots encourage users to set goals and tasks and offer a personalised experience. This type of structured intervention could even foster an emotional connection with the bot, he said.
“There are going to be instances where a human client wants to interact with a chatbot and might even develop a bond with them.
“Of course, it won’t be an authentic two-way bond, because the bots aren’t capable of that. But it might just be enough to deliver therapeutic results, when used in conjunction with a goal-based framework.”
The slot machine effect?
Kezelman agrees that chatbots could elicit feelings of connection, but draws a parallel with social media to highlight the risks.
“You only have to walk past a bus stop to see how attached people can get to their technologies. But we know that many of these attachments can also be harmful.”
Indeed, research shows that social media has addictive properties, meaning some people continue to use it despite negative consequences.
One study found that time spent on social media was linked with depression and suicidality; despite this, some users remained hooked, given its dopaminergic effects.
D’Alfonso agrees and warns about the potential for a ‘slot machine effect’, where people keep using a bot to feed their curiosity about its next move.
An adjunct solution?
Despite the potential for harm, both experts agree that chatbots could play a useful role in a resource-constrained healthcare system.
“It’s certainly an appealing option when there are challenges around availability and cost with traditional therapy; I’m just not sure it should be the patient’s primary relationship,” Kezelman said.
Meanwhile, D鈥橝lfonso sees the bots as more of an interim solution.
“Someone could chat a bit with the bot and then go and see their therapist. I don’t see them as a comprehensive replacement.”