Two US startups are developing chatbots based entirely on artificial intelligence that are designed to be your psychologist, your well-being coach or simply your friend.

Are chatbots about to step in as carers?

Figures published by the National Institute of Mental Health reveal that every year 6.7% of the US population, some 16.1 million people, are affected by depression. For those who do sink into this state of deep distress, a few reassuring words, or even just having someone there to listen, can sometimes do a lot of good.

On-demand care and therapy services already exist in the United States, such as the one provided by Ginger.io. This San Francisco-based startup has created a mobile app that lets users talk with psychologists and other well-being coaches. After subscribing to the service, a user can get in touch 24/7 with real human healthcare practitioners, and this distinction matters. At the Virtual Assistant Summit hosted by RE•WORK in San Francisco in late January, a key question raised was whether artificial intelligence, particularly in the form of conversational assistants, could help people suffering from mental illness, or at least provide day-to-day moral support to a large number of them.

Virtual Assistants are, by definition, digital companions designed to mimic human conversation. The most sophisticated VAs, such as Amy, the assistant developed by x.ai, can even cause confusion, so closely does their way of expressing themselves resemble human discourse. But if VAs can behave like human beings throughout an entire dialogue, does that mean they also have the potential to listen and analyse in the same way as healthcare professionals who have trained for many years to treat patients suffering from depression?

Chatbots: instantly available psychologists 

This is the view of Danny Freed, founder of a startup called Joy. Just 23 years of age, Freed has created a chatbot, also named Joy, which collects information on the user's mental health in order to monitor his or her well-being. “Close to one out of two Americans will suffer some kind of mental illness during their lives but people often find it hard to express their feelings, or admit that they’re unwell. Moreover, it’s still a challenge to get treatment,” points out Danny Freed. At any moment of the day, Joy users can share their highs and lows with the assistant and receive some attention in return.

One can only applaud the intentions behind this chatbot, but there are still a few unanswered questions. How can a machine accurately assess a person’s state of mind? Danny Freed explains that, based on the message written by the user, Joy works out a hypothesis as to how the person is feeling at that moment and then, as a second step, asks the user for confirmation with a question such as: “I get the impression you’re angry. Is that right?” If the reply is negative, the chatbot comes back with a direct question: “Sorry. I didn’t understand you clearly. How do you feel in fact?” However, even if Joy obtains well-formulated replies, how can she be sure that the person’s feelings really are as described? And what responsibility does Joy bear when dealing with a depressed person who seems to be on the verge of committing suicide? These are issues for which the designer-developers need to provide solutions. Is the company currently recruiting trained psychologists? It is probably time it did.
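To make the exchange Freed describes more concrete, here is a minimal sketch of that confirm-or-clarify loop, written in Python. It is purely illustrative: the classify_sentiment function is a hypothetical keyword-based stand-in, not the model Joy actually uses.

# Minimal, illustrative sketch of the confirm-or-clarify loop described above.
# classify_sentiment is a hypothetical keyword-based stand-in for whatever
# emotion model Joy actually uses; this is not the company's implementation.

def classify_sentiment(message: str) -> str:
    """Guess a coarse emotion label from the user's message."""
    lowered = message.lower()
    if any(word in lowered for word in ("angry", "furious", "annoyed")):
        return "angry"
    if any(word in lowered for word in ("sad", "down", "hopeless")):
        return "sad"
    return "stressed"

def joy_turn(message: str) -> None:
    # Step 1: form a hypothesis about the user's state of mind.
    guess = classify_sentiment(message)
    # Step 2: ask the user to confirm the hypothesis.
    answer = input(f"I get the impression you're {guess}. Is that right? ")
    if answer.strip().lower() in ("yes", "y"):
        print(f"Thanks for telling me. Would you like to say more about why you feel {guess}?")
    else:
        # Step 3: if the guess is rejected, fall back to a direct question.
        print("Sorry. I didn't understand you clearly. How do you feel in fact?")

if __name__ == "__main__":
    joy_turn(input("How are you doing today? "))

A real system would of course replace the keyword rules with a trained classifier and, as noted above, would also need a clear escalation path when a message suggests the user may be at risk.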

Examples of interactions generated by Replika

A chatbot that sounds just like you

At the Virtual Assistant Summit, another startup, going by the name of Replika, caused quite a stir. This company provides an ongoing coaching service via a chatbot that, to say the least, cultivates a high degree of familiarity. The chatbot draws on a given person’s conversation history in order to learn and reproduce his or her modes of expression, which means the virtual coach can mimic the writing style of a friend or family member, or indeed your own. Eugenia Kuyda, serial entrepreneur and founder of this fledgling firm, hit upon the idea for the Replika algorithm after she suddenly lost a close friend. Wishing to go on chatting with him, she set about creating a chatbot that would imitate the way he used to write.

However, though the invention may well have helped Eugenia Kuyda with her grieving process, we are entitled to ask what effect this kind of technique might have on vulnerable people. If a chatbot really can mimic a dead person’s style and phrasing, isn’t there a danger that the bereaved might remain in denial about the death? And what about basic respect for the dead? Would most people agree upfront to a sort of ‘continued existence’ through their own former words, powered by an artificial intelligence tool? We cannot be sure of that.
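The article does not detail how Replika’s model works under the hood, but the general idea of reusing a person’s own message history to echo their style can be shown with a deliberately crude sketch. The retrieval approach and the sample history below are assumptions for illustration only, not Replika’s actual method.

# Deliberately crude illustration: echoing someone's style by retrieving their
# own past replies. The sample history and the retrieval approach are
# assumptions for illustration only, not Replika's actual method.
from difflib import SequenceMatcher

# Hypothetical history of (message the person received, reply they sent).
history = [
    ("how was your day", "pretty hectic tbh, but I survived :)"),
    ("are you coming tonight", "yep! running late as usual though"),
    ("did you see the news", "ugh, don't get me started..."),
]

def reply_in_their_style(incoming: str) -> str:
    """Return the past reply whose original prompt best matches the incoming message."""
    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    _, best_reply = max(history, key=lambda pair: similarity(pair[0], incoming))
    return best_reply

print(reply_in_their_style("How did your day go?"))
# -> "pretty hectic tbh, but I survived :)"

In practice, a system of this kind would rely on a language model trained on the person’s messages, so that it can generate new sentences in their style rather than simply replaying old ones.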

All this serves as a reminder that when it comes to using a machine to try to boost a person’s mental well-being, it is still vital to proceed with caution.


By Pauline Canteneur