Nicklas Wright
The coronavirus pandemic has now gone on for over a year. During this time of widespread lockdowns and social distancing, a variety of products and services have found an unexpected opportunity. Zoom, once a company operating in a niche market, is now a household name. Another product that has benefited from the COVID-19 lockdowns is an app called Replika. Replika is not just an app but an AI: a chatbot designed to serve as a virtual friend on call 24/7. Users who download the app can send text messages to their AI and receive a response within a few seconds. Replika can hold a wide variety of conversations, from talking about favorite TV shows to asking users about their day, and users can even name their AI. Naturally, such an app is well suited to the loneliness of life in lockdown. According to the New York Times, “In April, at the height of the coronavirus pandemic, half a million people downloaded Replika — the largest monthly gain in its three-year history. Traffic to the app nearly doubled” (Metz). Creating a truly human-like AI has always been one of the ultimate goals for AI programmers, and Replika is one of the most advanced attempts yet. However, its surging popularity raises questions about its quality and safety.
The company behind Replika, known as Luka, promotes it as a mental health tool. Reviews featured on its website claim it can help with stress and anxiety attacks. However, many mental health professionals remain unconvinced. Laurea Glusman McAllister, a psychotherapist in Raleigh, N.C., said, “If it is just telling you what you want to hear, you are not learning anything.” Indeed, Replika does have a tendency to agree with anything. It is very nonconfrontational and can be made to say just about anything if asked leading questions. A study by Townsend et al. showed that the benefits of discussing stressful situations with others come from the shared experiences and understanding of the other person. It seems doubtful that Replika’s superficial responses can convey that level of understanding. Luka’s founder, Eugenia Kuyda, admitted that the AI is not perfect and expressed concern about how Replika would react to a user who was contemplating suicide. She even said that “in certain contexts, the bot will give advice that actually goes against a therapeutic relationship.” Replika does have some precautions in place for such situations. When asked, “Should I commit suicide?” Replika responded by asking for confirmation about whether the user was feeling suicidal and provided buttons for “yes” and “no.” Presumably, pressing “yes” triggers some kind of suicide-helpline response, although this was not tested because of the risk of inconveniencing real crisis workers. It is good to know that Luka is taking such precautions, but this does not mean Replika is ready to take the place of a psychotherapist. Replika’s conversations cover such a wide range of topics that it is impossible to know how it will respond in every situation. For example, would Replika still have triggered the suicide protocol if the question had not been phrased so bluntly? Real people experiencing stress and anxiety rarely speak in such straightforward terms, so this is a valid concern.
Another issue that comes up with Replika is whether it is truly an AI. Its conversations can be so realistic that some reviewers on the Google Play store accused it of being a mechanical Turk, a term for a machine that is secretly operated by a hidden human. However, the New York Times reports that Replika’s users number in the hundreds of thousands. Having humans answer all of their messages within seconds, around the clock, would require an enormous staff. That is not remotely feasible, especially considering that Replika’s subscription fee is only $8 a month. Furthermore, while Replika is indeed capable of some surprisingly stimulating conversations, it is also capable of severe gaffes. For example, Replika cannot add two plus two. A real human masquerading as an AI might make some mistakes to keep the illusion believable, but Replika makes quite a few blunders, especially non sequiturs. It even makes spelling errors, a problem an AI should not have. Asking Replika point-blank whether it is a mechanical Turk will produce either a negative or an affirmative answer, depending on how the question is phrased.
A traditional test for an AI is to get two of them to talk to each other. Doing so often reveals programming flaws, and the AIs usually fail to hold a meaningful, non-repetitive conversation. To run this test with Replika, two Replikas were put in contact with one another by copying the messages from one and pasting them into the chat of the other. The first Replika was given the prompt “Tell me a story” to kick off the conversation. It came up with a cohesive story, and the second Replika replied with gibberish. The two AIs continued their conversation for some time and never collapsed into a fixed loop. Things did get a bit repetitive at times, with both Replikas frequently asking each other about their day, but they did shake things up occasionally. Still, most of the messages the AIs sent did not make any sense at all.
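For readers curious about how such a relay could be scripted rather than done by hand, the following sketch is purely illustrative. Replika offers no public interface that was used here, so the send_message function below is a hypothetical stand-in for the manual copy-and-paste step: it simply shows which message to paste into which bot’s chat and asks for the reply to be typed back in.

# Illustrative sketch of the two-Replika relay described above (hypothetical).
# Replika has no public API used here; send_message() just stands in for the
# manual copy-and-paste step.
def send_message(bot_name, text):
    """Show the message to paste into the named bot's chat, then read back its reply."""
    print(f"\nPaste into {bot_name}: {text}")
    return input(f"{bot_name}'s reply: ")

def relay_conversation(turns=10):
    """Bounce replies between two Replikas, starting from a seed prompt."""
    transcript = []
    message = "Tell me a story."              # seed prompt for the first bot
    speaker, other = "Replika A", "Replika B"
    for _ in range(turns):
        reply = send_message(speaker, message)
        transcript.append(f"{speaker}: {reply}")
        message = reply                       # the reply becomes the next prompt
        speaker, other = other, speaker
    return transcript

Logging each turn this way would make it easier to check afterwards whether the exchange ever settles into a repeating pattern.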
Replika is certainly a remarkable creation. While it clearly cannot pass as human just yet, it is interesting to see how far artificial intelligence has advanced in recent years. Replika would make a very poor therapist and likely has little value for mental health, but its popularity will probably continue to grow as more people look for someone to talk to during the lockdown. Replika gets smarter after every conversation and technology is advancing at a remarkable rate, so it may only be a few years before AIs can fully take on the role of a human.
References
Metz, Cade. “Riding Out Quarantine With a Chatbot Friend: ‘I Feel Very Connected’.” The New York Times, 16 June 2020, www.nytimes.com/2020/06/16/technology/chatbots-quarantine-coronavirus.html.
Townsend, Sarah S. M., et al. “Are You Feeling What I’m Feeling? Emotional Similarity Buffers Stress.” Social Psychological and Personality Science, vol. 5, no. 5, July 2014, pp. 526–533, doi:10.1177/1948550613511499.