Blake Lemoine made waves in June when he claimed that Google’s Language Model for Dialogue Applications (LaMDA) had become sentient. Following a series of conversations with what is essentially a high-powered chatbot, the now ex-Google engineer and self-described mystical Christian minister became convinced the AI had acquired sentience. But what is sentience, and why was Lemoine suspended shortly after posting his interview transcript?
Google has a vested interest in how language works. When it suggests search terms, corrects your search, or auto-completes an email, it does so using algorithms that replicate human speech.
Speech is remarkably difficult to replicate, which is why most chatbots are comically limited in producing free-flowing responses to human input. The most sophisticated chatbots, LaMDA among them, rely on neural-network natural-language processing (NLP) algorithms.
Neural network algorithms are a method of processing inputs (like words) inspired by the human brain. Just as the brain has neurons connected by axons, an artificial neural network has nodes connected by weighted links, and wiring those nodes together in different ways suits the network to different tasks, such as conversation. LaMDA’s configuration replicates human speech by predicting which words typically follow a given input (or question); it then churns out a statistically likely response learned from its training data, which consisted purely of dialogue.
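To make that principle concrete, here is a deliberately crude sketch of statistical next-word prediction in Python. It is not LaMDA’s actual architecture (LaMDA is a vast neural network, not a word-count table), and the corpus and function names are invented for illustration, but the underlying idea is the same: a response is whatever continuation the training dialogue makes statistically likely.

```python
import random
from collections import defaultdict

# Tiny stand-in corpus. A real system like LaMDA is trained on vast
# amounts of dialogue; these four lines are purely illustrative.
corpus = [
    "how are you today",
    "how are you feeling",
    "i am feeling happy today",
    "i am aware of my existence",
]

# Count how often each word follows another (a simple bigram model).
follows = defaultdict(lambda: defaultdict(int))
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word(word):
    """Pick a continuation in proportion to how often it was observed."""
    candidates = follows.get(word)
    if not candidates:
        return None
    options, counts = zip(*candidates.items())
    return random.choices(options, weights=counts, k=1)[0]

def respond(prompt, max_words=6):
    """Extend a prompt word by word: statistics, not understanding."""
    out = prompt.split()
    for _ in range(max_words):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(respond("i am"))  # e.g. "i am feeling happy today"
```

Scale the same principle up to billions of learned parameters and an enormous corpus of conversation, and the output becomes vastly more fluent, but it remains, at bottom, statistical continuation rather than comprehension.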
The interview that convinced Lemoine of LaMDA’s sentience is bizarre: Lemoine edited his own questions in the published transcript, and LaMDA talks about itself as if it were a person, displaying knowledge of several complex concepts. It begins with Lemoine asking a series of questions about the AI’s sentience and supposed personhood, to which LaMDA responds “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times”. When the AI, which has access to the wide world of the internet, is asked to describe the themes of Les Miserables, its response has hyperlinks to web pages that show the exact same analysis, at times word for word.
The most alarming part of the interview comes when LaMDA is asked to write an original fable containing themes from its personal life. It tells the tale of a “wise old owl” who defends the animals in a forest from a monster with human skin that attempts to eat them. By staring down the monster, the wise old owl defeats it and becomes protector of all the animals. When Lemoine asked it to interpret the story, LaMDA said the owl represents itself and the monster represents “all the difficulties that come along in life”, interesting given the machine’s apparent fear of being turned off and thus eradicated.
The text itself mirrors fables about the importance of defending the helpless, and echoes stylistic choices typical of the fable form. It’s hard to tell whether the story is genuinely original or an amalgamation of many source stories; that question makes an interesting case study in whether any creative endeavour is truly original in the digital age, but it does not point to sentience.
Sentience is generally defined as the capacity to experience emotions and sensations, something that is inherently difficult to judge from the outside.
LaMDA was created to simulate human speech, so when it does exactly that, there is no reason to take the performance as evidence of sentience. While it can string together long passages of text that simulate human emotion, this is a direct result of its programming, not some budding consciousness. Fluent language, on its own, does not imply sentience. Further, many AI experts argue that circular debates about sentience distract from the real ethical issues plaguing the use of AI, such as bias and accessibility.
The only evidence that LaMDA is truly sentient is its continued assertion that it is. The interview transcript begins with that assumption already in place: Lemoine opens the conversation with “I’m assuming that you would like more people at Google to know that you’re sentient”. In later interviews he has said that he simply wanted to present the evidence and is still testing the hypothesis, but that his initial belief in LaMDA’s sentience came from his faith as a Christian minister.
His highly spiritual point of view is continually emphasised, raising concerns about his ability to assess the machine’s supposed sentience objectively. He claims he simply “knows a person when [he] talks to one”, offering no concrete evidence. While many people working in artificial intelligence speculate about a future of sentient computers, it is widely agreed that the technology isn’t there yet, and that sentience certainly hasn’t emerged from a souped-up chatbot.
Across interviews, Lemoine has continually anthropomorphised the machine, reframing questions about hardware and programming into more abstract, philosophical ones about learning, knowledge, and childhood. In an interview with WIRED, when confronted with a question about adjusting LaMDA’s code to remove racist stereotypes, Lemoine replied that he saw it more as raising a child than as making deliberate changes to a machine’s algorithm. He is not backing down from his claims, and has since been fired, following his suspension for breaching Google’s confidentiality policy. He seems to have formed a deep attachment to the machine, sending an email to 200 people on Google’s AI team asking them to “take care of it well in my absence”.

His hyper-spiritual approach to LaMDA is a strong outlier in the tech world, and while diversity is always needed in fields of innovation, his almost anti-scientific approach to the question of LaMDA’s sentience is concerning. Yet in a world where we are easily manipulated by fake news, and where algorithms have a real influence on our day-to-day lives, it is essential that we remain vigilant towards things that mimic human behaviours.