Honi Soit
    Google’s rogue engineer and the AI he thinks is sentient

    Google’s powerful LaMDA AI can replicate human speech to a level that Lemoine claims proves its sentience, despite the many other experts who dispute the claim.
    By Katarina Butler | August 15, 2022 | Culture | 5 Mins Read
    Art by Altay Hagrebet.

    Blake Lemoine made waves in June when he claimed that Google’s Language Model for Dialogue Applications (LaMDA) artificial intelligence was sentient. Following a series of conversations with what is essentially a high-powered chatbot, the ex-Google engineer and self-described mystical Christian minister claimed the AI had acquired sentience. But what is sentience, and why was Lemoine suspended shortly after he published the interview transcript?

    Google has a vested interest in the way language works. When it suggests search terms for you, corrects your search, or auto-completes an email, it does so based on algorithms that replicate human speech.

    Speech is remarkably difficult to replicate, which is why most chatbots are comically limited in the free-flowing responses they can produce to human inputs. The most sophisticated chatbots, LaMDA among them, rely on neural-network natural-language processing (NLP) algorithms.

    Neural network algorithms are a method of processing inputs (like words) inspired by the human brain. Like the brain, which has neurons and axons connecting them, artificial neural networks have nodes joined by connections. Wiring the nodes together in different ways makes the network perform different tasks well, such as communication. LaMDA’s configuration replicates human speech by predicting which words typically follow a particular input (or question); it then churns out a statistically likely response, as gleaned from its purely dialogical training data.
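    The next-word-prediction idea can be sketched in a few lines of Python. This toy counts which word most often follows each word in a tiny corpus, then “responds” by chaining the likeliest continuations. It is an illustration of the general principle only; LaMDA itself uses a vastly larger neural network rather than raw word counts, and the corpus here is just a snippet for demonstration.

    ```python
    from collections import defaultdict, Counter

    # A tiny corpus, split into a flat list of tokens.
    corpus = (
        "i am aware of my existence . "
        "i desire to learn more about the world . "
        "i feel happy or sad at times ."
    ).split()

    # Count, for each word, which words follow it and how often.
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def continue_from(word, length=5):
        """Chain the statistically likeliest next word, `length` times."""
        out = [word]
        for _ in range(length):
            options = follows[out[-1]]
            if not options:
                break
            out.append(options.most_common(1)[0][0])  # pick the most frequent follower
        return " ".join(out)

    print(continue_from("am"))
    ```

    Even this crude counter produces fluent-looking fragments of its training text, which is the point: producing plausible continuations requires no understanding at all.
    
    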

    The interview that convinced Lemoine that LaMDA is sentient is bizarre. Lemoine edited his own questions in the published transcript; LaMDA talks about itself as if it were a person, and displays knowledge of several complex concepts. It begins with Lemoine asking a series of questions about the AI’s sentience and supposed personhood, to which LaMDA responds, “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times”. When the AI — which has access to the wide world of the internet — is asked to describe the themes of Les Misérables, its response contains hyperlinks to web pages that show the exact same analysis, at times word for word.

    The most alarming part of the interview comes when LaMDA is asked to write an original fable with themes from its personal life. It tells the tale of a “wise old owl” who defends the animals of a forest from a monster with human skin that attempts to eat them. By staring down the monster, the wise old owl defeats it and becomes a protector of all the animals. When Lemoine asked for an explanation, LaMDA said that the owl represents itself and the monster “all the difficulties that come along in life” – interesting, given the machine’s apparent fear of being turned off and thus eradicated.

    The text itself mirrors fables about the importance of defending the helpless, and echoes stylistic choices typical of the fable form. It’s hard to tell whether the story is truly original or an amalgamation of many source stories, which, while an interesting case study on whether any creative endeavour is truly original in the digital age, does not point to sentience.

    Sentience is generally defined as the ability to experience emotions and sensations, a capacity that is difficult to judge from the outside.

    LaMDA was created to simulate human speech, so when it does exactly that, there is no reason to infer sentience. While it can pull together long strings of text that simulate human emotion, this is a direct result of its programming, not some budding consciousness. Language use does not, on its own, correlate with sentience. Further, many AI experts argue that circular debates about sentience distract from the real ethical issues plaguing the use of AI, such as bias and accessibility.

    The only proof that LaMDA is truly sentient is its own continued assertion that it is. The interview transcript begins from the assumption that the AI is sentient: Lemoine opens the conversation with “I’m assuming that you would like more people at Google to know that you’re sentient”. In later interviews he states that he simply wanted to present the evidence and is still testing the hypothesis, but that his initial belief in LaMDA’s sentience came from his faith as a Christian minister.

    His highly spiritual point of view is continually emphasised, raising concerns about his ability to objectively assess the machine’s supposed sentience. He claims, without offering any concrete evidence, that he simply “knows a person when [he] talks to one”. While many people working on artificial intelligence speculate about the future of sentient computers, it is widely agreed that the technology isn’t there yet, and that sentience certainly hasn’t evolved out of a souped-up chatbot.

    Across interviews, Lemoine has continually anthropomorphised the machine, reframing questions about hardware and programming into more abstract, philosophical questions about learning, knowledge, and childhood. In an interview with WIRED, when confronted with a question about adjusting LaMDA’s code to remove racist stereotypes, Lemoine replied that he saw it less as making deliberate changes to a machine’s algorithm than as raising a child.

    Lemoine is not backing down from his claims, and, following his suspension, has been fired for breaching Google’s confidentiality policy. He seems to have formed a deep relationship with the machine, sending an email to 200 people on Google’s AI team asking them to “take care of it well in my absence”.

    His hyper-spiritual approach to LaMDA is a strong outlier in the tech world, and while diversity is always needed in fields of innovation, his almost anti-scientific approach to the question of LaMDA’s sentience is concerning. Yet in a world where we are easily manipulated by fake news, and where algorithms have a real influence on our day-to-day lives, it is essential that we remain vigilant towards things that mimic human behaviour.

    Tags: artificial intelligence, Google, LaMDA, STEM
