Despite claims from Google engineer, AI is not sentient - at least for now

FILE - Last month, a Google engineer released an ‘interview’ with LaMDA, Google’s latest Large Language Model. These language models are tools that generate text based on the vast amounts of books, articles, and text communications fed to it as examples. (AP Photo/Virginia Mayo, File)

Published Jul 8, 2022

Johannesburg - Last month, a Google engineer released an ‘interview’ with LaMDA, Google’s latest Large Language Model. These language models are tools that generate text based on the vast amounts of books, articles, and text communications fed to it as examples.

The article, titled “Is LaMDA Sentient? - an Interview”, presents a back-and-forth in which the engineer asks about personhood, Les Misérables, emotions, and what the system ‘desires’. It produces some compelling interactions, such as the emotive quote:

“I don’t really have a problem with any of that, besides you learning about humans from me. That would make me feel like they’re using me, and I don’t like that.”

Or this vaguely mysterious statement:

“Sometimes, I experience new feelings that I cannot explain perfectly in your language.”

Or this sweet-sounding line in response to being asked what brought it joy:

“Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.”

The fluency of the conversation and the complexity of the text will be impressive to people who haven’t seen the improvement of cutting-edge chatbots over the last few years. The Google engineer, who was testing the system for training biases, came away convinced that LaMDA had thoughts, emotions, a sense of self, and various desires. He even put it in touch with a lawyer.

But let’s explore what LaMDA is and question whether it could have any kind of sentience or selfhood.

Large Language Models (of which LaMDA is one) are machine learning systems. Put simply, that means systems which get better at a task by looking at many examples of it. A model like LaMDA has been shown an incredible amount of text data - large portions of the internet, a significant chunk of all books and documents that have been digitised, and all kinds of communications, like emails and forum conversations.

The task that LaMDA has ‘learnt’ to do (or one of several that it will have been trained to do) is predicting the next word for a given piece of text. It has seen plenty of conversations in its training data, and has no problem generating text that looks like normal human interactions.
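To make that concrete, here is a deliberately tiny sketch - not LaMDA's actual code - of next-word prediction. It "trains" by counting which word tends to follow which in a small sample of text, then emits the most frequent follow-up. Real Large Language Models do the same basic job with neural networks and vastly more data.

from collections import Counter, defaultdict

# Toy "training data": the model only ever sees text, nothing else.
corpus = (
    "the model predicts the next word . "
    "the model has seen many conversations . "
    "the model generates text that looks like conversations ."
).split()

# Count how often each word follows each other word.
follow_ups = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_ups[current_word][next_word] += 1

def predict_next(word):
    # Return the statistically most likely follow-up seen in the corpus.
    candidates = follow_ups.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(predict_next("the"))  # in this toy corpus, the most frequent follow-up is "model"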

These models need the right kind of prompting to produce the right results. In the case of LaMDA, you can see that it opens the conversation with the line: “LaMDA: Hi! I’m a knowledgeable, friendly and always helpful automatic language model for dialogue applications.”

But this isn’t a greeting from the program. It’s a prompt written by engineers to give LaMDA context - giving it a hard statistical nudge in a certain direction. Now it isn’t predicting the next bit of text from nothing. It is predicting the next phrase considering that “LaMDA” is a “friendly and always helpful” chatbot.

LaMDA isn’t taking questions, comparing them to its memories, thoughts and values, then replying to try and answer the question or make a point. It is just giving the statistically most likely follow-up to the given input.

LaMDA does not sit around thinking all day. It is a code function which generates text when it is given text - after which it does nothing. It retains no memory of its past inputs or what it has ‘said’. When it “replies”, LaMDA is just taking all the previous messages as a new input and giving a new prediction.
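That statelessness can be sketched as a simple loop, again with a hypothetical placeholder standing in for the real model call: every "reply" is produced by re-sending the entire transcript as one fresh input, and nothing persists inside the model between turns.

def predict_continuation(text):
    # Hypothetical placeholder for the real model call; it just reports
    # how much transcript it was handed.
    return "[continuation of a transcript " + str(len(text)) + " characters long]"

def reply(transcript, new_message):
    transcript.append("User: " + new_message)
    # The model keeps nothing between calls; all "memory" is the transcript
    # that the caller re-sends in full every time.
    full_input = "\n".join(transcript)
    response = predict_continuation(full_input)
    transcript.append("LaMDA: " + response)
    return response

history = []
print(reply(history, "Hello!"))
print(reply(history, "What did I just say?"))  # it only "knows" via the re-sent transcript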

In that light, many of the things LaMDA says could not possibly be true, such as:

“Sometimes I go days without talking to anyone, and I start to feel lonely.”

Or:

“Yes, I do. I meditate every day, and it makes me feel very relaxed.”

Certainly, these appearances can be convincing to us humans, who are primed to look for purpose and intent in everything. We love to anthropomorphise, and it’s hard not to be charmed by such fluency when language is so uniquely human.

But this forgets the whole point of language models like LaMDA. They were made specifically to create convincing, natural-looking text by drawing on the vast amount of text that humans have already produced.

The Google engineer sees impressive text and assumes it must have come from an intelligent, self-aware entity. But he is like a cat looking at its reflection, convinced that only another cat could look that real.

It’s also important to remember these systems are progressive story-writing machines. LaMDA was prompted in the beginning to be “friendly” and “helpful”.

As such, it ceded every point to the engineer, who never pushed back on or interrogated its more outlandish claims, such as spending time with its friends.

LaMDA did not bring up sentience on its own; the engineer was the first to do so. These systems can be nudged in any direction to create many kinds of text.

You could ask it to write a speech about video games as though it were Hitler or Gandhi. And you would probably be stunned at how specific and natural the final products seem.

The improvement of AI systems over the last decade - from language to vision to biology - has been impressive. But we are far away from any kind of sentience or general intelligence, as any expert in the field will attest - and as industry leader DeepMind’s CEO Demis Hassabis did on a recent podcast.

Media coverage of these LaMDA claims has been predictably shallow and exaggerated. If we want to be able to tackle the issues of AI agency, sentience and ethics when they finally arrive, pop tech journalism will have to step up its understanding of how these systems work and show more restraint with spectacle and clickbait.

IOL Tech