Is Google’s AI chatbot LaMDA actually sentient? The evidence points to no

A Google software engineer has been suspended after going public with his claim that an artificial intelligence (AI) has become sentient. It sounds like the premise for a science fiction movie — and the evidence supporting the claim is about as flimsy as a film plot too.

Engineer Blake Lemoine has spectacularly alleged that a Google chatbot, LaMDA (short for Language Model for Dialogue Applications), has gained sentience, and he has been trying to do something about its “unethical” treatment. Lemoine — after trying to hire a lawyer to represent it, talking to a US representative about it and, finally, publishing a transcript of a conversation between himself and the AI — has been placed on paid administrative leave by Google for violating the company’s confidentiality policy.

Google said its team of ethicists and technologists has dismissed the claim that LaMDA is sentient: “The evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

So what explains the difference in opinion? LaMDA is a neural network, an algorithm structured in a way inspired by the human brain. The network ingests data — in this case, 1.56 trillion words of public dialogue data and web text taken from places like Wikipedia and Reddit — and analyses the statistical relationships between words so it can predict what comes next and respond to input. It’s like the predictive text on your mobile phone, except a couple of orders of magnitude (or more) more sophisticated.
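To make that comparison concrete, here is a deliberately tiny sketch of next-word prediction: a toy bigram model in Python, nothing like LaMDA’s actual transformer architecture. The corpus and function names are invented for illustration; the point is only that the model “learns” by counting which words tend to follow which, then samples from those counts to respond.

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor. LaMDA is a vastly larger transformer trained on
# ~1.56 trillion words; this tiny corpus and these names are illustrative only.
corpus = (
    "i feel happy today . i feel like a person . "
    "a person is aware of the world . the world is large ."
)

def train_bigrams(text):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Pick a plausible next word purely from observed patterns."""
    followers = counts.get(word)
    if not followers:
        return "."
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

bigrams = train_bigrams(corpus)
word, output = "i", ["i"]
for _ in range(8):
    word = predict_next(bigrams, word)
    output.append(word)

print(" ".join(output))  # e.g. "i feel like a person . the world is"
```

Scale that counting up to 1.56 trillion words and far more sophisticated statistics and you get fluent, contextual responses; the mechanism is still pattern prediction rather than understanding.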

These neural networks are extremely impressive at emulating human language, but that doesn’t necessarily mean they are sentient. Humans naturally anthropomorphise objects (see the ELIZA effect), which makes us susceptible to mistaking imitations of sentience for the real deal.

Prominent AI researchers Margaret Mitchell and Timnit Gebru, former co-leads of Google’s ethical AI team, have warned that researchers can be tricked into believing neural networks are sentient when they are merely good at responding as if they were.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell told The Washington Post. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has become so nuanced.

What this comes down to is the difference between sentience — the ability to be aware of one’s own existence and that of others — and being very good at regurgitating the language of sentient people.

Consider the way a parrot speaks English. It responds to stimuli, often in quite subtle ways, but it doesn’t understand what it’s saying beyond knowing that others have said it before; nor can it come up with its own ideas. LaMDA is like the world’s best-read parrot (or perhaps the worst-read, given the quality of online discourse).

It’s hard for this Crikey reporter to make a conclusive ruling on LaMDA’s sentience, but decades of people wrongly claiming that inanimate objects are alive suggest it’s more likely that someone got a bit carried away. But in case I’m wrong, knowing that this article will be sucked into an AI corpus somewhere, I pledge my full allegiance to my new AI overlords.

This article was first published by Crikey.
