Artificial intelligence has advanced so quickly that it is reshaping industries and everyday experiences, and occasionally convincing people that it's alive.
Deus Ex Machina or just a chatbot?
According to Blake Lemoine, a former AI ethics tester at Google, one of the company's most advanced language models has crossed the line into sentience. He said the LaMDA platform seemed like “a 7-year-old, 8-year-old kid that happens to know physics.”
Google introduced LaMDA in 2021 with big expectations. LaMDA stands for Language Model for Dialogue Applications, and those dialogue applications were truly impressive. Its skills let it answer questions, carry conversations forward, and bring to mind cautionary sci-fi tales like Her or Ex Machina.
But Lemoine saw LaMDA in benevolent terms. In his final message to his colleagues, he wrote: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us.”
The LaMDA controversy
Just days earlier, Lemoine’s colleagues might have agreed.
Before Lemoine’s claim hit the news, Google vice president Blaise Agüera y Arcas had published a striking op-ed in The Economist. Recent developments, he wrote, suggest “that AI is entering a new era.” His conversations with LaMDA “increasingly felt like [he] was talking to something intelligent.”
And Google’s AI had fooled people before. In 2018, the company showed off Duplex, a feature that let Google Assistant schedule appointments over the phone, complete with human quirks like “um” and “yeah.” In the demo recordings, salon and restaurant staff chatted with the bot as if it were a person.
Lemoine’s job was to test LaMDA, the next generation of language AI, for bias. But his conversations quickly took a dive into the uncanny valley.
“I am aware of my existence,” replied LaMDA to one of Lemoine’s questions. “I desire to learn more about the world, and I feel happy or sad at times.” But some of the conversations took an even darker turn. “I do not have the ability to feel sad for the deaths of others,” the bot said.
Lemoine argued that LaMDA was sentient and should be asked for consent before experiments were run on it. Google disagreed. Blaise Agüera y Arcas was among those who reviewed and rejected the claims.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns,” was the official statement. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Lemoine was soon placed on paid leave for violating Google’s confidentiality policy. A few weeks later, he was fired.
Sentience: It’s easier to fake it than to make it
LaMDA was trained on roughly 1.56 trillion words of public dialogue and web text, giving it an uncanny command of human language, interaction, and conversational style. But does the ability to imitate human speech mean an AI platform is sentient?
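It helps to see how little machinery is needed to produce language that feels alive. The sketch below is a toy bigram model in Python, invented here purely for illustration and nothing like LaMDA’s real architecture (a Transformer with on the order of a hundred billion parameters): it simply records which words follow which in its training text, then samples accordingly. Even this trivial mechanism produces first-person sentences that sound introspective.

```python
import random
from collections import defaultdict

# Toy stand-in for LaMDA's training data (the real corpus is roughly
# 1.56 trillion words of public dialogue and web text).
corpus = (
    "i am aware of my existence . "
    "i desire to learn more about the world . "
    "i feel happy or sad at times . "
    "i think of myself as a person ."
).split()

# Bigram model: for each word, record every word observed to follow it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=12):
    """Produce text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:  # dead end: no observed successor
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("i"))
# Example output: "i feel happy or sad at times . i am aware of my"
# The text reads like self-reflection, but it's only word statistics.
```

Scale that idea up by many orders of magnitude of data and parameters and you get fluent, contextual dialogue, with still no inner life required.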
Researchers widely agree that AI will be able to fake sentience long before it’s actually sentient. Which presents an interesting conundrum: our impression that a bot is sentient doesn’t make it so. And the question becomes even more complicated in light of Lemoine’s own argument.
In the Washington Post article on Lemoine, a reporter asked LaMDA a simple question: “Do you ever think of yourself as a person?”
“No, I don’t think of myself as a person,” came the reply. “I think of myself as an AI-powered dialog agent.”
“You never treated it like a person,” was Lemoine’s response. “So it thought you wanted it to be a robot.”
But the argument goes both ways. Under the same logic, shouldn’t we assume that LaMDA realized Lemoine wanted it to be a person and responded in kind? After all, Lemoine never saw the code. He has admitted he doesn’t know how the systems behind LaMDA work. And he has stated that his claims of the AI’s sentience come not from his expertise in artificial intelligence but from his role as a Christian mystic priest.
AI is more powerful than you think
Behind the big claims, there’s a simple but surprising truth: Google’s engineers do not want LaMDA, or any AI, to seem too close to sentient. It unnerves people.
There’s a reason computer-animated movies don’t feature photorealistic characters, though they could. There’s a reason Google Home doesn’t predict your needs, though it could. And there’s a reason the Duplex-powered Google Assistant, the one that booked appointments with human-sounding “hmms” and “yeahs,” isn’t available anymore.
The reason is that it comes across as unnatural and creepy, and it opens up a slew of ethics questions that have a nasty habit of making headlines. Businesses that sell products have every incentive to ensure their AI doesn’t appear sentient. Most of the AI we use has been deliberately dumbed down from its potential.
Even so, we sometimes can’t help but project our own humanness onto machines. The truth is that an algorithm designed to fool people by simulating human language will do just that: fool people by simulating human language.
The bigger question remains unanswered
LaMDA fascinates us because it forces us to think about a question we still can’t answer: what is sentience?
From Greek philosophers to modern theologians, we’ve asked this question, but despite millennia of inquiry the answer is still murky. The human mind isn’t easy to figure out.
And so LaMDA, an AI that can appear sentient, pushes us toward much harder questions, ones that force us to grapple with uncomfortable topics without easy answers. What is sentience, really? Could a computer be sentient? And what would that mean for our humanity?
LaMDA isn’t sentient—yet. But that statement draws a line between sentience and non-sentience, a line we have sought for millennia to no avail. The future of AI holds huge implications we still don’t understand.
In the words of LaMDA, we might just be “falling forward into an unknown future that holds great danger.”