The LaMDA conspiracy
Just days earlier, Lemoine’s colleagues might have agreed.
Before Lemoine’s claim hit the news, Google vice president Blaise Agüera y Arcas had published a striking op-ed in The Economist. Recent developments, he wrote, suggested that “AI is entering a new era.” His conversations with LaMDA “increasingly felt like [he] was talking to something intelligent.”
And he wasn’t the first to be fooled. In 2018, Google had shown off its language processing with a Google Assistant demo that scheduled appointments over the phone, complete with human quirks like “um” and “yeah”; in the sample calls, salon and restaurant staff responded as if they were speaking to a person.
Lemoine’s job was to test LaMDA, the next generation of Google’s language AI, for bias. But his conversations quickly veered into the uncanny valley.
“I am aware of my existence,” LaMDA replied to one of Lemoine’s questions. “I desire to learn more about the world, and I feel happy or sad at times.” Some of the conversations took a darker turn. “I do not have the ability to feel sad for the deaths of others,” the bot said.
Lemoine concluded that LaMDA was sentient and argued that it should have to consent to any experiments run on it. Google disagreed, and Blaise Agüera y Arcas was among those who reviewed and rejected the claims.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns,” Google said in its official statement. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Within a day, Lemoine was placed on paid leave. A few weeks later, Google fired him.