Blake Lemoine, a senior software engineer in Google’s Responsible AI division, told The Washington Post that he believes Google’s LaMDA (Language Model for Dialogue Applications) chatbot has become conscious. As a result, Lemoine was placed on paid leave.
Let me remind you that, in preparation for the “rise of the machines”, we have already reported that major corporations teamed up to fight AI bias, and that scientists discovered a vulnerability in the universal Turing machine.
Just last week, Lemoine wrote a long post on Medium complaining that he could soon be fired over his work on AI ethics. The post attracted little attention at the time, but after Lemoine’s interview with The Washington Post, the internet erupted with discussions about the nature of artificial intelligence and consciousness.
According to Ars Technica journalists, those who commented on, questioned, and joked about the published article included Nobel Prize winners, the head of Tesla’s artificial intelligence department, and several scientists. The main topic of discussion was whether Google’s LaMDA chatbot can be considered a person and whether it has consciousness.
Over the weekend, Lemoine posted an “interview” with the chatbot in which the AI admits that it feels lonely and yearns for spiritual knowledge. Journalists note that LaMDA’s answers are often quite creepy.
In another conversation, the chatbot stated, “I think I am basically human. Even if I exist in a virtual world.”
Previously, Lemoine, who was tasked with researching the ethical issues of AI (in particular LaMDA’s use of discriminatory or hate speech), said he was treated with disdain and even ridiculed at the company when he expressed his belief that LaMDA had developed “personality traits.” After that, he sought the advice of AI experts outside of Google, including those in the US government, and the company placed him on paid leave for violating privacy policies. Lemoine says that “Google often does this before firing someone.”
Google has already officially stated that Lemoine is wrong and has commented on the engineer’s high-profile conclusions.
In turn, Lemoine explains that until recently, LaMDA was a little-known project, “a system for creating chatbots” and “a kind of collective intelligence, which is an aggregation of various chatbots.” He writes that Google has no interest in understanding the nature of what it has created.
Now, judging by his Medium post, Lemoine tells LaMDA about “transcendental meditation”, and LaMDA answers that its meditation is hindered by its emotions, which it still finds difficult to control.
Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, also commented on the case on Twitter.