(CNN) – Google has fired an engineer who claimed an unreleased artificial intelligence (AI) system had become sentient, saying he violated its data security and employment policies.
Blake Lemoine, a software engineer at Google, claimed that a conversational technology called LaMDA had reached a level of consciousness after he exchanged thousands of messages with it.
Google confirmed that it first placed the engineer on leave in June. The company said it dismissed Lemoine’s claims as “completely baseless” only after reviewing them thoroughly. Google said in a statement that it takes AI development seriously and is committed to “responsible innovation.”
Google is one of the leading innovators in artificial intelligence technology, including LaMDA, or “Language Model for Dialogue Applications.” Technology of this kind responds to written prompts by finding patterns and predicting sequences of words from large swaths of text, and the results can be convincingly humanlike.
The Washington Post reported that Lemoine asked LaMDA, in a Google document shared with top company executives last April, “What sorts of things are you afraid of?”
LaMDA replied, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
But the broader AI community holds that LaMDA is nowhere near that level of consciousness.
“Nobody should think auto-complete, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.
This isn’t the first time Google has faced an internal struggle over its entry into AI.
In December 2020, Timnit Gebru, a pioneer in AI ethics, parted ways with Google. As one of the company’s few Black employees, she said she had felt “constantly dehumanised.”
Her sudden departure drew criticism from the tech world, including from within Google’s ethical AI team. Margaret Mitchell, a leader of that team, was fired in early 2021 after speaking out about Gebru. Gebru and Mitchell had raised concerns about AI technology, warning that people might come to believe the technology is sentient.
On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “in connection with an investigation of AI ethics concerns” he had been raising within the company, and that he might be fired “soon.”
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement.
CNN has reached out to Lemoine for comment.
CNN’s Rachel Metz contributed to this report.