Blake Lemoine, a software engineer at Google, said that a conversational AI system called LaMDA had achieved sentience after he exchanged thousands of messages with it.
Google confirmed that it had first placed the engineer on paid leave in June. The company said it dismissed Lemoine's "wholly unfounded" claims only after reviewing them extensively. He had reportedly worked at Alphabet for seven years. In a statement, Google said it takes AI development "very seriously" and is committed to "responsible innovation."
Google is one of the leaders in cutting-edge artificial intelligence technology, including LaMDA, or "Language Model for Dialogue Applications." Systems like it respond to written prompts by finding patterns and predicting sequences of words from large swaths of text, and the results can be unsettling to humans.
In one exchange, LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."
But the broader AI community holds that LaMDA is nowhere near consciousness.
This isn’t the first time Google has faced internal conflict over its forays into AI.
"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement.
Lemoine said he was consulting with lawyers and was unavailable for comment.
CNN’s Rachel Metz contributed to this report.