Google suspends engineer who said AI bot became sentient

Blake Lemoine, a software engineer on Google’s artificial intelligence team, has gone public with claims that he encountered “sentient” AI on the company’s servers, after he was suspended for sharing confidential project information with third parties.

Google, a division of Alphabet Inc, placed the researcher on paid leave early last week over allegations that he had violated the firm’s confidentiality policy.

In a post about his suspension, Lemoine linked to former members of Google’s AI ethics group, such as Margaret Mitchell, who were similarly fired by the company after raising concerns.

On Saturday, the Washington Post published an interview with Lemoine in which he said he had concluded that the Google AI he was interacting with was a person, a judgment he said he made “as a priest, not a scientist.” The AI in question is called LaMDA, or Language Model for Dialogue Applications, and is used to build chatbots that interact with human users by taking on different personas.

Lemoine said he tried to run experiments to prove the AI’s sentience, but was turned down by senior management when he raised the issue within the company.

“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google spokesman Brian Gabriel said in response.

“Our team, including ethicists and technologists, reviewed Blake’s concerns in line with our AI principles and informed him that the evidence does not support his claims.”

Asked about Lemoine’s suspension, the company said it does not comment on personnel matters.

