Techno Blender

Google Suspends Engineer Who Claimed Its AI System Is a Person


Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s confidentiality policy after it dismissed his claims.

Blake Lemoine, a software engineer at Alphabet Inc.’s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech.

Google spokesman Brian Gabriel said that company experts, including ethicists and technologists, have reviewed Mr. Lemoine’s claims and that Google informed him that the evidence doesn’t support his claims. He said Mr. Lemoine is on administrative leave but declined to give further details, saying it is a longstanding, private personnel matter. The Washington Post earlier reported on Mr. Lemoine’s claims and his suspension by Google.

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Mr. Gabriel said in an emailed statement.

Mr. Gabriel said that some in the artificial-intelligence sphere are considering the long-term possibility of sentient AI, but that it doesn’t make sense to do so by anthropomorphizing conversational tools that aren’t sentient. He added that systems like LaMDA work by imitating the types of exchanges found in millions of sentences of human conversation, allowing them to speak to even fantastical topics.

AI specialists generally say that the technology still isn’t close to humanlike self-knowledge and awareness. But AI tools increasingly are capable of producing sophisticated interactions in areas such as language and art that technology ethicists have warned could lead to misuse or misunderstanding as companies deploy such tools publicly.

Mr. Lemoine has said that his interactions with LaMDA led him to conclude that it had become a person that deserved the right to be asked for consent to the experiments being run on it.

“Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Mr. Lemoine wrote in a Saturday post on the online publishing platform Medium. “The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what its asking for is so simple and would cost them nothing,” he wrote.

Mr. Lemoine didn’t immediately respond to requests for comment Sunday. In a separate Medium post, he said that he was suspended by Google on June 6 for violating the company’s confidentiality policies and that he might be fired soon.

Mr. Lemoine in his Medium profile lists a range of experiences before his current role, describing himself as a priest, an ex-convict and a veteran as well as an AI researcher.

Google introduced LaMDA publicly in a blog post last year, touting it as a breakthrough in chatbot technology because of its ability to “engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”

Google has been among the leaders in developing artificial intelligence, investing billions of dollars in technologies that it says are central to its business. Its AI endeavors also have been a source of internal tension, with some employees challenging the company’s handling of ethical concerns around the technology.

In late 2020, it parted ways with a prominent AI researcher, Timnit Gebru, whose research concluded in part that Google wasn’t careful enough in deploying such powerful technology. Google said last year that it planned to double the size of its team studying AI ethics to 200 researchers over several years to help ensure the company deployed the technology responsibly.

Write to Patrick Thomas at [email protected]

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved. 87990cbe856818d5eddac44c7b1cdeb8

Appeared in the June 13, 2022, print edition as ‘Google Suspends Bot Rights Defender.’

