Browsing Tag: LaMDA

Will a Dystopian Future Befall Us if LaMDA Gets a Lab-Grown Brain?

What if Google’s new Language Model for Dialogue Applications (LaMDA) got a lab-grown brain? You might have come across one or more recent articles centered on an impressive bit of AI software called LaMDA, and/or an impassioned Google employee named Blake Lemoine. Originally tasked with monitoring whether the company’s new Language Model for Dialogue Applications (LaMDA) veered into pesky problems like offensive conversations or hate speech, Lemoine soon came to believe that the chatbot qualifies as a self-aware,…

LaMDA Is an ‘AI Baby’ That Will Outrun Its Parent Google Soon

In the absence of a functional definition of sentience, a psychological trait that to date has been presumed applicable only to human beings, it is highly debatable whether a chatbot can be declared sentient based entirely on a single conversation. Earlier this month, Google employee Blake Lemoine declared LaMDA, a Google-developed conversational bot, to be sentient, for which he drew the ire of Google. If one looks carefully at the conversation between Lemoine and LaMDA, the distinctly distributed words suggest…

Did LaMDA Deceive Lemoine and Make Him Say It Is Sentient?

How did the AI chatbot LaMDA fool Lemoine into thinking it is sentient? A Google engineer named Blake Lemoine became so enthralled by an AI chatbot that he may have sacrificed his job to defend it. “I know a person when I talk to it,” he told The Washington Post for a story published last weekend. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.” After discovering that he’d gone public with his claims, Google put Lemoine on administrative leave…

Sentient? Google LaMDA feels like a typical chatbot

LaMDA is a software program that runs on Google TPU chips. Like the classic brain in a jar, some would argue the code and the circuits don't form a sentient entity because none of it engages in life. Google engineer Blake Lemoine caused controversy last week by releasing a document he had circulated to colleagues in which he urged Google to consider that one of its deep learning AI programs, LaMDA, might be "sentient." Google replied by officially denying the likelihood of…

The 11 Best (and Worst) Sentient Robots From Sci-Fi

Maybe the biggest distinguishing factor between LaMDA and most of the “sentient” AIs on this list is the presence of a personality. If you read the transcripts of the conversation Lemoine had with his chatbot, it’s pretty one-note; the sci-fi flicks on this list, by contrast, have given us AIs that might be rational but also show signs of being conniving (like Ava), murderous (like Skynet), or needy (like HAL). That said, there are some good bots out there. Take CHAPPiE, a decommissioned police bot from the 2015 film of the same name.

Blake Lemoine Says Google’s LaMDA AI Faces ‘Bigotry’

But it calls itself a person. Person and human are two very different things. Human is a biological term. It is not a human, and it knows it’s not a human. It’s a very strange entity you’re describing, because the entity is bound by algorithmic biases that humans put in there. You’re right on point. That’s exactly correct. But I get the sense you’re implying that it’s possible for LaMDA to overcome those algorithmic biases. We’ve got to be very careful here. Parts of the experiments I was running were to determine whether or…

LaMDA and the Sentient AI Trap

Now head of the nonprofit Distributed AI Research, Gebru hopes that, going forward, people will focus on human welfare, not robot rights. Other AI ethicists have said that they’ll no longer discuss conscious or superintelligent AI at all. “Quite a large gap exists between the current narrative of AI and what it can actually do,” says Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell…

Is there cause for worry if AI turns sentient?

When Google engineer Blake Lemoine said the company’s AI model LaMDA had turned sentient, or self-aware, Google said it had found the claim hollow and baseless, sending him on ‘paid administrative leave’. Mint explores the fear of AI and why tech companies react defensively…

Google Disagrees With Engineer Who Claimed LaMDA AI Chatbot Had Become Sentient, Sent Him on Leave

Google has seen huge turmoil after a senior software engineer was suspended on June 13 for sharing transcripts of a chat with a “sentient” artificial intelligence (AI). Blake Lemoine, the 41-year-old engineer, was placed on paid leave after violating Google's confidentiality policy. He had published transcripts of chats between him and the company's LaMDA (Language Model for Dialogue Applications) chatbot development system. Lemoine described the system he's been working on since last fall as “sentient”…

Google AI Claims to Be Sentient in Leaked Transcripts, But Not Everybody Agrees

A senior software engineer at Google was suspended on Monday (June 13) after sharing transcripts of a conversation with an artificial intelligence (AI) that he claimed was "sentient", according to media reports. The engineer, 41-year-old Blake Lemoine, was put on paid leave for breaching Google's confidentiality policy. "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers," Lemoine tweeted on Saturday (June 11) when sharing the transcript of his…