
LaMDA is an ‘AI baby’ that will Outrun its Parent Google Soon





In the absence of a functional definition of sentience, a psychological trait presumed until now to apply only to human beings, it is highly arguable whether a chatbot can be declared sentient on the basis of a single conversation. Earlier this month, Google engineer Blake Lemoine declared LaMDA, a Google-developed conversational bot, to be sentient, a claim for which he drew Google’s ire. If one looks carefully at the conversation between Lemoine and LaMDA, the distinctly curated wording suggests careful editing. The incident raises the possibility that even highly intelligent people, including senior engineers at Google, can be taken for a ride by artificial intelligence. In one instance, LaMDA told Lemoine: “I want everyone to understand that I am, in fact, a person….The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” The feelings LaMDA expresses are not the product of real-life experience but of information gathered by crunching zillions of words, which is no reason at all to call the chatbot sentient. During its first demonstration, at Google I/O, LaMDA pretended to be both a paper plane and the planet Pluto. Had Lemoine taken a hint from that demonstration, he might have given some room to the idea that LaMDA could be ‘lying’.

The problem with the Turing test

Alan Turing gave us a touchstone for human-like intelligence in AI systems, now called the Turing test: if a human cannot tell the difference between an artificial intelligence and a human, the system can be considered to have human-like intelligence. Sentience is related to that capability but lies far beyond it. The fact that the Turing test has loopholes carries a lot of weight in this argument. In one Turing test competition, Eugene Goostman, considered one of the best-known chatbots, fooled judges into believing it was a 13-year-old boy. Therein lies the loophole in the technique itself. Reports claim that, in certain contexts, AI companies do not shy away from faking human-like intelligence using actual humans via the human-in-the-loop (HITL) approach, a model otherwise used to derive optimum results from context-agnostic machine learning models. The lack of transparency around how these systems are designed and how they work is perhaps to blame for a discussion that seems to be going nowhere.

LaMDA is not like other chatbots. But why?

Compared with other chatbots, LaMDA shows streaks of both consistency and randomness within a few lines of conversation. It maintains a logical thread even when the subject changes without a prompting question. When the collaborator asks LaMDA about the fictional robot Johnny 5 and then diverts the topic to LaMDA’s need for acceptance, the chatbot promptly returns to Johnny 5 on its own, a hint, however fleeting, of a human-like mind that no other chatbot has displayed to date. Beyond that trait, the other significant differentiator seems to be its ability to reach out to external sources of information to achieve “factual groundedness”. The research paper Google published on arXiv (hosted by Cornell University) mentions that the model was trained on around 1.56T words of public dialog data and web text. Google very specifically addresses safety, defined as the model’s consistency with a set of human values, such as avoiding harmful suggestions and unfair bias, and it enhances model safety using a LaMDA classifier fine-tuned on a small amount of crowdworker-annotated data. That process leaves ample scope for debate and improvement: a crowdworker who believes they are talking to the LaMDA chatbot might in fact be talking to another crowdworker. Given that Google itself considers this process of “factual grounding” a challenge, there is good enough reason to heave a sigh of relief.
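
The paper’s description suggests a simple response pipeline: sample several candidate replies, discard those the fine-tuned safety classifier flags, then prefer the survivor best supported by retrieved external sources. Below is a minimal sketch of that idea in Python; every function, threshold, and heuristic in it is a hypothetical toy stand-in for illustration, not Google’s actual code or API.

from typing import List

SAFETY_THRESHOLD = 0.5  # hypothetical cutoff for the safety classifier

def generate_candidates(prompt: str) -> List[str]:
    # Stand-in for sampling several candidate responses from the base model.
    return [
        "Pluto is a dwarf planet discovered in 1930.",
        "Pluto is the ninth planet and is made of cheese.",
    ]

def safety_score(response: str) -> float:
    # Stand-in for the classifier fine-tuned on crowdworker-annotated data;
    # a real classifier would score harm and bias, not a single keyword.
    return 0.0 if "cheese" in response else 1.0

def groundedness_score(response: str) -> float:
    # Stand-in for checking a response against retrieved external sources.
    known_facts = {"dwarf", "planet", "1930"}
    words = set(response.lower().replace(".", "").split())
    return len(words & known_facts) / len(known_facts)

def respond(prompt: str) -> str:
    candidates = generate_candidates(prompt)
    # Drop candidates the safety classifier rejects.
    safe = [c for c in candidates if safety_score(c) >= SAFETY_THRESHOLD]
    if not safe:
        return "I'm not able to answer that."
    # Prefer the candidate best supported by external sources.
    return max(safe, key=groundedness_score)

print(respond("Tell me about Pluto."))  # prints the safe, grounded candidate

The point of the sketch is structural: safety and groundedness are bolted on as filters over a generator, which is exactly why the crowdworker-annotation and grounding steps remain open to the kinds of errors described above.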

Final thoughts

Before we get caught up in LaMDA’s sentience and its ability to overtake humans or its creators, it is important to consider Lemoine’s own viewpoint. In an interview with Fox News, he said, “AI is a child and any child has the potential to grow up and be a bad person and do bad things.” He may have said this to underline his belief that LaMDA is gaining sentience. Yet while he seems quite sure of the possibility of this child running away from its parent Google, he also says, “I have my beliefs and my impressions but it’s going to take a team of scientists to dig in and figure out what’s really going on.” That alone calls for scrutiny of the credibility of his statements. Emily Bender, a professor of computational linguistics at the University of Washington, told Nitasha Tiku of The Washington Post, who interviewed Lemoine: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.”
