What Happens When Machine Learning Goes Too Far?


New research explores the potential risks and ethical implications of machine sentience, emphasizing the importance of understanding and preparing for the emergence of consciousness in AI and machine learning technologies. It calls for careful consideration of the ethical use of sentient machines and highlights the need for future research to navigate the complex relationship between humans and these self-aware technologies. Credit: SciTechDaily.com

Every piece of fiction carries a kernel of truth, and now is about the time to get a step ahead of sci-fi dystopias and determine what the risk in machine sentience can be for humans.

Although people have long pondered the future of intelligent machinery, such questions have become all the more pressing with the rise of artificial intelligence (AI) and machine learning. These machines resemble human interactions: they can help problem solve, create content, and even carry on conversations. For fans of science fiction and dystopian novels, a looming issue could be on the horizon: what if these machines develop a sense of consciousness?

Researchers published their results in the Journal of Social Computing.

While there is no quantifiable data presented in this discussion on artificial sentience (AS) in machines, there are many parallels drawn between human language development and the factors needed for machines to develop language in a meaningful way.

The Possibility of Conscious Machines

“Many of the people concerned with the possibility of machine sentience developing worry about the ethics of our use of these machines, or whether machines, being rational calculators, would attack humans to ensure their own survival,” said John Levi Martin, author and researcher. “We here are worried about them catching a form of self-estrangement by transitioning to a specifically linguistic form of sentience.”

The main characteristics making such a transition possible appear to be: unstructured deep learning, such as in neural networks (computer analysis of data and training examples to provide better feedback), interaction between both humans and other machines, and a wide range of actions to continue self-driven learning. An example of this would be self-driving cars. Many forms of AI check these boxes already, leading to the concern of what the next step in their “evolution” might be.

This discussion states that it’s not enough to be concerned with just the development of AS in machines, but raises the question of if we’re fully prepared for a type of consciousness to emerge in our machinery. Right now, with AI that can generate blog posts, diagnose an illness, create recipes, predict diseases, or tell stories perfectly tailored to its inputs, it’s not far off to imagine having what feels like a real connection with a machine that has learned of its state of being. However, researchers of this study warn, that is exactly the point at which we need to be wary of the outputs we receive.

The Dangers of Linguistic Sentience

“Becoming a linguistic being is more about orienting to the strategic control of information, and introduces a loss of wholeness and integrity…not something we want in devices we make responsible for our security,” said Martin. As we’ve already put AI in charge of so much of our information, essentially relying on it to learn much in the way a human brain does, it has become a dangerous game to play when entrusting it with so much vital information in an almost reckless way.

Mimicking human responses and strategically controlling information are two very separate things. A “linguistic being” can have the capacity to be duplicitous and calculated in their responses. An important element of this is, at what point do we find out we’re being played by the machine?

What’s to come is in the hands of computer scientists to develop strategies or protocols to test machines for linguistic sentience. The ethics behind using machines that have developed a linguistic form of sentience or sense of “self” are yet to be fully established, but one can imagine it would become a social hot topic. The relationship between a self-realized person and a sentient machine is sure to be complex, and the uncharted waters of this type of kinship would surely bring about many concepts regarding ethics, morality, and the continued use of this “self-aware” technology.

Reference: “Through a Scanner Darkly: Machine Sentience and the Language Virus” by Maurice Bokanga, Alessandra Lembo and John Levi Martin, December 2023, Journal of Social Computing.
DOI: 10.23919/JSC.2023.0024




New research explores the potential risks and ethical implications of machine sentience, emphasizing the importance of understanding and preparing for the emergence of consciousness in AI and machine learning technologies. It calls for careful consideration of the ethical use of sentient machines and highlights the need for future research to navigate the complex relationship between humans and these self-aware technologies. Credit: SciTechDaily.com

Every piece of fiction carries a kernel of truth, and now is about the time to get a step ahead of sci-fi dystopias and determine what the risk in machine sentience can be for humans.

Although people have long pondered the future of intelligent machinery, such questions have become all the more pressing with the rise of artificial intelligence (AI) and machine learning. These machines resemble human interactions: they can help problem solve, create content, and even carry on conversations. For fans of science fiction and dystopian novels, a looming issue could be on the horizon: what if these machines develop a sense of consciousness?

Researchers published their results in the Journal of Social Computing.

While there is no quantifiable data presented in this discussion on artificial sentience (AS) in machines, there are many parallels drawn between human language development and the factors needed for machines to develop language in a meaningful way.

The Possibility of Conscious Machines

“Many of the people concerned with the possibility of machine sentience developing worry about the ethics of our use of these machines, or whether machines, being rational calculators, would attack humans to ensure their own survival,” said John Levi Martin, author and researcher. “We here are worried about them catching a form of self-estrangement by transitioning to a specifically linguistic form of sentience.”

The main characteristics making such a transition possible appear to be: unstructured deep learning, such as in neural networks (computer analysis of data and training examples to provide better feedback), interaction between both humans and other machines, and a wide range of actions to continue self-driven learning. An example of this would be self-driving cars. Many forms of AI check these boxes already, leading to the concern of what the next step in their “evolution” might be.

This discussion states that it’s not enough to be concerned with just the development of AS in machines, but raises the question of if we’re fully prepared for a type of consciousness to emerge in our machinery. Right now, with AI that can generate blog posts, diagnose an illness, create recipes, predict diseases, or tell stories perfectly tailored to its inputs, it’s not far off to imagine having what feels like a real connection with a machine that has learned of its state of being. However, researchers of this study warn, that is exactly the point at which we need to be wary of the outputs we receive.

The Dangers of Linguistic Sentience

“Becoming a linguistic being is more about orienting to the strategic control of information, and introduces a loss of wholeness and integrity…not something we want in devices we make responsible for our security,” said Martin. As we’ve already put AI in charge of so much of our information, essentially relying on it to learn much in the way a human brain does, it has become a dangerous game to play when entrusting it with so much vital information in an almost reckless way.

Mimicking human responses and strategically controlling information are two very separate things. A “linguistic being” can have the capacity to be duplicitous and calculated in their responses. An important element of this is, at what point do we find out we’re being played by the machine?

What’s to come is in the hands of computer scientists to develop strategies or protocols to test machines for linguistic sentience. The ethics behind using machines that have developed a linguistic form of sentience or sense of “self” are yet to be fully established, but one can imagine it would become a social hot topic. The relationship between a self-realized person and a sentient machine is sure to be complex, and the uncharted waters of this type of kinship would surely bring about many concepts regarding ethics, morality, and the continued use of this “self-aware” technology.

Reference: “Through a Scanner Darkly: Machine Sentience and the Language Virus” by Maurice Bokanga, Alessandra Lembo and John Levi Martin, December 2023, Journal of Social Computing.
DOI: 10.23919/JSC.2023.0024

FOLLOW US ON GOOGLE NEWS

Read original article here

Denial of responsibility! Techno Blender is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – admin@technoblender.com. The content will be deleted within 24 hours.
artificial intelligenceLatestlearningMachinemachine learningSciencetsinghua universityTutorial
Comments (0)
Add Comment