
Researchers believe AI chatbots should be more confrontational

Spend any time interacting with AI chatbots and their tone can start to grate. No question is too taxing or intrusive for the noncorporeal assistants, and if you probe too far under the hood, the bot responds in platitudes designed to deaden the interaction.

Nearly a year and a half into the generative-AI revolution, researchers are starting to wonder whether that deathly dull format is the best approach.

“There was something off with the tone and values that were being embedded in large language models,” says Alice Cai, a researcher at Harvard University. “It felt very paternalistic.” Beyond that, Cai says, it felt overly Americanized, imposing norms of consensus-seeking, agreeable, often saccharine interaction that aren’t shared by the entire world.

In Cai’s household growing up, criticism was commonplace—and healthy, she says. “It was used as a way to incite growth, and honesty was a really important currency of my family unit.” That prompted her and her colleagues at Harvard and the University of Montreal to explore whether a more antagonistic AI design would better serve users.

In their study, published in the open-access repository arXiv, the academics conducted a workshop asking participants to imagine what a human personification of the current crop of generative-AI chatbots would look like if brought to life. The answer: a white, middle-class customer service representative with a rictus smile and an unflappable attitude—and, clearly, not always the best approach. “We humans don’t just value politeness,” says Ian Arawjo, assistant professor in human-computer interaction at the University of Montreal and one of the study’s coauthors.

Indeed, says Arawjo, “in many different domains, antagonism, broadly construed, is good.” The researchers suggest that an AI coded to be antagonistic, rather than servile and sickeningly agreeable, could help users confront their assumptions, build resilience, and develop healthier relational boundaries.

One potential deployment the researchers identified for a confrontational AI is intervention: shaking a user out of a bad habit. “We had a team come up with an interventional system that could recognize when you were doing something that you might consider a bad habit,” says Cai. “And it does use a confrontational coaching approach that you often see used in sports, or sometimes in self-help.”

However, Arawjo points out that confrontational AIs would require careful oversight and regulation, especially if they were deployed in sensitive areas such as behavioral intervention.

But the research team has been surprised by the positive response to its suggestion of retooling AIs to be a little less polite. “I think the time has come for this kind of idea and exploring these systems,” says Arawjo. “And I would really like to see more empirical investigations so we can start to tease out how you actually do this in practice, and where it could be beneficial and where it might not be—or what the trade-offs are.”





