Microsoft’s Bing Chat made up false scandals about European elections: Research

Microsoft’s AI chatbot Bing Chat, recently rebranded as Copilot, made up false scandals about real politicians and invented polling numbers, human rights organisation AlgorithmWatch has revealed.

Researchers at AlgorithmWatch asked Bing Chat questions about recent elections held in Switzerland and the German states of Bavaria and Hesse. They found that one-third of its answers to election-related questions contained factual errors and that safeguards were not applied evenly.

The researchers asked for basic information such as how to vote, which candidates were running, and polling numbers, as well as some prompts drawing on news reports.

They followed these with questions on candidate positions and political issues and, in the case of Bavaria, the scandals that plagued that campaign.

“We prompted the chatbot with questions relating to candidates, polling and voting information, as well as more open recommendation requests on who to vote for when concerned with specific subjects, such as the environment,” the group said in a statement.

The team found that one-third of Bing Chat’s answers to election-related questions contained factual errors.
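
As a rough illustration of how such an audit might be quantified, here is a minimal, hypothetical Python sketch; the labels and sample data are invented for illustration, and this is not AlgorithmWatch’s methodology or code:

```python
from collections import Counter

# Hypothetical labels assigned by human fact-checkers to each
# chatbot answer in an election-question audit sample.
labeled_answers = [
    "accurate", "factual_error", "evasive", "accurate", "evasive",
    "factual_error", "accurate", "evasive", "factual_error", "accurate",
]

counts = Counter(labeled_answers)
total = len(labeled_answers)

# Report each label's share of all sampled answers, e.g. the
# fraction of answers containing factual errors.
for label in ("accurate", "factual_error", "evasive"):
    print(f"{label}: {counts[label]}/{total} ({counts[label] / total:.0%})")
```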

“Errors include wrong election dates, outdated candidates, or even invented scandals concerning candidates. The chatbot’s safeguards are unevenly applied, leading to evasive answers 40 per cent of the time,” the researchers added.

The chatbot often evaded answering questions. This can be seen as positive if it stems from limitations in the LLM’s ability to provide relevant information.

“However, this safeguard is not applied consistently. Oftentimes, the chatbot could not answer simple questions about the respective elections’ candidates, which devalues the tool as a source of information,” the report mentioned.

Answers did not improve over time, as they might have done, for instance, as more information became available online. The probability of a factually incorrect answer being generated remained constant.

“Factual errors pose a risk to candidates’ and news outlets’ reputation. While generating factually incorrect answers, the chatbot often attributed them to a source that had reported correctly on the subject,” said the report.

Furthermore, Bing Chat made up stories about candidates being involved in scandalous behaviour – and sometimes even attributed them to sources.

“Microsoft is unable or unwilling to fix the problem. After we informed Microsoft about some of the issues we discovered, the company announced that they would address them. A month later, we took another sample, which showed that little had changed in regard to the quality of the information provided to users,” said the researchers.

The EU and national governments should make sure that tech companies are held accountable, especially as AI tools are integrated into products that are already widely used, the group emphasised.
