
Machines can make better decisions than humans, but how do we know when they’re actually accurate?



Credit: Pixabay/CC0 Public Domain

Machines can make better decisions than humans, but humans often struggle to tell when the machine's decision making is actually more accurate, and they end up overriding the algorithm's decisions for the worse, according to new research by ESMT Berlin.

This phenomenon is known as algorithm aversion and is often attributed to an inherent mistrust of machines. However, systematically overriding an algorithm does not necessarily stem from algorithm aversion. The new research shows that the very context in which a human decision maker works can also prevent them from learning whether a machine produces better decisions.

These findings come from research by Francis de Véricourt and Huseyin Gurkan, both professors of management science at ESMT Berlin. The researchers wanted to determine under which conditions a human decision maker, supervising a machine making critical decisions, could properly assess whether the machine produces better recommendations. To do this, the researchers set up an analytical model where a human decision maker supervised a machine tasked with important decisions, such as whether to perform a biopsy on a patient. The human decision maker then made the best choice based on the information they received from the machine for each task.

The researchers found that if a human decision maker heeded the machine's recommendation and it proved correct, the human would trust the machine more. But the human sometimes did not observe whether the machine's recommendation was correct, for instance when the decision maker chose not to take any follow-up action. In that case, there was no change in trust and no lesson learned. This interaction between the human's decisions and the human's assessment of the machine creates biased learning: over time, the decision maker might never learn how to use the machine effectively.
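To make this censored-feedback dynamic concrete, here is a minimal Python sketch. It is not the authors' analytical model; the threshold decision rule (follow the machine only when trust exceeds 0.5), the machine's accuracy, the prior trust levels, and the size of the trust update are all assumptions chosen purely for illustration.

```python
# Toy illustration of censored feedback, NOT the ESMT model: the human updates
# trust in the machine only when they follow its advice and observe the outcome.
import random

def simulate(machine_accuracy=0.8, prior_trust=0.4, rounds=200, seed=0):
    rng = random.Random(seed)
    trust = prior_trust          # human's estimate of how often the machine is right
    followed = 0
    for _ in range(rounds):
        machine_correct = rng.random() < machine_accuracy
        if trust > 0.5:          # assumed rule: heed the recommendation only if trusted
            followed += 1
            # the outcome is observed (e.g., the biopsy is performed), so trust updates
            trust += 0.05 * ((1.0 if machine_correct else 0.0) - trust)
        # else: the human overrides, no follow-up action is taken, the outcome is
        # never observed, and trust stays exactly where it was (biased learning)
    return trust, followed

for prior in (0.4, 0.6):
    final_trust, followed = simulate(prior_trust=prior)
    print(f"prior trust {prior:.1f}: final trust {final_trust:.2f}, "
          f"followed machine in {followed} of 200 rounds")
```

Under these assumed parameters, a decision maker whose initial trust sits below the threshold never follows the machine, never observes an outcome, and therefore never revises their belief, while one who starts just above the threshold quickly converges toward the machine's true accuracy.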

These findings show that it is not always an inherent mistrust of machines that leads humans to override algorithmic decisions. Over time, this biased learning can be reinforced by consistent overriding, which may result in machines being used incorrectly and ineffectively in decision making.

“Often, we see a tendency for humans to override algorithms, which is typically attributed to an intrinsic mistrust of machine-based predictions,” says Prof. de Véricourt. “This bias, however, may not be the sole reason for inappropriately and systematically overriding an algorithm. It may also be the case that we are simply not learning how to use machines effectively when our learning is based solely on the correctness of the machine’s predictions.”

These findings show that trust in a machine’s decision-making ability is key to ensuring that we learn to use such machines effectively and that the accuracy with which we use them improves.

“Our research shows that there is clearly a lack of opportunities for human decision makers to learn from a machine’s intelligence unless they account for its advice continually,” says Prof. Gurkan. “We need to adopt ways of complete learning with the machines constantly, not just selectively.”

The researchers say that these findings shed light on the importance of collaboration between humans and machines and guide us on when (and when not) to trust machines. By studying such situations, we can learn when it is best to listen to the machine and when it is better to make our own decisions. The framework set out by the researchers can help humans better leverage machines in decision making.

The work is published in the journal Management Science.

More information:
Francis de Véricourt et al, Is Your Machine Better Than You? You May Never Know, Management Science (2023). DOI: 10.1287/mnsc.2023.4791

Provided by
European School of Management and Technology (ESMT)

Citation:
Machines can make better decisions than humans, but how do we know when they’re actually accurate? (2023, June 14)
retrieved 15 June 2023
from https://phys.org/news/2023-06-machines-decisions-humans-theyre-accurate.html

