
Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

“Sometimes we don’t actually know what kinds of systems are being used,” says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI’s results, even when those results contradicted their own clinical opinion.

There’s a very real risk that we’ll come to rely on these technologies to a greater extent than we should. And here’s where paternalism could come in.

“Paternalism is captured by the idiom ‘the doctor knows best,’” write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person’s feelings, beliefs, culture, and anything else that might influence the choices any of us make.

“Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI,” McCradden and Kirsch continue. They say there is a “rising trend toward algorithmic paternalism.” This would be problematic for a whole host of reasons.


