
Algorithmic Bias in Healthcare and Some Strategies for Mitigating It | by Zoumana Keita | Dec, 2022



Image by National Cancer Institute on Unsplash

I don’t know about you, but my first contact with AI was in movies full of intelligent robots. Their actions made me believe that the whole world would be conquered by those angry machines.

But my belief has changed since then: I now believe that AI can do better than trying to destroy the world.

AI can improve healthcare and save millions of lives around the world.

There are multiple definitions of what AI is, and most of them converge on the same meaning. Here is my definition of AI in three bullet points.

  • It is the process of teaching algorithms to perform human-like tasks using large and complex data sets.
  • The algorithms learn patterns from the data during the training process, which supports better decision-making.
  • Once trained on very large patient data sets, these models can sometimes make more accurate predictions than human experts.

In healthcare, for instance, AI is being used to provide diagnoses and treatment recommendations, followed by a clear explanation of the results.

Illustration of AI in Healthcare (Image by Author)

Geoffrey Hinton, known as the godfather of deep learning, put radiologists on notice with this statement in 2016:

People should stop training radiologists now. It is just completely obvious that within 5 years deep learning is going to do better than radiologists.

Then, a year later, CheXNet, a deep learning model developed at Stanford University, was tested against six radiologists on 50 chest X-rays to detect pneumonia. And guess what: CheXNet won, with 81% accuracy against 79% for the six radiologists.

CheXNet vs. 6 radiologists in chest X-ray diagnosis (Image by Author)

The model had been trained on tens of thousands of labeled images (pneumonia and non-pneumonia).

This is one of many projects showing that AI can perform tasks normally carried out by doctors. These results show how much better these algorithms are getting at diagnosing diseases, sometimes even surpassing human experts, as illustrated above.

Wait, does that mean AI is the pinnacle of perfection in healthcare?

Despite all these promises, the use of AI is not without legal and ethical risks. The two cases below illustrate how AI has resulted in discrimination against individuals in healthcare (see the references at the end of this article for more details).

As you might know, racial bias is still an issue in general, but it becomes even worse when it is carried into healthcare, because there it is a matter of life or death.

First case → UnitedHealth Group

Optum (part of UnitedHealth Group) developed a commercial algorithm to determine which patients would require extra medical care, that is, the patients with the greatest medical need.

A bias in the algorithm reduced the number of Black patients identified for extra care by more than half, because it falsely concluded that Black patients were healthier than equally sick White patients.

Why did that happen?

No explicit racial information (such as the patient's race or skin color) was actually used in the algorithm's training process. However, the algorithm relied on historical healthcare expenditures to evaluate future healthcare needs, and spending correlates strongly with race: historically, less money has been spent on Black patients with the same level of need. The score therefore reflected economic inequality rather than the true medical needs of patients (the toy simulation below makes this mechanism concrete).

According to the Kaiser Family Foundation (KFF):

  • Households typically allot only about 1% of their annual budget to health care.
  • Average health care spending for low-income households is $235, compared to $2,401 for households with private insurance.
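
To make the proxy mechanism concrete, here is a minimal, fully synthetic simulation (a sketch with invented numbers, not data from the Optum case): two hypothetical groups have identical distributions of true medical need, but one group historically spends less on care for the same need, so ranking patients by spending selects far fewer of them for extra care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two hypothetical groups with IDENTICAL distributions of true medical
# need, so an unbiased selection would flag both groups at the same rate.
group = rng.choice(["A", "B"], size=n)
true_need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Historical spending = need x access to care: group B historically
# spends less for the same level of need (the economic-inequality proxy).
access = np.where(group == "A", 1.0, 0.5)
spending = true_need * access

# Cost-based "risk score": flag the top 20% of patients by spending,
# mimicking an expenditure-driven extra-care algorithm.
threshold = np.quantile(spending, 0.80)
flagged = spending >= threshold

for g in ["A", "B"]:
    print(f"Group {g} flagged for extra care: {flagged[group == g].mean():.1%}")
```

Despite identical true need, group B ends up flagged far less often. Switching the training target from cost to a direct measure of health (for example, the number of active chronic conditions) closes most of this gap, which is in line with the remedy researchers proposed for the real-world case.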

Knowing that Optum powers health care by connecting and serving the whole health system across 150 countries, we can already imagine the impact such a bias can have on people's lives.

Second case → COVID-19 triage and vaccine discovery

During the COVID-19 pandemic, an AI system was developed to triage patients and expedite the discovery of a new vaccine.

The AI system was able to predict, with 70 to 80% accuracy, which patients were likely to develop severe lung disease.

The issue with this AI system is that the triage process was based solely on patients' symptoms and preexisting conditions, which can be biased because of existing disparities across race and socioeconomic status.

AI models have rapidly evolved over the years. During the same time, we have also seen many missteps, and biases in AI systems ultimately contribute to an unfair society.

The following five strategies (not an exhaustive list) can be adopted to mitigate those biases.

  1. Collecting and using diverse training data: One of the main causes of bias in algorithms is the use of training data that is not representative of the real-world population. To mitigate this, it is important to collect and use diverse training data that accurately reflects the demographics, backgrounds, and characteristics of the population the algorithm will be used on.
  2. Testing the algorithm for bias: After an algorithm has been trained, it is important to test it for bias to ensure that it is making fair decisions. This can be done using a variety of methods, including conducting bias audits and using fairness metrics to measure the algorithm's performance (a code sketch follows this list).
  3. Using algorithmic fairness techniques: Several algorithmic fairness techniques can be used to mitigate bias, including pre-processing techniques that adjust the data to reduce bias, in-processing techniques that make adjustments during training, and post-processing techniques that adjust the algorithm's output to make it fairer (the sketch after this list also shows a simple post-processing fix).
  4. Ensuring transparency and accountability: Another important step in mitigating bias in algorithms is to ensure that they are transparent and accountable. This means providing clear explanations of how the algorithm works, regularly reviewing and updating the algorithm to remove any biases that may have been introduced, and providing mechanisms for individuals to challenge the decisions made by the algorithm.
  5. Engaging with diverse stakeholders: Finally, it is important to engage with diverse stakeholders, including the individuals and communities that may be affected by the algorithm, in order to understand their perspectives and incorporate their feedback into the design and implementation of the algorithm. This can help to ensure that the algorithm is fair and unbiased and that it accurately reflects the needs and concerns of the population it will be used on.
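
As a minimal sketch of what strategies 2 and 3 can look like in practice (all scores, groups, and thresholds below are invented for illustration), the snippet computes two common fairness metrics, the demographic parity difference and the equal opportunity difference, for a hypothetical classifier, and then applies a simple post-processing fix with group-specific decision thresholds:

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Print the selection-rate and true-positive-rate gaps between groups."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        rates[g] = (y_pred[m].mean(),                  # selection rate
                    y_pred[m & (y_true == 1)].mean())  # TPR (recall)
    (sel_a, tpr_a), (sel_b, tpr_b) = rates.values()
    print(f"demographic parity diff: {abs(sel_a - sel_b):.3f} | "
          f"equal opportunity diff: {abs(tpr_a - tpr_b):.3f}")

# Hypothetical risk scores: the model systematically under-scores group B.
rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["A", "B"], size=n)
y_true = rng.binomial(1, 0.3, size=n)
y_score = np.clip(0.5 * y_true + rng.normal(0.3, 0.15, size=n)
                  - 0.10 * (group == "B"), 0.0, 1.0)

print("single threshold (0.5):")
fairness_report(y_true, y_score >= 0.5, group)

# Post-processing (strategy 3): a lower threshold for the under-scored
# group, chosen here by hand to narrow the true-positive-rate gap.
adjusted = np.where(group == "B", y_score >= 0.40, y_score >= 0.50)
print("group-specific thresholds:")
fairness_report(y_true, adjusted, group)
```

Group-specific thresholds are only one of many post-processing options, and whether they are appropriate depends on the clinical and legal context; libraries such as Fairlearn and AIF360 implement these and many other fairness techniques.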

Biases come in different shapes and forms, can be challenging to tackle, and can occur at different points in the development lifecycle.

  • Historical bias: this arises when the data collected to train an AI system no longer reflects the current reality. For instance, even though the gender pay gap is still an issue, it was worse in the past, so a model trained on older salary data would learn an outdated, larger gap.
  • Representation bias: this arises from how the training data is defined and sampled from the population. A well-known example is the data used to train early facial recognition systems, which relied mostly on white faces and led the models to struggle with darker-skinned faces. A simple audit like the one sketched after this list can flag this issue before training.
  • Measurement bias: this occurs when the features or measurements in the training data differ from those in real-world data. It is a common issue for image recognition systems where the training data is collected mainly from one type of camera while real-world data comes from many different cameras.
  • Coding/human bias: this happens mostly when scientists dive into a project with subjective assumptions about their study. For instance: “non-white patients receive fewer cardiovascular interventions and fewer renal transplants”, and “Black women are more likely to die after being diagnosed with breast cancer”. Source
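
Several of these biases can be caught with a simple data audit before any training happens. The sketch below uses hypothetical column names and reference shares: it compares each subgroup's share of the training data against its share of the target population, the kind of check that would have flagged the facial recognition example above.

```python
import pandas as pd

# Hypothetical training set and hypothetical population shares; in a real
# project, replace these with the demographics the model will serve.
train = pd.DataFrame({"skin_tone": ["light"] * 800 + ["dark"] * 200})
population_share = {"light": 0.60, "dark": 0.40}

train_share = train["skin_tone"].value_counts(normalize=True)
for grp, expected in population_share.items():
    observed = train_share.get(grp, 0.0)
    status = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
    print(f"{grp:>5}: train {observed:.0%} vs population {expected:.0%} -> {status}")
```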

Conclusion

It is important not to lose track of the potential biases in AI for any data project, especially in healthcare.

Adopting the right safeguards early and keeping them in place throughout the whole project lifecycle can help quickly identify the main issues and respond to them efficiently.

Remember, let’s be very careful and very intentional about how we design those AI systems.

If you like reading my stories and wish to support my writing, consider becoming a Medium member. With a $5-a-month commitment, you unlock unlimited access to stories on Medium.

Feel free to follow me on Medium, Twitter, and YouTube, or say Hi on LinkedIn. It is always a pleasure to discuss AI, ML, Data Science, NLP, and MLOps stuff!

References:

  • Racial bias in health algorithms
  • COVID-19 and Racial/Ethnic Disparities



