Ethical Issues Arising Due to Bias in Training A.I. Algorithms in Healthcare and Data Sharing as a Potential Solution

Authors

  • Bilwaj Gaonkar, Department of Neurosurgery, University of California, Los Angeles (UCLA)
  • Kirstin Cook, Department of Neurosurgery, University of California, Los Angeles (UCLA)
  • Luke Macyszyn, Department of Neurosurgery, University of California, Los Angeles (UCLA)

DOI:

https://doi.org/10.47289/AIEJ20200916

Abstract

Machine learning algorithms have been shown to be capable of diagnosing cancer and Alzheimer’s disease, and even of selecting treatment options. However, the majority of machine learning systems implemented in the healthcare setting are based on the supervised machine learning paradigm: they rely on previously collected data annotated by medical personnel from specific populations. This yields ‘learnt’ machine learning models that lack generalizability. In other words, the machine’s predictions are less accurate for certain populations and can disagree with the recommendations of medical experts who did not annotate the data used to train these models. With each human-decided aspect of building supervised machine learning models, human bias is introduced into the machine’s decision-making, and this human bias is the source of numerous ethical concerns. In this manuscript, we describe and discuss three challenges to generalizability that affect real-world deployment of machine learning systems in clinical practice: first, bias arising from the characteristics of the population from which the data were collected; second, bias arising from the prejudices of the expert annotators involved; and third, bias arising from the timing at which A.I. processes begin training themselves. We also discuss the future implications of these biases. More importantly, we describe how responsible data sharing can help mitigate their effects and allow for the development of novel algorithms that may be able to train in an unbiased manner. We discuss environmental and regulatory hurdles that hinder the sharing of data in medicine, and possible updates to current regulations that may enable ethical data sharing for machine learning.
With these updates in mind, we also discuss emerging algorithmic frameworks being used to create medical machine learning systems, which can eventually learn to be free from population- and expert-induced bias. These models can then truly be deployed to clinics worldwide, making medicine both cheaper and more accessible for the world at large.
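The population-induced bias described in the abstract can be made concrete with a minimal, hypothetical sketch: a classifier whose decision threshold is fitted perfectly to annotated data from one population (A) loses accuracy when applied to a second population (B) whose clinically correct decision boundary sits elsewhere. The populations, features, and boundaries below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Population A (the annotated training population): a single risk
# score x, where annotators label cases positive when x > 0.
x_a = np.linspace(-3, 3, 601)
y_a = (x_a > 0).astype(int)

# "Train" a one-feature threshold classifier on A: place the cut
# midway between the highest negative and lowest positive example.
threshold = (x_a[y_a == 0].max() + x_a[y_a == 1].min()) / 2

# Population B (never seen during training): same risk score, but
# the clinically correct boundary is at x > 1 in this population.
x_b = np.linspace(-2, 4, 601)
y_b = (x_b > 1).astype(int)

# The model is perfect on the population it was annotated for,
# and systematically over-calls positives on population B.
acc_a = np.mean((x_a > threshold).astype(int) == y_a)
acc_b = np.mean((x_b > threshold).astype(int) == y_b)
print(f"accuracy on A: {acc_a:.3f}, accuracy on B: {acc_b:.3f}")
# → accuracy on A: 1.000, accuracy on B: 0.834
```

Every point in B with a score between the two boundaries is misclassified, even though the model commits no error on the population whose experts annotated its training data; this is the generalizability gap the manuscript attributes to population- and expert-induced bias.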

Author Biographies

Bilwaj Gaonkar, Department of Neurosurgery, University of California, Los Angeles (UCLA)

Department of Neurosurgery, University of California, Los Angeles (UCLA); Stein Eye Institute, Edie & Lew Wasserman Building, 300 Stein Plaza Driveway, Los Angeles, CA 90095, United States

Luke Macyszyn, Department of Neurosurgery, University of California, Los Angeles (UCLA)

Department of Neurosurgery, University of California, Los Angeles (UCLA); Stein Eye Institute, Edie & Lew Wasserman Building, 300 Stein Plaza Driveway, Los Angeles, CA 90095, United States

Published

2023-02-10

How to Cite

Gaonkar, B., Cook, K., & Macyszyn, L. (2023). Ethical Issues Arising Due to Bias in Training A.I. Algorithms in Healthcare and Data Sharing as a Potential Solution. The AI Ethics Journal, 1(1). https://doi.org/10.47289/AIEJ20200916

Section

Articles