
AI in Healthcare Is Biased, But It Doesn’t Have to Be

Updated: Mar 15

Artificial Intelligence (AI) is a seriously powerful technology. We must ensure that all patients benefit equally from these systems.


When it first became a reality, AI in healthcare may have seemed like a no-brainer. The need for impartial, impeccable diagnostic and decision-making skills in an enormously complex field with high-stakes outcomes was, and remains, clear.


But to look at algorithms as inherently objective is to fundamentally misunderstand what they are and how they work. Algorithms are trained on data, which means that the information they generate is only as reliable and unbiased as the datasets from which they learn. According to Kaushal et al. in Scientific American, healthcare data in particular tends to lack adequate diversity due to factors like medical privacy laws, non-standardized hospital databases and unequal access to healthcare across socioeconomic groups. Without comprehensive data, the resulting datasets are not representative of the population as a whole. When this information is used to train AI, the algorithms can only perpetuate and exacerbate existing disparities in healthcare.
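To make that mechanism concrete, here is a minimal sketch in Python. The data is entirely invented (it comes from none of the studies cited here); it simply shows how a model trained on a dataset that underrepresents one group of patients ends up less accurate for that group.

```python
# Toy illustration: a model trained on data that underrepresents one group
# tends to be less accurate for that group. All numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate patients whose marker-to-disease relationship differs by `shift`."""
    x = rng.normal(0.0, 1.0, size=(n, 1))
    # Disease depends on the marker, but the effective threshold differs by group.
    y = (x[:, 0] + shift + rng.normal(0.0, 0.5, size=n) > 0).astype(int)
    return x, y

# Training set: 95% group A, 5% group B -- a non-representative dataset.
xa, ya = make_group(1900, shift=0.0)  # well-represented group
xb, yb = make_group(100, shift=1.0)   # underrepresented group
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out data for each group.
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(xt, yt), 3))
```

The model's accuracy drops noticeably for the group the training data barely saw, which is exactly the pattern described above.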


A glaring example of this is the COVID-19 pandemic. AI touted as the most efficient and objective way to allocate medical resources, such as ventilators, based its recommendations on positive COVID-19 tests, which, as Healthcare IT News reported, understated the needs of communities with inadequate access to testing. In dermatology, algorithms used to identify potentially cancerous lesions were trained on images of mostly fair-skinned patients. Inevitably, as Angela Lashbrook discussed in The Atlantic, they proved less accurate at diagnosing darker-skinned patients, again in an area where inequitable levels of care already lead to higher mortality rates for Black patients than for white patients.
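The testing-access problem comes down to simple arithmetic. The numbers below are invented for illustration and are not from the Healthcare IT News report; they show how allocating resources by confirmed cases shortchanges a community with less access to testing.

```python
# Hypothetical figures: two communities with identical true caseloads,
# but very different access to testing.
true_cases = {"community_A": 1000, "community_B": 1000}
testing_rate = {"community_A": 0.8, "community_B": 0.3}  # share of cases that get tested

confirmed = {c: true_cases[c] * testing_rate[c] for c in true_cases}
total_confirmed = sum(confirmed.values())
ventilators = 100

# Allocating by confirmed cases gives A roughly 73 ventilators and B only 27,
# despite identical true need.
for c, count in confirmed.items():
    print(c, "receives", round(ventilators * count / total_confirmed), "ventilators")
```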


All of this underscores a common theme: in an already unequal healthcare system, any algorithm that emerges unchecked will inevitably propagate existing biases. The consequences are not only human but also financial; according to Watson and Marsh, the healthcare company UnitedHealth may face financial penalties in New York for its use of a biased algorithm to identify patients in need of greater care.


Recently, AI used to detect arthritis in knee X-rays took a significant first step toward equitable care in an area where Black patients' pain levels have historically been underreported. As Tom Simonite reported for Wired, by training the algorithm on self-reported patient data rather than on medical professionals' diagnoses, researchers produced a model whose pain scores tracked what patients actually reported more closely than the scores doctors assigned. This advance can be applied more broadly: underrepresentation of certain groups in healthcare might be mitigated through machine learning that takes a diversity of perspectives into account.
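Here is a minimal sketch of that design choice, using invented features and labels rather than the study's actual deep-learning model on raw X-rays. Two models see the same inputs; the only difference is whether they are trained against clinicians' grades or against what patients report.

```python
# Hypothetical setup: the clinician grade reflects only part of the
# pain-relevant signal in the images, while the patient report reflects more.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000

# Stand-in for features a network might extract from knee X-rays.
features = rng.normal(size=(n, 16))

# Two candidate labels for the same images.
clinician_grade = features[:, :4].sum(axis=1) + rng.normal(0, 0.5, n)
patient_pain = features[:, :8].sum(axis=1) + rng.normal(0, 0.5, n)

X_tr, X_te, grade_tr, _, pain_tr, pain_te = train_test_split(
    features, clinician_grade, patient_pain, random_state=0)

for label_name, target in [("clinician grades", grade_tr), ("patient reports", pain_tr)]:
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, target)
    # Score both models against what patients actually report.
    print(f"trained on {label_name}: R^2 vs. patient-reported pain =",
          round(model.score(X_te, pain_te), 3))
```

Because the clinician-grade label encodes only part of the signal in this toy setup, the model trained on it explains patient-reported pain noticeably worse than the model trained on the patient reports themselves.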


More complex data requires more complex systems, and the X-ray algorithm uses a model so complicated that the researchers themselves don't yet understand exactly how it works. There's a strong argument to be made that one of the best ways to reduce bias in AI is algorithmic transparency and publicly available source code, a safeguard that loses much of its value when a model is too complex to interpret. As Bjerring and Busch argue, opaque algorithmic healthcare is fundamentally opposed to the core tenets of evidence-based medicine and informed decision-making. None of this detracts from the need for adequate diversity in AI training data, testing and monitoring. It only means that more work is needed before all patients can benefit equally from AI.


Artificial intelligence can go one of two ways. It can mirror and amplify existing societal inequalities, or it can serve as a safeguard against our failures. The second path demands far more effort and vigilance, but it promises far greater rewards.


The opinions expressed in this article are those of the individual author.


Sources


Bjerring, Jens Christian, and Jacob Busch. “Artificial Intelligence and Patient-Centered Decision-Making.” Philosophy & Technology, vol. 34, no. 2, 8 Feb. 2020, pp. 349–371, https://doi.org/10.1007/s13347-019-00391-6.


Jercich, Kat. “AI Bias May Worsen COVID-19 Health Disparities for People of Color.” Healthcare IT News, HIMSS Media, 18 Aug. 2020, https://www.healthcareitnews.com/news/ai-bias-may-worsen-covid-19-health-disparities-people-color.


Kaushal, Amit, et al. “Health Care AI Systems Are Biased.” Scientific American, 17 Nov. 2020, https://www.scientificamerican.com/article/health-care-ai-systems-are-biased/.


Lashbrook, Angela. “AI-Driven Dermatology Could Leave Dark-Skinned Patients Behind.” The Atlantic, The Atlantic Monthly Group, 16 Aug. 2018, https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/.


Marsh, Christina, and Wendy Watson. “Artificial Intelligence Bias in Healthcare.” Booz Allen, Booz Allen Hamilton Inc., https://www.boozallen.com/c/insight/blog/ai-bias-in-healthcare.html.


Simonite, Tom. “New Algorithms Could Reduce Racial Disparities in Health Care.” Wired, Conde Nast, 25 Jan. 2021, https://www.wired.com/story/new-algorithms-reduce-racial-disparities-health-care/.


Siwicki, Bill. “How Does Bias Affect Healthcare AI, and What Can Be Done about It?” Healthcare IT News, HIMSS Media, 22 Mar. 2021, https://www.healthcareitnews.com/news/how-does-bias-affect-healthcare-ai-and-what-can-be-done-about-it.
