Racial Bias in Medical Care Decision-Making Tools

How Black, Hispanic, and poor patients lose out on healthcare

Racial bias in medical care can show up in some unexpected places. One example: the clinical decision tools that play an important role in how today's patients are tested, diagnosed, and treated.

These tools contain algorithms, or step-by-step procedures, usually computerized, for calculating factors such as risk of heart disease, the need for a chest X-ray, and prescription medicine dosages. Artificial intelligence can be used to scour health records and billing systems to create the needed data sets.

On the surface, this may sound objective. But recent studies have shown that the data analysis used in these algorithms can be biased in crucial ways against certain racial and socioeconomic groups. This can have myriad consequences in terms of the amount and quality of healthcare that people in these groups receive.

Key Takeaways

  • Medical decision tools play a large role in how today's patients are tested, diagnosed, and treated.
  • Unfortunately, the algorithms that these tools rely on can sometimes be biased.
  • For example, using medical spending data to rate a person's medical condition can misjudge the severity of poor and minority patients' illnesses when lower medical spending reflects a lack of access to medical care rather than a lack of need.
  • The body mass index (BMI) algorithm used to classify patients as overweight or obese has fostered weight-shaming and distrust between patients and doctors, and it categorizes more Black women than Hispanic or White women as obese.
  • Data input and outcomes are now starting to be checked for racial, ethnic, income, gender, and age bias so that disparities can be recognized and algorithms corrected.

Racial Bias Affects the Sickest Patients

In 2019, a study showed that an algorithm widely used by U.S. hospitals and insurers to allocate extra health management assistance systematically discriminated against Black people. The decision tool was less likely to refer Black people than White people to care-management programs for complex medical needs when both racial groups were equally sick.

The underlying reason for the bias was linked to the algorithm's assignment of risk scores to patients based on their previous year's medical costs. The assumption was that identifying patients with higher costs would identify those with the greatest medical needs. However, many Black patients have less access to, less ability to pay for, and less trust in medical care than White people who are equally sick. In this instance, their lower medical costs did not accurately predict their health status.

Care-management programs use a high-touch approach, such as phone calls, home visits by nurses, and prioritizing doctor appointments to address the complex needs of the sickest patients. The programs have been shown to improve outcomes, decrease emergency room visits and hospitalizations, and lower medical costs. Because the programs themselves are expensive, they are assigned to people with the highest risk scores. Scoring techniques that discriminate against the sickest Black patients for this care may be a significant factor in their increased risk of death from many diseases.
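The cost-as-proxy failure described above can be sketched in a few lines of Python. The patients, numbers, and ranking rule here are invented for illustration; this is not the actual algorithm from the study, only a minimal model of why spending is a poor stand-in for need.

```python
# Hypothetical sketch: ranking patients for a care-management referral by
# prior-year spending instead of by a direct measure of medical need.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # a crude stand-in for true medical need
    prior_year_cost: float    # what the insurer actually observed

patients = [
    Patient("A", chronic_conditions=4, prior_year_cost=22_000.0),
    # Equally sick as A, but with less access to (and spending on) care:
    Patient("B", chronic_conditions=4, prior_year_cost=9_000.0),
    Patient("C", chronic_conditions=1, prior_year_cost=15_000.0),
]

# The flawed proxy: rank by last year's spending.
by_cost = sorted(patients, key=lambda p: p.prior_year_cost, reverse=True)

# Ranking by a direct measure of need puts both equally sick patients first.
by_need = sorted(patients, key=lambda p: p.chronic_conditions, reverse=True)

print("Cost-proxy order:", [p.name for p in by_cost])  # B falls below C
print("Need-based order:", [p.name for p in by_need])
```

Under the cost proxy, patient B, who is just as sick as A, is ranked below the much healthier C and would miss the referral; a need-based ranking places B immediately after A.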

Race as a Variable in Kidney Disease

Algorithms can contain bias without including race as a variable, but some tools deliberately use race as a criterion. Take the eGFR score, which rates kidney health and is used to determine who needs a kidney transplant. In a 1999 study that set the eGFR score criteria, researchers noticed that Black people had, on average, higher levels of creatinine (a byproduct of muscle breakdown) than White people did. The scientists assumed that the higher levels were due to higher muscle mass in Black people and therefore adjusted the scoring upward for Black patients. In effect, a Black patient's kidney function had to decline further than a White patient's before end-stage kidney disease was diagnosed. As a consequence, Black patients had to wait until their kidney disease reached a more severe stage to qualify for treatment.

More recently, a student of medicine and public health at the University of Washington School of Medicine in Seattle observed that eGFR scores were not accurate for diagnosing the severity of kidney disease in Black patients. She fought to have race removed from the algorithm, and won. In 2020, UW Medicine agreed that the use of race was an ineffective variable and did not meet scientific rigor in medical diagnostic tools.

Important

In 2021, a joint task force of the National Kidney Foundation and American Society of Nephrology recommended the adoption of the 2021 CKD-EPI creatinine equation, which estimates kidney function without using race as a variable.
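The race-free 2021 CKD-EPI creatinine equation can be written down directly. The sketch below transcribes the published coefficients as this author understands them; it is an illustration only, and any real use should verify the constants against the National Kidney Foundation's reference.

```python
def egfr_2021_ckd_epi(scr_mg_dl: float, age: float, female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the 2021 CKD-EPI creatinine
    equation, which uses serum creatinine, age, and sex, but not race.

    Coefficients transcribed from the published equation; illustrative only.
    """
    kappa = 0.7 if female else 0.9      # creatinine normalization constant
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = (142
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

# Higher creatinine implies lower estimated kidney function, for any patient.
print(egfr_2021_ckd_epi(1.0, 50, female=False))
print(egfr_2021_ckd_epi(2.0, 50, female=False))
```

Because the equation has no race term, two patients with the same creatinine level, age, and sex receive the same score regardless of race.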

Body Mass Index and Racial Bias

Even the simplest medical decision tool that does not include race can reflect social bias. The body mass index (BMI), for example, is calculated as weight in kilograms divided by the square of height in meters. It is used to identify underweight, overweight, and obese patients.
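The calculation and the standard adult cutoffs can be shown in a few lines. This is a minimal sketch using the widely published CDC/WHO thresholds, not a diagnostic tool.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    # BMI is weight in kilograms divided by the square of height in meters.
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    # Standard CDC/WHO adult cutoffs.
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

print(round(bmi(80.0, 1.75), 1))        # → 26.1
print(bmi_category(bmi(80.0, 1.75)))    # → overweight
```

Note that the formula knows nothing about body composition, muscle mass, or health status, which is exactly why critics argue it should not be used on its own.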

In 1985, the National Institutes of Health tied the definition of obesity to an individual's BMI, and in 1998 an expert panel put in place guidelines based on BMI that moved 29 million Americans who had previously been classified as normal weight or just overweight into the overweight and obese categories.

Today, by BMI standards, the majority of Black, Hispanic, and White people are overweight or obese. But a 2021 report from the Centers for Disease Control and Prevention (CDC) found that the percentage of Americans who could be classified as obese varies by racial or ethnic group.

According to the CDC, the breakdown among adults overall was:

  • Non-Hispanic Black: 49.9%
  • Hispanic: 45.6%
  • Non-Hispanic White: 41.4%
  • Non-Hispanic Asian: 16.1%

Among female adults classified as obese, the differences are even more pronounced:

  • Non-Hispanic Black: 57.9%
  • Hispanic: 45.7%
  • Non-Hispanic White: 39.6%
  • Non-Hispanic Asian: 14.5%

Branding such large percentages of populations as overweight or obese has created an atmosphere of weight-shaming and distrust between patients and doctors. Higher-weight people complain that doctors don't address the health problems or concerns that brought them in for a checkup. Instead, doctors blame the patient's weight for their health issues and push weight loss as the solution. This contributes to many Black and Hispanic patients avoiding healthcare practitioners and thus perhaps missing opportunities to prevent problems or catch them early.

Furthermore, it is becoming increasingly clear that being overweight or obese is not always a health problem. Rates for some serious conditions, such as heart disease, stroke, type 2 diabetes, and certain types of cancer, are higher among those who are obese. But in certain situations, such as recovery after heart surgery, being overweight or moderately obese (but not morbidly obese) is associated with better survival rates.

New obesity guidelines for Canadian clinicians, published in August 2020, emphasize that doctors should stop relying on BMI alone in diagnosing patients. People should be diagnosed as obese only if their body weight affects their physical health or mental well-being, according to the new guidelines. Treatment should be holistic and not solely target weight loss. The guidelines also note that, "People living with obesity face substantial bias and stigma, which contribute to increased morbidity and mortality independent of weight or body mass index."

Reducing Bias in Decision Tools

Medical algorithms are not the only type of algorithm that can be biased. As a 2020 article in The New England Journal of Medicine noted, "This problem is not unique to medicine. The criminal justice system, for instance, uses recidivism-prediction tools to guide decisions about bond amounts and prison sentences." The authors said that one widely used tool, "while not using race per se, uses many factors that correlate with race and returns higher risk scores for Black defendants."

The increasing use of artificial intelligence (AI), and machine learning in particular, has also raised questions about bias based on race, socioeconomic status, and other factors. In healthcare, machine learning often relies on electronic health records. Poor and minority patients may receive fractured care and be seen at multiple institutions. They are more likely to be seen in teaching clinics where data input or clinical reasoning may be less accurate. And they may not be able to access online patient portals and document outcomes. As a result, the records of these patients may have missing or erroneous data. The algorithms that drive machine learning may thus end up excluding poor and minority patients from the data sets and needed care.

The good news is that awareness of biases in healthcare algorithms has grown in the past few years. Data input and outcomes are being checked for racial, ethnic, income, gender, and age bias. When disparities are recognized, the algorithms and data sets can be revised toward better objectivity.

What Is an Algorithm?

There is no standard legal or scientific definition for algorithm, but the National Institute of Standards and Technology refers to it as "A clearly specified mathematical process for computation; a set of rules that, if followed, will give a prescribed result."

What Is an Example of an Algorithm?

In the broadest sense, an algorithm is simply a step-by-step process for answering a question or achieving a desired result. So, for example, a cake recipe is a form of algorithm. In the world of finance, an automated trading system would be an example.
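One step-by-step example from the finance world: a simple moving average, a building block of many automated trading systems. The prices and window size here are hypothetical, chosen only to show the procedure.

```python
def simple_moving_average(prices: list[float], window: int) -> list[float]:
    # An algorithm in the plainest sense: for each run of `window`
    # consecutive prices, compute the average, step by step.
    averages = []
    for i in range(len(prices) - window + 1):
        averages.append(sum(prices[i:i + window]) / window)
    return averages

print(simple_moving_average([10.0, 11.0, 12.0, 13.0], 2))  # → [10.5, 11.5, 12.5]
```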

What Is Machine Learning?

IBM, a pioneer in the field, defines machine learning as "a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy."

The Bottom Line

Despite their appearance of dispassionate objectivity, the algorithms that medical professionals use to make certain decisions can be prone to bias based on race, class, and other factors. For that reason, algorithms can't simply be taken on faith but must be subject to rigorous analysis. As a 2021 article in the MIT Technology Review noted, "The term 'algorithm,' however defined, shouldn't be a shield to absolve the humans who designed and deployed any system of responsibility for the consequences of its use."

Article Sources
Investopedia requires writers to use primary sources to support their work. These include white papers, government data, original reporting, and interviews with industry experts. We also reference original research from other reputable publishers where appropriate. You can learn more about the standards we follow in producing accurate, unbiased content in our editorial policy.
  1. Science. "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations."

  2. HealthAffairs.org. "Algorithmic Bias in Health Care: A Path Forward."

  3. Scientific American. "How to Take Racial Bias Out of Kidney Tests."

  4. National Kidney Foundation. "NKF and ASN Release New Way to Diagnose Kidney Diseases."

  5. Medium. "The Bizarre and Racist History of the BMI."

  6. Centers for Disease Control and Prevention, National Health Statistics Reports. "National Health and Nutrition Examination Survey 2017–March 2020 Prepandemic Data Files—Development of Files and Prevalence Estimates for Selected Health Outcomes," Page 14.

  7. National Institute of Diabetes and Digestive and Kidney Diseases. "Health Risks of Overweight & Obesity."

  8. Journal of the American Heart Association. "Body Mass Index, Outcomes, and Mortality Following Cardiac Surgery in Ontario, Canada."

  9. CMAJ Group, Canadian Medical Association Journal. "Obesity in Adults: A Clinical Practice Guideline."

  10. New England Journal of Medicine. "Hidden in Plain Sight—Reconsidering the Use of Race Correction in Clinical Algorithms."

  11. National Institutes of Health. "Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data."

  12. National Institute of Standards and Technology. "Algorithm."

  13. IBM. "What Is Machine Learning?"

  14. MIT Technology Review. "What Is an 'Algorithm'? It Depends Whom You Ask."
