How Race Affects Adoption Of Telehealth

When the pandemic made in-person doctor visits risky, both doctors and patients turned to virtual communication. However, a study from Harvard Business School found that responses from healthcare providers can differ based on a patient’s race. Can technology help make patient portals more equitable?

The researchers examined over 57,000 message threads between patients and medical teams at Boston Medical Center. They found that white patients were more likely to get replies from their primary doctors, while Black and Hispanic patients were more likely to hear from nurses. This suggests that medical teams tend to prioritize messages from white patients.

Digital health

As digital tools become more common and healthcare providers begin adopting AI, it's important to consider how these technologies reshape clinical practice. The study highlights how digital platforms, even though they aim to improve efficiency, can amplify existing biases.

The study’s results call for more research. Possible reasons for the disparities include differences in the types of questions patients ask, the urgency of the messages, or the patients’ understanding of healthcare and technology.

The study examined messages sent during the peak of the pandemic in 2021 between more than 39,000 patients and their doctors through Boston Medical Center's messaging system. It found that Black patients were 17 percent less likely to get replies from their primary doctors compared to white patients. Hispanic and Asian patients faced smaller but similar disparities, with response rates 10.2 percent and 9.3 percent lower, respectively. Despite making up only one-fifth of the study population, white patients received half of the replies from attending physicians.

The study controlled for factors like the type of medical practice, patient age, ZIP code, health status, insurance, and preferred language. This means that two otherwise similar patients, one white and one Black, could still receive different responses from their care teams.
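For readers curious what "controlling for" these factors means in practice, the sketch below shows one common way such an adjustment can be set up: a logistic regression of whether a physician replied, with race as the variable of interest and the other factors as covariates. This is a minimal, hypothetical illustration, not the authors' actual analysis; the dataset is simulated and every column name, category, and coefficient is invented.

```python
# Hypothetical sketch of covariate adjustment; all data here is simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated thread-level data: one row per patient message thread.
threads = pd.DataFrame({
    "race": rng.choice(["white", "black", "hispanic", "asian"], size=n),
    "age": rng.integers(18, 90, size=n),
    "insurance": rng.choice(["private", "medicaid", "medicare"], size=n),
    "preferred_language": rng.choice(["en", "es", "other"], size=n,
                                     p=[0.8, 0.15, 0.05]),
})
# Simulated outcome: whether the attending physician replied to the thread.
threads["physician_replied"] = rng.binomial(1, 0.5, size=n)

# Logistic regression with white patients as the reference category;
# the race coefficients estimate adjusted differences in reply odds
# after accounting for the listed covariates.
model = smf.logit(
    "physician_replied ~ C(race, Treatment('white')) + age "
    "+ C(insurance) + C(preferred_language)",
    data=threads,
).fit(disp=False)

print(model.summary())
```

In a design like this, a persistent negative coefficient on a racial group after adjustment is what would indicate the kind of disparity the study describes; with the random data above, the coefficients should be near zero.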

The study focused on a single institution, but its findings match trends seen elsewhere. Ongoing research with another health system shows similar results, despite differences in patient demographics and the type of health system.

Due to privacy rules, the researchers didn’t have access to the actual text of the messages. This means other factors besides direct racial bias might explain the results. Future research should explore how the language used in messages affects response rates. Differences in tone and terminology—whether a message is formal or informal—could influence which messages are escalated to doctors.

As AI becomes more integrated into healthcare, understanding how to assess and prioritize patient messages will be crucial. Using data like that from the Boston Medical Center study to build algorithms risks reinforcing biases unless these issues are actively addressed.
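One way to guard against that risk is to audit a triage or prioritization model for group-level gaps before it is deployed. The sketch below is a hypothetical example of such a check, comparing escalation rates across patient groups; the model scores, threshold, and data are all invented for illustration and do not come from the study.

```python
# Hypothetical fairness audit for a message-triage model:
# compare escalation rates across patient groups before deployment.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000

audit = pd.DataFrame({
    "race": rng.choice(["white", "black", "hispanic", "asian"], size=n),
    # Imagined model scores for "escalate this message to the physician".
    "escalation_score": rng.uniform(0, 1, size=n),
})
audit["escalated"] = audit["escalation_score"] >= 0.5  # hypothetical cutoff

# Escalation rate by group; large gaps between otherwise similar groups
# would flag the model for review before it is used to route messages.
rates = audit.groupby("race")["escalated"].mean()
print(rates)
print("max gap between groups:", rates.max() - rates.min())
```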
