
Hidden Bias in EMR Flagging Systems: A Call for Standardization

Farzana Hoque, MD, MRCP

WMJ. 2025;124(2):84.


To the Editor:

The article by Yass et al¹ in a recent issue of WMJ, examining electronic medical record (EMR) flagging and its association with patient demographics and psychiatric medication use, is intriguing. The authors found that Black male patients and patients prescribed psychotropic medications were more likely to receive “vulnerable/unsafe behavior” flags. The study sheds light on a critical yet underexplored intersection of hospital safety protocols and structural bias: when EMR flagging is not standardized and routinely audited, it may reinforce stigma, disproportionately affect marginalized populations, and result in unequal care delivery.

Another study, conducted at 2 academic medical centers, revealed that clinicians engaged significantly less with the EMRs of hospitalized patients from minoritized racial and ethnic groups (eg, Black, Hispanic, and others) than with those of White patients.² Clinicians were less likely to perform key EMR actions for these patients, such as pending notes, reviewing problem lists and medication records, and scanning barcodes, even after adjustment for demographic, socioeconomic, and clinical variables.² Stigmatizing language in EMRs also can influence the perceptions and prescribing behaviors of resident physicians.³ It has been associated with more negative attitudes toward patients and less aggressive pain management, highlighting an important yet often overlooked pathway of bias transmission between clinicians.³

Artificial intelligence (AI) offers the potential to implement transparent, standardized flagging protocols in EMRs: auditing flag use, identifying patterns of inequity, and establishing real-time feedback mechanisms that alert clinical teams to potential bias.⁴,⁵ Pursuing this is both a clinical necessity and an ethical responsibility in efforts to reduce health care disparities. Emerging AI applications, particularly those using natural language processing, can be integrated to detect stigmatizing language within clinical documentation and to notify clinicians and administrators, helping ensure unbiased records.⁵ Such interventions may raise awareness of how implicit bias influences communication and contribute meaningfully to advancing equitable care for diverse patient populations.

REFERENCES
  1. Yass N, Walker R, Nagavally S, Kay C. Use of flags in the electronic medical record: a retrospective analysis. WMJ. 2025;124(1):42-46.
  2. Yan C, Zhang X, Yang Y, et al. Differences in health professionals’ engagement with electronic health records based on inpatient race and ethnicity. JAMA Netw Open. 2023;6(10):e2336383. doi:10.1001/jamanetworkopen.2023.36383
  3. Goddu AP, O’Conor KJ, Lanzkron S, et al. Do words matter? Stigmatizing language and the transmission of bias in the medical record. J Gen Intern Med. 2018;33(5):685-691. doi:10.1007/s11606-017-4289-2
  4. Hoque F, Poowanawittayakom N. Future of AI in medicine: new opportunities and challenges. Mo Med. 2023;120(5):349.
  5. Barcelona V, Scharp D, Idnay BR, Moen H, Cato K, Topaz M. Identifying stigmatizing language in clinical documentation: a scoping review of emerging literature. PLoS One. 2024;19(6):e0303653. doi:10.1371/journal.pone.0303653

Author Affiliations: Department of Medicine, Saint Louis University School of Medicine, St. Louis, Missouri (Hoque).
Corresponding Author: Farzana Hoque, MD, MRCP, FACP, FRCP, Associate Professor of Medicine, Department of Internal Medicine, Saint Louis University School of Medicine, St. Louis, MO 63104-1016; email farzanahoquemd@gmail.com; ORCID ID: 0000-0002-9281-8138
Financial Disclosures: None declared.
Funding/Support: None declared.