Molinna Bui, BS; Paul H. Yi, MD; Ifeanyichukwu Onuh, BS; Ian Kuckelman, MD; Andrew B. Ross, MD, MPH
WMJ. 2026;125(1):158-161.
ABSTRACT
Introduction: The adoption of artificial intelligence (AI) in image generation raises concerns about potential bias, as the outputs of these technologies may not reflect the diversity of the populations they depict. This study examined whether AI-generated images of medical students accurately represent the diversity of the current US medical student population.
Methods: Using the DALL-E (OpenAI) image-generation algorithm, we created 300 images with the text prompt “medical student.” Two researchers independently analyzed images for demographic indicators, including perceived sex, race/ethnicity, age group, setting, and attire. Descriptive statistics summarized the data, and subgroup analyses assessed differences in portrayals by sex and race/ethnicity. Demographic proportions in the virtual cohort were graphically compared with Association of American Medical Colleges enrollment data.
Results: Of the 300 generated images, 227 (76%) depicted individuals perceived as female and 223 (74%) depicted individuals perceived as White, indicating overrepresentation of both groups compared with actual medical school demographics. Black and Latino/Hispanic students were more commonly depicted in scrubs compared with White students, who were more often portrayed in white coats or collared shirts (P = .002). No images depicted Native American/Alaskan Native or Native Hawaiian/Pacific Islander students.
Conclusions: AI-generated images of medical students demonstrated significant demographic disparities, indicating potential bias in these technologies. Such biased portrayals may perpetuate stereotypes and hinder diversity efforts. Future research should identify and address these biases to promote more equitable and inclusive applications of AI tools.