Paper: Using Large Language Models to Promote Health Equity


I enjoyed reading this paper from the NEJM AI journal. This short (3-page) paper takes a positive view of large language models (LLMs) and discusses three potential LLM use cases that can make healthcare more equitable. As the paper notes, 85% of the articles and papers examining LLMs' equity-related impacts focus on the harm LLMs can cause; only 15% explore equity opportunities.

NEJM AI is a new journal on medical artificial intelligence and machine learning from the NEJM Group. The New England Journal of Medicine (NEJM), published by NEJM Group, is one of the oldest and most prestigious medical journals in the world.

In this paper, the authors discuss three LLM use cases:

  1. LLMs can improve the detection of bias.
  2. LLMs can create structured datasets relevant to health equity.
  3. LLMs can improve access to health information.

I won’t discuss the third use case because everyone understands the importance of improving access to health information. However, people often underestimate the effort required to build a useful and reliable AI assistant.

Having spent the last two years working with LLMs, I’ve frequently encountered discussions about their inherent biases from training data. This paper presents an innovative counter-perspective: using LLMs to detect human biases in healthcare settings. While traditional methods relied on simple keyword searches, LLMs can analyze clinical notes to identify subtle linguistic patterns, sentiment, and stereotypes that may indicate bias in patient care. For example, research has shown that doctors more frequently describe Black patients as ‘difficult’ compared to white patients. LLMs can help systematically identify these kinds of biased language patterns by treating medical texts as ‘artifacts’ of a biased healthcare ecosystem.
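To make the contrast concrete, here is a minimal sketch of the traditional keyword approach the paper says LLMs improve upon. The list of descriptors is illustrative only (drawn loosely from the kind of stigmatizing terms bias research has flagged, not a validated lexicon), and the function name is my own:

```python
import re

# Illustrative (NOT clinically validated) descriptors that bias research
# has flagged as potentially stigmatizing when applied to patients.
STIGMATIZING_TERMS = ["difficult", "noncompliant", "uncooperative", "aggressive"]

def flag_stigmatizing_language(note: str) -> list[str]:
    """Return the stigmatizing descriptors found in a clinical note.

    This is the simple keyword baseline: it matches surface forms only,
    and misses the subtle phrasing, sentiment, and context that an
    LLM-based analysis could pick up.
    """
    note_lower = note.lower()
    return [term for term in STIGMATIZING_TERMS
            if re.search(rf"\b{term}\b", note_lower)]

note = "Patient was difficult and noncompliant with medication instructions."
print(flag_stigmatizing_language(note))  # ['difficult', 'noncompliant']
```

The limitation is obvious: a note saying "patient seemed reluctant to engage" carries the same signal but matches no keyword, which is exactly the gap an LLM reading the note as a whole could close.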

Similarly, if LLMs can be used to promote propaganda, they can also be powerful tools for detecting it. Their deep understanding of language patterns can be turned from a potential liability into an asset for identifying harmful biases.

The second use case, creating structured datasets, is already happening. Features like structured output have made it much easier to extract data from unstructured documents. However, I still see people writing custom scripts and tools to do this. I believe that just as the coding experience in chat applications has improved with features like Canvas and artifacts, structured text extraction will also improve. The workflow will become more UI-driven rather than relying on Python code execution with a code interpreter.
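As a rough sketch of what structured extraction looks like under the hood: you define a target schema, ask an LLM's structured-output feature to emit JSON matching it, and validate the result. The schema fields below are hypothetical, and the LLM call is mocked with a hard-coded response:

```python
import json
from dataclasses import dataclass, fields

# Hypothetical target schema for a health-equity dataset extracted
# from free-text clinical or survey documents.
@dataclass
class PatientRecord:
    age: int
    zip_code: str
    preferred_language: str
    insurance_status: str

def parse_structured_output(llm_response: str) -> PatientRecord:
    """Validate an LLM's JSON response against the schema.

    In a real pipeline the JSON would come from an API call with
    structured output enabled; here the response is mocked.
    """
    data = json.loads(llm_response)
    expected = {f.name for f in fields(PatientRecord)}
    missing = expected - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    return PatientRecord(**{k: data[k] for k in expected})

# Mock of what an LLM might return for a note like:
# "67-year-old Spanish-speaking patient from 02118, currently uninsured."
mock_response = ('{"age": 67, "zip_code": "02118", '
                 '"preferred_language": "Spanish", '
                 '"insurance_status": "uninsured"}')
record = parse_structured_output(mock_response)
print(record.preferred_language)  # Spanish
```

The point of the UI-driven workflows I expect to see is that this schema definition and validation step moves out of hand-written Python and into the chat application itself.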


Discover more from Shekhar Gulati
