Artificial Intelligence: How the technology behind ChatGPT could have big implications in the healthcare sector

Odds are that in the past few weeks you’ve seen a tweet or LinkedIn post marveling at the writing quality of a new AI chatbot. We’ve been impressed (and perhaps a little spooked) by many of them, too.

Illustration: Mary Delaney

OpenAI’s ChatGPT, a natural language processing platform, launched last month and has made waves across the internet. 

In its first few weeks, the platform surpassed 1 million users.

And it’s easy to see why. The applications for the platform seem endless, ranging from answering data engineering questions to composing a haiku. The Harvard Business Review called the tool “a tipping point” for AI in its applicability to a wide variety of both simple and complex tasks.

Don’t worry, we’re not here to announce that we’re handing over the writing of MedTech Pulse to ChatGPT. 

However, we do want to discuss how healthcare consumers and providers will likely rely on the platform—and others like it—more and more in the near future.

How the ChatGPT platform came to be

The platform is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022. Specifically, it’s a sibling model to InstructGPT, which is trained to follow instructions in a prompt and provide a detailed response. All the GPT-3.5 models were trained on a blend of text and code from before the last quarter of 2021.

But before there was ChatGPT, InstructGPT, or any of the other GPT-3.5 models, OpenAI’s darling was GPT-3. This family of language models performed natural language tasks using engineered text prompts. Plus, because of its ease of use and flexibility, GPT-3 made machine learning functionality simple to implement across a variety of industries and applications. In March 2021, OpenAI reported that over 300 applications were using GPT-3 in search, conversation, text completion, and other AI features through the OpenAI API.
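For readers curious what that kind of integration looked like in practice, here’s a minimal sketch of a GPT-3-era, prompt-driven completion call using OpenAI’s Python client from that period. The model name, prompt, and parameters are our own illustrative choices, not details from OpenAI’s announcements.

```python
# Minimal sketch of a GPT-3-era completion call via the OpenAI API.
# Assumes the pre-2023 `openai` Python client; the model name and
# prompt below are illustrative assumptions, not from the article.
import os

import openai

# Keep secrets out of source code; read the key from the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # an instruction-tuned completion model
    prompt="Explain in one sentence what a stethoscope is used for.",
    max_tokens=60,
    temperature=0.2,  # lower temperature yields more conservative output
)

print(response["choices"][0]["text"].strip())
```

A few lines like these were enough to put natural language generation behind a search box or support chat, which goes some way toward explaining the 300-plus applications OpenAI reported.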

Through continual alignment research, OpenAI trained InstructGPT models to be better at following English instructions and presenting more truthful information. ChatGPT takes that advancement and makes it conversational. 

Through its dialogue format, ChatGPT not only answers a wide variety of text prompts; it can also field follow-up questions and even admit its mistakes.
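Mechanically, that dialogue format means each new question is sent along with the prior turns of the conversation, which is what lets the model handle follow-ups. ChatGPT itself launched as a web interface, so the sketch below assumes the chat-style endpoint OpenAI later exposed in its Python client; the model name and the example questions are our own illustrative assumptions.

```python
# Minimal sketch of a multi-turn, chat-style exchange. Assumes the
# chat endpoint from OpenAI's pre-1.0 Python client and that the
# OPENAI_API_KEY environment variable is set; the model name and
# questions are illustrative assumptions.
import openai

messages = [
    {"role": "user", "content": "What does an HbA1c test measure?"},
]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = first["choices"][0]["message"]["content"]

# Carrying the earlier turns forward is what makes the follow-up
# question ("And how often...") interpretable to the model.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "And how often is it typically done?"})

second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(second["choices"][0]["message"]["content"])
```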

The upside: ChatGPT can improve access to health information

The technology behind ChatGPT could have major implications in the healthcare sector.

Healthcare Huddle’s Jared Dashevsky even suggested we may one day look back on ChatGPT as “the next stethoscope—a tool every provider (and patient) can access.”

Dashevsky paints an exciting picture of the many pathways ChatGPT can take toward improving healthcare, including patient care and delivery, research and diagnostics, and clinical and non-clinical workflow optimization. 

We’re also interested in ChatGPT’s potential to compile readable, comprehensive health information for patients who can’t easily access a provider, and who may be overwhelmed by the sheer number of articles a basic search engine query for health information turns up.

Earlier this year, we shared our perspective on how AI health tools like the WHO’s Florence digital health assistant can save lives. A powerful AI communication tool can bridge gaps around the world where unmet health information needs keep people from seeking and receiving the care they require.

In that piece, we also explained that one of the advancements we’d like to see in health information AI tools is easy accessibility for the people who need them. Given ChatGPT’s growing popularity and ease of use, we can see how the platform could tick that box.

The downside: ChatGPT might improve access to inaccurate health information

One of the spookiest aspects of seeing ChatGPT in action is not just how good it is at generating writing that reads as though an eloquent human wrote it, but how good it is at making inaccurate information read as though an eloquent (and informed) human wrote it.

Climate scientist Dr. Alex Thompson encountered this ability firsthand and shared the experience on Twitter. When he asked ChatGPT a question about his own research, the AI produced a coherent response and even supplied citations to back it up.

The catch: the citations were fake. A reader not steeped in the literature the way Thompson is would have no way of knowing that.

Thompson isn’t the only one to discover how hard ChatGPT’s confident delivery of inaccurate information can be to detect, even for technologically savvy, media-literate readers. One important signpost for this concern: the programming Q&A site Stack Overflow temporarily banned ChatGPT-generated answers.

Communicating accurate health information is complicated enough as it is. The COVID-19 pandemic and the worldwide discord between public health bodies and the public have made that clear.

With that context in mind, we’re most optimistic about ChatGPT applications where the stakes of a confidently wrong answer are lower: optimizing complex clinical workflows and routine patient interactions like appointment scheduling.

OpenAI, for its part, acknowledges these limitations. The company hopes that making the ChatGPT interface so accessible will draw an influx of users whose feedback helps the platform improve through continued development.

However, when it comes to actually communicating complex health information to patients and the public, we still prefer leaving it to the professionals. Humans, that is.


MedTech Pulse is a newsletter publication on innovation at the intersection of technology and medicine. Stay ahead with unique perspectives on industry news, the latest startup deals, infographics, and inspiring conversations.

Powered by CeramTec