Are Custom GPTs safe for use in healthcare automation?

To much excitement and fanfare, OpenAI announced the introduction of custom versions of its popular ChatGPT platform in November 2023. These ‘Custom GPTs’ can be made to serve specific purposes by combining instructions, extra knowledge, and any combination of skills. Then, in January 2024, it launched its highly anticipated ‘GPT Store’ which will function as a marketplace, similar to Apple’s App Store, where Custom GPTs could be shared and sold for others to use. These developments are in addition to the previously available GPT Plugins that allow developers to utilize API connections with various software platforms for AI automation.

Since the launch of the GPT Store, OpenAI has boasted about the availability of over 3 million Custom GPTs. These range from book recommenders to programming tutors, as well as a few dubious and concerning mental health AI bots

For many, these releases are a leap forward in the democratization of AI, creating opportunities for businesses and consumers alike to reap the benefits of Generative AI in a myriad of ways. However, these Custom GPTs are powered by the same Large Language Model as ChatGPT, which is known to be prone to inaccuracies and lacks transparency in decisioning. This begs the question, are Custom GPTs safe for use in healthcare automation? 

 

Is it safe to use Custom GPTs in healthcare automation?

Unsurprisingly, the short answer is no. While custom-trained GPT models can be beneficial for many users, their deployment in regulated industries like healthcare requires careful consideration of several factors. These include ensuring compliance with data privacy regulations like HIPAA in the USA and GDPR in the UK and EU, maintaining accuracy and reliability to prevent errors, addressing biases to avoid disparities in care, and adhering to regulatory standards specific to healthcare. 

This is particularly important when considering it takes little effort to reveal and leak information and files from these models. Researchers at Northwestern University tested over 200 custom GPTs and found the success rate was 100 percent for file leakage and 97 percent for system prompt extraction, all achievable through simple prompts. 

As outlined above, custom GPTs can be created by anyone and uploaded to the OpenAI Store. This poses several risks that make them unsafe for use in healthcare including data privacy and security risks like the study outlines above, but also risks to bias and fairness in outputs and lack of transparency and accountability. 

While these custom bots may provide information on medical topics, they may also provide inaccurate, outdated, or vague medical information, which can be harmful to patients. 

Perhaps most concerning is the absence of robust regulatory frameworks governing custom GPTs in healthcare. Healthcare automation software has to adhere to stringent regulatory standards to ensure patient safety and data privacy, however, custom GPTs lack the necessary oversight and accountability mechanisms. This regulatory void poses a significant barrier to their widespread adoption in healthcare settings.

Learning from mistakes

AI-powered chatbots have recently gained a lot of attention in mainstream media, but not always for the right reasons. The infamous DPD chatbot scandal, where one user made the chatbot disregard any existing rules and curse in every response, underscores the imperative need for strict guardrails, compliance regulations, and robust security protocols. Especially if this technology is being used in highly sensitive environments like healthcare. 

So what exactly went wrong with the DPD chatbot? In this case, DPD likely used an existing ‘off the shelf’ LLM, like ChatGPT or one from the ChatGPT store, and given how easy it was for users to trick the chatbot into disobeying rules, it indicates that there were no specific guardrails in place. This opened the door for users to essentially teach the chatbot to listen to their commands and produce bad outputs. 

In healthcare, it’s especially important to avoid the same issues as highly sensitive data might be shared and communicated through the digital assistant. 

How OpenDialog is different

Ensuring regulatory compliance of software tools and platforms for use in a healthcare setting is paramount. The OpenDialog platform has been designed to support healthcare providers with its unique explainability and audit trails, as well as rich data that provides actionable insights, supporting improvements, and ensuring patient safety. 

Want to learn more about how your healthcare organization can safely harness Generative AI for healthcare automation? Request a demo of the OpenDialog platform here.

Share This Post

More To Explore

See How it works!

Get in touch for a showcase of how OpenDialog can help your business Deploy Conversational AI, at scale.

Transformation eBook download
The Generative AI Transformation Guide

Stay ahead of the curve with Generative AI and automation

Download our free eBook to see how to leverage the power of Generative AI and automation safely and thrive in this new age.