Are Custom GPTs safe for use in insurance automation?

Are custom GPTs safe for use in insurance automation?

Since ChatGPT took the world by storm in December 2022, businesses in every industry, including insurance, have looked for ways to leverage its capabilities. Fast forward 12 months and amidst much excitement and anticipation, OpenAI unveiled custom versions of its popular ChatGPT platform which can be tailored to serve specific purposes by combining instructions and additional knowledge and skill sets. Later, in January 2024, the highly anticipated ‘GPT Store’ was launched, similar to Apple’s App Store, where users could share and sell their ‘Custom GPTs’. Furthermore, developers can now leverage API connections across various software platforms for AI automation through GPT Plugins. Could a CustomGPT be the answer to automation in insurance?

With these releases, businesses and consumers can now harness the benefits of Generative AI across multiple domains, signifying an important step towards democratizing AI for many. However, it’s important to note that Custom GPTs operate on the same Large Language Model as ChatGPT, which is open to inaccuracies and lacks transparency in decision-making.

This raises the question about compliance when employing Custom GPTs in regulated industries and if they are safe to use for automation within the insurance sector.

 

Is it safe to use Custom GPTs for insurance automation?

Despite the excitement around the ability to create custom AI chatbots – or ‘GPTs’ – unsurprisingly they are not safe to use when automating insurance processes. 

While the benefits of custom GPTs in automation are undeniable, such as the ease of creation and availability, it’s essential to address concerns regarding safety and compliance. Insurers must ensure that AI models are trained on high-quality, representative data and undergo rigorous testing to identify and mitigate potential biases and errors. Additionally, robust security measures must be in place to safeguard sensitive customer information and prevent unauthorized access or data breaches. Furthermore, compliance with regulatory requirements and ethical standards is paramount. Insurers must ensure that custom GPTs adhere to industry regulations, such as data protection laws and fair lending practices, to avoid legal repercussions and maintain trust with customers.

This is particularly important when considering it takes little effort to reveal and leak information and files from these GPT models. Last year, researchers at Northwestern University tested over 200 custom GPTs and found the success rate was 100 percent for file leakage and 97 percent for system prompt extraction, all achievable through simple prompts. 

With this in mind, and considering custom GPTs can be created by anyone and uploaded to the OpenAI Store, it poses several risks that make them unsafe for use in insurance. It also risks potentially biased and unfair outputs while there is an additional lack of transparency and accountability. 

While these custom bots may provide information on insurance topics, they may also provide inaccurate, outdated, or vague information, which can be potentially misleading to customers and policyholders. 

Above all, the most concerning risk is the absence of adequate regulatory frameworks governing custom GPTs for use in insurance. Insurance automation software must adhere to stringent regulatory standards to ensure customer safety and data privacy, however, custom GPTs lack the necessary oversight and accountability mechanisms. This regulatory void is a significant barrier to the widespread adoption of custom GPTs amongst insurance providers.

 

Learning from mistakes

Chatbots powered by artificial intelligence have recently attracted a lot of attention in mainstream media, but not always for the right reasons.

The recent and infamous DPD chatbot scandal resulted in one user being able to instruct the chatbot to disregard any existing rules and curse in every response. This underscores the imperative need for strict guardrails, compliance regulations, and robust security protocols. This is especially true if this technology is being used in highly sensitive environments like insurance. 

But what exactly went wrong with the DPD chatbot? In this case, DPD likely used an existing ‘off the shelf’ LLM, like ChatGPT or one from the ChatGPT store, and given how easy it was for users to trick the chatbot into disobeying rules, it indicates that there were no specific guardrails in place. As a result, users could teach the chatbot to ignore their commands and therefore produce incorrect results. 

In insurance, it’s especially important to avoid the same issues as highly sensitive data might be shared and communicated through the digital assistant and produce unwanted responses. 

 

How OpenDialog is different

Ensuring regulatory compliance of software tools and platforms for use in insurance is paramount. The OpenDialog platform has been designed to support insurance providers with its unique explainability and audit trails, as well as rich data that provides actionable insights, supporting improvements, and ultimately automating up to 90% of interactions. 

Want to learn more about how generative AI automation can be safely adopted by insurance organizations? Why not request a demo of the OpenDialog platform with a member of our team? Book here.

Share This Post

More To Explore

See How it works!

Get in touch for a showcase of how OpenDialog can help your business Deploy Conversational AI, at scale.

Transformation eBook download
The Generative AI Transformation Guide

Stay ahead of the curve with Generative AI and automation

Download our free eBook to see how to leverage the power of Generative AI and automation safely and thrive in this new age.