Understanding the European AI Act

How OpenDialog supports safe, trustworthy and compliant AI use in insurance

As AI technology continues to advance, so do the regulations governing its use. The European AI Act is the first large-scale legislative effort aimed at regulating the use of AI technologies.

At OpenDialog, we are committed to supporting the safe adoption of AI in insurance. In this blog post, I’ll explore how our platform aligns with the European AI Act’s requirements and how we are positioned to adapt to future iterations of the Act.


Understanding the European AI Act

Who is affected

The EU AI Act defines different actors in the AI value chain: providers, importers, distributors, deployers and users.

  • Providers develop an AI system or place one on the market.
  • Importers place on the market an AI system from a third country.
  • Distributors make available on the market an AI system supplied by a provider or an importer.
  • Deployers use an AI system for their own purposes or on behalf of a third party.
  • Users interact with, or are affected by, an AI system.

Depending on the role and risk level of the AI system, the EU AI Act imposes different obligations and responsibilities on these actors. 


Understanding Risk Levels

The European AI Act categorises AI systems into four risk levels:

  1. Unacceptable Risk: AI systems that pose a clear threat to safety and fundamental rights, such as social scoring and manipulative AI, are prohibited outright.
  2. High Risk: These systems are subject to stringent regulations. They include AI used in critical infrastructure, education, employment, essential services, law enforcement, etc. Providers of such systems must comply with a range of requirements on training data and data governance, technical documentation, record-keeping, technical robustness, transparency, human oversight and cybersecurity. In addition, these systems must be registered in an EU database before being placed on the EU market.
  3. Limited Risk: These systems have lighter transparency obligations. Developers must ensure end-users are aware they are interacting with AI.
  4. Minimal Risk: Most AI applications, like spam filters and AI-enabled video games, fall under this category and are largely unregulated.

The majority of use cases involving automated conversational assistants are expected to fall under the limited risk heading, which carries with it a set of requirements around disclosure and transparency.

OpenDialog works with our clients to carefully assess each potential use case and determine whether any further requirements or concerns apply. The Act’s more stringent requirements target a smaller subset of providers of General Purpose AI models (such as foundational Generative AI models).
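To make the limited-risk transparency duty concrete, here is a minimal, purely illustrative sketch (not OpenDialog’s actual implementation; the names and messages are our own assumptions) of how a conversational platform might enforce an up-front AI disclosure:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # stringent obligations + EU database registration
    LIMITED = "limited"            # transparency and disclosure obligations
    MINIMAL = "minimal"            # largely unregulated

AI_DISCLOSURE = (
    "You are talking to an automated AI assistant. "
    "You can ask for a human agent at any time."
)

def open_session(tier: RiskTier, greeting: str) -> list[str]:
    """Start a conversation, prepending the disclosure where the Act requires it."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Prohibited system: must not be deployed at all.")
    messages = []
    if tier in (RiskTier.HIGH, RiskTier.LIMITED):
        messages.append(AI_DISCLOSURE)  # end-users must know they are talking to AI
    messages.append(greeting)
    return messages
```

A real deployment would derive the tier from the use-case assessment described above rather than hard-coding it.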


Implementation Timelines

The AI Act was approved by the European Parliament on 13 March 2024 and is expected to be published in the Official Journal of the EU by July 2024. From that point, the timeline is as follows:

  • 20 days after publication, the AI Act will “enter into force”; however, no obligations apply to providers immediately.
  • 6 months later, the prohibitions on unacceptable-risk systems will apply.
  • 12 months later, notifying authorities should be set up and the provisions around general-purpose AI models and governance will come into force; this stage is chiefly relevant to high-risk applications.
  • 24 months later, the remainder of the AI Act will apply, covering limited-risk and minimal-risk systems.

In addition, we expect codes of practice to be published 9 months after the AI Act enters into force.
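As a quick worked example, the sketch below turns those offsets into concrete calendar dates. The publication date is a placeholder assumption, since the exact Official Journal date was not yet fixed at the time of writing:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped for safety)."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, 28))

# Placeholder assumption: publication in the Official Journal in July 2024.
publication = date(2024, 7, 1)
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Prohibitions on unacceptable risk": add_months(entry_into_force, 6),
    "Codes of practice expected": add_months(entry_into_force, 9),
    "Notifying authorities and GPAI/governance provisions": add_months(entry_into_force, 12),
    "Remainder of the Act applies": add_months(entry_into_force, 24),
}

for label, when in milestones.items():
    print(f"{when.isoformat()}  {label}")
```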


OpenDialog’s Commitment

At OpenDialog, safety, trustworthiness and transparency are at the core of how our platform works. We will keep monitoring developments and work with our clients so that they are ready to comply with any regulatory requirements in a timely fashion.

Here’s how we align with the key provisions of the European AI Act:

Transparency and Auditing:

We provide comprehensive auditing capabilities that track and provide insights into AI decision-making processes, ensuring transparency and accountability. Our platform records and documents all interactions at a granular level, making it simple to comply with regulatory reporting requirements.
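For illustration only (this is a hypothetical schema, not OpenDialog’s actual data model), a granular audit record for a single conversational turn might capture who decided what, when, and why:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One audit entry per conversational turn (hypothetical schema)."""
    conversation_id: str
    user_utterance: str
    interpreted_intent: str  # what the system understood
    confidence: float        # model confidence for that interpretation
    action_taken: str        # what the assistant did or replied
    model_version: str       # which model produced the decision
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = AuditRecord(
    conversation_id="c-1042",
    user_utterance="I want to make a claim for my car",
    interpreted_intent="start_motor_claim",
    confidence=0.93,
    action_taken="launched_claims_flow",
    model_version="intent-model-2024-05",
)
```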

Risk Management and Data Governance:

OpenDialog’s risk management approach is designed to handle risk in AI systems and ensure robust data governance. We apply business-centric rules to the training, validation and testing of conversational AI applications, using data sets that are representative of the intended use, in line with the Act’s data governance requirements.
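To give a flavour of what such a rule can look like in practice (a simplified sketch under our own assumptions, not OpenDialog’s actual tooling), a release gate might check that the test set covers every intent the assistant must handle:

```python
from collections import Counter

def check_coverage(test_set: list[dict], required_intents: set[str],
                   min_examples: int = 25) -> list[str]:
    """Return the intents that are under-represented in the test set."""
    counts = Counter(example["intent"] for example in test_set)
    return [intent for intent in sorted(required_intents)
            if counts[intent] < min_examples]

# Example: flag gaps before a model is signed off for release.
gaps = check_coverage(
    test_set=[{"intent": "start_motor_claim"}] * 30,
    required_intents={"start_motor_claim", "policy_renewal", "complaint"},
)
assert gaps == ["complaint", "policy_renewal"]
```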

Human Oversight:

We design our AI systems to allow for human oversight, ensuring that decisions can be reviewed and handed off to human operators when necessary. This is crucial for use cases in insurance, where human judgement is paramount.
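One common oversight pattern, sketched here purely for illustration (the threshold and intent names are assumptions, not OpenDialog internals), is to escalate low-confidence or high-impact decisions to a human operator:

```python
def route_decision(intent: str, confidence: float,
                   high_impact_intents: set[str],
                   threshold: float = 0.8) -> str:
    """Escalate to a human when confidence is low or the stakes are high."""
    if confidence < threshold or intent in high_impact_intents:
        return "handoff_to_human"  # a person reviews before anything is actioned
    return "automate"

# e.g. claim rejections always get a human in the loop
assert route_decision("reject_claim", 0.95, {"reject_claim"}) == "handoff_to_human"
assert route_decision("policy_faq", 0.91, {"reject_claim"}) == "automate"
```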

Technical Documentation:

Our platform includes detailed technical documentation and user instructions, enabling clients to comply with the Act’s requirements. We also support clients by providing tools and resources to facilitate insights and reporting to assist with compliance.

Cybersecurity:

We prioritise operational and technological cybersecurity in our AI solutions, implementing advanced security measures to protect against cyber threats and ensure the integrity of our AI systems.

Proactive Adaptation for Future Compliance

The European AI Act is designed to evolve as AI technology advances. At OpenDialog, we have built our platform with flexibility and interoperability in mind so that it can adapt to future regulatory changes. Our ongoing commitment to innovation and compliance means that we are always ready to update our systems and processes in line with new regulations.


Supporting Insurance

Insurance is among the most heavily regulated industries. The potential positive impact of AI in this field is enormous, from improving customer journey outcomes to enhancing risk assessment and fraud detection. OpenDialog’s platform is specifically designed to support the unique needs of this industry:

In the insurance sector, our AI tools streamline claims processing, enhance risk assessment, and improve customer experience. By ensuring compliance with regulatory standards, we help insurance companies leverage AI’s capabilities while maintaining trust and transparency with their customers.


Closing remarks

The European AI Act represents a significant step towards ensuring the safe and responsible use of AI. At OpenDialog, we are dedicated to helping regulated industries navigate these complex requirements and harness the power of AI safely and effectively. Our platform’s robust controls, transparency features, and commitment to compliance make us a trusted partner for insurance organisations looking to adopt AI responsibly.

