How to Start with AI in Insurance: A Practical Guide (Part 2)

At the request of VVP, the leading platform for financial advisors in the Netherlands, our CTO and AI strategist, Dennie van den Biggelaar, explains in its latest issue how to apply AI and machine learning to ‘advice in practice.’ Across several editions, the following topics will be highlighted:

  • Starting with specific AI and ML
  • Operationalizing in business processes
  • Integrating into existing IT landscapes
  • Measuring = learning: KPIs for ML
  • Ethics, regulations, and society
  • AI and ML: a glimpse into the near future

In this second edition, we answer the question: how do you operationalize a trained algorithm?

Challenges in Operationalizing AI in Business Processes

Imagine you and your data science team have designed a promising AI algorithm to predict cancellations, so that advisors can proactively address them. This process was discussed in part 1 of this series, ‘AI in Practice.’ The potential is there, but soon you realize that several complex hurdles must be overcome to implement it practically. What are these hurdles, and how can you overcome them?

Measurable Results Are Lacking

A clearly formulated goal identifies precisely what the AI algorithm should achieve and how it aligns with business objectives. The scope, in turn, steers the project by defining the relevant data sources, budget, timelines, and expected outcomes. How do you get there?

A data science project is generally an investment where:

  1. It is unclear what it can deliver.
  2. It is uncertain if your team can realize it.

Therefore, make the project as small and manageable as possible without losing its value and impact if it succeeds. Try to achieve results as quickly as possible to prove you are on the right track.

If you don’t achieve those results, evaluate and adjust with the team. If you do achieve them? First goal reached! Then, create a compelling story and present it to your business stakeholders, discussing how to scale it within your organization.

Questions About Consistent Data Quality

A common stumbling block is the quality and consistent provision of up-to-date data. Inconsistencies and missing values can jeopardize the accuracy of the AI model. The solution? A thorough exploration of which data is always Accurate, Available, and Consistent (the data ABC).

If essential data does not meet these criteria, apply thorough data cleaning: handle missing values, extreme outliers, and incorrectly entered data. Then embed these cleaning steps structurally in a data transformation pipeline and its surrounding process, so that this data becomes part of a reliable foundation for the operational model.
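Such a cleaning step can be made repeatable as one function in the pipeline. The sketch below assumes a pandas DataFrame with hypothetical columns (`premium`, `policy_age_years`); the specific rules are illustrative choices, not prescriptions:

```python
import pandas as pd

def clean_policy_data(df: pd.DataFrame) -> pd.DataFrame:
    """One reusable cleaning step: handle missing values,
    cap extreme outliers, and drop clearly invalid entries."""
    df = df.copy()
    # Fill missing premium amounts with the median (a simple, common choice).
    df["premium"] = df["premium"].fillna(df["premium"].median())
    # Cap extreme outliers at the 1st/99th percentile (winsorizing).
    low, high = df["premium"].quantile([0.01, 0.99])
    df["premium"] = df["premium"].clip(low, high)
    # Remove rows with an impossible policy age (incorrectly entered data).
    df = df[df["policy_age_years"].between(0, 100)]
    return df

raw = pd.DataFrame({
    "premium": [120.0, None, 95.0, 10_000.0, 110.0],
    "policy_age_years": [3, 5, -1, 7, 12],
})
clean = clean_policy_data(raw)
```

Because the logic lives in one function, the same rules run identically every time new data flows into the model, which is exactly the structural guarantee the pipeline needs.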

Insufficient Trust in the AI Model

Insufficient understanding of and trust in ML models forms a barrier to acceptance among non-technical users. If you don’t pay enough attention to this, distrust and resistance will arise. A solution is selecting transparent models with good explainability, combined with smart methods that translate complexity into understandable concepts. Visualization and clear (process) documentation increase trust, pushing the “black box” objection into the background.

As with any change, it is important to carefully involve your colleagues in this process. Give them enough time to ask questions and get used to this new technology and its possibilities. Realize that their questions and feedback are essential inputs to make the intended application successful in practice.

Concerns About Security, Privacy, and Ethics

It goes without saying that the security and privacy of (customer) data are prerequisites to start with. Fortunately, much new legislation has been implemented in the past five years, and organizations are also applying it practically and structurally.

Trust is not only an issue of legislation and technology. Ethical objections can also be expected from various angles:

  • Are we sure the algorithm is fair?
  • And what does that mean?
  • Are certain groups worse off with an algorithm?
  • Do we find that ethically responsible?
  • How do I prevent my algorithm from discriminating?

Fortunately, the Dutch Association of Insurers has established several guidelines that you can incorporate into your algorithm and approach. Want to make sure you don’t overlook anything? Assign one person responsible for this.
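A first, very simple check on the fairness questions above is to compare how often the model flags customers in different groups. The sketch below uses hypothetical group labels and predictions; it illustrates the idea of a demographic-parity check, not a complete fairness audit:

```python
from collections import defaultdict

def flag_rates_per_group(records):
    """Share of customers flagged as 'high cancellation risk' per group.
    Large gaps between groups are a signal to investigate fairness."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, is_flagged in records:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

# Hypothetical model output: (group, flagged_by_model)
records = [("A", True), ("A", False), ("A", True), ("A", False),
           ("B", True), ("B", True), ("B", True), ("B", False)]
rates = flag_rates_per_group(records)
# Group B is flagged 1.5x as often as group A here, which would
# warrant an ethical review before deployment.
```

The person responsible for the guidelines can run a check like this routinely, so fairness becomes a measured property rather than an assumption.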

The Feedback Cycle Is Missing

Listening to user experiences and using this feedback creates a dynamic iterative cycle, allowing the model to evolve according to business requirements. A structured feedback mechanism is crucial for the self-learning ability of the AI model. How you set this up correctly differs per AI application.

In the specific case of ‘preventing cancellations,’ have advisors, for example, record what they did with the prediction: called the customer, visited them, or did nothing. This way, you can measure over time what effect it has on cancellations.
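Recording that feedback can start very simply. The sketch below assumes hypothetical feedback records in which advisors log their follow-up action and the eventual outcome, then computes the cancellation rate per action, the raw material the team (and the model) learns from over time:

```python
from collections import Counter

# Hypothetical feedback an advisor logs after each prediction:
# the action taken and whether the customer ultimately cancelled.
feedback = [
    {"action": "called",  "cancelled": False},
    {"action": "visited", "cancelled": False},
    {"action": "none",    "cancelled": True},
    {"action": "called",  "cancelled": True},
    {"action": "none",    "cancelled": True},
]

totals, cancels = Counter(), Counter()
for entry in feedback:
    totals[entry["action"]] += 1
    cancels[entry["action"]] += entry["cancelled"]

# Cancellation rate per follow-up action.
rates = {action: cancels[action] / totals[action] for action in totals}
```

Even this small table already answers the key question: do proactive actions measurably reduce cancellations compared to doing nothing?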

Insufficient Monitoring

The motto should be: “keep the algorithm on a leash.” You don’t want ‘hallucinations’ or unexpected performance degradation, for example after a trend break. This means a careful monitoring and alerting system must be in place to track model performance. A sustainable application also requires detailed documentation of parameters and the data used, so that the model remains transparent and reproducible.
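The core of such an alerting system can be sketched in a few lines: compare live performance against the baseline measured at deployment and raise an alert when the gap exceeds a tolerance. The metric, baseline, and tolerance below are illustrative assumptions:

```python
def check_model_health(recent_accuracy, baseline_accuracy, tolerance=0.05):
    """Flag the model when performance drops more than `tolerance`
    below the baseline measured at deployment time."""
    degraded = (baseline_accuracy - recent_accuracy) > tolerance
    return {
        "recent": recent_accuracy,
        "baseline": baseline_accuracy,
        "alert": degraded,
    }

# Baseline measured during validation; recent score from live monitoring.
status = check_model_health(recent_accuracy=0.78, baseline_accuracy=0.86)
```

In practice this check would run on a schedule and notify the team, but the principle stays the same: the leash is a threshold plus an alert.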

The Model Is Not Scalable

An algorithm must be part of a system designed with scalability in mind. Secure cloud solutions and scalable infrastructure such as MLOps technology (the ML variant of DevOps) are generally necessary. Consider growth projections and ensure a sufficiently flexible system that adapts to evolving business requirements. Making the right choices for integration with the IT landscape is essential (e.g., real-time or batch processing). More on this in the next edition.

Last But Not Least: Insufficient Involvement

According to innovation professor Henk Volberda, the success of innovation is only 25% technical and 75% dependent on human adoption. Successful adoption starts with “CEO sponsorship,” as water flows from top to bottom. Leadership must ensure sufficient training, communication, and support when deploying an AI model. Invest enough time and energy to make this new technology part of your organization, from strategy to operation. Because that’s where the real return on investment lies: the successful collaboration between human experts and AI technology.

“It’s easy to create a self-learning algorithm. What’s challenging is to create a self-learning organization.” – Satya Nadella, CEO of Microsoft

In summary: devising, building, and validating a robust algorithm is just phase one of successfully implementing AI in practice. In the next edition, we will delve into how to integrate it with existing IT systems and workflows.
