Feb 20, 2024
AI in Consulting Practice #1: Getting Started with AI
Dennie van den Biggelaar, Onesurance, in Ken je vak! in VVP 1-2024
In this first edition of AI in Consulting Practice, we start at the beginning: what is it and how do you get started? AI is a machine or software that performs tasks that have traditionally required human intelligence. Machine learning (ML) is a specific part of AI that allows a machine or software to learn on its own from historical predictions or actions.
The best-known and most discussed example of ML software is ChatGPT, which is specifically designed to generate meaningful text for the user. But there are countless other problems where machine learning can help us, and for those there is not (yet) always an off-the-shelf solution, like ChatGPT, that you can use right away.
To build such an actionable AI solution, you need to bring the right competencies together at the right time. It is the job of an AI strategist to work with a multidisciplinary team of business experts, ML engineers, data engineers and data scientists to determine what you want to predict, how (accurately) to do it, what techniques to deploy and finally how to operationalize and secure it so that it actually leads to the desired results.
Predicting churn
As an office, you want to make sure that the right customers get the right attention from your advisors at the right time so that churn is minimized. It's ideal if you then know which clients have a high chance of canceling. But how do you translate this to the team?
It is common for a customer to cancel a single policy. In most cases this is simply a mutation, and you don't want to pollute your ML model with it. Suppose a customer cancels all policies within the main liability branch, but does not cancel the rest (yet). Is that a customer in danger of leaving? And what if he also cancels everything within the main fire branch, but still has legal expenses insurance and ORV? Have policies also been transferred internally? How high is the churn? All things you want to determine before putting a team of ML engineers to work.
In addition, you need to consider your forecast horizon: how far ahead do you want to forecast? Do you want to know which customers are going to cancel in the next month or the next three, six or 12 months? Again, this seems like a detail, but under the hood it means that you are going to train a completely different ML model.
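The two definitional choices above, what counts as a full cancellation and how far ahead you forecast, can be sketched as a labeling step. This is a minimal illustration with hypothetical column names (`customer_id`, `snapshot_date`, `full_cancel_date`), not the author's actual pipeline:

```python
from datetime import timedelta
import pandas as pd

# Hypothetical table: one row per customer, with the date on which the
# customer cancelled all policies (NaT if that never happened).
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "snapshot_date": pd.to_datetime(["2024-01-01"] * 3),
    "full_cancel_date": pd.to_datetime(["2024-03-15", None, "2025-02-01"]),
})

HORIZON_DAYS = 180  # six-month forecast horizon; change this and you
                    # effectively train a completely different model

# Label = 1 only if the full cancellation falls within the horizon.
customers["churn_label"] = (
    customers["full_cancel_date"].notna()
    & (customers["full_cancel_date"]
       <= customers["snapshot_date"] + timedelta(days=HORIZON_DAYS))
).astype(int)

print(customers[["customer_id", "churn_label"]])
```

Note that customer 3 does cancel, but outside the 180-day window, so under this horizon he is labeled as retained. That is exactly why the horizon is not a detail.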
'Too little attention is the main reason customers cancel'
Finding patterns
After you have clearly defined what you want to predict, it is time to see whether your data is sufficiently Accurate, Available and Consistent (the 'data ABC'). The main reason customers cancel usually boils down to the fact that they did not receive enough attention. The question, of course, is with whom, when and why 'too little attention' occurred. You don't have this information in your data warehouse, so you have to construct it yourself through feature engineering. Which characteristics (features) have a significant effect on the likelihood of churn? This is an analytic and creative process in which the knowledge and experience of insurance experts and data scientists come together.
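As a sketch of what feature engineering for 'attention' might look like: from a raw contact log (a hypothetical table, assumed here for illustration) you can derive features such as the number of contacts and the days since the last contact, which do not exist as columns in the data warehouse:

```python
import pandas as pd

# Hypothetical contact log: one row per advisor-customer interaction.
contacts = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "contact_date": pd.to_datetime(["2023-02-01", "2023-11-20", "2023-06-05"]),
})
snapshot = pd.Timestamp("2024-01-01")

# Construct 'attention' features per customer.
features = contacts.groupby("customer_id").agg(
    n_contacts=("contact_date", "size"),
    last_contact=("contact_date", "max"),
)
features["days_since_last_contact"] = (snapshot - features["last_contact"]).dt.days
features = features.drop(columns="last_contact")
print(features)
```

Whether 'days since last contact' actually has a significant effect on churn is precisely what the insurance experts and data scientists determine together.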
Once a sound initial table of features has been built, you can finally get started with machine learning. Experience shows that predicting cancellations (churn) is best modeled with classification or survival analysis. There are hundreds of ML techniques that are theoretically suitable for this purpose. In choosing one, it is important to consider: to what extent must the algorithm be explainable, how complex are the patterns, and how well does the data meet the ABC criteria?
'Based on precision, recall and AUC scores, among others, the best ML model is determined'
Validating patterns
After putting the "machine" to work to find patterns with which to make predictions, there always comes an exciting moment... How accurate are the various models? For this, the ML engineer has an extensive toolbox. First, he keeps a portion of the data separate to test and validate the trained model. This guarantees the robustness of the patterns found and prevents a model from giving inaccurate predictions in the "real world". Then the false positives and false negatives are examined, along with their costs.
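The holdout step described here can be sketched in a few lines with scikit-learn. The data is synthetic (a stand-in for the real churn feature table) and the choice of a gradient-boosting classifier is an assumption for illustration, not the author's model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for a churn feature table
# (roughly 90% loyal customers, 10% churners).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9], random_state=42)

# Keep a portion of the data separate to test and validate the model,
# preserving the churn rate in both halves (stratify).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```

The model never sees the holdout rows during training, which is what makes the score an honest estimate of "real world" performance.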
For example, a false prediction that someone is going to cancel next month (false positive) is not so bad. The advisor calls the customer and concludes that nothing is wrong: it just costs him fifteen minutes of his time. If the algorithm wrongly predicts that someone will remain loyal (false negative), it is much more costly: you lose a customer.
Based on precision, recall and AUC scores, among other factors, the best ML model is determined. In addition, algorithms can be tuned more or less stringently to better suit the intended business process. This is called parameter tuning, and an experienced ML engineer knows how to do this responsibly.
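The interplay of these metrics with the asymmetric costs from the previous paragraph can be made concrete. The labels, scores and cost amounts below are invented for illustration (a false positive costs a short phone call, a false negative a lost customer); tuning the decision threshold against such costs is one simple form of the tuning mentioned above:

```python
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# Illustrative data: true churn labels and model scores for ten customers.
y_true  = [0, 0, 0, 0, 0, 1, 1, 1, 0, 1]
y_score = [0.1, 0.2, 0.15, 0.3, 0.45, 0.8, 0.55, 0.9, 0.6, 0.35]

# Assumed costs: a needless 15-minute call vs. a lost customer.
COST_FP, COST_FN = 10, 500

def total_cost(threshold):
    y_pred = [int(s >= threshold) for s in y_score]
    fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))
    fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))
    return fp * COST_FP + fn * COST_FN

best = min([0.2, 0.3, 0.4, 0.5, 0.6], key=total_cost)

y_pred = [int(s >= 0.5) for s in y_score]
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_score))
print("cost-minimizing threshold:", best)
```

Because a missed churner is far more expensive than a needless call, the cost-minimizing threshold ends up low: the model is deliberately tuned to flag more customers, exactly the kind of business-driven tuning the text describes.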
How do you make it usable?
Then you integrate the algorithm into the operational processes. How does the data move back and forth safely and efficiently, and how can the advisor use the prediction easily? That is the work of data and software engineers. Finally, you also want the advisor to be able to give feedback on the quality of the predictions, so the algorithm learns from the user. This way the algorithm becomes smarter and more efficient the more it is used.
That's the real "AI" component, but more on that in the next issue!
The original article appeared in VVP and can be read online here.