Choosing Scalable AI Technology with Data Ethics

When deploying artificial intelligence, data ethics plays a major role, as we explain in this article. That is why we work closely with our sister company Brush-AI, the Netherlands’ first company dedicated specifically to responsible AI.

This collaboration ensures that the algorithms we design inherently comply with the Ethical Framework for Data Applications established by the Dutch Association of Insurers.

We see it every day at the office: modern computing causes the mountain of ‘big data’ to grow exponentially. But what can we do with this data, which by itself is meaningless? How do we turn it into usable information that enables us to make better decisions and improve customer service? And how do we ensure that this data-driven decision-making happens in an ethical and responsible manner?

Authors: Jack Vos (Onesurance.nl) and Max Roeters (Brush-ai.nl)

The DIKW Model

Wisdom is the art of making the right judgments and acting correctly in all circumstances. One thing is clear: each cycle of newer technology and ever more available information confuses us more than it contributes to knowledge and wisdom. The writer T.S. Eliot noted this as early as 1934. In an inspiring poem, he writes: “Where is the Life we have lost in living? Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information? The cycles of Heaven in twenty centuries Bring us farther from God and nearer to the Dust.”

Eliot’s text is considered by many to be the basis of the well-known DIKW model in the ICT world. The model answers the question of why we want to use data and information: to enable humans to make better decisions.

We distinguish four levels:

  1. Data: Simple facts and figures. This is the huge mountain of big data, such as all stored policy changes. Without analysis, we can do nothing with it.
  2. Information: Data that is bundled, organized, and preferably visualized. This allows us to see what has happened and, for example, use it as a benchmark. How many policies were canceled in 2022 compared to 2021?
  3. Knowledge: Information from which underlying patterns can be derived, given the context. This answers the question of why something happened. Why did customers cancel their policies? Why were they dissatisfied? Was it due to premium increases? Poor claims handling?
  4. Wisdom: Ideally, we use the gathered knowledge to make better decisions now and in the future that benefit everyone. Perhaps we can predict who will cancel a policy based on past patterns? And more importantly: can we predict how to prevent customer dissatisfaction?

The four levels of the DIKW model correspond to four levels of data maturity that an organization can occupy:

  • Descriptive level (organizing data into information)
  • Diagnostic level (gaining knowledge by extracting patterns from information)
  • Predictive level (using knowledge to predict what will happen)
  • Prescriptive level (prescribing how to make something happen, or prevent it)

Data maturity indicates the extent to which a company effectively uses its data sources to increase productivity. At the highest levels, artificial intelligence (AI) technology is unavoidable. And if we strive for wisdom, we want to use this technology correctly, which is why ethics plays an important role.
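The climb from the descriptive to the predictive level can be made concrete with a small sketch. All records, feature names, and the 0.5 threshold below are hypothetical, chosen purely to illustrate the idea of cancellation prediction mentioned above:

```python
from collections import defaultdict

# Hypothetical policy records: (years_as_customer, had_premium_increase, cancelled).
records = [
    (1, True, True), (2, True, True), (8, False, False),
    (5, False, False), (1, True, True), (6, True, False),
    (3, False, False), (2, True, False),
]

def churn_rate_by_feature(records, feature_index):
    """Diagnostic level: cancellation rate per value of one feature."""
    counts = defaultdict(lambda: [0, 0])  # feature value -> [cancelled, total]
    for rec in records:
        counts[rec[feature_index]][0] += int(rec[2])
        counts[rec[feature_index]][1] += 1
    return {value: cancelled / total for value, (cancelled, total) in counts.items()}

def at_risk(record, rates, threshold=0.5):
    """Predictive level: flag a customer whose segment churns above the threshold."""
    return rates[record[1]] > threshold

rates = churn_rate_by_feature(records, 1)
print(rates)                           # {True: 0.6, False: 0.0}
print(at_risk((4, True, None), rates))  # True: premium increase predicts churn here
```

A real model would of course use far more features and data points, but the principle is the same: historical patterns (knowledge) feed a prediction about individual customers.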

How Advisors Can Use AI

Many insurers, underwriters, and intermediaries struggle with the question of how to continue serving tens of thousands of customers optimally. This can only be done scalably by using data and AI technology smartly, as deploying more advisors is simply too expensive. AI uses self-learning algorithms – essentially mathematical formulas – that can analyze millions of data points for correlations. Complex software then derives predictive patterns indicating whether something works well or not. For example, a substantiated prediction can be made for each customer about which actions in the customer journey lead to the highest customer satisfaction and the highest return for the office. This knowledge enables advisors to work more effectively and use their valuable time as efficiently as possible.

Some people distrust AI technology because they believe it operates as a ‘black box’: the algorithm’s incoming and outgoing information streams are visible, but the mechanisms connecting the two are not. If we aspire to wisdom, we want to understand these mechanisms in order to ensure the interpretability of the AI. This is crucial when working with personal data.
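One way to avoid the black box is to use models whose output can be traced back to individual, named inputs. A minimal sketch (the feature names and weights are hypothetical, not an actual scoring model):

```python
# Hypothetical interpretable model: each named feature has a visible weight,
# so every score can be traced back to the inputs that produced it.
WEIGHTS = {"premium_increase": 0.4, "open_complaint": 0.35, "long_tenure": -0.3}

def score_with_explanation(features):
    """Return a churn-risk score plus the contribution of each active feature."""
    contributions = {name: WEIGHTS[name] for name, active in features.items() if active}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"premium_increase": True, "open_complaint": True, "long_tenure": False}
)
print(round(score, 2), sorted(why))  # 0.75 ['open_complaint', 'premium_increase']
```

Because the explanation travels with the prediction, an advisor can always tell a customer which attributes drove a decision, exactly the property the black-box criticism targets.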

Responsible AI and the Ethical Framework

The idea behind the methodology known as responsible AI is to design the algorithm from the ground up so that the user can always make an informed decision about whether its use is ethically responsible in a given context. The ethical framework for data-driven decision-making established by the Dutch Association of Insurers aligns well with this. The framework stipulates seven requirements that must be observed for the ethical use of AI. The exact standards can be found in the framework itself; briefly, they are:

  1. Human Autonomy and Control: AI should be a means to a solution, not the goal itself. If less complicated technology can achieve the same result, it is preferred. Sufficient attention must also be paid to risks and potential conflicts of interest.
  2. Technical Robustness and Safety: Customer data must of course be protected, and the quality of the data must be ensured. Outdated customer data can inadvertently lead to incorrect insights.
  3. Privacy and Data Governance: Ensuring privacy is paramount when working with data. Human oversight is crucial, as certain biases can inevitably enter the model. Addressing this attentively and preventively helps avoid unnecessary errors.
  4. Transparency: Especially for decisions that can have a significant impact on a person’s life – such as rejecting a claim – the AI model must be structured so that it is always possible to explain to customers how a decision was made.
  5. Diversity, Non-Discrimination, and Fairness: Just as with humans, it is nearly impossible to create a completely bias-free AI model. By paying extra attention to underrepresented or vulnerable groups, we can prevent the model from discriminating against them, consciously or unconsciously.
  6. Societal Wellbeing: AI should help keep as many customers as possible insurable, and those who are difficult to insure or risk becoming uninsurable should be informed about ways to mitigate or cover their risks. Interpretable and transparent models can inform customers more precisely, for example by indicating which customer attributes a decision is based on.
  7. Accountability: Safe and open interaction about the potential risks of AI between the insurer, its employees, and its customers is essential. Mechanisms that allow continuous assessment of the technology should therefore be considered during development.
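The non-discrimination requirement can be monitored in practice with simple group-level metrics. A sketch of one such check, a demographic-parity gap (the group labels and decision data are hypothetical):

```python
def acceptance_rate(decisions):
    """Share of positive decisions (1 = accepted) in a group."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in acceptance rate between any two groups;
    a large gap is a signal to review the model for discrimination."""
    rates = [acceptance_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical accept/reject decisions per customer group.
decisions = {"group_a": [1, 1, 1, 0], "group_b": [1, 0, 0, 0]}
print(parity_gap(decisions))  # 0.5
```

A gap of 0.5 between groups would not prove discrimination on its own (the groups may differ in legitimate risk factors), but it flags exactly the kind of pattern that requirement 5 says deserves extra human scrutiny.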

Technology and Ethics: The Human in the Loop

What a human cannot do, AI technology can: convert enormous amounts of data into usable information and recognize predictive patterns within it. This enables insurance companies to deploy advisors more effectively, especially if they want to serve tens or even hundreds of thousands of customers scalably. In striving for wisdom, we want to keep the ‘human in the loop’. The ethical framework provides guidelines that help ensure the quality of an AI model. This is no threat to AI; on the contrary, it makes AI far more likely to be used and accepted, especially when the responsible AI method is applied. And the advisor always retains an important role, provided they bring interpersonal qualities such as empathy, kindness, and trust to bear at the right moment.

Interview with Indra Frishert, Marketing & Sales Director / Co-owner of Dazure

For those who don’t know you yet, what is Dazure?
Dazure is an underwriting agency that creates innovative insurance products that fit today’s world: honest products that we would also want to offer to our loved ones.

Why did you start with AI?
We were curious how we could improve our service to customers by making more use of all the data we have accumulated over the past 14 years. The data turned out to be of good quality, and in a short time we gained many valuable insights and usable opportunities. That was a pleasant surprise!

What advantages do you see in AI?
The big advantage is the self-learning effect of AI. This allows us to make great strides in the customer journey while keeping the ‘human in the loop’ to ensure those strides are taken carefully. This way we can improve, speed up, and simplify our processes for the customer. Take the acceptance process for life insurance, for example, which relies heavily on medical professionals such as general practitioners. This does not meet the customer’s wishes, as they want a policy immediately, and it also burdens the medical world, which has better things to do than assess insurance risks. With AI, we can predict which small medical examinations (SMEs) are unnecessary. AI is an effective tool, provided it is used correctly.

Why is responsible AI so important?
Responsible AI is important to us because big data itself is not the danger; the danger lies in how we as people (entrepreneurs) handle that data. We apply the ‘family yardstick’ to all our products: if you wouldn’t offer something to your own family, you simply shouldn’t do it. This is the standard we implement on all fronts and consider very important. By applying this yardstick to AI as well, we believe you create a company that genuinely builds a long-term relationship with its customers.
