Feb 11, 2025

Are we already relying on AI?

The Scholarship Angel has been around for almost ninety years and publishes 10 issues annually. In issue 937, the first column on AI appeared, written by our co-founder Jack Vos under the title "Do we already trust AI?"

Not surprisingly: trust is, after all, one of the most important core values in the insurance industry. The question is not whether we should use AI, but how we deploy AI in a way that deserves trust. Read more in this article.

It won't have escaped your notice that AI is also gaining momentum in the insurance industry. Innovative firms see the technology as the way to work smarter and more efficiently, staying one step ahead of competitors. Yet the question remains: how do customers perceive these developments? Recent research by DNB and the AFM paints a clear but uncomfortable picture. Only a quarter of respondents are positive about the use of AI by financial institutions, while 22 percent view it negatively. The majority are neutral or have no opinion yet, partly because few people know exactly how their bank, insurer or pension fund is using AI.

"With Responsible AI, we can create real value with AI in a sustainable way."

- Jack Vos, founder of Onesurance.ai

These figures show not only the gap between technology and consumers, but also the challenge for the industry. Trust is at the heart of insurance. Customers must be able to count on decisions being fair, transparent and explainable - whether made by humans or technology. This requires careful choices in how AI is applied.

Generative AI (gen AI), known from ChatGPT, has received most of the attention since 2022. The possibilities seem endless: writing texts, generating creative outputs and much more. But in insurance, gen AI often proves unsuitable. Even for experts, this form of AI is a black box: it is not explainable and produces variable results, making it unreliable for critical processes such as claims processing or underwriting. Moreover, using the required large data sets poses privacy risks, which may violate the AVG (the Dutch implementation of the GDPR).

In contrast, Predictive AI offers a more reliable alternative. This technology is accurate, consistent and - crucially - much more explainable. With an audit trail, you can show which factors influence a decision. Thus, it offers both transparency and scalability. Predictive AI is by no means a newcomer; the technology has already proven itself in the financial sector.
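To make the idea of an audit trail concrete, the sketch below scores a fictitious application with a simple logistic model and records how much each factor contributed to the outcome. The feature names, weights and threshold are invented purely for illustration; a real underwriting model would be far more involved, but the principle - every decision comes with a ranked list of the factors behind it - is the same.

```python
import math

# Invented example weights: positive values push toward acceptance,
# negative values push toward rejection or referral.
WEIGHTS = {"claims_last_3y": -0.9, "years_insured": 0.3, "open_payments": -0.6}
BIAS = 0.5

def decide(applicant):
    # Each factor's contribution = weight * value; the sum drives the decision.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    decision = "accept" if probability >= 0.5 else "refer to human"
    # Audit trail: factors ranked by their absolute influence on this decision.
    trail = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, probability, trail

decision, p, trail = decide({"claims_last_3y": 2, "years_insured": 8, "open_payments": 0})
print(decision, round(p, 2))        # accept 0.75
for factor, contribution in trail:
    print(f"  {factor}: {contribution:+.2f}")
```

Because the model is a weighted sum, the contribution of every factor can be stored alongside the decision, which is exactly the kind of explainability that gen AI models cannot offer.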

Yet we need not write off Gen AI. It is already valuable in support tasks, such as writing customer communications. For critical processes, human control remains essential. Here, AI can make a proposal, but humans keep the final say, especially when it comes to rejecting applications or claims. This procedure, known as human in the loop, helps ensure that ethical principles such as fairness and accountability are maintained.
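The human-in-the-loop procedure described above can be sketched as a simple routing rule: the model may automate the low-risk path, but any proposed rejection is queued for a human with the final say. The function and queue names below are invented for illustration.

```python
# Sketch of a "human in the loop" gate: the AI makes a proposal, but a
# proposed rejection is never final until a human reviewer confirms it.

def handle_claim(claim, model_propose, review_queue):
    proposal = model_propose(claim)        # AI makes a proposal
    if proposal == "approve":
        return "approved"                  # low-risk path may be automated
    review_queue.append(claim)             # rejection: a human keeps the final say
    return "pending human review"

queue = []
print(handle_claim({"id": 1, "amount": 120}, lambda c: "approve", queue))
print(handle_claim({"id": 2, "amount": 9000}, lambda c: "reject", queue))
```

The design choice is deliberately asymmetric: approvals can be automated for speed, while adverse decisions always pass through a person, which is where fairness and accountability are most at stake.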

Supervision also plays a crucial role. DNB's research shows that 62 percent of consumers are more positive about AI if it is subject to strict supervision. The AFM and DNB are therefore working on new methods to effectively assess AI. One thing is certain: the technology must meet the ethical standards of the industry. This requires proper procedures, including proactive monitoring of the solution.

So the question is not whether we should use AI, but how we deploy AI in a way that earns trust. With Responsible AI (RAI) - a combination of the right technology, human control and ethics - we can create real value with AI in a sustainable way. As Warren Buffett once said, "It takes 20 years to build a reputation and five minutes to ruin it." No matter how smart AI becomes, it is up to us to deploy it carefully and responsibly.


Source: AFM