Why I Refuse to Build AI Without Guardrails
On the choices nobody finds exciting, but that determine whether AI in insurance will last.
Last year, a prospect asked us whether we could build a model that predicts which clients are "difficult." High claims behaviour, lots of phone calls, that sort of thing. The idea: give those clients less attention. More time for the profitable relationships.
We said no.
Not because it is technically impossible. It is perfectly feasible. But because a model that ranks clients by how "difficult" they are will inevitably discriminate. Older people call more often. People with lower incomes claim differently. Feed that into an algorithm without explicitly correcting for it and a "difficulty" score quietly becomes an age and income score, with biases built in that nobody sees any more. That is exactly the kind of AI I refuse to be part of.
Explainability is not a feature
In our industry, explainability is not a nice-to-have. It is a hard requirement. When an adviser calls asking "why is this client at the top of my priority list?", there needs to be an answer. Not "the model says so", but: the policy expires in six weeks, policy density is low and there has been no contact for nine months.
That is why we do not build black-box models. It is a deliberate choice that sometimes makes us slower. A neural network can deliver better predictions than an interpretable model. But if nobody can explain why client A gets priority over client B, you do not have a tool. You have a guess with a dashboard around it.
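To make that concrete, here is a minimal sketch of what an explainable priority score can look like. This is illustrative Python, not our production code; the field names, weights, and thresholds are invented for the example, built around the reasons mentioned above.

```python
from dataclasses import dataclass

# Hypothetical client record; field names are illustrative, not a real schema.
@dataclass
class Client:
    days_until_policy_expiry: int
    policy_count: int             # stands in for policy density
    days_since_last_contact: int

def priority_score(c: Client) -> tuple[int, list[str]]:
    """Return a score plus the human-readable reasons behind it.

    Every point added to the score carries an explanation, so an adviser
    asking "why is this client on top?" gets reasons, not "the model says so".
    """
    score, reasons = 0, []
    if c.days_until_policy_expiry <= 42:   # roughly six weeks
        score += 3
        reasons.append(f"policy expires in {c.days_until_policy_expiry} days")
    if c.policy_count <= 1:
        score += 2
        reasons.append("policy density is low")
    if c.days_since_last_contact >= 270:   # roughly nine months
        score += 2
        reasons.append(f"no contact for {c.days_since_last_contact} days")
    return score, reasons

score, reasons = priority_score(Client(40, 1, 280))
print(score, "->", "; ".join(reasons))
# 7 -> policy expires in 40 days; policy density is low; no contact for 280 days
```

The point is not the scoring logic itself. The point is that every point on the score carries a reason an adviser can repeat to a client.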
What we deliberately do not build
We do not build risk scores based on postcodes. Not because the correlation is absent (it exists), but because a postcode is a proxy for income and background. We do not build models that replace advisers. And we do not build "engagement scores" that reduce clients to a number.
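To show why "the correlation exists" is not the end of the argument, here is a sketch of a proxy check: if knowing the postcode makes a client's income band substantially easier to guess, a model that uses postcode is, in part, using income. The postcodes, income bands, and data are entirely illustrative.

```python
from collections import Counter, defaultdict

# Hypothetical records: (postcode area, income band). Illustrative data only;
# in practice this runs over a real historical dataset.
records = [
    ("1011", "low"), ("1011", "low"), ("1011", "low"), ("1011", "mid"),
    ("2585", "high"), ("2585", "high"), ("2585", "mid"), ("2585", "high"),
]

# Baseline: how often would we be right guessing the overall majority band?
overall = Counter(band for _, band in records)
baseline = max(overall.values()) / len(records)

# Proxy strength: how often are we right guessing the majority band per postcode?
per_postcode = defaultdict(Counter)
for postcode, band in records:
    per_postcode[postcode][band] += 1
hits = sum(max(c.values()) for c in per_postcode.values())
proxy_accuracy = hits / len(records)

print(f"baseline {baseline:.2f} -> with postcode {proxy_accuracy:.2f}")
# baseline 0.38 -> with postcode 0.75: postcode carries real information about income
```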
Those refusals may sound principled. They are mostly practical. An insurer that discovers its AI system systematically disadvantages certain groups does not have a PR problem. It has a regulatory problem. The EU AI Act makes this legally enforceable from 2026 onwards.
The invisible choices
The difficult thing about ethics in AI is that the most important choices are invisible. Which variables do you include? Which do you deliberately leave out? How do you handle bias in historical data? Nobody wants to hear about that in a sales pitch. But it determines whether your system will still stand up when the regulator comes knocking in two years.
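One way to make those invisible choices visible is a disparity check on historical model output before anything ships. A simplified sketch: the group labels and data are illustrative, and the 0.8 threshold is borrowed from the common four-fifths rule of thumb, not a legal standard.

```python
from collections import defaultdict

# Hypothetical audit: flagged outcomes per (illustrative) age band.
records = [
    ("18-34", True), ("18-34", False), ("18-34", False), ("18-34", False),
    ("65+",  True),  ("65+",  True),  ("65+",  False),  ("65+",  False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {g: flagged / total for g, (flagged, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates)                            # {'18-34': 0.25, '65+': 0.5}
print(f"disparity ratio: {ratio:.2f}")  # 0.50, well under the 0.8 rule of thumb
```

A ratio this far below 0.8 does not prove discrimination. But it is exactly the kind of number that forces the conversation before the regulator does.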
We work with Brush-AI to systematically manage the ethical component. Not as a marketing narrative, but as part of our development process. Every module we build goes through an ethical review before it goes into production. That takes time. It does not produce flashy demos. But it is the reason our clients trust us with their client data.
I hold a seat on the KOAT advisory committee at SIVI, which focuses on quality assurance for automated applications in the financial sector. What I see there confirms the picture: the industry is moving fast, but the frameworks for responsible use are lagging behind. Those who invest now in explainable, transparent AI are building something that is becoming increasingly scarce: trust.
And trust is the one thing you cannot automate.