Nov. 1, 2023
More than a stochastic parrot
Jack Vos, Onesurance, in VVP 4, 2023
The rise of Artificial Intelligence (AI) seems to have reached the insurance industry as well through the ChatGPT hype, and the discussion about its effects is anything but black and white. The question is how to strike a balance between the positive and negative aspects of the rise of AI, because in no sector is this balance more crucial than in insurance.
To begin with, the concept of artificial intelligence is by no means new. The term was first used at the "Dartmouth Workshop" in 1956, where scientists came together to discuss how machines might exhibit intelligent behavior. Since then, three main phases can be distinguished in the development of AI within the insurance landscape:
- Rule-based systems (1960-1990). In the early years of AI, these systems used only manually entered rules to make simple decisions based on specific inputs, for example accepting an application against simple predefined criteria. These systems were still far too limited for complex decisions.
- Statistical modeling and data analysis (1990-2010). As computers became faster and analysis software more intelligent, Machine Learning models could be used to discover patterns and trends in large amounts of insurance data. This helped particularly in risk assessment and fraud detection.
- Machine Learning and predictive analytics (2010-present). With the emergence of more advanced Machine Learning techniques, such as neural networks and deep learning, it became possible to perform even more complex analyses: predicting customer behavior, setting rates based on individual characteristics and detecting fraudulent activity with higher precision. AI is also being used to improve customer service with chatbots, virtual assistants and automated interactions.
And so, since November 2022, there is ChatGPT, in which Microsoft plans to invest 10 billion euros. This is a Large Language Model (LLM) that repeatedly predicts the next word in a text, mimicking human language. Especially among seasoned data experts who have spent years refining algorithms and making sense of big data, the recent AI hype caused by ChatGPT has generated a mixture of excitement and concern. Two tweets from leading figures in the AI world sum up the contrast between the positives and the negatives well.
The positive side
Tweet 1 is from noted venture capitalist Marc Andreessen, in January 2023: "We are just entering an AI-powered golden age of writing, art, music, software, and science. It's going to be glorious. World historical."
The excitement around AI is certainly justified for the insurance industry as well, as it has introduced possibilities that once seemed unthinkable. AI promises efficiency and accuracy on a new level: it can analyze very large amounts of data and create insights around risk assessment, claims handling or customer service that human assessors could never produce. The benefits of AI are simply too attractive to ignore, which is why more and more decision makers are putting AI on the agenda. It enables insurance companies to gain competitive advantages, improve the customer experience and reduce costs at the same time. In short: a unique opportunity to lead in a traditional industry that is subject to change.
The negative side
The contrast with tweet 2 is stark: "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. (...)"
The author of this tweet is Sam Altman, CEO of OpenAI, the company behind ChatGPT, himself. Have you ever heard a CEO say about his own product that, for now, it mainly creates a misleading impression of greatness? The concern is justified, because persuasiveness is, after all, the most important factor in allowing misinformation to do its devastating work.
The persuasiveness of ChatGPT should not be underestimated in this regard. Anyone who is professionally involved in truth-finding in any way should be alert to this, because with the right questions, extremely credible and convincing nonsense can come out.
According to Professor Terrence Sejnowski, author of The Deep Learning Revolution, language models also reflect the intelligence and diversity of their interviewer. For example, Sejnowski asked GPT-3: "What is the world record for walking across the English Channel?", to which GPT-3 replied: "The world record for walking across the English Channel is 18 hours and 33 minutes." The truth, that one cannot walk across the English Channel, was easily bent by GPT-3 to reflect Sejnowski's question. The coherence of GPT-3's answer depends entirely on the coherence of the question it receives. Suddenly, GPT-3 can walk on water, all because the interviewer used the verb "walk" instead of "swim".
'AI is like a parrot repeating words without knowing their meaning'
There is an apt analogy that illustrates the drawbacks of AI: the stochastic parrot. Stochastic means "random or chance-based", referring to processes in which outcomes are not completely predictable. AI, especially in the form of Generative AI such as ChatGPT, in fact acts as a repetition mechanism without real understanding. Just as a parrot can repeat words without knowing their meaning, AI can reproduce (text) patterns without the ability to understand the underlying logic. This is worrisome, especially when we think about decisions with major consequences, such as underwriting risks or assessing insurance claims. When AI is deployed to make these decisions without a thorough understanding of the context and without human intervention (the "human in the loop"), errors can occur unintentionally. This inherent unpredictability can lead to customer dissatisfaction and ethical issues, and ultimately undermine trust in the insurance industry. Add to that the complexity of implementing AI and concerns about privacy and security, and you can understand why some decision makers at insurance companies are reluctant to invest fully in AI.
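The "parrot" behavior can be made concrete with a toy sketch. This is not how ChatGPT works internally; it is a deliberately minimal bigram model, with an invented mini-corpus, that continues a text purely from word frequencies and therefore has no notion of meaning:

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram model that picks the next word
# purely from frequencies in its training text. The corpus below is
# invented for illustration.
corpus = (
    "the policy covers fire damage . "
    "the policy covers water damage . "
    "the claim covers fire damage . "
).split()

# Record which words followed which in the training text.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def next_word(word, rng=random):
    """Pick a next word at random from what followed `word` in training."""
    options = transitions.get(word)
    return rng.choice(options) if options else None

# After "covers" the model says "fire" or "water", chosen stochastically,
# without any idea what a policy, a claim or damage actually is.
print(next_word("covers"))
```

The model will happily produce fluent-looking continuations for any prompt its counts cover, which is exactly the point of the analogy: fluency without understanding.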
An important role for specific AI
So the fierce discussion about AI seems to be mostly about Generative AI, of which ChatGPT is the best-known example. Generative AI refers to a subset of artificial intelligence in which algorithms are used to generate new, original and creative output. The distinction between Generative AI and Specific AI is becoming increasingly relevant in the insurance industry. Specific AI is focused on solving specific problems or performing specific tasks. Its main application in the insurance industry has long been predictive analytics: reliable, mathematically grounded and highly accurate predictions based on historical data, for example for risk estimation, combined-ratio development, claim levels or the most effective customer service. Reliability and accuracy are precisely the qualities required in the insurance industry.
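In its simplest form, predictive analytics on historical data comes down to estimating future probabilities from past frequencies. A minimal, hypothetical sketch (the records, field names and risk groups are invented; real actuarial models are far richer):

```python
from collections import Counter

# Invented historical claims records, one per policy year.
history = [
    {"age_band": "18-25", "claimed": True},
    {"age_band": "18-25", "claimed": True},
    {"age_band": "18-25", "claimed": False},
    {"age_band": "40-60", "claimed": False},
    {"age_band": "40-60", "claimed": True},
    {"age_band": "40-60", "claimed": False},
    {"age_band": "40-60", "claimed": False},
]

def claim_rate(records):
    """Estimate claim probability per risk group from historical frequency."""
    totals, claims = Counter(), Counter()
    for r in records:
        totals[r["age_band"]] += 1
        claims[r["age_band"]] += r["claimed"]  # True counts as 1
    return {band: claims[band] / totals[band] for band in totals}

rates = claim_rate(history)
# rates["18-25"] == 2/3, rates["40-60"] == 0.25
```

Such an estimate is fully transparent: every number can be traced back to the historical records it was computed from, which is exactly the controllability the article attributes to Specific AI.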
So Specific AI is not "hype" either; it has been applied with growing success over the last decade, especially by the Data Masters in the market. Data Masters are companies that know how to make the best use of their data as a resource, among other things through data science and AI.
In addition to reliability and accuracy, a major advantage of Specific AI systems is that they allow developers to closely control and adjust the transparency and fairness of the algorithms. Applications can thus be designed and calibrated to meet the ethical data standards set by the Association of Insurers. For now, even according to OpenAI's own CEO, this is very challenging for Generative AI applications.
Balance crucial for insurance industry
Bottom line, this is about the question: can we trust AI or not (yet)? A well-known definition of trust is "the belief in another's good faith and honesty". The Association of Insurers has established ethical data frameworks for good reason, to ensure that data-driven insurance applications are fair and respectful. Adherence to these frameworks should ensure that the deployment of AI does not lead to discrimination, for example. No one wants a second "benefits affair" that could seriously damage the good name of the insurance business.
This presents an additional challenge, especially since AI is able to detect complex patterns that are invisible to the human eye. The ability to explain these patterns and maintain ethical standards is critical to maintaining trust in the industry.
Jack Vos: 'Specific AI is not hype.'
Here lies an important task and responsibility for our industry's experienced data experts and AI strategists. With relevant knowledge and experience of AI and a thorough understanding of the insurance context, they can safeguard the balance between technological innovation and ethical considerations. We need to look not only at what AI can do for us, but also at what human experts can contribute to a sustainable and balanced future. So AI is not just about innovation and increased efficiency, but also about preserving the human factor and the trust that is so crucial in our industry. Finding this balance is a challenge, but also an obligation that we must take seriously.
The key to success
According to a Capgemini survey recently conducted among 204 insurers worldwide, only 18 percent of insurers can call themselves Data Masters; more than 70 percent still belong to the Data Laggards. The differences are stark: revenue per FTE at a Data Master is 175 percent higher, and Data Masters are 63 percent more profitable than Data Laggards. Their initiatives around data science and AI lead to a higher NPS, an improved combined ratio and increased premium revenue in more than 95 percent of cases.
Getting started yourself with the KOAT checklist
KOAT is the Dutch abbreviation for Quality of Unmanned Advice and Transaction Applications. "Unmanned applications" is simply a fancy term for smart technology capable of taking over tasks, not people. The increasing use of such automated applications in the financial sector, combined with new (European) regulations, makes quality assurance of unmanned applications increasingly important. With the Unmanned Applications Platform, which includes a knowledge base and a checklist, SIVI has developed a tool for all parties developing and using unmanned applications. With broad representation on the Advisory Committee on the Quality of Unmanned Applications, the industry is showing its commitment to this platform. Visit www.sivi.org.
Source: the original article appeared in VVP and can be read online here.