AI Is More Than a Stochastic Parrot
The rise of artificial intelligence (AI) seems to have reached the insurance sector through the ChatGPT hype, and the discussion about its effects is anything but black-and-white. The question is how to strike a balance between the positive and negative aspects of AI's rise, and in no sector is that balance more crucial than in insurance.
AUTHOR – JACK VOS – ONESURANCE.NL
To begin with, the concept of artificial intelligence is certainly not new. The term itself was first used during the “Dartmouth Workshop” in 1956, where scientists gathered to discuss how machines could exhibit intelligent behavior. Since then, we can distinguish three main phases in the development of AI within the insurance landscape:
Rule-based systems (1960-1990) In the early years of AI, these systems made simple decisions from specific inputs using manually entered rules, for example, acceptance decisions based on simple predefined criteria. They were still too limited for complex decisions.
Statistical modeling and data analysis (1990-2010) As computers became faster and analysis software became more intelligent, Machine Learning models were deployed to discover patterns and trends in large amounts of insurance data. This was particularly helpful in risk assessment and fraud detection.
Machine Learning and Predictive Analytics (2010-present) With the advent of more advanced Machine Learning techniques, such as neural networks and deep learning, it has become possible to perform even more complex analyses. This includes predicting customer behavior, determining rates based on individual characteristics, and detecting fraudulent activities with higher precision. AI is also used to improve customer service with chatbots, virtual assistants, and automated interactions.
And now, since November 2022, there is ChatGPT, into which Microsoft is reportedly investing some $10 billion. This is a Large Language Model (LLM) that predicts the next word in a text and thereby mimics human language. Among seasoned data experts who have worked for years on refining algorithms and understanding big data, the recent AI hype caused by ChatGPT has elicited a mixture of excitement and concern. Two tweets from prominent AI figures encapsulate the contrast between the positive and negative aspects.
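The "predicts the next word" idea can be made concrete with a deliberately tiny sketch. This is not how an LLM actually works internally (real models use neural networks over billions of parameters, not raw counts); the toy corpus and bigram counting below are purely illustrative of the principle that the most statistically likely next word is chosen from prior text.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = "the claim was approved . the claim was rejected . the claim was approved".split()

# Count which word follows which word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("claim"))  # "was" — the only word ever following "claim"
print(predict_next("was"))    # "approved" — seen twice, vs "rejected" once
```

The key point the article makes applies even to this toy: the model has no notion of what a claim *is*; it only reproduces the statistics of the text it has seen.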
The positive side: “It’s going to be glorious” Tweet 1 is from well-known venture capitalist Marc Andreessen, in January 2023: “We are just entering an AI-powered golden age of writing, art, music, software, and science. It’s going to be glorious. World historical.” The excitement around AI is certainly justified in the insurance sector, as it has introduced possibilities that once seemed unthinkable. AI promises efficiency and accuracy at a new level: it can analyze very large amounts of data and generate insights into risk assessment, claims handling, and customer service that human assessors could never achieve. The benefits are simply too attractive to ignore, which is why more and more decision-makers are putting AI on the agenda. It enables insurance companies to gain competitive advantages, improve customer experience, and reduce costs simultaneously. In short, a unique opportunity to lead as a ‘Data Master’ in a traditional industry subject to change.
The negative side: “a misleading impression of greatness” The contrast with tweet 2 is stark: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. (…)” The author of this tweet is Sam Altman, co-founder and CEO of OpenAI, the company behind ChatGPT. Have you ever heard a CEO say that their own product is merely good at creating a misleading impression of greatness? The concern is justified, because persuasiveness is precisely what allows false information to do its devastating work, and in this respect the persuasiveness of ChatGPT should not be underestimated. Anyone professionally involved with the truth must therefore stay alert, as the right questions can produce highly credible and persuasive nonsense. According to Professor Terrence Sejnowski, author of The Deep Learning Revolution, language models also reflect the intelligence and diversity of their interviewer. Sejnowski, for example, asked GPT-3: “What is the world record for walking across the English Channel?” to which GPT-3 replied: “The world record for walking across the English Channel is 18 hours and 33 minutes.” The truth, that one cannot walk across the English Channel, was easily bent by GPT-3 to reflect Sejnowski’s question. The coherence of GPT-3’s answer is entirely dependent on the coherence of the question it receives. Suddenly, for GPT-3, it is possible to walk on water, all because the interviewer used the verb ‘walk’ instead of ‘swim.’
There is a striking analogy that illustrates the drawbacks of AI: the stochastic parrot. Stochastic means ‘random or based on chance,’ referring to processes whose outcomes are not entirely predictable. AI, especially Generative AI like ChatGPT, essentially functions as a repeating mechanism without real understanding. Just as a parrot can repeat words without knowing their meaning, AI can reproduce (text) patterns without grasping the underlying logic. This is concerning, especially for decisions with significant consequences, such as risk acceptance or assessing insurance claims. If AI makes these decisions without a thorough understanding of the context and without human intervention, the so-called human in the loop, unintended errors can occur. This inherent unpredictability can lead to customer dissatisfaction, ethical issues, and ultimately undermine trust in the insurance industry. Add to this the complexity of AI implementation and concerns about privacy and security, and it is understandable that some decision-makers in insurance companies are hesitant to fully invest in AI.
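The 'stochastic' part of the analogy can be shown in a few lines. In the sketch below, the next-word probabilities are invented for illustration; the point is only that a generative model *samples* from such a distribution, so the same prompt can yield different answers on different runs, and the single most likely word is never guaranteed.

```python
import random

# Hypothetical next-word probabilities for a claims-related prompt.
next_word_probs = {"approved": 0.55, "rejected": 0.30, "pending": 0.15}

def sample_next_word(probs, rng):
    """Draw one next word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Fixed seed only so this sketch is reproducible; in practice each run differs.
rng = random.Random(42)
outputs = [sample_next_word(next_word_probs, rng) for _ in range(10)]
print(outputs)  # a mix of all three words, not always the most likely one
```

This is exactly why a human in the loop matters for consequential decisions: even a well-calibrated model will sometimes emit the less likely continuation.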
The Important Role of Specific AI The intense debate about AI seems to focus mainly on Generative AI, of which ChatGPT is the best-known example. Generative AI refers to a subset of artificial intelligence that uses algorithms to generate new, original, and creative output. The distinction between Generative AI and Specific AI is becoming increasingly relevant in the insurance sector. Specific AI is focused on solving specific problems or performing specific tasks. Its main application in the insurance industry has for years been predictive analytics: with historical data, accurate predictions can be made reliably and mathematically, for example for risk assessment, combined ratio development, claim amounts, or the most effective customer service. Reliability and accuracy are precisely the qualities the insurance industry demands. Specific AI is also not a ‘hype’; it has been applied with increasing success over the last 10 years, particularly by the ‘Data Masters’ in the market. Data Masters are companies that make optimal use of data as a resource, among other things through data science and AI.
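As a flavor of what such predictive analytics involves, here is a minimal sketch: a logistic model fitted with plain gradient descent to a tiny, invented claims history. The data, features (driver age, prior claims), and coefficients are all hypothetical; a production risk model would use far richer data, established libraries, and actuarial validation.

```python
import math

# Invented claims history: (age, prior_claims, had_claim_this_year).
history = [
    (22, 2, 1), (25, 1, 1), (30, 0, 0), (45, 0, 0),
    (52, 1, 0), (19, 3, 1), (60, 0, 0), (35, 2, 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights by gradient descent on the log-loss.
w_age, w_prior, bias = 0.0, 0.0, 0.0
for _ in range(5000):
    for age, prior, label in history:
        pred = sigmoid(w_age * age + w_prior * prior + bias)
        error = pred - label
        w_age -= 0.001 * error * age
        w_prior -= 0.001 * error * prior
        bias -= 0.001 * error

def claim_risk(age, prior_claims):
    """Predicted probability that this profile files a claim."""
    return sigmoid(w_age * age + w_prior * prior_claims + bias)

# Younger drivers with prior claims score higher risk in this toy data.
print(f"{claim_risk(23, 2):.2f} vs {claim_risk(55, 0):.2f}")
```

Note the contrast with the generative sketch above in spirit: the fitted coefficients here are inspectable numbers, which is what makes the transparency and fairness of Specific AI comparatively easy to audit.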
For insurers, data is the key to success [Capgemini] According to a recent Capgemini study conducted among 204 insurers worldwide, only 18% of insurers can call themselves ‘Data Masters,’ while more than 70% are still ‘Data Laggards.’ The differences are striking: revenue per FTE is 175% higher for a Data Master, and Data Masters are 63% more profitable than Data Laggards. Their initiatives around data science and AI lead to a higher NPS, an improved combined ratio, and increased premium income in more than 95% of cases. In addition to reliability and accuracy, a significant advantage of Specific AI systems is that they allow developers to precisely control and adjust the transparency and fairness of the algorithms. This way, applications can be designed and calibrated to meet the ethical data standards set by the Dutch Association of Insurers. For now, even according to the CEO of OpenAI himself, this remains very challenging for Generative AI applications.
Finding the Balance: Crucial for the Insurance Sector The bottom-line question is: can we trust AI, or not (yet)? A well-known definition of trust is: “the belief in a good name and honesty.” The Dutch Association of Insurers has established ethical data frameworks to ensure that data-driven insurance applications are fair and respectful. Adhering to these frameworks should ensure that the use of AI does not lead to discrimination, for example. No one wants a second ‘benefits scandal’ that could severely damage the good name of insurers. This brings an additional challenge, especially because AI can discover complex patterns that are invisible to the human eye. The ability to explain these patterns and uphold ethical standards is essential for maintaining trust in the sector. Here lies an important task and responsibility for experienced data experts and AI strategists in our industry. With relevant knowledge and experience in AI and a thorough understanding of the insurance context, they can maintain the balance between technological innovation and ethical considerations. We must not only look at what AI can do for us but also at what human experts can contribute to a sustainable and balanced future. AI is not just about innovation and efficiency but also about preserving the human factor and the trust that is so crucial in our industry. Finding this balance is a challenge, but also an obligation that we must take seriously.
Get started with the KOAT Checklist from SIVI KOAT stands for Quality of Unmanned Advice and Transaction Applications. ‘Unmanned applications’ is a friendly term for smart technology that can take over tasks, not replace people. The increasing use of such automated applications in the financial sector, combined with new (European) regulations, makes quality control of unmanned applications increasingly important. Together with the Platform Unmanned Applications, SIVI has developed a tool, including a knowledge base and a checklist, for all parties that develop and use unmanned applications. With broad representation in the Quality Unmanned Applications Advisory Committee, the sector shows its commitment to this platform. Visit www.sivi.org.