Purple Flower

Jan 12, 2022

AI in Consulting Practice: how do you operationalize AI in your business processes?

At the request of VVP, the platform for financial service providers, our CTO and AI strategist Dennie van den Biggelaar explains how to apply concrete AI and machine learning to "advice in practice". Across different editions, the following topics will be highlighted:

  • Getting started with concrete AI and ML

  • Operationalizing in business processes

  • Integrating into the existing IT landscape

  • Measuring = Learning: KPIs for ML

  • Ethics, regulation and society

  • AI and ML: a glimpse into the near future

In this second edition, we answer the question: how do you actually operationalize such a trained algorithm?

Challenges in operationalizing AI in business processes

Imagine this: you and your data science team have designed a promising AI algorithm to predict churn, with the goal of having advisors act on it proactively. This process was discussed in Part 1 of this "AI in Practice" series. The potential is there, but you soon find out that actually putting it into practice involves a number of complex hurdles. What are these hurdles and how can they be overcome?

Measurable results fail to materialize

A clearly articulated goal identifies exactly what the AI algorithm needs to accomplish and is aligned with business objectives. The scope, in turn, guides the project by defining the relevant data sources, budget, timelines and expected results. So what are the steps to get there?

A data science project is generally an investment in which:

  1. It is not yet clear what it will deliver, and

  2. It is uncertain whether your team can actually pull it off

Therefore, make the project as small and manageable as possible, without losing its value and impact if it succeeds. And aim to get results as quickly as possible that prove you are on the right track.

If you don't achieve those results, evaluate with the team and make adjustments. Do you achieve them? Then the first goal is reached! Turn it into a compelling story, present it to your business stakeholders and discuss with them how to scale this up within your organization.

Questioning consistent data quality

A common stumbling block is the quality and consistent delivery of up-to-date data. Inconsistencies and missing values can compromise the accuracy of the AI model. The solution? A thorough exploration of which data is always Accurate, Available and Consistent (the data ABC).

Does data that is essential to your goal fall short of this? Then apply extensive data cleaning, such as handling missing values, extreme outliers and incorrectly entered data. Next, structurally embed these cleaning steps in a data transformation pipeline and the associated process, so that this data can become part of the foundation for a reliable operational model.
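
As an illustration, cleaning steps like these can be captured in one small, reusable function. The sketch below assumes a pandas DataFrame with hypothetical columns (customer_id, policy_value, last_contact_date); the column names and thresholds are illustrative, not prescribed.

    import pandas as pd

    def clean_customer_data(df: pd.DataFrame) -> pd.DataFrame:
        """Illustrative cleaning pipeline; column names are hypothetical."""
        df = df.copy()

        # Drop rows whose identifier is missing and cannot be recovered
        df = df.dropna(subset=["customer_id"])

        # Fill missing numeric values with the column median
        df["policy_value"] = df["policy_value"].fillna(df["policy_value"].median())

        # Cap extreme outliers at the 1st and 99th percentiles
        lower, upper = df["policy_value"].quantile([0.01, 0.99])
        df["policy_value"] = df["policy_value"].clip(lower, upper)

        # Parse dates; incorrectly entered values become NaT instead of breaking the run
        df["last_contact_date"] = pd.to_datetime(df["last_contact_date"], errors="coerce")

        return df

Keeping these steps in one place means the same cleaning logic runs on every new data delivery, which is what structurally securing them in a pipeline amounts to in practice.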

There is insufficient confidence in the AI model

Insufficient understanding of and confidence in ML models is a barrier to acceptance among non-technical users. Not paying enough attention to this creates mistrust and resistance. A solution is to select transparent models with good explainability, or to use methods that translate the model's complexity into understandable terms. Visualization and clear (process) documentation increase trust. As a result, the "black box" objection fades into the background.
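
One simple way to make such a model explainable is to show which inputs drive its predictions. Below is a minimal sketch using scikit-learn's permutation importance on a synthetic stand-in for a churn dataset; the feature names are purely illustrative.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real churn dataset; feature names are illustrative
    X, y = make_classification(n_samples=1000, n_features=5, random_state=42)
    feature_names = ["policy_age", "premium", "claims", "contact_frequency", "customer_age"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

    # Permutation importance: how much does performance drop when one feature is shuffled?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
    ranking = sorted(zip(feature_names, result.importances_mean),
                     key=lambda pair: pair[1], reverse=True)
    for name, score in ranking:
        print(f"{name}: {score:.3f}")

A ranked list like this is something an advisor can actually discuss, which helps turn the model from a black box into a conversation piece.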

And as with any change, it is important to carefully bring your colleagues along in this process. Give them plenty of time to ask their questions and get used to this new technology and its capabilities. And realize that their questions and feedback are essential input for making your intended application successful in practice.

Concerns around security, privacy and ethics

It goes without saying that security and privacy of (customer) data are a prerequisite to start at all. Fortunately, a lot of new legislation has been implemented in the past 5 years and organizations are applying it practically and structurally.

Trust is not just an issue of legislation and technology. You can also expect objections from various quarters on the ethical front:

  • Are we sure the algorithm is fair?

  • And what does that actually mean?

  • Are certain groups worse off in a situation with an algorithm?

  • Do we consider that ethical?

  • How do I prevent my algorithm from discriminating?

Fortunately, the Dutch Association of Insurers has drawn up a number of guidelines that you can incorporate into your algorithm and approach. Do you want to be sure that nothing is overlooked? Then make one person responsible for this.
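
To give one of these fairness questions a concrete shape: a frequently used first check is whether the model flags different groups at noticeably different rates. The sketch below assumes hypothetical arrays of predictions and a sensitive attribute; it illustrates such a check and is not taken from the guidelines of the Dutch Association of Insurers.

    import numpy as np

    # Hypothetical model outputs (1 = flagged for proactive outreach) and a sensitive attribute
    predictions = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
    age_group = np.array(["<40", "40+", "<40", "40+", "<40",
                          "40+", "<40", "40+", "<40", "40+"])

    # Demographic parity check: is each group flagged at a comparable rate?
    for group in np.unique(age_group):
        rate = predictions[age_group == group].mean()
        print(f"group {group}: flagged rate = {rate:.2f}")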

The feedback cycle is missing

Listening to user experiences and leveraging this feedback creates a dynamic iterative cycle, evolving the model in line with business requirements. A structured feedback mechanism is crucial to the self-learning ability of the AI model. How to set this up correctly is different for each AI application.

For example, in the specific case of "preventing cancellations," have the advisors record what they did with each prediction: called the customer, visited them, or did nothing with it. That way, the effect on churn becomes measurable over time.
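
A minimal sketch of what such a feedback record could look like is shown below; the field names and the CSV log are illustrative assumptions, not a prescribed format.

    from dataclasses import dataclass, asdict
    from datetime import date
    import csv

    @dataclass
    class ChurnFeedback:
        """One advisor follow-up on a churn prediction (illustrative fields)."""
        customer_id: str
        predicted_churn_risk: float      # model score at the time of the prediction
        action_taken: str                # "called", "visited" or "none"
        follow_up_date: date
        customer_retained: bool | None = None  # filled in later, once the outcome is known

    # Append each record to a simple log; in practice this would likely be a database table
    record = ChurnFeedback("C-1042", 0.87, "called", date.today())
    with open("churn_feedback.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=asdict(record).keys())
        if f.tell() == 0:          # write a header only when the file is new
            writer.writeheader()
        writer.writerow(asdict(record))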

Insufficient monitoring takes place

The motto should be: "keep the algorithm on a leash". What you don't want are "hallucinations" or unexpected performance degradation, for example when a trend breaks. This means there must be a careful monitoring and alerting system that tracks model performance. A sustainable application further requires detailed documentation of the parameters and data used, so that the model is and remains transparent and reproducible.
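
As an illustration, a monitoring check can be as simple as comparing live performance against the level measured at deployment and raising an alert when the gap grows too large. The metric and threshold below are assumptions for the sake of the sketch.

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("model_monitor")

    def check_model_performance(live_accuracy: float,
                                baseline_accuracy: float,
                                max_drop: float = 0.05) -> bool:
        """Alert when live performance falls more than `max_drop` below the baseline."""
        degraded = (baseline_accuracy - live_accuracy) > max_drop
        if degraded:
            logger.warning("Model degraded: live=%.3f, baseline=%.3f",
                           live_accuracy, baseline_accuracy)
        else:
            logger.info("Model performance OK: live=%.3f", live_accuracy)
        return degraded

    # Example: accuracy at deployment was 0.82; this month's labelled outcomes give 0.74
    check_model_performance(live_accuracy=0.74, baseline_accuracy=0.82)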

The model does not appear to be scalable

An algorithm should, by design, be part of a system built with scalability in mind. This typically requires secure cloud solutions and scalable infrastructure such as MLOps tooling (the ML variant of DevOps). Consider growth projections and ensure a sufficiently flexible system that adapts to evolving business requirements. Making the right choices for integration with the IT landscape is essential (e.g. real-time or batch processing). But more on this in the next edition.

Last-but-not-least: insufficient engagement

According to Professor of Innovation Henk Volberda, the success of innovation is only 25% technology and 75% human adoption. Successful adoption starts with "CEO sponsorship"; after all, water flows from the top down. Leadership must ensure that there is adequate training, communication and support when an AI model is deployed. Invest sufficient time and energy to make this new technology part of your organization, from strategy to operations. Because that is where the real return on investment lies: in the successful collaboration between human expert and AI technology.

"It's easy to create a self-learning algorithm. What's challenging is to create a self-learning organization." - Satya Nadella, CEO of Microsoft

In short, devising, building and validating a robust algorithm is only phase one of successfully putting AI into practice. In the next edition, we'll take a closer look at how to integrate it with existing IT systems and workflows.


The original article was published in the VVP magazine.