
"AI will be in every insurance process."


Scaled AI platforms: "Collaboration and integration boost business potential"

Even with a small budget, AI solutions can be scaled to create business benefits. The options are varied: insurers can use low-code or graphical workflow tools, or even program solutions themselves.

– Abstract

Just about all insurers have taken it upon themselves to build and/or expand AI platforms. Their goal is to scale a data science ecosystem for many AI applications. This helps insurers derive more value - for example, from enhanced risk management via forecasting, from improved marketing reach, or from better strategic decisions. Sarah Detzler (SAP), Dr. Michael Zimmer (Zurich Insurance), Philipp Schützbach (Dataiku) and Thomas Löchte (IKOR Informationsfabrik) share their experiences in this roundtable interview and outline how insurers are utilizing scaling options.

– Interview

About the experts

Dr. Sarah Detzler is Competence Lead Data Science and Machine Learning at SAP

Dr. Michael Zimmer is the Chief Data Officer at Zurich Group Germany

Philipp Schützbach is a Sales Engineer at Dataiku

Thomas Löchte is Managing Director of IKOR sister company Informationsfabrik

– Focus

Data Analytics

"To scale faster, companies need agile, flexible yet standardized processes as well as reusable building blocks and cross-functional collaboration"

Building and scaling AI ecosystem platforms is an absolute trend for insurance companies. What are your most important guidelines and learning areas?

Detzler: In consulting, I always start an architecture discussion with questions like: Where is the data? Where do we put the R or Python script, for example, to achieve the optimal runtime? These are the kinds of questions you have to consciously ask data science teams, because not all team members have this on their radar. If you take the architecture into account right from the start, projects become much easier.

Zimmer: We have built a state-of-the-art, cloud-based AI landscape at Zurich - based on a hyperscaler. For scaling, this means: We stand up the platform and build suitable Git repositories for data management and MLOps for our models. Additionally, each AI application is encapsulated in a function or container - allowing us to maintain up to three different versions in parallel. We also established clear naming conventions and defined service levels within our Delivery Center in Barcelona. These analyze, for example, whether our containers and applications are active and functioning correctly. Our authorization concepts are data protection compliant. All accepting systems are supplied via interfaces; all AI results are therefore API-enabled. We store the information in a data lake.
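The parallel-versioning idea Zimmer describes can be sketched as a small model registry: each AI application is exposed under one name, with up to three versions registered side by side, so a new version can be activated or rolled back without redeploying. This is an illustrative sketch only - all names and the in-memory structure are assumptions, not Zurich's actual implementation.

```python
# Hypothetical sketch of running up to three model versions in parallel,
# with an activation switch that doubles as a rollback mechanism.

class ModelRegistry:
    MAX_PARALLEL_VERSIONS = 3  # per the interview: up to three versions live at once

    def __init__(self):
        self._versions = {}  # app name -> {version: callable}
        self._active = {}    # app name -> currently active version

    def register(self, app, version, model_fn):
        versions = self._versions.setdefault(app, {})
        if version not in versions and len(versions) >= self.MAX_PARALLEL_VERSIONS:
            raise RuntimeError(f"{app}: version limit reached, retire one first")
        versions[version] = model_fn
        self._active.setdefault(app, version)  # first version becomes active

    def activate(self, app, version):
        if version not in self._versions.get(app, {}):
            raise KeyError(f"{app} has no version {version}")
        self._active[app] = version  # switching back = rollback

    def predict(self, app, payload, version=None):
        v = version or self._active[app]
        return self._versions[app][v](payload)


registry = ModelRegistry()
registry.register("fraud-score", "v1", lambda claim: 0.2)   # dummy models
registry.register("fraud-score", "v2", lambda claim: 0.35)
registry.activate("fraud-score", "v2")
print(registry.predict("fraud-score", {"amount": 1200}))        # active version (v2)
print(registry.predict("fraud-score", {"amount": 1200}, "v1"))  # pinned older version
```

In practice each registered version would be a container behind an API rather than an in-process callable, but the routing and rollback logic is the same.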

In terms of resources, can data scientists handle the scaling of AI platforms? Or do we need domain experts on top of that?

Zimmer: My division works on an interdisciplinary basis. On the one hand, insurance lines such as Corporate Insurance, Life, or Motor are completely different. On the other hand, technology-savvy underwriters and actuaries are also proficient in data science methods, so they come on board. Since the specialist departments have to adopt standard structures, we should make life as easy as possible for their teams.

What matters here?

Zimmer: The department owners should deploy the analytical models; they decide and put the models into production themselves. In this way, new versions can be made productive by the respective department owner in just a few minutes. We are currently building up more staff centrally: they report to me but work exclusively on AI topics. In this way, we want to spread artificial intelligence further in the specialist areas - especially because the data situation differs massively from area to area.

In specialist departments, for example among actuaries, there is obviously a lot of enthusiasm for technology. People working there are talented with numbers and methods ...

Schützbach: Because insurance companies make predictions, they are fundamentally more adept with numbers and advanced in predictive analytics compared to other industries. However, the industry focus is often only on the added value of a specific use case and not on the added value that a platform can generate. The platform idea is still maturing - that is, the awareness that collaboration, reusability, and integration can benefit business areas and help companies scale from a few to hundreds of productive use cases.

At the same time, insurance companies have a real need to glean knowledge from data ...

Detzler: You have to be accountable to auditors and avoid creating a black box that spits out barely comprehensible results. Responsible parties want to know: How does the machine learning model work? Which factors influence the results? What happens in the background? Insurers want to understand these dynamics much better.

Zimmer: Because of the personal reference in the data, we must be able to explain why we reject a customer, for example, for data protection or regulatory reasons.

Many insurance companies have put numerous AI use cases into production in recent years. How strong is the desire to scale relevant ecosystems, i.e., platforms on which several AIs run?

Löchte: All insurers are on their way there. In the future, insurance companies will no longer operate a single-digit number of AI solutions, but rather hundreds.

Schützbach: That's exactly the point: In order to remain competitive, companies must be able to use the platform idea to scale AI initiatives and thus leverage the potential of AI in the insurance industry. The key question is: Why are insurance companies unable to scale such applications?


Thomas Löchte is Managing Director of IKOR sister company Informationsfabrik

"Insurers will operate hundreds of AI solutions in the future."

What scalability barriers have you actually encountered?

Schützbach: Often the wrong tools and skills are present in the company - for example, only code-based tools, which many analysts cannot work with, or only insufficiently. Collaboration between relevant stakeholders cannot take place this way. The tools to manage risks and model lifecycles are also often missing. Last but not least, historically grown silos cause problems, and synergies fall by the wayside: organizations often implement the same use cases in different areas, frequently starting from scratch.

Zimmer: You also need a top-level advocate sponsoring the platform idea. Often, there is an impulse among management to spend millions on platforms and hip methods. But even with small amounts of money, it is possible to create a lot of value and scale solutions.

Which data science and AI topics can insurers particularly benefit from?

Löchte: Significant potential lies in higher straight-through processing rates and optimized processes - be it in claims or applications. Around new business models, AI solutions such as telematics and data- and usage-based tariffs are often in use. Propensity models such as "next best action" provide a quick return on investment - and not just in marketing - provided an existing data basis is available.

"The platform should be considered from the outset, otherwise AI can quickly become inefficient and expensive"

Schützbach: Marketing examples include pricing forecasts, product recommendations or risk areas from fraud detection to cancellation probabilities. If the corresponding use cases show potential for added value, organizations should look at the platform idea because as soon as use cases can be reused without starting from scratch, economies of scale take effect.

Zimmer: We also work in increments: we first implement quick-and-dirty solutions with the appropriate technologies and then improve them over the long term. On our platform, we do housekeeping after a year to make things better and identify further solutions. For example, if fraud or recourse detection can be improved by just a few percentage points, there's a lot of money attached to that. At the same time, in process automation, we see systems whose potential is not yet exhausted.

Would you give us an example?

Zimmer: Automated reading and matching of ID cards is important for insurers in order to comply with the Money Laundering Act. Processes, robots, and even workflow engines can be provided to match the data patterns, and a service catalog is created. This is not a volume case, but with a 96 percent hit rate, expectation management turns out great: you can build trust in data science. You only get a few moments of opportunity to instill trust with executive boards. My most important message here is: You should think about the platform from the outset, otherwise AI can quickly become inefficient and expensive.

Because scaling - even for business strategy and expansive themes - is more of a mindset question?

Zimmer: Exactly. We have rolled out applications such as ID card reading and adopted the solution with meta-structures in the hyperscaler across our international group. Because a solution does not work exactly the same way in every market, we define blueprints that we roll out worldwide - around basic functions such as OCR, i.e., text recognition.

Dr. Michael Zimmer: "The further development of models gives rise to different variants; they must be operated in parallel in order to support different life cycles within the value lines."

What volume are you running here?

Zimmer: We currently have three applications live, 15 more in the pipeline in Germany, and 20 more worldwide. One example: subrogation recognition makes it possible to identify subrogation potential early on in court documents and to predict its probability from just a few pieces of information at the first notice of loss (FNOL). We are introducing a similar engine with our colleagues in the corporate insurance area for expert reports. Strategically, this means that a modular construct makes it possible to scale vertically and horizontally.


Dr. Michael Zimmer is the Chief Data Officer at Zurich Group Germany


Do you appreciate low-code approaches and graphical workflow tools in terms of scaling?

Zimmer: I have yet to see a click interface that performs significantly better than self-written code. Within six months at the latest, you always notice that the “devil is in the details.”

Detzler: We, on the other hand, use both low-code and no-code approaches with prefabricated modules and models, and not just for insurers. These approaches are always well documented in our solutions. You can get relatively far with them. We offer various solutions for models that are suitable for low-code/no-code applications as well as graphical self-services.

To what percentage of the cases does this possibility apply?

Detzler: In about 80 percent of standard cases, you can get by with low-code/no-code and graphical self-services. Otherwise, it is possible to additionally integrate your own code - be it via Python, R, etc. - so you don't have to start with self-written code right from the beginning. With self-written code, however, you have to pay attention to productivity and reusability from the outset in order to work sustainably; otherwise, no target-oriented results will come out.

"Low-code platforms and self-services are good for processes."

What is especially critical to success in low-code/no-code solutions and graphical workflow tools?

Detzler: We always bring the departments along and offer them low-code applications or self-service tools - especially in the beginning, always with the support of data scientists and IT. Once the departmental employees have been appropriately trained and oriented, they use the solutions themselves. In many projects, however, the experts still have to get involved - with customized Python or R code.

Zimmer: Low-code interfaces as pure self-services pose the risk that your program is not included in standard monitoring processes - and nobody notices if it doesn't run. This becomes especially challenging if the self-service developer is not available. As models evolve, different model variants arise that need to run in parallel to support the different lifecycles of our value lines. Therefore, based on IT best practices, we generally deploy multiple containers and maintain repositories for rollbacks. Low-code platforms and self-services are well suited for processes. For more complex applications, however, maintenance, operation, and monitoring infrastructure are lacking - and performance, robustness, stability, and efficiency take on additional importance.

Schützbach: Nevertheless, I would like to take up the cudgels for data science platforms, which can also contain low code and graphical workflows. These can keep up with CI/CD as well as versioning and, as a central platform, generate a great advantage - especially with regard to maintainability - when hundreds of projects run in production.

Detzler: Moreover, it's not just one employee who builds automation solutions via low-code and the like. Capacity bottlenecks are dealt with by a team of several people. The departments implement the solutions that initially bring them the greatest added value; meanwhile, IT can plan and implement solutions productively. Our solutions also offer maintenance, operation and monitoring as standard.


Dr. Sarah Detzler, Competence Lead Data Science and Machine Learning at SAP:

"An evolved IT infrastructure with many silos cannot be easily lifted onto a data science platform; moreover, many companies, so far, have given little thought to methodologies for data science projects."

For insurance companies, it also boils down to clean documentation as soon as their employees program themselves. But then the risk of causing damage also increases ...

Schützbach: The classic data scientist is considered the most capable user when implementing use cases. However, they often lack domain knowledge about the data. Therefore, business unit users and data scientists should work closely together in the implementation of use cases. To do this, different user profiles must be enabled to work with the data in both low code and full code - and through a common, collaborative tool.

"A machine learning model is not a 'no-brainer'."

More broadly: What happens when models stop working and bias develops - especially in terms of testing, correction, adjustment and limited staff resources?

Detzler: Circumstances change, new data is added, and a model can suddenly perform a few percentage points worse than at the start. You have to watch how quickly new data is added and when certain processes change. On this basis, you can retrain the model and adapt it to the new data situation.

Schützbach: Exactly - a machine learning model is not a "no-brainer." Anyone who sets one up has to proactively manage the model's life cycle, i.e., decide when and how it has to be retrained. They should also understand why the model behaves the way it does. A decision tree is very easy to follow; but even in a neural network, it is possible to analyze whether biases exist with respect to certain populations.
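One simple way to make the bias question concrete - even for a black-box model such as a neural network - is to compare the model's positive-decision rate across subgroups. The following sketch is illustrative and not from the interview; the demographic parity gap shown here is just one of several fairness metrics an insurer might monitor.

```python
# Illustrative bias check: does the model's positive-decision rate
# differ between subgroups of the population?

def positive_rates(predictions, groups):
    """Share of positive predictions per subgroup."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two subgroups
    (0.0 = perfectly equal treatment under this metric)."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: binary model decisions and the subgroup of each case
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(positive_rates(preds, groups))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(preds, groups))      # 0.5 -> large gap, worth investigating
```

A large gap does not prove unfair treatment on its own - the groups may differ legitimately - but it flags exactly the kind of question auditors and regulators will ask.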

In which cycles do you plan the timing of model testing and retraining with IT and the business units?

Detzler: Tests that catch bias should be performed regularly. We often test new incoming customer data on a quarterly basis. This helps us determine whether models actually need to be retrained. For some use cases, monthly retraining is worthwhile. If you work with sensor data, as in industry, you should retrain the solutions weekly if necessary. For other cases, an annual cycle is sufficient.

Schützbach: When analyzing for biases, we ask: How does the model behave and why does it do that? To ensure consistently high model performance, we recommend checks during productive operation. These trigger model retraining, if possible, automatically, if there are difficulties - for example due to a drift in the data or in the performance of the model.

What might such an automated check look like in practice?

Schützbach: This is tricky in the supervised area, because new data is not continuously labeled. If the data structure changes over time, the model no longer necessarily performs as well. Therefore, we either have to label new data in order to validate the model's performance, or we compare the new data against the original training data, i.e., measure the data drift. Once there is a statistical discrepancy, the workflow prompts an employee to label new training data and retrain the model - or does so automatically.
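The drift comparison Schützbach describes can be sketched with a two-sample statistic over a feature's distribution. The following is a minimal illustration, not a production check: the Kolmogorov-Smirnov statistic and the 0.2 threshold are assumptions chosen for the example, and a real platform would test many features and use a proper significance test.

```python
# Hypothetical drift check: compare a numeric feature in new data against
# the original training data and flag the model for retraining (or for
# manual relabeling) once the distributions diverge.

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two samples (0 = identical)."""
    a, b = sorted(sample_a), sorted(sample_b)
    max_dist = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(x <= v for x in a) / len(a)
        cdf_b = sum(x <= v for x in b) / len(b)
        max_dist = max(max_dist, abs(cdf_a - cdf_b))
    return max_dist

def needs_retraining(train_feature, new_feature, threshold=0.2):
    """Trigger step of the workflow: True means 'notify an employee
    to label new data / kick off retraining'."""
    return ks_statistic(train_feature, new_feature) > threshold

# Toy claim amounts: one stable batch, one clearly shifted batch
train   = [100, 120, 110, 130, 125, 115]
stable  = [105, 118, 128, 112, 122, 119]
shifted = [300, 320, 310, 330, 325, 315]
print(needs_retraining(train, stable))   # False - distributions overlap
print(needs_retraining(train, shifted))  # True  - clear data drift
```

In practice one would reach for a library implementation (e.g. `scipy.stats.ks_2samp`) rather than hand-rolling the statistic, but the trigger logic - compare, threshold, notify or retrain - is the same.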

Zimmer: Both data drift and accuracy checks are core components of properly set up platforms. It is crucial to automatically notify an employee if the models deliver deviating results or no longer function accurately. We always need a semi-automated process, otherwise we get undetected bias - which is also imperative to avoid from a regulatory perspective.

From an IT perspective, integrating a multitude of solutions into one platform is a mammoth task. What is the current situation in the insurance industry regarding the central management and monitoring of all AI projects under one virtual roof?

Löchte: The relevant platforms offer all the functionalities needed to perform monitoring, support deployment and be maintainable. However, there are no standard concepts yet; each company builds its own. This is because data requirements differ from insurer to insurer. If everything is hard-coded, however, this impacts flexibility.

Detzler: An IT infrastructure that has grown over time with many silos cannot simply be lifted onto a data science platform. In addition, most insurance companies are not yet ready on the mindset side: agility is on the agenda, but little thought has been given to methodologies for data science projects. A mindset shift must develop slowly and continuously.

Philipp Schützbach is a Sales Engineer at Dataiku

"To scale faster, companies need agile, flexible yet standardized processes as well as reusable building blocks and cross-functional collaboration."

What advice would you give an insurance company on the way to scale their AI platform?

Löchte: Every company has to find its own way. Making use cases visible, creating benefits, bringing people along, and building up know-how helps develop platforms. That is a long, individual path. In addition, the technological foundations are diverse: from code in Python and containerization, to proximity to core systems such as SAP or Guidewire, to data science platforms such as the one provided by Dataiku.

Imagine the year 2025 and every insurance company is trying to run smart AI platforms. Have we already reached the end of possibilities?

Zimmer: The whole thing thrives on the AI services I use - preferably ones I can maintain and am certain have no bias. I want to make the services generally accessible via repositories: if an organizational unit uses AI services, my team is notified, and we can turn up the dial on performance. Gradually, we will be able to truly use a service-oriented architecture. We still have a long way to go in keeping our services documented, sustainable, and future-proof. But one thing is certain: AI will be pervasive in every insurance process.

Schützbach: Both the increasing availability of big data and advances in computing power and scalability through hyperscalers will lead to increasing use of AI. However, in order to scale faster, companies need agile, flexible and yet standardized processes, as well as reusable building blocks and cross-functional collaboration.

"In the future, you will get more ready-made models for special tasks."

Would you please give an example?

Löchte: Standard models for translation or handwriting recognition will cover a much broader field in the future, leading to more ready-made models for special tasks. Currently, you usually have to develop these for your own platform first.

Is the insurance industry better than its reputation?

Absolutely. There will be more cross-company solutions - as an ecosystem and with cooperation partners. This will open up further optimization opportunities for the industry: further potential can be leveraged via joint data-based solutions. In five years, we will be working on precisely these topics.

Contact Person


Dr. Sarah Detzler

Competence Lead Data Science and Machine Learning
SAP
communications@ikor.one
+49 40 8199442-0

Dr. Michael Zimmer, Zurich Group Germany

Dr. Michael Zimmer

Chief Data Officer
Zurich Group Germany
communications@ikor.one
+49 40 8199442-0

Philipp Schützbach, Dataiku

Philipp Schützbach

Sales Engineer
Dataiku
communications@ikor.one
+49 40 8199442-0


Thomas Löchte

Managing Director
Informationsfabrik GmbH
communications@ikor.one
+49 251 9199790
