Risk Modelling in Insurance: Big Compute futures
Big Compute delivers modelling, simulation, data gathering and analytics capabilities that are transforming the way insurers do business.
Faster, more granular modelling, simulation and analytics have become essential to insurers as they respond to today’s competitive and regulatory challenges. Critically, insurers need high-performance compute power to do calculations at scale and in parallel, so they can better understand their business and support informed decision-making.
That’s what Big Compute is about: taking advantage of the power of advanced processors and accelerators such as graphics processing units (GPUs) to run large numbers of models and simulations across many variables. It’s a way of thinking supported by technology: if you have a number of independent tasks, you can speed things up by running them in parallel, and then build tight feedback loops of modelling, experimentation, data gathering and analysis.
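The pattern is easy to see in miniature. The sketch below (a minimal illustration, not any particular vendor’s tooling) uses Python’s standard library to fan a set of independent Monte Carlo loss simulations out across local CPU cores; the lognormal severity model is a placeholder for a real risk model, and in a Big Compute setting each task would be a full model run dispatched to a cluster or the cloud.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_losses(seed: int, n_paths: int = 100_000) -> float:
    """One independent task: estimate expected loss over n_paths scenarios.
    The lognormal severity draw is a stand-in for a real risk model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        total += rng.lognormvariate(mu=10.0, sigma=1.5)
    return total / n_paths

if __name__ == "__main__":
    seeds = range(16)  # 16 independent simulation batches
    # The batches share no state, so they parallelize trivially.
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(simulate_losses, seeds))
    print(f"mean estimated loss: {sum(estimates) / len(estimates):,.0f}")
```

Because each batch is independent, doubling the number of workers roughly halves the wall-clock time, which is exactly the property that lets this style of workload scale from a workstation to thousands of cloud cores.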
As compute demands continue to grow at a rapid pace, many insurers are finding it difficult to keep up using their on-premises technology. Some simply don’t have any more space, power or cooling for yet more servers. At the same time, spikes in compute requirements when actuaries are developing new models, or when companies are closing their books or reporting to regulators, are causing organizations to question whether they should add more servers that will sit idle much of the time. Many are asking whether there is another way to get the capacity to handle that peak load, and this is driving them to evaluate the cloud.
Many insurers currently run a compute cluster for in-house or commercial applications. A straightforward starting point is to run jobs in the cloud using the Microsoft HPC Pack cluster management tool, which provides a set of tools for bursting into Microsoft Azure. This lets insurers add compute capacity in the cloud, move the data, run the applications, and shut that capacity down when they’re done.
The next step in that evolution is for customers to extend their clusters into the cloud, setting up virtual machines (VMs) that they manage and fully control. This provides an alternative way to expand the cluster, with VMs that can appear as part of the corporate network. Some customers take that model one step further, moving from a hybrid environment to deploying complete clusters in Microsoft Azure. They may still have some on-premises data centres, but they’re replicating what works for them entirely in the cloud. This gives them full control over the lifetime of those VMs, along with the storage, networking and so on.
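As a rough sketch of what that looks like programmatically, the snippet below uses the Azure SDK for Python (azure-identity and azure-mgmt-compute) to stand up a single cluster node. The subscription, resource group, network interface and VM size are all illustrative placeholders; a real deployment would assume a VNet connected back to the corporate network (for example via VPN gateway or ExpressRoute) and would template this across the whole cluster.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders: supply your own subscription, resource group, and a
# pre-created NIC on a VNet that is connected to the corporate network.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "risk-cluster-rg"
NIC_ID = "/subscriptions/<subscription-id>/.../networkInterfaces/node01-nic"

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Provision one HPC-class node; a cluster is this, repeated.
poller = compute.virtual_machines.begin_create_or_update(
    RESOURCE_GROUP,
    "risk-node-01",
    {
        "location": "westeurope",
        "hardware_profile": {"vm_size": "Standard_HB120rs_v3"},  # illustrative HPC size
        "storage_profile": {
            "image_reference": {
                "publisher": "MicrosoftWindowsServer",
                "offer": "WindowsServer",
                "sku": "2022-datacenter",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "risk-node-01",
            "admin_username": "clusteradmin",
            "admin_password": "<from-key-vault>",  # never hard-code in practice
        },
        "network_profile": {"network_interfaces": [{"id": NIC_ID}]},
    },
)
vm = poller.result()  # block until provisioning completes
print(vm.provisioning_state)
```

The point of this model is that the insurer owns every layer: it decides when nodes are created, how long they live, and when they’re deallocated to stop the meter running.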
Ultimately, some insurers are now moving to a model where they’re submitting jobs to the cloud, not to clusters. They’re managing the applications, but the code that makes things work in the cloud is managed by a service provider or partner. All the actuary needs to do is provide their application, input data and parameters, and specify the type and quantity of VMs they need, and the cloud service will take care of the rest. One way to develop these services is with Azure Batch, which delivers job scheduling as a service to automate cluster management and task execution. It’s a different paradigm that enables a much lower operational overhead, because the organization doesn’t have to worry about the infrastructure. That’s the true promise of the cloud – we take care of it for you.
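A minimal Azure Batch submission in Python (using the azure-batch package) might look like the sketch below. The account details, pool sizing, VM image and the model’s command line are placeholders; what matters is the shape of it – declare a pool, declare a job, hand over a list of independent tasks, and the service schedules them without the insurer running any cluster software.

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

# Placeholders for a real Batch account.
creds = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    creds, batch_url="https://mybatchaccount.westeurope.batch.azure.com")

# Pool: the actuary only declares the VM size and count they need.
client.pool.add(batchmodels.PoolAddParameter(
    id="valuation-pool",
    vm_size="Standard_D4s_v3",
    target_dedicated_nodes=10,
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical", offer="0001-com-ubuntu-server-jammy",
            sku="22_04-lts", version="latest"),
        node_agent_sku_id="batch.node.ubuntu 22.04"),
))

# Job: a container for the independent valuation tasks.
client.job.add(batchmodels.JobAddParameter(
    id="quarterly-valuation",
    pool_info=batchmodels.PoolInformation(pool_id="valuation-pool"),
))

# Tasks: one command line per scenario batch ('run_model' is a
# hypothetical model executable). Batch handles scheduling and retries.
tasks = [
    batchmodels.TaskAddParameter(
        id=f"scenario-{i}",
        command_line=f"/bin/bash -c 'run_model --batch {i}'")
    for i in range(100)
]
client.task.add_collection("quarterly-valuation", tasks)
```

Once the tasks finish, the pool can be resized to zero, so the organization pays only for the hours the valuation actually ran.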
A cloud-native approach also makes many new scenarios possible, because insurers can move towards a self-service model that lets users run the applications they want when they need them, within policy or cost restrictions. The organization keeps control of its data sets and can manage which applications are available to users, but users can spin up capacity on demand and run their compute-intensive jobs. Insurers can fine-tune the types of VMs they choose for each job, using GPUs and high-end processors for the most demanding tasks. They can do more testing of their application models, such as A/B testing, running the same job twice with different settings, because they’re no longer constrained by the kit they have on-premises. Insurers can transform how their actuaries and developers work and solve new problems for the business, while minimizing costs.
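One way to picture that policy layer is a thin wrapper that self-service requests pass through before submission. Everything in the sketch below is hypothetical – the `submit_to_batch` hand-off, the VM sizes and the limits are illustrative – but it shows how an organization can keep cost control while still letting users pick the hardware for each job, and how an A/B experiment becomes nothing more than two submissions with different settings.

```python
from dataclasses import dataclass

# Illustrative policy: which VM sizes users may request, and at what scale.
ALLOWED_VM_SIZES = {
    "Standard_D4s_v3": 200,          # general purpose: up to 200 nodes
    "Standard_HB120rs_v3": 50,       # CPU-heavy HPC: up to 50 nodes
    "Standard_NC24ads_A100_v4": 8,   # GPU: up to 8 nodes
}

@dataclass
class JobRequest:
    user: str
    application: str   # command line for the user's model
    vm_size: str
    node_count: int

def submit_to_batch(req: JobRequest) -> None:
    # Stand-in for the real hand-off to a job service such as Azure Batch.
    print(f"submitting {req.node_count}x {req.vm_size} "
          f"for {req.user}: {req.application}")

def submit(req: JobRequest) -> None:
    """Validate a self-service request against policy before submission."""
    limit = ALLOWED_VM_SIZES.get(req.vm_size)
    if limit is None:
        raise ValueError(f"{req.vm_size} is not on the approved list")
    if req.node_count > limit:
        raise ValueError(f"{req.vm_size} is capped at {limit} nodes")
    submit_to_batch(req)

# An A/B run is just the same model submitted twice with different settings:
submit(JobRequest("avargas", "alm-model --mesh fine", "Standard_HB120rs_v3", 50))
submit(JobRequest("avargas", "alm-model --mesh coarse", "Standard_D4s_v3", 100))
```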
The compute power and analytics that were once available only to the largest players in the industry are now available to virtually anyone, and this really changes the game in terms of competitiveness. Smaller firms can more readily compete with big ones, and big firms can take advantage of their scale in new ways. If an insurer has better visibility into its portfolio, it can ask many more ‘what-if’ questions, get a near-real-time feel for its risk exposure and transform how it operates the business. As insurers build a holistic view of the business and evolve towards a self-service model for actuaries and risk officers, rapid simulation workflows and feedback loops with machine learning are making this transformation possible.
Find out more by downloading Microsoft’s Perspectives on Insurance Risk Modelling.