
Automating Actuarial Processes Part 2 – Self-Service Tools

Thomas Saliba • 4 August 2023

Being a co-worker, not a customer!

What’s the blog about?


How to improve agility in obtaining the data required for ad hoc analyses when data teams are stretched.


Who is this blog post for?


If you’re in an actuarial team, and you’re struggling to access the data you need to respond to your underwriters’ requests for more detailed analysis, the problems and solutions outlined below are likely to be of interest.


What’s the problem?


Actuaries can’t easily get hold of their data as they don’t understand the structure of the detailed databases used to produce their neatly summarised triangles. It’s possible to get the required data ready for analysis, but the members of the business with the knowledge to do it are extremely busy, and not always available to produce custom data extracts.


So how do we solve the problem?

Continuing to rely on a small number of data experts may no longer be an option. It could seriously hamper your ability to “slice and dice” your account quickly in response to underwriter feedback during deep dive, planning and reserving analyses. This is the kind of insight underwriters will look to actuaries and portfolio managers to provide.


So what does the solution look like?


The solution I’ve found most effective is a self-service tool that allows the actuarial team to customise the data being extracted, usually in a range of pre-determined formats. The user interface can be either Excel-based or web-based.


Sounds great, how do we get there?


This is where we reach a fork in the road. The three solutions offered below will get you to where you need to be; however, the paths they take are very different and involve different challenges.


You could:


Be the data expert – Task your Actuaries with understanding the structure of the data in the backend databases and how to produce the information needed. From experience, the databases underlying insurance companies can be a:

  • Morass – Defined by Bing (Google wasn’t working) as an area of muddy or boggy ground. Working your way through the various tables and the relationships between them can be an extremely time-consuming task, and can certainly feel like wading through the muddiest of fields.
  • Minefield – If you’re used to having triangles delivered already converted to a single currency, adjusted for line size, with transactions cut off at a particular point in time, you’ll need to tread very carefully when trying to put this information together yourself.


I know from experience that Actuaries are capable of achieving this. In the past, this is the route I’ve had to take, sometimes going directly into mainframe-based systems (and sometimes crashing them!) to get the data I need. This explains a lot of the SQL and coding knowledge I’ve gained throughout my career. While I learned a huge amount about insurance data from this, it was a difficult path to follow, and probably wasn’t the best use of my Actuarial expertise.


An alternative approach would be to...


Be a customer – Alternatively, your Actuaries can specify exactly what they need the tool to do and have your data experts build it. While I’ve seen this approach work in the past, it risks compounding the key data problem many insurers currently face: there simply aren’t enough data experts in the business.


Here are some more advantages and disadvantages of this approach:

Advantages:
  • Well-produced piece of software built purely by data professionals
  • No need for the Actuarial Team to skill up in coding

Disadvantages:
  • Actuaries are unlikely to have time to specify exactly what the tool needs to do
  • Data experts are unlikely to have time to either build or provide support for such a tool
  • If data teams are unavailable to make changes, even a minor adjustment to the data extracted won’t be possible


While I favour this approach over training your Actuarial Team to be data experts and produce their own bespoke data assets, I feel there is an optimal mix of the two approaches. This is…


Be a co-worker – I wrote a short LinkedIn post titled “A little SQL goes a long way” in which I advocated for Actuaries to learn some SQL. Not enough to become a professional SQL developer, but enough to do some basic manipulation of data. Building self-service tools without requiring a lot of your data team’s time is a great example of this. Let’s look at why.


Most of the custom data extracts I’ve needed for my work can be split into two parts, namely:


  • The data call – This is the piece of SQL which joins all of the required tables together in just the right way to produce the data in the format needed. This is what you need your data expert for; there’s just no way around it.


  • The customisation – You know what you want the data to look like, e.g. a triangle of incurred or paid claims converted to USD using the latest exchange rate, but you want to be able to do things like the following (a short sketch of this kind of customisation follows the list):
    • Split the account into certain classes of business
    • Compare losses to a previous period
    • Exclude certain losses, e.g. catastrophe or large losses, accounts now out of underwriting appetite, etc.
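To make the two parts concrete, here is a minimal sketch of the customisation layer, written against a hypothetical base dataset (called claims_base here) with invented column names. It illustrates the idea rather than any real system.

```sql
-- Minimal sketch of the customisation layer. The table and column names
-- are hypothetical, and the base dataset is assumed to hold claim-level
-- amounts already converted to USD.
SELECT
    class_of_business,
    accident_year,
    development_quarter,
    SUM(incurred_usd) AS incurred_usd
FROM claims_base
WHERE class_of_business IN ('Property', 'Marine')  -- split the account into chosen classes
  AND as_at_date <= DATE '2023-03-31'              -- move this cut-off to compare with a previous period
  AND catastrophe_flag = 'N'                       -- exclude catastrophe / large losses
GROUP BY
    class_of_business,
    accident_year,
    development_quarter;
```

Each of the bullet points above amounts to nothing more than a line or two in the WHERE clause or GROUP BY.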


You need a way of applying these customisations without having to go to your data expert each time. This is where we can do a lot of the work.


With a working knowledge of SQL, it’s possible to work side by side with your data experts to produce the required solution. They will focus purely on the data call, using their unique knowledge of the data to deliver a base dataset which you can customise as needed. This base dataset has many names, e.g. Data View or Presentation Layer, but it’s commonly known in Actuarial Teams as “The Pricing Data” or “The Claim Level Reserving Data”.
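Purely as an illustration of what that hand-over might look like, the data call could be delivered as a view along the following lines. Every table, column and join here is an assumption, standing in for whatever your claims, policy and exchange rate tables actually contain.

```sql
-- Hypothetical "data call" delivered by a data expert (all names invented).
-- It joins the core claim and policy tables, converts amounts to USD and
-- exposes a claim-level base dataset for the actuarial team to customise.
CREATE VIEW claims_base AS
SELECT
    p.class_of_business,
    p.underwriting_year,
    c.claim_ref,
    c.cause_code,
    c.catastrophe_flag,
    c.accident_year,
    c.development_quarter,
    c.as_at_date,
    c.incurred_amount * fx.rate_to_usd AS incurred_usd,
    c.paid_amount     * fx.rate_to_usd AS paid_usd
FROM claim_transactions AS c
JOIN policies       AS p  ON p.policy_id = c.policy_id
JOIN exchange_rates AS fx ON fx.currency  = c.original_currency
                         AND fx.rate_date = c.as_at_date;
```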


Following this approach gives your data team confidence that, having worked with you to build the tool, they won’t be called on to deal with minor changes to it in the future.


Let’s do the Pros and Cons.

Advantages:
  • Requires much less time from data teams than building the whole tool
  • Actuaries don’t need to specify everything the tool needs to do, as they can modify and fine-tune as they go
  • Working closely with data experts is a great development opportunity for Actuaries

Disadvantages:
  • Your team will need some SQL knowledge
  • The Actuarial Team will need to share some of the maintenance


Having built these for clients in the past, I would favour this approach, even if it means making some time for a few members of your team to learn some SQL.


Can you give an example of where you’ve used this?


I’ll go one better, and give you two.


The customer approach


In one example, the reserving team had the benefit of a data team embedded within the department. This team provided a range of self-service tools to extract most of the data needed for ad hoc or detailed analyses. This meant that there wasn’t an imperative for the Actuaries to be able to manipulate the data themselves, allowing them to focus purely on their Actuarial analyses.

This worked well until a slightly different cut of the data was needed, which the tool couldn’t provide. The data was needed to answer a request from the regulator, and the data team were unable to assist due to staff absence. The SQL involved wasn’t complex, so I was able to produce the data and the regulatory deadline was met. I didn’t have the knowledge of the insurer’s data to build the solution from scratch, but I did have the SQL skills to take what was available and make minor changes.


The co-worker approach


I worked with a Lloyd’s syndicate that relied on manual data extracts to carry out certain analyses. This meant either asking their data team to produce a bespoke extract or falling back on a manual process. The data team didn’t have the resource available to build a self-service tool, so I built one for them.


I achieved this by designing the base data we discussed earlier. In this case, we needed incurred losses by development period, all converted to a single currency, with the ability to group by a few different fields, e.g. Class of Business, Cause code etc. I then asked the data team if they could produce such a dataset. Within an hour, I had some SQL delivered which produced the base data I needed!


I then adjusted the code to allow me to filter and group the data as needed. I now had a customisable way of producing almost any data extract the team needed. I also had a useful base dataset to carry out further analysis in future, again, all achievable using some basic SQL.
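As a rough sketch of that adjustment step (with invented names again), the SQL delivered by the data team can be wrapped up unchanged and the filtering and grouping layered on top, so their code never needs to be edited:

```sql
-- The data team's query is wrapped as-is in a common table expression,
-- with the filtering and grouping applied on top (names are illustrative).
WITH base AS (
    SELECT class_of_business, cause_code, underwriting_year,
           development_quarter, incurred_usd
    FROM claims_base                          -- the SQL delivered by the data team
)
SELECT
    class_of_business,
    cause_code,
    development_quarter,
    SUM(incurred_usd) AS incurred_usd
FROM base
WHERE cause_code NOT IN ('CAT', 'LARGE')      -- exclude selected causes
GROUP BY
    class_of_business,
    cause_code,
    development_quarter;
```

Changing the grouping fields or the filters is then a one or two line edit, which is exactly the kind of minor change you no longer need the data team for.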


I then built a basic user interface within Excel which dynamically updated the SQL code based on the filters selected by the user, e.g. selecting a particular line of business or cause code. This allowed members of the team with no SQL knowledge to utilise the tool as well.
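As a sketch of that idea, the workbook can hold a query template with placeholder tokens, and the macro simply substitutes the user’s selections before the query is sent to the database. The {tokens} and field names below are invented for illustration and only become valid SQL once the macro has filled them in.

```sql
-- Hypothetical query template held in the workbook. A macro replaces the
-- {tokens} with the user's selections before execution, so team members
-- with no SQL knowledge never see or edit the code itself.
SELECT
    {grouping_fields},                        -- e.g. class_of_business, cause_code
    development_quarter,
    SUM(incurred_usd) AS incurred_usd
FROM claims_base
WHERE class_of_business IN ({selected_classes})
  AND cause_code        IN ({selected_causes})
GROUP BY
    {grouping_fields},
    development_quarter;
```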


Summary


So in summary, I’ve found the best way to achieve greater agility when it comes to extracting data for detailed analysis is to work alongside a data team to produce self-service tools, rather than simply specifying exactly what the tool needs to do and having them build it. This means having your data experts produce the base data you need, and having enough SQL skills within your Actuarial Team to adjust it as needed.
