by Thomas Saliba
•
28 July 2023
I have a passion for simplifying and automating actuarial processes. I've found these engagements to be some of the most rewarding in my career. Being able to free up an actuarial team to carry out high-value analysis is a great feeling. This series of blog posts will look at some of the key blockers to automating your processes, and how to overcome them.

Who is this blog post for?

If you already have a centralised data team embedded in your actuarial function, and your actuaries can all import granular data into their R and Python based analyses, you've probably overcome a lot of the issues we'll discuss here. If, however, you're looking to make your actuarial team's processes more efficient without leaning on another team to build your solutions, you're in the right place.

What should we automate?

Automation can mean a few different things within an actuarial process. I break these down into two separate categories:

Preparation of data – If you're manually copying and pasting data from various spreadsheets, or creating pivot tables to get data into the right format, this can be done automatically. This is the automation we'll focus on.

Judgements – Applying "judgement algorithms", e.g. choosing a chain ladder ultimate if an accident year is more than x% developed, or leaving ultimate costs unchanged if the AvsE is below a certain percentage, can get you to a first cut of your results very quickly. We'll focus our attention first on getting data processes automated, and look at this exciting area in the future.

Some of the best automated resources for an actuarial team that I've worked on are:

Self-service reserving data – The ability to put your triangles under the microscope and quickly dig down to much more granular splits of an account is vital.
Having to wait for external data teams to produce the required data for you can rob an actuarial team of the agility they need to produce the high-value decision support a business needs in business planning and portfolio deep dives.

Analysis of change and AvsE – Having an automated process to understand how claims have moved from one quarter to the next can help a team decide where to focus their analytical efforts during a reserving exercise.

Rate simulation – Being able to calculate rate change impacts at individual policy level and aggregate up will allow you to drill down to any level of detail your underwriters desire. However, if it's not automated, it's probably going to take too long. This can be done with specialist software, but that may not be an option for you in the short term.

While we all know that automating processes will lead to better outcomes, what's holding us back? Here are my top reasons. Let's start with…

We can't automate, we don't have coding skills

I can definitely empathise with this. We build our processes with the tools that we have, and for most of us, that's Excel. While I'd strongly recommend bringing some SQL and open source coding skills into any actuarial team, even the most automated processes are probably going to interface with Excel at some point, so building out from here is a good place to start. I've had the best results when able to utilise the power of software such as SQL and SAS, plus more recently R and Python (still using a lot of SQL), so not having access to these skills can be seen as a reason not to start automating. If you don't have these skills, I'd recommend starting with a programming language I've used a lot, which is… VBA! It's definitely not the trendiest way to get things done, and I have to admit my R Markdown notebooks and Shiny apps tend to raise more eyebrows.
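(As an aside, the rate simulation idea mentioned above is, at heart, just a policy-level calculation followed by an aggregation. A minimal sketch in Python, with purely illustrative policy records, column names and figures, might look like this:)

```python
from collections import defaultdict

# Hypothetical policy-level data: each record carries a premium and the
# rate change achieved at renewal. All names and numbers are illustrative.
policies = [
    {"segment": "Property", "premium": 120_000, "rate_change": 0.05},
    {"segment": "Property", "premium": 80_000, "rate_change": 0.02},
    {"segment": "Marine", "premium": 50_000, "rate_change": -0.01},
]

def rate_impact_by(policies, key):
    """Premium-weighted rate change, aggregated to whatever level the key gives."""
    premium = defaultdict(float)
    impact = defaultdict(float)
    for p in policies:
        premium[p[key]] += p["premium"]
        impact[p[key]] += p["premium"] * p["rate_change"]
    return {k: impact[k] / premium[k] for k in premium}

print(rate_impact_by(policies, "segment"))
```

Because the calculation sits at policy level, drilling down to a different split is just a change of key, which is exactly what makes the automated version so much more flexible than a pre-aggregated extract.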
That said, when it comes to taking that first step into process automation, using VBA macros to create tools that integrate seamlessly into existing Excel based processes has been a winning solution. Reasons for this are:

No IT issues – While it's getting easier to access open source languages on company networks, a company environment without Excel and VBA available is pretty much unheard of.

Backwards compatible – This is a great advantage of starting out in VBA. When building VBA based processes for clients without many team members with VBA knowledge, I'll build the process in a different way: something that "looks and feels" like a normal Excel workbook, i.e. has formulae that can be traced back to source, and doesn't require the user to understand the code to understand how the spreadsheet works. This gives the client the confidence to embrace automation without having to put their faith in a "black box" process.

Quick learning curve – You can pick up VBA very quickly, even having Excel record some basic macros to get you started. This allows a team to build up its capability and enhance the process further over time. While your end state might well be analyses using R or Python, teams without much coding experience have found it much easier to get up to speed with VBA quickly, while avoiding the key person dependencies that come with adopting new software.

There's one other thing that makes VBA a great place to start. If your data process involves your MI team extracting data from your backend system and delivering it to you in a spreadsheet, there's a good chance this step can be removed completely using some well written SQL code embedded in a VBA macro.
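(The same pattern carries over if you later move to Python or R: a short script with the SQL embedded replaces the manual extract step. Here's a minimal sketch using Python's built-in sqlite3 module as a stand-in for a real backend; in practice you'd connect to your MI database via an ODBC driver, and the table and column names here are hypothetical:)

```python
import sqlite3

# Stand-in backend: in reality this would be a connection to your MI
# database. Table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (class TEXT, accident_year INTEGER, incurred REAL);
    INSERT INTO claims VALUES
        ('Motor', 2021, 1500.0),
        ('Motor', 2022, 900.0),
        ('Property', 2021, 2400.0);
""")

# The SQL an MI team might otherwise run by hand, embedded in code so the
# actuarial team can refresh the data themselves whenever they need it.
query = """
    SELECT class, accident_year, SUM(incurred) AS total_incurred
    FROM claims
    GROUP BY class, accident_year
    ORDER BY class, accident_year
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

From there, the results can be written straight into a worksheet or a triangle, which is exactly the pattern the VBA-embedded SQL follows.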
Your team can then interact with your data in a much more user friendly way. The problem you'll likely face is…

Our data expert is too busy to assist us

If you can solve this problem, you'll be well on your way to achieving self-service granular data. We'll cover this in the next blog post, titled "Data teams… how to be a co-worker and not just a customer".

So in summary: if you have coding skills in your team, that's great, but not having them doesn't mean you can't start automating your processes. Getting something built in VBA and using it as a starting point for your team to upskill in coding is a great first step to making your processes more efficient.