
by Thomas Saliba 4 August 2023
What’s the blog about?

How to improve agility when it comes to obtaining the data required for ad hoc analyses when data teams are stretched.

Who is this blog post for?

If you’re in an actuarial team, and you’re struggling to access the data you need to respond to your underwriters’ requests for more detailed analysis, the problems and solutions outlined below are likely to be of interest.

What’s the problem?

Actuaries can’t easily get hold of their data, as they don’t understand the structure of the detailed databases used to produce their nicely summarised triangles. It’s possible to get the required data ready for analysis, but the members of the business with the knowledge to do it are extremely busy, and not always available to produce custom data extracts.

So how do we solve the problem?

Continuing to rely on a small number of data experts may no longer be an option. It could seriously hamper your ability to “slice and dice” your account quickly in response to underwriter feedback during deep dive, planning and reserving analyses. This is the kind of insight underwriters will look to actuaries and portfolio managers to provide.

So what does the solution look like?

The solution I’ve found most efficient is a self-service tool, allowing the actuarial team to customise the data that is extracted. The tool usually allows data to be extracted in various pre-determined formats. The user interface could be either Excel based or web based. (There’s a minimal sketch of the Excel based flavour at the end of this post.)

Sounds great, how do we get there?

This is where we reach a fork in the road. The three solutions offered below will get you to where you need to be; however, the paths taken will be very different, and involve different challenges. You could:

Be the data expert – Task your Actuaries with understanding the structure of the data in the backend databases and how to produce the information needed. From experience, the databases underlying insurance companies can be a:

Morass – Defined by Bing (Google wasn’t working) as “an area of muddy or boggy ground”. Working your way through the various tables and the relationships between them can be an extremely time consuming task, and can certainly feel like wading through the muddiest of fields.

Minefield – If you’re used to having triangles delivered already converted to a single currency, adjusted for line size, with transactions cut off at a particular point in time, you’ll need to tread very carefully when trying to put this information together yourself.

I know from experience that Actuaries are capable of achieving this. In the past, this is the route I’ve had to take, sometimes going directly into mainframe based systems (and sometimes crashing them!) to get the data I need. This explains a lot of the SQL and coding knowledge I’ve gained throughout my career. While I learned a huge amount about insurance data from this, it was a difficult path to follow, and probably wasn’t the best use of my Actuarial expertise. An alternative approach would be to...

Be a customer – Alternatively, your Actuaries can specify exactly what you need your tool to do, and have your data experts build it. While I’ve seen this approach work in the past, it risks compounding the key data problem that many insurers currently face: there simply aren’t enough data experts in their business. Here are some more advantages and disadvantages of this approach:
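Finally, to make the self-service idea above more concrete, here’s a minimal sketch of the Excel based flavour: a VBA function that builds an extract query from parameters the user types into a worksheet. The sheet name, cell addresses, and the table and column names in the SQL are all hypothetical placeholders, not a real insurance schema.

```vba
' Minimal sketch of a parameter-driven extract query.
' Assumes a "Parameters" sheet where the user picks what to extract;
' all table and column names below are illustrative, not a real schema.
Function BuildExtractQuery() As String
    Dim ws As Worksheet
    Set ws = ThisWorkbook.Worksheets("Parameters")

    Dim classOfBusiness As String, currencyCode As String, asAtDate As String
    classOfBusiness = ws.Range("B2").Value   ' e.g. "Marine"
    currencyCode = ws.Range("B3").Value      ' e.g. "USD"
    asAtDate = ws.Range("B4").Value          ' e.g. "2023-06-30"

    ' Build a triangle-shaped extract: one row per accident year /
    ' development quarter, cut off at the chosen as-at date
    BuildExtractQuery = _
        "SELECT accident_year, development_quarter, " & _
        "       SUM(paid_amount) AS paid " & _
        "FROM claims_transactions " & _
        "WHERE class_of_business = '" & classOfBusiness & "' " & _
        "  AND currency_code = '" & currencyCode & "' " & _
        "  AND transaction_date <= '" & asAtDate & "' " & _
        "GROUP BY accident_year, development_quarter " & _
        "ORDER BY accident_year, development_quarter"
End Function
```

In a production tool you’d validate the inputs and use parameterised queries rather than string concatenation; the point here is simply that the “pre-determined formats” can be driven by a handful of cells the actuary controls.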
by Thomas Saliba 28 July 2023
I have a passion for simplifying and automating Actuarial processes. I’ve found these engagements to be some of the most rewarding in my career. Being able to free up an Actuarial team to carry out high value analysis is a great feeling. This series of blog posts will look at some of the key blockers to automating your processes, and how to overcome them.

Who is this blog post for?

If you already have a centralised data team embedded in your actuarial function, and your actuaries can all import granular data into their R and Python based analyses, you’ve probably overcome a lot of the issues we’ll discuss here. If, however, you’re looking to make your Actuarial team’s processes more efficient, without leaning on another team to build your solutions, you’re in the right place.

What should we automate?

Automation can mean a few different things within an Actuarial process. I break these down into two separate categories:

Preparation of data – If you’re manually copying and pasting data from various spreadsheets, or creating pivot tables to get data in the right format, this can be done automatically. This is the automation we’ll focus on.

Judgements – Getting to a first cut by applying “judgement algorithms”, e.g. choosing a chain ladder ultimate if an accident year is more than x% developed, or leaving ultimate costs unchanged if the AvsE is below a certain percentage, can get you to a first cut of your results very quickly. We’ll focus our attention first on getting data processes automated, and look at this exciting area in the future. (There’s a short sketch of a judgement algorithm later in this post.)

Some of the best automated resources for an actuarial team that I’ve worked on are:

Self-service reserving data – The ability to put your triangles under the microscope and quickly dig down to much more granular splits of an account is vital. Having to wait for external data teams to produce the required data for you can rob an actuarial team of the agility they need to produce the high value decision support a business needs in business planning and portfolio deep dives.

Analysis of change and AvsE – Having an automated process to understand how claims have moved from one quarter to the next can help a team to decide where to focus their analytical efforts during a reserving exercise.

Rate simulation – Being able to calculate rate change impacts at individual policy level and aggregate up will allow you to drill down to any level of detail your underwriters desire. However, if it’s not automated, it’s probably going to take too long. This can be done with specialist software; however, this may not be an option for you in the short term.

While we all know that automating processes will lead to better outcomes, what’s holding us back? Here are my top reasons… Let’s start with…

We can’t automate, we don’t have coding skills

I can definitely empathise with this. We build our processes with the tools that we have. For most of us, that’s Excel. While I’d strongly recommend bringing some SQL and open source coding skills into any Actuarial team, even the most automated processes are probably going to interface with Excel at some point, so building out from here is a good place to start. I’ve had the best results when able to utilise the power of software such as SQL and SAS, plus more recently R and Python (still using a lot of SQL), so not having access to these skills can be seen as a reason not to start automating. If you don’t have these skills, I’d recommend starting with a programming language I’ve used a lot, which is… VBA!
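As promised above, here’s a minimal sketch of what one of those “judgement algorithms” could look like in VBA. The function name, inputs and thresholds are all illustrative assumptions, not values from any real reserving process.

```vba
' Minimal sketch of a "judgement algorithm" for a first cut of results.
' Thresholds are illustrative assumptions, not recommended values.
Function FirstCutUltimate(chainLadderUltimate As Double, _
                          priorUltimate As Double, _
                          percentDeveloped As Double, _
                          avsEPercent As Double) As Double
    Const DEVELOPED_THRESHOLD As Double = 0.8   ' "more than x% developed"
    Const AVSE_TOLERANCE As Double = 0.05       ' "AvsE below a certain percentage"

    If percentDeveloped > DEVELOPED_THRESHOLD Then
        ' Mature accident year: adopt the chain ladder ultimate
        FirstCutUltimate = chainLadderUltimate
    ElseIf Abs(avsEPercent) < AVSE_TOLERANCE Then
        ' Actuals close to expected: leave the prior ultimate unchanged
        FirstCutUltimate = priorUltimate
    Else
        ' Everything else also starts from the prior ultimate, but
        ' would be flagged for manual review in a real process
        FirstCutUltimate = priorUltimate
    End If
End Function
```

Run over every accident year in a loop, a handful of rules like this can get you to a defensible first cut in seconds, leaving the team’s time for the years that genuinely need judgement.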
It’s definitely not the trendiest way to get things done, and I have to admit my R Markdown notebooks and Shiny apps tend to raise more eyebrows. That said, when it comes to taking that first step into process automation, using VBA macros to create tools that integrate seamlessly into existing Excel based processes has been a winning solution. Reasons for this are:

No IT issues – While it’s getting easier to access open source languages on company networks, not already having Excel and VBA available is pretty much unheard of.

Backwards compatible – This is a great advantage of starting out in VBA. When building VBA based processes for clients without many team members with VBA knowledge, I’ll build the process in a different way: something that “looks and feels” like a normal Excel workbook, i.e. has formulae that can be traced back to source, and doesn’t require the user to understand the code to understand how the spreadsheet works. This gives the client the confidence to embrace automation without having to put their faith in a “black box” process.

Quick learning curve – You can pick up VBA very quickly, even having Excel write some basic macros for you (via the macro recorder) to get you started. This will allow a team to build up their capability and enhance the process further over time. While your end state might well be analyses using R or Python, teams without much coding experience have found they can get up to speed with VBA much more quickly, while avoiding the key person dependencies that come with adopting new software.

There’s one other thing that makes VBA a great place to start. If your data process involves your MI team extracting data from your backend system and delivering it to you in a spreadsheet, there’s a good chance this step can be removed completely using some well written SQL code embedded in a VBA macro (see the sketch at the end of this post). Your team can then interact with your data in a much more user friendly way. The problem you’ll likely face is…

Our data expert is too busy to assist us

If you can solve this problem, you’ll be well on your way to achieving self-service granular data. We’ll cover this in the next blog post, titled “Data teams… How to be a co-worker and not just a customer”.

So in summary: if you have coding skills in your team, that’s great, but not having them doesn’t mean you can’t start automating your processes. Getting something built in VBA and using it as a starting point for your team to upskill in coding is a great first step to making your processes more efficient.
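To close, here’s a minimal sketch of the “SQL embedded in a VBA macro” idea: connecting to the backend database and dropping the results straight into a worksheet, with no manual extract from the MI team in between. The connection string, query, and sheet name are placeholder assumptions; you’d swap in your own server, database and schema.

```vba
' Minimal sketch of running SQL from VBA via late-bound ADODB.
' Connection string, query and sheet name are placeholder assumptions.
Sub RefreshClaimsData()
    Dim conn As Object, rs As Object
    Set conn = CreateObject("ADODB.Connection")
    Set rs = CreateObject("ADODB.Recordset")

    ' Swap in your own server and database
    conn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
              "Initial Catalog=ClaimsDB;Integrated Security=SSPI;"

    ' Illustrative query against a hypothetical transactions table
    rs.Open "SELECT accident_year, development_quarter, " & _
            "       SUM(paid_amount) AS paid " & _
            "FROM claims_transactions " & _
            "GROUP BY accident_year, development_quarter", conn

    With ThisWorkbook.Worksheets("Data")
        .Cells.ClearContents
        ' Write column headers, then dump the recordset below them
        Dim i As Long
        For i = 0 To rs.Fields.Count - 1
            .Cells(1, i + 1).Value = rs.Fields(i).Name
        Next i
        .Range("A2").CopyFromRecordset rs
    End With

    rs.Close
    conn.Close
End Sub
```

Once a refresh is a single button press, the data delivery step stops being the bottleneck, and the conversation with your data experts can move on to the harder questions.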