What’s this blog about?
How to improve agility in obtaining the data required for ad hoc analyses when data teams are stretched.
Who is this blog post for?
If you’re in an actuarial team, and you’re struggling to access the data you need to respond to your underwriters’ requests for more detailed analysis, the problems and solutions outlined below are likely to be of interest.
What’s the problem?
Actuaries can’t easily get hold of their data, as they don’t understand the structure of the detailed databases used to produce their nicely summarised triangles. It’s possible to get the required data ready for analysis, but the members of the business with the knowledge to do it are extremely busy, and not always available to produce custom data extracts.
So how do we solve the problem?
Continuing to rely on a small number of data experts may no longer be an option. It could seriously hamper your ability to “slice and dice” your account quickly in response to underwriter feedback during deep dive, planning and reserving analyses. This is the kind of insight underwriters will look to actuaries and portfolio managers to provide.
So what does the solution look like?
The solution I’ve found most efficient is a self-service tool, allowing the actuarial team to customise the data that is extracted. The tool usually allows data to be extracted in various pre-determined formats. The user interface could either be Excel based or web based.
Sounds great, how do we get there?
This is where we reach a fork in the road. The three approaches offered below will all get you to where you need to be; however, the paths taken are very different, and involve different challenges.
You could:
Be the data expert – Task your Actuaries with understanding the structure of the data in the backend databases, and with how to produce the information they need themselves. From experience, getting to grips with the databases underlying insurance companies is no small undertaking.
I know from experience that Actuaries are capable of achieving this. In the past, this is the route I’ve had to take, sometimes going directly into mainframe based systems (and sometimes crashing them!) to get the data I need. This explains a lot of the SQL and coding knowledge I’ve gained throughout my career. While I learned a huge amount about insurance data from this, it was a difficult path to follow, and probably wasn’t the best use of my Actuarial expertise.
An alternative approach would be to...
Be a customer – Alternatively, your Actuaries can specify exactly what the tool needs to do, and have your data experts build it. While I’ve seen this approach work in the past, it risks compounding the key data problem that many insurers currently face: there simply aren’t enough data experts in their business.
Here are some more advantages and disadvantages of this approach:
| Advantage | Disadvantage |
|---|---|
| Well produced piece of software built purely by data professionals | Actuaries unlikely to have time to specify exactly what the tool needs to do |
| No need for the Actuarial Team to skill up in coding | Data experts unlikely to have time to either build or provide support for such a tool |
| | If data teams are unavailable to make changes, even a minor adjustment to the data extracted won’t be possible |
While I favour this approach over training your Actuarial Team to be data experts and produce their own bespoke data assets, I feel there is an optimal mix of the two approaches. This is to…
Be a co-worker – I wrote a short LinkedIn post titled “A little SQL goes a long way”, in which I advocated for Actuaries to learn some SQL. Not enough to become a professional SQL developer, but enough to do some basic manipulation of data. Building self-service tools without requiring a lot of your data team’s time is a great example of this. Let’s look at why.
Most of the custom data extracts I’ve needed for my work can be split into two parts, namely:
- a base data call, which requires detailed knowledge of the underlying databases; and
- a set of customisations applied on top, such as filters and groupings.
You need a way of applying these customisations without having to go to your data expert each time. This second part is where we can do a lot of the work ourselves.
With a working knowledge of SQL, it’s possible to work side by side with your data experts to produce the required solution. They focus purely on the data call, using their unique knowledge of the data to deliver a base dataset which you can then customise as needed. This base dataset goes by many names, e.g. Data View or Presentation Layer, but in Actuarial Teams it’s commonly known as “The Pricing Data” or “The Claim Level Reserving Data”.
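To make the division of labour concrete, here’s a minimal sketch. The table, view and column names are invented for illustration rather than taken from any real system: the data team own the view that encodes their knowledge of the source systems, and the actuary only ever writes simple queries against it.

```sql
-- Built and owned by the data team: their knowledge of the source
-- systems is encapsulated here. All names are hypothetical.
CREATE VIEW pricing_data AS
SELECT
    p.policy_ref,
    p.class_of_business,
    p.underwriting_year,
    c.cause_code,
    c.paid_amount,
    c.outstanding_amount,
    c.paid_amount + c.outstanding_amount AS incurred_amount
FROM policy AS p
JOIN claim  AS c
    ON c.policy_ref = p.policy_ref;

-- Written by the actuary: a small, readable customisation of the base data.
SELECT
    class_of_business,
    underwriting_year,
    SUM(incurred_amount) AS incurred
FROM pricing_data
WHERE cause_code = 'FLOOD'   -- hypothetical filter
GROUP BY class_of_business, underwriting_year
ORDER BY class_of_business, underwriting_year;
```

The point isn’t the specific query; it’s that the actuary’s side of the work stays firmly at the “a little SQL” level of difficulty.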
Following this approach will give your data team confidence that by working with you to build the tool, they won’t be called on to deal with minor changes to the tool in the future.
Let’s do the Pros and Cons.
| Advantage | Disadvantage |
|---|---|
| Requires much less time from data teams than building the tool themselves | Your team will need some SQL knowledge |
| Actuaries don’t need to specify everything the tool needs to do, as they can modify and fine-tune as they go | The Actuarial Team will need to share some of the maintenance |
| Working closely with data experts is a great development opportunity for Actuaries | |
Having built these for clients in the past, I would favour this approach, even if it means making some time for a few members of your team to learn some SQL.
Can you give an example of where you’ve used this?
I’ll go one better, and give you two.
The customer approach
In one example, the reserving team had the benefit of a data team embedded within the department. This team provided a range of self-service tools to extract most of the data needed for ad hoc or detailed analyses. This meant there wasn’t an imperative for the Actuaries to be able to manipulate the data themselves, allowing them to focus purely on their Actuarial analyses.
This worked well until a slightly different cut of the data was needed, which the tool couldn’t provide. The data was needed to answer a request from the regulator, and the data team were unable to assist due to staff absence. The SQL involved wasn’t complex, so I was able to produce the extract myself and the regulatory deadline was met. I didn’t have the knowledge of the insurer’s data to build the solution from scratch, but I did have the SQL skills to take what was available and make minor changes.
The co-worker approach
I worked with a Lloyd’s syndicate who relied on manual data extracts to carry out certain analyses. This either required their data team to produce a bespoke extract, or a manual process had to be deployed. Their data team didn’t have the resource available to build a self-service tool, so I built one for the team instead.
I achieved this by designing the base data we discussed earlier. In this case, we needed incurred losses by development period, all converted to a single currency, with the ability to group by a few different fields, e.g. Class of Business, Cause Code etc. I then asked the data team if they could produce such a dataset. Within an hour, I had some SQL delivered which produced the base data I needed!
I then adjusted the code to allow me to filter and group the data as needed. I now had a customisable way of producing almost any data extract the team needed. I also had a useful base dataset to carry out further analysis in future, again, all achievable using some basic SQL.
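To give a flavour of the kind of adjustment involved (the dataset and column names below are invented stand-ins for the base data the data team delivered), the customisation was often little more than adding WHERE and GROUP BY clauses:

```sql
-- Hypothetical base dataset delivered by the data team: one row per claim
-- and development period, with amounts already converted to one currency.
SELECT
    development_period,
    SUM(incurred_converted) AS incurred
FROM claim_development_base
WHERE class_of_business = 'Marine'    -- hypothetical filter
  AND underwriting_year >= 2018       -- hypothetical filter
GROUP BY development_period
ORDER BY development_period;
```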
I then built a basic user interface within Excel, which dynamically updated the SQL code based on the filters selected by the user, e.g. selecting a particular line of business or cause code. This allowed members of the team with no SQL knowledge to use the tool as well.
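To give a flavour of what the workbook produced (names and values here are illustrative, not the client’s), choosing a class of business and a cause code in the drop-downs would cause the front end to write out SQL along these lines, with the WHERE clause literals being the only parts it rebuilds on each refresh:

```sql
-- SQL written out by the Excel front end. The WHERE clause literals are
-- rebuilt from the user's drop-down selections each time the data is
-- refreshed. All table and column names are illustrative.
SELECT
    class_of_business,
    cause_code,
    development_period,
    SUM(incurred_converted) AS incurred
FROM claim_development_base
WHERE class_of_business = 'Property'   -- from the Class of Business drop-down
  AND cause_code        = 'Fire'       -- from the Cause Code drop-down
GROUP BY class_of_business, cause_code, development_period;
```

Because the selections come from predefined drop-down lists rather than free text, the generated SQL stays predictable and the user never has to see or edit the query.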
Summary
So, in summary, I’ve found the best way to achieve greater agility when extracting data for detailed analysis is to work alongside a data team to produce self-service tools, rather than simply specifying exactly what the tool needs to do and having them build it. This means having your data experts produce the base data you need, and having enough SQL skills within your Actuarial Team to adjust it as needed.