by Kirsty Ip
I attended my first ever GIRO session (virtually) in 2020 and was excited to understand more about the research paper by @RonaldRichman and @CaesarBalona that won last year’s Brian Hey Prize. The paper, titled “The Actuary and IBNR Techniques: A Machine Learning Approach”, presents a new framework for selecting the most appropriate model for estimating an insurer’s IBNR.
In the paper, machine learning concepts are applied in a novel manner to the traditional reserving tools of the Chain-Ladder, Bornhuetter-Ferguson and Cape-Cod techniques. Unlike papers that seek to use machine learning to determine the best IBNR or identify data features, this paper focuses on the choices that the reserving actuary makes when applying these techniques.
In this set-up, the claims run-off triangle is modelled at successive year-ends, and each successive version of the triangle forms part of the training data or, for the most recent years, the test data.
The tuning parameters are the reserving actuary's model parameter choices, and two model performance measures are examined: the one-year actual-versus-expected claims experience, and the one-year change in estimated ultimate claims.
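To make this concrete, here is a minimal sketch of the idea (our own illustration, not the authors' code): a single reserving choice, here the number of recent accident years used in the chain-ladder development factors, is treated as a tuning parameter and scored by backtesting the one-year actual-versus-expected claims on successive versions of a toy triangle. All names and the toy data are ours.

```python
import numpy as np

def dev_factors(tri, n_recent):
    """Volume-weighted chain-ladder link ratios, with each column
    optionally restricted to the n_recent most recent years with data.
    Columns with no observed pairs are left as NaN."""
    n = tri.shape[0]
    f = np.full(n - 1, np.nan)
    for j in range(n - 1):
        rows = [i for i in range(n)
                if not np.isnan(tri[i, j]) and not np.isnan(tri[i, j + 1])]
        rows = rows[-n_recent:]
        if rows:
            f[j] = sum(tri[i, j + 1] for i in rows) / sum(tri[i, j] for i in rows)
    return f

def one_year_ave(full, n_recent):
    """Backtest: at each past valuation date, fit factors on the triangle
    as then known, project the next diagonal, and accumulate the squared
    one-year actual-vs-expected error."""
    n = full.shape[0]
    total = 0.0
    for k in range(1, n - 1):                       # successive valuation dates
        mask = np.add.outer(np.arange(n), np.arange(n)) <= k
        tri = np.where(mask, full, np.nan)          # triangle as known at date k
        f = dev_factors(tri, n_recent)
        for i in range(n):
            j = k - i                               # leading-diagonal cell
            if 0 <= j < n - 1 and not np.isnan(f[j]):
                total += (full[i, j + 1] - tri[i, j] * f[j]) ** 2
    return total

# Toy cumulative claims triangle (fully known here so we can backtest).
full = np.array([
    [100., 150., 170., 175.],
    [110., 160., 185., 191.],
    [120., 175., 200., 207.],
    [130., 190., 215., 221.],
])

# "Tune" the single parameter by picking the lowest backtest error.
best = min(range(1, 4), key=lambda n_recent: one_year_ave(full, n_recent))
```

Note that cells beyond the fitted factors are simply skipped here, which is exactly the extrapolation gap discussed later in this post.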
Replicating the algorithm in Excel proved straightforward (and provides a good insight into how the model behaves), although clearly an R or Python implementation is needed for any serious application.
So, having got the model up and running, what did my colleagues and I make of it?
Well, it looks like the framework might (with a little development – see below where we think this is needed) make a useful addition to a firm’s quarterly reserving processes: possibly informing model choices and providing quantitative backing to actuarial rules of thumb.
In this vein, it looks like it may find a role as a useful back-testing tool to validate and help justify model choices, particularly for classes where reserving is more routine.
More interesting is whether it can evolve into a first-line tool for making model selections, relieving some of the labour-intensive work beloved of actuarial trainees.
Less clear-cut is whether the model has the scope to outperform an experienced actuary at reserving. But we know that people said that about the best human players of chess and Go, and in time they were proved wrong. That said, perhaps because reserving is more of a "good enough" activity than the win-or-lose environment of games, only dramatic outperformance will cause the demise of the actuary.
But for now, we can accept the model as work in progress and consider where it should go next. Here are two key areas we think will need attention if this framework is to become part of the toolkit:
- Getting the algorithm to cope beyond the range of the triangle. In practice, sensible extrapolation beyond the data is a key reserving challenge.
- Considering performance over a longer period than a one-year time horizon. The objective function specified in the paper looks at minimising the change in ultimate or the difference between actual and expected claims over the next twelve months. Often this will suffice, but this criterion rewards model choices that spread out the bad (or good) news over a number of years rather than recognising it straight away.
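The second point can be seen with a toy calculation of our own (assuming, for illustration, a squared penalty on the one-year change in ultimate): a model that books a deterioration of 100 immediately scores worse each year than one that drip-feeds the same deterioration as 25 per year over four years, even though the ultimate news is identical.

```python
# Toy illustration (ours, not from the paper): under a squared penalty on
# the one-year change in ultimate, spreading the same total deterioration
# over several years attracts a lower cumulative penalty.
recognise_now = [100, 0, 0, 0]    # full deterioration booked in year one
spread_out    = [25, 25, 25, 25]  # same deterioration drip-fed over four years

def total_penalty(yearly_changes):
    """Sum of squared one-year movements in the ultimate."""
    return sum(c ** 2 for c in yearly_changes)

print(total_penalty(recognise_now))  # 10000
print(total_penalty(spread_out))     # 2500 -- lower, so spreading 'wins'
```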
For now, our focus will be on testing and adapting our implementation of the algorithm, in particular exploring the following areas:
- Widening the range of triangles tested.
- Alternative, possibly asymmetric, penalty functions.
- Widening the range and types of tuning parameters optimised in the model.
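On the second bullet, one shape of penalty we have in mind can be sketched as follows (a hypothetical function of our own, not something proposed in the paper): adverse one-year development is weighted more heavily than favourable development, reflecting that reserve deteriorations are usually more painful than releases.

```python
# Hypothetical asymmetric penalty (our sketch, not from the paper):
# penalise adverse one-year development more than favourable development.
def asymmetric_penalty(actual, expected, upside_weight=1.0, downside_weight=2.0):
    error = actual - expected
    weight = downside_weight if error > 0 else upside_weight
    return weight * error ** 2

print(asymmetric_penalty(110, 100))  # 200.0: adverse development, doubly weighted
print(asymmetric_penalty(90, 100))   # 100.0: favourable development, standard weight
```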
I hope to report the findings and results in a future blog post so watch this space! In the meantime, I would be very interested to hear your thoughts on this paper, particularly if you are happy to share your own experiences of its implementation. Please email us at email@example.com.