The humble chain ladder has been part of the actuarial toolkit for almost a century, since Tarbell (1934) introduced a method to compute reserves from a run-off triangle of cumulative losses. It has repeatedly been extended and enhanced, with examples ranging from the Bornhuetter-Ferguson method (Bornhuetter & Ferguson, 1972) and the Stanard-Bühlmann method (Stanard, 1985) to a reassessment of the method Benktander introduced in 1975 (Mack, 2000) and the Munich chain ladder (Quarg & Mack, 2004), among others.
In this blog, we do not set out a new approach but rather highlight the underlying assumption behind the Stanard-Bühlmann approach and its relevance for pricing with sparse claims data. The other methods listed above are best applied in regimes with extensive and reliable claims information – a situation in which we rarely find ourselves in reinsurance pricing.
Many of the core assumptions underlying chain ladders are strained to the breaking point in a non-proportional reinsurance context. Nevertheless, we can often accept or adjust for such assumptions. One challenge, however, is insurmountable for the simple chain ladder: loss-free years. When dealing with long-tailed lines, it is often many years before the reinsurer knows about or can process claims, and no multiplicative development factor can take zero losses and produce a non-nil reserve.
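To see the problem concretely, consider a minimal sketch of a single-year chain ladder projection (the function name, development factors, and figures below are purely illustrative):

```python
# Chain ladder projection for one origin year: cumulative incurred losses
# are rolled forward by multiplying through the remaining age-to-age factors.
def chain_ladder_ultimate(cumulative_incurred, development_factors):
    ultimate = cumulative_incurred
    for factor in development_factors:
        ultimate *= factor
    return ultimate

factors = [2.0, 1.5, 1.2]  # illustrative remaining development factors

print(chain_ladder_ultimate(100.0, factors))  # 360.0 -- losses scale up
print(chain_ladder_ultimate(0.0, factors))    # 0.0   -- a loss-free year stays at zero
```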
Why the Bornhuetter-Ferguson Method Is Not Necessarily the Solution
The Bornhuetter-Ferguson (BF) method is often asked to come to the rescue here – and it is well suited to this role in some contexts, particularly reserving.1 However, for pricing non-proportional layers, the appropriate prior loss ratio (LR) selection is far from trivial. Some practitioners opt to use their exposure pricing as a prior. This is valid – although ideally it should allow for the exposure of each considered year – but it is imperative to note that the result can no longer be compared to or weighted together with the exposure price: they are no longer independent! In cases with few claims and slow development, the “experience” pricing can become almost entirely exposure based.
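A small sketch of the BF mechanics, with hypothetical figures, makes this dependence explicit:

```python
# Bornhuetter-Ferguson ultimate: reported losses plus the prior's
# expectation for the as-yet-undeveloped portion of the year.
def bf_ultimate(incurred, premium, prior_lr, fraction_developed):
    return incurred + prior_lr * premium * (1.0 - fraction_developed)

# A loss-free recent year on a slow-developing line (figures illustrative).
premium, prior_lr, developed = 1_000_000, 0.65, 0.10
print(bf_ultimate(0.0, premium, prior_lr, developed))  # 585000.0

# With only 10% developed, 90% of this "experience" estimate is just the
# prior: if the prior came from the exposure rating, the experience view
# is no longer an independent check on it.
```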
We should stop and ask ourselves, “Why are we experience rating?” We are past the era of “payback” pricing – even more so with the increasing flows of institutional capital into the market. The purpose of experience rating is to provide a view of the client’s own performance, ideally relative to an exposure view based on market assumptions applied to their specific profile. In this context, it is worth considering whether blending experience with an exposure rate via a BF approach still gives us a clear comparison between experience and exposure. We would argue that it does not.
The Processed Premium Approach
What, then, is our alternative, which we call the processed premium approach? We look to the work of Stanard and Bühlmann. The Stanard-Bühlmann (or Cape Cod) method is a reserving method for non-proportional data where there is no credible independent prior LR (a prerequisite for the BF method)2. Instead, they proposed deriving the implied prior from the experience itself (on‑levelled for rate and trend).
Their key insight was that the experience should only be given as much credit as is expected to have developed at the point of analysis. In effect, they derive the prior LR as the weighted average incurred LR, where the weighting is the fraction developed – which is equivalent to reducing each year's premium by its fraction developed and using incurred claims. They then used this “prior” LR in a BF‑like approach on the data. Practical calculation is simple: divide the sum of the incurred losses by the sum of the premiums, each premium multiplied by the percentage developed for its year (taken from a benchmark or fitted development pattern).3
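As a minimal sketch of this calculation (variable names and figures are illustrative, and premiums are assumed to be already on-levelled for rate and trend):

```python
# Stanard-Buhlmann prior loss ratio: total incurred losses divided by total
# "processed" premium, i.e. each year's on-levelled premium scaled by the
# fraction of losses expected to have developed by the analysis date.
def processed_premium_lr(incurred, onlevel_premium, fraction_developed):
    total_losses = sum(incurred)
    total_processed_premium = sum(
        p * d for p, d in zip(onlevel_premium, fraction_developed)
    )
    return total_losses / total_processed_premium

# Three origin years: one fully developed, one loss-free, one still green.
incurred = [600_000, 0, 100_000]
onlevel_premium = [1_000_000, 1_000_000, 1_000_000]
fraction_developed = [1.00, 0.60, 0.10]

print(processed_premium_lr(incurred, onlevel_premium, fraction_developed))
# ~0.412 -- the loss-free year earns credit for 60% of its premium
```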
We propose not only that this approach provides an excellent reserving estimate in many cases but also that the on‑levelled prior LR is the experience-implied LR for pricing. More specifically, to produce a pricing view, we should follow the Stanard-Bühlmann procedure but stop short of projecting out the most recent years; we only need the on‑levelled prior LR.
This LR has several appealing qualities:
- Loss-free years are given full credit up to the fraction of development expected at this point in time. There are no artificial loadings complicating the comparison to exposure.
- Large losses that emerge early in a year's development are given full credit while not being extrapolated with chain-ladder factors.
- The approach is simple, transparent, and provides an independent comparison to exposure pricing.
The approach is not without its own limitations: it naturally weights more heavily towards older years. This places great importance on appropriate rate-change and claims-inflation assumptions and on the selection of the observation period. One solution was proposed by Gluck (1997), who added an exponential decay factor to the weighting between a year and its neighbouring years – effectively introducing a moving average – which can easily be adjusted to weight towards the pricing year, as sketched below. More technical approaches separate movements on reported claims (IBNER) from future claim notifications (IBNYR) and consider these separately, or apply Kalman filters in place of simple decay (Korn, 2016). Going further, the processed premium is the natural basis for the layer credibility approaches presented in Korn (2017).
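A rough sketch of the decay idea, assuming a simple decay^distance weight relative to the pricing year (the weighting scheme and decay constant here are illustrative assumptions, not Gluck's exact formulation):

```python
# Decay-weighted prior LR: each origin year's losses and processed premium
# are down-weighted by how far the year sits from the target (pricing) year,
# softening the natural tilt towards older, more developed years.
def decayed_prior_lr(years, incurred, processed_premium, target_year, decay=0.75):
    weights = [decay ** abs(target_year - y) for y in years]
    losses = sum(w * l for w, l in zip(weights, incurred))
    premium = sum(w * p for w, p in zip(weights, processed_premium))
    return losses / premium

years = [2019, 2020, 2021, 2022, 2023]
incurred = [500_000, 450_000, 300_000, 150_000, 0]
processed_premium = [900_000, 850_000, 700_000, 400_000, 100_000]

# Weighting towards a 2024 pricing year discounts the oldest years most.
print(decayed_prior_lr(years, incurred, processed_premium, target_year=2024))
```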
Note that this approach is still, at its core, a chain ladder method: the fraction developed most likely (but not necessarily) comes from a chain ladder analysis of either the client’s own claims or a benchmark dataset. For any individual year, the calculated LR is identical; the difference is in the weighting within the average. The processed premium approach is nothing more than a reframing of the Stanard-Bühlmann method for pricing instead of reserving. It does not tell us where the historical years will end up – it tells us where the on‑levelled prior LR for the treaty is running – which is our experience view of the pricing LR.