Perspective

Correlation – Dealing With Uncertainty and Other Assumptions

November 09, 2014 | By Tony Sammur | Property

In my last blog, I discussed Cat risk correlation in accounts and portfolios from individual catastrophes. Correlation can also arise when a cluster of Cat events occurs within a short time period due to persistent atmospheric conditions, or when an earthquake in one place affects the likelihood and timing of rupture in another. I think of the two New Zealand events: the September 2010 earthquake and its major February 2011 aftershock. Both occurred on previously undocumented faults, but the second quake, while lower in magnitude, produced much more severe shaking because of its depth and location.

The correlation assumption used in earthquake modeling can make a predicted rupture more or less likely, and some scientists believe that the transfer of stress from one fault segment to another creates positive correlation. In any case, it is probably incorrect to model earthquake events as completely independent.

Early Cat models handled the correlation question with a simplifying assumption that applied a rule by peril and region, such as 80/20, with the 80% weight applied to the loss independence assumption. These assumptions were based on industry data for historical catastrophes, coupled with expert opinion. Subsequent models recognized that certain large events should carry higher loss correlations, especially where exposures are concentrated within a region, such as along certain segments of the San Andreas Fault in California.
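A minimal sketch of how such a weighted rule might have worked in practice, assuming, purely for illustration, that the 80/20 split blends the independent and fully correlated aggregations of location-level loss standard deviations (the function, weights and figures below are hypothetical, not a documented vendor method):

```python
import math

def blended_aggregate_std(location_stdevs, w_independent=0.80):
    """Blend the independent and fully correlated aggregations of
    location-level loss standard deviations (illustrative only)."""
    independent = math.sqrt(sum(s ** 2 for s in location_stdevs))  # zero correlation
    fully_correlated = sum(location_stdevs)                        # perfect correlation
    return w_independent * independent + (1 - w_independent) * fully_correlated

# Hypothetical location-level loss standard deviations ($m) for one peril/region
print(blended_aggregate_std([2.0, 3.0, 5.0]))  # lands between sqrt(38) ~ 6.2 and 10.0
```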

The models then introduced differentiation by occupancy and construction. As the models became more complex, running them required more calculations and increased computing resources. Current modeling platforms apply correlation using a detailed multi-dimensional database of parameters.
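As a rough picture of what that multi-dimensional parameterization might look like (the keys, values and dimensions below are hypothetical, not any vendor's actual tables), a correlation factor could be looked up by peril, region, construction and occupancy at once:

```python
# Hypothetical correlation parameters keyed by (peril, region, construction, occupancy).
# Real platforms carry many more dimensions and calibrate the values from claims data.
CORRELATION_PARAMS = {
    ("earthquake", "california", "wood_frame", "residential"): 0.25,
    ("earthquake", "california", "steel", "commercial"): 0.40,
    ("hurricane", "florida", "masonry", "residential"): 0.35,
}

def correlation_factor(peril, region, construction, occupancy, default=0.20):
    """Return the assumed pairwise loss correlation for a given risk profile."""
    return CORRELATION_PARAMS.get((peril, region, construction, occupancy), default)

print(correlation_factor("earthquake", "california", "steel", "commercial"))  # 0.4
```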

Testing the sensitivity of model results to various correlation assumptions is crucial for a global reinsurer with multiple business units that write different structures in different regions. From a treaty reinsurance perspective, much of the catastrophe risk comes from modeling reinsurance contracts with loss limits on portfolios of exposures.

Loss limits provide a contractual capping of exposures and a reduction of uncertainty in the tail. However, the likelihood of attaching and exhausting limits on treaty programs increases greatly when higher correlation is assumed in the modeling. For the facultative business unit, the loss risk in the tail is very sensitive to correlation assumptions because facultative coverage limits usually attach well above a single risk’s loss expectancy. When aggregating a group of facultative exposures affected by a modeled Cat event, the simulated loss can therefore get distributed across the full sum of the limits exposed to the event, with no capping. Correlation can drive these tail values up significantly.  
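A hedged simulation sketch of both effects, with everything below (the one-factor Gaussian copula, the lognormal per-risk losses and the 50 xs 75 treaty layer) chosen purely for illustration, shows how the chances of attaching and exhausting a treaty layer, and the uncapped tail of the underlying exposures, all grow with the assumed correlation:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, n_risks = 100_000, 50
mean, sd = 1.0, 2.0                        # hypothetical per-risk event loss ($m)

def simulate_event_losses(rho):
    """Correlated per-risk losses via a one-factor Gaussian copula, summed per simulation."""
    common = rng.standard_normal((n_sims, 1))
    idio = rng.standard_normal((n_sims, n_risks))
    z = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio
    mu = np.log(mean ** 2 / np.sqrt(mean ** 2 + sd ** 2))
    sigma = np.sqrt(np.log(1.0 + sd ** 2 / mean ** 2))
    return np.exp(mu + sigma * z).sum(axis=1)          # aggregate loss per simulation

attach, limit = 75.0, 50.0                             # hypothetical treaty layer ($m)
for rho in (0.0, 0.3, 0.6):
    gross = simulate_event_losses(rho)
    treaty = np.clip(gross - attach, 0.0, limit)       # recovery is capped by the limit
    print(f"rho={rho:.1f}  P(attach)={np.mean(gross > attach):.3f}  "
          f"P(exhaust)={np.mean(gross >= attach + limit):.4f}  "
          f"mean treaty recovery={treaty.mean():.2f}  "
          f"uncapped 99.5% loss={np.quantile(gross, 0.995):.0f}")
```

The same run makes the facultative point as well: the uncapped 99.5% figure is driven almost entirely by the correlation assumption, while the treaty recovery can never exceed the 50 limit.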

Increasing the correlation assumption always adds to uncertainty and tail risk. Grouping losses from subsets of global exposures can amplify these effects. The true essence of Cat modeling is a representation of this correlated uncertainty in relation to financial outcomes. Simply running a model to select a loss level that the company can use to measure against its risk appetite is not a prudent use of Cat models.
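One quick way to see that amplification (again a hedged sketch with hypothetical parameters, not a calibrated model) is to compare the combined 1-in-200 loss of two regional books under independence and under a positive cross-region correlation:

```python
import numpy as np

rng = np.random.default_rng(11)
n_sims = 200_000

def combined_regional_loss(rho):
    """Two regional annual losses with lognormal marginals joined by a Gaussian copula."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)
    return np.exp(2.0 + 0.8 * z).sum(axis=1)           # hypothetical scale parameters

for rho in (0.0, 0.5):
    total = combined_regional_loss(rho)
    print(f"cross-region rho={rho}: 1-in-200 combined loss = {np.quantile(total, 0.995):.1f}")
```

The marginal loss distribution of each region is identical in both runs; only the assumed dependence between them changes the combined tail.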

There’s real value in having a framework that allows you to investigate sources of uncertainty. In fact, the model vendor firms have recognized this, and they are starting to provide open model architectures that allow for sensitivity testing of parameter uncertainty.

As world economies develop and become more globalized, insurers and reinsurers need to better understand what correlation actually means - and how their companies' capital is exposed to it.

 
