
Cracks in Cat Models


June 1, 2011   by Karen Clark, president and CEO, Karen Clark & Company



Recent industry events have highlighted some of the limitations of catastrophe models. For example, the magnitude 9.0 Tohoku earthquake in March 2011 was not captured by any of the models because scientists previously did not believe an earthquake of this magnitude could occur in the region, given the nature of the plate boundaries. Catastrophe models also generally do not capture the clustering of earthquakes, such as the sequence seen in New Zealand over the past several months. And the most recent RMS hurricane model update, which has produced very large changes in loss estimates for many companies in the United States, illustrates the uncertainty and volatility inherent in model-generated estimates (such as PMLs), even for models that have been in use for more than 20 years.

Despite their limitations, there has been a growing tendency to rely on the models for every decision concerning catastrophe risk. There is a general belief that “catastrophe models are the best tools we have, so we have to use them.” If you had to pick just one tool to do everything for everyone, then, yes, a catastrophe model might be the best option. But a catastrophe model may not be the best tool for all purposes, or for all users.

Issues With Models

Model Volatility

Let’s look at model volatility. It may not be a big problem for reinsurers, since it’s relatively easy for property reinsurers to adjust prices and change portfolio composition. For primary insurers, however, loss estimates that swing widely from one model update to the next can cause major disruptions to underwriting and business strategies. In regions where insurance is regulated, it’s not easy to adapt to the variations, and it can take years to implement new pricing strategies.

When model loss estimates change significantly, insurance companies can end up non-renewing large books of business – only to find, with the next major model update, that many of those lost policies weren’t such bad risks after all. Obviously it’s a problem if your primary risk management tool gives conflicting information from update to update. Erratic underwriting decisions can lead to loss of agent and policyholder goodwill, and such volatility is a deterrent to regulatory and consumer acceptance.

Loss Estimates and Financial Strength

Another significant problem is that rating agencies are using highly volatile 1-in-100 and 1-in-250 year loss estimates in their financial strength ratings. This puts even more weight on model-generated output and means that uncertain, unstable numbers are influencing very important risk management decisions.

Model-generated numbers such as PMLs are volatile because scientists do not know the probabilities of large-magnitude events occurring in different peril regions. In many regions of the world, scientists do not even know the intensities of the most significant historical events. This paucity of data makes the model output inherently uncertain. Most model updates are based on re-analyses of limited and imprecise data rather than on advances in factual knowledge. Given this uncertainty and scientific “unknowledge,” are there ways to use the scientific data we do have in a more robust and transparent fashion?

Characteristic Events

One alternative is to use scientific data to define “characteristic events” for each peril region, representing the types of events that could occur with annual probabilities of 1% and lower. These characteristic events could then be used to estimate 1-in-100 and 1-in-250 year losses for insurance companies.
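
To make the idea concrete, here is a minimal sketch in Python of how a fixed characteristic event could be run against a book of business. The portfolio, zones and damage ratios below are invented for illustration only; they are not from any real model or from this article.

# Minimal sketch of a "characteristic event" loss calculation.
# All names and numbers are hypothetical, for illustration only.

# A toy portfolio: total insured value (TIV) by location, keyed to a zone.
portfolio = [
    {"location": "A", "zone": "coastal", "tiv": 50_000_000},
    {"location": "B", "zone": "coastal", "tiv": 30_000_000},
    {"location": "C", "zone": "inland",  "tiv": 80_000_000},
]

# A characteristic event: a fixed scenario with assumed mean damage ratios
# by zone (e.g., a 1-in-100-year-type hurricane footprint).
characteristic_event = {
    "name": "CE-100 Hurricane",
    "damage_ratio": {"coastal": 0.25, "inland": 0.05},
}

def event_loss(portfolio, event):
    # Deterministic loss for one characteristic event:
    # sum of TIV x assumed damage ratio over all locations.
    return sum(
        loc["tiv"] * event["damage_ratio"][loc["zone"]]
        for loc in portfolio
    )

loss = event_loss(portfolio, characteristic_event)
print(f"{characteristic_event['name']}: estimated loss = {loss:,.0f}")
# -> 24,000,000 here; the same fixed event can be re-run on any portfolio,
#    so the result is transparent and comparable across companies and over time.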

The catastrophe models estimate extreme event losses by first creating probability distributions for event characteristics and then using these distributions to simulate thousands of potential events. Because the resulting loss estimates are highly sensitive to these assumptions, slight changes in the probability distributions can lead to dramatically different PML estimates. The simulation technique also makes it virtually impossible to check the model calculations.
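
To see why, consider the rough sketch below. It assumes, purely for illustration, Poisson event frequency and lognormal event severities with made-up parameters; it is not how any particular commercial model works. It simulates many years of aggregate losses, reads the 1-in-100 year loss off the simulated distribution, and then repeats the exercise with one severity parameter nudged slightly.

import random

def simulate_annual_losses(n_years, freq, sev_mu, sev_sigma, seed=1):
    # Aggregate annual losses under illustrative assumptions:
    # Poisson event counts (simulated via exponential inter-arrival times)
    # and lognormal event severities.
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        annual = 0.0
        t = rng.expovariate(freq)
        while t < 1.0:                      # count events arriving within the year
            annual += rng.lognormvariate(sev_mu, sev_sigma)
            t += rng.expovariate(freq)
        losses.append(annual)
    return losses

def pml(losses, return_period):
    # The 1-in-N year loss is read off the simulated annual-loss
    # distribution as (approximately) the (1 - 1/N) quantile.
    ordered = sorted(losses)
    return ordered[int(len(ordered) * (1 - 1.0 / return_period))]

base   = simulate_annual_losses(100_000, freq=2.0, sev_mu=16.0, sev_sigma=1.0)
bumped = simulate_annual_losses(100_000, freq=2.0, sev_mu=16.0, sev_sigma=1.1)

print("1-in-100 year PML, base assumptions:       ", f"{pml(base, 100):,.0f}")
print("1-in-100 year PML, sigma bumped 1.0 -> 1.1:", f"{pml(bumped, 100):,.0f}")
# A 10% change in a single severity parameter can move this tail estimate
# by far more than 10%, which is the volatility described above.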

On the other hand, characteristic events, once defined using the scientific data, are totally transparent. They can stay constant until there is a real breakthrough in scientific knowledge. If the characteristic events do change, the impact on loss estimates can be easily understood and checked for credibility.

Characteristic events are also very useful for rating agency purposes. Currently, rating agencies use information from different models and different model versions, and have no comparable basis for evaluating the catastrophe risk impact on company financial strength ratings. Characteristic events, because they are comparatively constant, provide a consistent way to compare companies and to monitor the effectiveness of risk management strategies over time.

New tools and “out of the black box” thinking can also be more effective in capturing the “black swans” that seem to appear every few years. The catastrophe models did not anticipate the Tohoku earthquake because no scientific data suggested it could happen. Now that we know an event of this magnitude can occur in this region, it will likely take years for the modeling companies to release updated Japan earthquake models incorporating this new information, given the amount of research required to figure out why the scientists were wrong and to estimate the likelihood of this type of event happening again. Since scientists will not be able to determine that probability with any degree of certainty, why not create an extreme characteristic event now to capture what we know can happen?

One Size Does Not Fit All

While catastrophe models are comprehensive, they are a “one size fits all” approach to catastrophe risk. The models provide a general indication of risk, but they may not provide the best view of the risk for a regional or specialized book of business. Insurance companies need additional tools that are more flexible and can be customized to specific types of property business.

The insurance industry has gone from the one extreme of ‘no models, all underwriter judgment’ to the other extreme of ‘all models, no underwriter judgment.’ Neither extreme is optimal. A more effective approach for many companies would be to use a scientific framework while allowing additional account-specific information to be factored into underwriting decisions.
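
As one illustration of what that could look like (my sketch, not a method prescribed here), an underwriter might start from a model- or characteristic-event-based benchmark loss and apply documented, account-specific adjustments for conditions the model data does not capture. The factors and weights below are hypothetical.

# Illustrative only: a model-based benchmark loss adjusted by
# account-specific information an underwriter supplies.
# The factors and multipliers are hypothetical, not an industry standard.

def adjusted_expected_loss(model_loss, account_factors):
    # Start from a model (or characteristic-event) benchmark and apply
    # multiplicative adjustments for account-specific conditions.
    adjustment = 1.0
    for factor, multiplier in account_factors.items():
        adjustment *= multiplier
    return model_loss * adjustment

account_factors = {
    "recent_roof_retrofit": 0.85,   # underwriter-verified mitigation
    "unreinforced_masonry": 1.20,   # construction detail missing from model data
}

benchmark = 125_000  # model- or characteristic-event-based expected loss
print(f"Adjusted expected loss: {adjusted_expected_loss(benchmark, account_factors):,.0f}")
# -> 127,500: the scientific framework anchors the number, while
#    account-specific judgment moves it in a documented, auditable way.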

Catastrophe risk is a large and growing component of property losses. Given its importance, it’s certainly appropriate to bring all credible information to bear on understanding and managing catastrophe risk. We don’t need to limit ourselves to one tool or one approach; we can use other information and additional tools better suited to different purposes and different users.

