
A wise statistician once said: “All models are wrong, but some are useful.” For the experts interviewed for this article, the key to getting useful (accurate) model results is working with models that fit specific underwriting needs, keeping those models regularly updated, and employing qualitative analysis alongside the models’ quantitative analysis.

Executive Summary

The key to getting useful model results is working with models that fit specific underwriting needs and making sure the models are regularly updated while employing qualitative analysis (in addition to the models’ quantitative analysis), say experts from KCC, Aon and Everest.

Nevertheless, there are common complaints within the insurance industry that model outputs are inaccurate. Modeling experts provide some thoughts about how models are best used—with one CEO suggesting that if model results are inaccurate, it’s time to make a change. “It’s easier to complain than change,” said Karen Clark, chief executive officer of Karen Clark & Co. (KCC).

Users of property-catastrophe models should not assume that the model is 100 percent accurate, according to Dan Dick, global head of Catastrophe Management at Aon, in an interview. “It’s just a tool and part of the risk management underwriting solution, and it should strongly inform the underwriters’ view in their decision-making.”

However, he emphasized, models should not be the only part of the decision-making process.

“Reinsurers and insurers that use models to help support their analysis, but ultimately use underwriting judgment to make decisions, have better outcomes,” according to Juan C. Andrade, president and CEO of Everest. “I think companies that solely rely on models are just not going to do as well.”

As the fourth-largest reinsurer in the world, Everest Re has “a lot of tools, a lot of technology, a lot of data and analytics, but we also have some very talented underwriters, and at the end of the day, you can’t supplant that human judgment,” he said.

Expertise, experience, knowledge of the cedent, underwriting practices and knowledge of the geography become very important, particularly with climate change trends, Andrade said in an interview with Carrier Management.

As a reinsurance and insurance company, it’s important to update models and to continuously update underwriting judgment. “It’s really about understanding the shifts that are taking place and then how to react accordingly,” he continued.

One Size Does Not Fit All

“It’s important to recognize that the models are not a one-size-fits-all solution,” Dick said.

For example, a large carrier with a large market share for a particular peril and region could see that the model works fairly well when the law of large numbers kicks in, he explained. “But when you have a much smaller market share, the model can grossly underestimate or overestimate your loss potential.”

“Models are a macro fit for the industry. Modeling firms don’t build 115 versions of their U.S. model; they build one version and expect it to fit dozens or hundreds of different perspectives.”

Dan Dick, Aon

The models are trying to get the industry number right, which means the carriers with larger market shares may regress closer to the modeled output, but clients with smaller market shares could see wildly different results, he said.
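Dick’s market-share point can be illustrated with a small simulation (a purely hypothetical sketch, not any vendor’s methodology): scale a simulated “industry” loss by a carrier’s market share, then compare that modeled figure against the loss actually landing on a randomly held book of that size. All distributions and parameters below are illustrative assumptions.

```python
import random

random.seed(42)

N = 20_000        # properties in a hypothetical industry portfolio (assumption)
HIT_RATE = 0.01   # fraction of properties damaged in a given event (assumption)

def event_losses():
    """Per-property losses for one simulated event: mostly zero, heavy-tailed when hit."""
    return [random.paretovariate(1.5) if random.random() < HIT_RATE else 0.0
            for _ in range(N)]

def avg_relative_miss(market_share, trials=100):
    """Average |actual - modeled| / modeled for a carrier holding a random
    slice of the industry, where 'modeled' is simply the industry loss
    scaled by market share -- the 'macro fit' described above."""
    total = 0.0
    for _ in range(trials):
        losses = event_losses()
        industry = sum(losses)
        book = random.sample(losses, int(N * market_share))
        modeled = industry * market_share
        total += abs(sum(book) - modeled) / modeled
    return total / trials

big_carrier = avg_relative_miss(0.30)     # large market share
small_carrier = avg_relative_miss(0.005)  # small market share
print(f"avg relative miss, 30% share:  {big_carrier:.1%}")
print(f"avg relative miss, 0.5% share: {small_carrier:.1%}")
```

Under these toy assumptions, the small book’s results swing far more widely around the scaled industry number, mirroring Dick’s observation that smaller market shares can see wildly different outcomes.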

The key for insurers is to have a view of risk, Dick said. “You need to understand how you expect the models to perform. If you’re expecting the model to pick the candle factory in Kentucky that got hit by a tornado or the Amazon warehouse in St. Louis, that’s very difficult,” he said. “If you have a large, diverse book or a large, well-spread book, the model’s going to normalize out a little more.”

On the quantitative side, the models do a great job, Dick said, noting that underwriters need to hone their qualitative analysis as well — by considering the following questions: How much do you agree with the model? How much do you adjust it to your view of risk? How do you think your portfolio’s going to perform when the event occurs?

For years, underwriters have failed to realize that the models are not necessarily tuned down to an individual carrier’s book, line of business and policy form, Dick explained. “Models are a macro fit for the industry. Modeling firms don’t build 115 versions of their U.S. model; they build one version and expect it to fit dozens or hundreds of different perspectives.”

Dick said most of the industry’s complaints about models arise after a natural disaster proves more extreme than the model predicted: the event lands much further up insurers’ loss exceedance probability curves, or the model had indicated the event should have been far less probable.
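For readers less familiar with loss exceedance probability (EP) curves, a minimal sketch of how one is read, using purely illustrative simulated losses (not any vendor’s model): sort simulated annual losses worst-first, and the rank of a loss maps directly to how often it is exceeded.

```python
import random

random.seed(1)

# Simulated annual industry losses (illustrative exponential severities),
# sorted worst-first so rank maps directly to exceedance frequency.
annual_losses = sorted((random.expovariate(1 / 100) for _ in range(10_000)),
                       reverse=True)

def exceedance_prob(loss):
    """Fraction of simulated years whose loss exceeds `loss`."""
    return sum(1 for x in annual_losses if x > loss) / len(annual_losses)

def loss_at_return_period(years):
    """Loss exceeded on average once every `years` years (1/years annual probability)."""
    rank = max(1, round(len(annual_losses) / years))
    return annual_losses[rank - 1]

print(f"100-year loss: {loss_at_return_period(100):.0f}")
print(f"exceedance prob at that loss: {exceedance_prob(loss_at_return_period(100)):.4f}")
```

An event “further up the curve” than expected is one whose observed loss sits at a much rarer return period than the model assigned it.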

Model accuracy is also based on the information that is entered, “so if a company is not getting appropriate insurance-to-value, or if they’re paying out roof claims for more than the historical claims levels, you’re going to see a disconnect,” Dick said. (See related article on reinsurers in the Florida market.)

Another consideration is the fact that loss estimates are typically a very small percentage of the total economic value exposed. If a model misses by 20 percent, it is probably because there is much more value exposed, Dick explained.

People are often surprised by the models’ range of uncertainty, which is not symmetric around the best estimate, he said. “The range of uncertainty is such that if there is any uncertainty in the mean result, it’s typically going to be a higher loss than was actually predicted.”
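The asymmetry Dick describes is a general property of right-skewed distributions. A short sketch, under the illustrative assumption of lognormal loss outcomes (not any vendor’s actual distribution), shows the mean sitting above the median, so a central single-number estimate tends to be exceeded by the average outcome.

```python
import random
import statistics

random.seed(0)

# Right-skewed loss outcomes (lognormal) -- a common illustrative assumption
# for catastrophe losses, not any vendor's actual distribution.
samples = [random.lognormvariate(0, 1.0) for _ in range(100_000)]

mean_loss = statistics.mean(samples)
median_loss = statistics.median(samples)
share_below_mean = sum(1 for x in samples if x < mean_loss) / len(samples)

print(f"mean: {mean_loss:.2f}, median: {median_loss:.2f}")
print(f"share of outcomes below the mean: {share_below_mean:.1%}")
```

Most individual outcomes fall below the mean, but the rare tail events pull the average well above a “typical” year, which is why surprises skew toward higher losses.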

Modeling for Severe Convective Storms

While modeling firms do a good job of developing hurricane and earthquake models from historical experience, accuracy is harder to achieve for some extreme events caused by severe convective storms (SCS) and flooding, Dick said.

He cited the example of the recent flooding in Vermont, which was considered a one-in-a-thousand-year event; yet Hurricane Irene, which hit the same area in 2011, also caused massive flooding. As a result, the frequency of these extreme events is somewhat subjective, Dick said.

“The solution is not to get rid of cat models. You just have to upgrade to better models. You can’t keep using the same models for 30 years, the same methodology without any advancements, and think you’re going to keep up with changes that are happening.”

Karen Clark, Karen Clark & Co.

A big problem with modeling SCS, he said, is that some of the models have not been updated in a decade. “If you’re looking at 30 years of data and you haven’t updated your model in 10 years, you’re looking at a much different population footprint, which causes model misses.”

In addition, Dick added, some vendors are using data from the most recent Intergovernmental Panel on Climate Change (IPCC) report. The IPCC data, released in 2022, actually used 30 years of data leading up to 2014, “so if you’re using that for your view on climate impact, you’re already out of date.”

When KCC was launched in 2007, Clark said, the biggest complaint she heard from reinsurers was that the legacy models for SCS, winter storm and other frequency perils (also known as secondary perils) were inaccurate. She quickly realized that her new company needed a different approach. (Clark launched AIR Worldwide in 1987; it was purchased in 2002 by Insurance Services Office, which is now part of Verisk Analytics, and AIR has since been rebranded under the Verisk name.)

While other modelers treat SCS like hurricanes based on historical data, KCC takes a different approach using physical modeling techniques and high-resolution atmospheric data. Along with the traditional stochastic catalogs providing exceedance probability (EP) curves, the KCC models provide real-time information on claims for individual insurers.

Frequency perils are a lot harder to model than hurricanes, which is why KCC developed the ability to validate models in real time.

Real-Time Model Validation

Clark said that KCC employs an advanced scientific technique called numerical weather prediction, which uses radar data and satellite data to create hail and tornado wind intensity footprints in real time.

Clients are able to access the data every day to see where their hail and wind claims are going to be and are able to continuously validate the models, commented Clark.

Every day clients can “see what our model says on their loss, and then weeks or months later, they can compare it to their actual losses.”

Clark questioned how insurers would be able to trust models for hypothetical events if they are inaccurate and don’t match losses on real events.

Update Models

Clark emphasized that the industry is slowly starting to shift away from older models to newer, more advanced models. Older models, she added, do not provide real-time calculations for SCS nor high-resolution intensity footprints. “We get continuous feedback with our real-time loss estimates, so we can make sure that our models are becoming more and more accurate.”

If companies want more accurate models, they need to embrace change, Clark said. “I have a new phrase I’ve started to use: ‘It’s easier to complain than to change.'”

Reinsurers and brokers, in particular, love to complain about models, she added. “The solution is not to get rid of cat models. You just have to upgrade to better models. You can’t keep using the same models for 30 years, the same methodology without any advancements, and think you’re going to keep up with changes that are happening.”

Reinsurers often say that they make their own adjustments, add their own factors and develop their own models, Clark said. “We could ask them, ‘How well did that work out? Why are you raising reinsurance prices by 80 to 100 percent in the past couple years?’ Obviously, they were not pricing the product correctly. It’s really time to upgrade the cat models.”

Aon’s Impact Forecasting also models events in real time, producing a relatively narrow, meaningful range of loss numbers for a natural catastrophe based on real-time weather forecasts rather than on a predefined stochastic catalog of similar events, as some model vendors do, an approach that “results in bad forecasting,” said Adam Podlaha, Aon’s head of Impact Forecasting. “If you use real-weather forecasting, as we are doing, you will produce a range which is meaningful, not too wide, and it’s more valuable than one answer.”

See related sidebar: Loss Trends for Severe Convective Storms

As proof of Impact Forecasting’s model accuracy, Dick noted that the estimate for Hurricane Ian, issued on Oct. 2, 2022, has not been adjusted since that date, which shows the power of having real-time modeling capability.

Impact Forecasting maps out every building in the United States, Dick said, so when events reach an industry threshold, the firm can quickly overlay them against its industry database. “And that’s why we’re able to release some of these industry views very, very quickly.”

“We have a system for U.S. hurricane, European windstorm and Japanese typhoon, where we do loss forecasting. So, a few days before a U.S. hurricane, we actually are able to produce a forecast every six hours,” said Podlaha.

Impact Forecasting provides clients with a well-managed range of numbers. “We can produce one number if the user wants one number, but we also can produce a range, and the range will always be better” because it provides more information, he added.