
Granular underwriting and the death of risk sharing?

Tower recently announced that it would be moving to a "risk pricing model", and we are now seeing the effect of that with premiums rising, or really rocketing, for some Wellington customers due to perceived earthquake risk: https://www.radionz.co.nz/news.... Leaving aside whether risk pricing is simply underwriting by another name, or whether an accurate assessment of earthquake risk can actually be made without detailed geotechnical and structural engineering information specific to an individual property (which I highly doubt Tower had when it set its pricing), there is a bigger issue: does the granular data now available to underwriters threaten the viability of risk pooling?

Risk pooling is the model which has been the foundation of insurance since Genoese merchants started doing it in the 14th century. The model works by spreading risks: your house may be built of wood and mine of brick; yours will survive an earthquake unharmed while mine will be badly damaged, but mine is fireproof while yours will burn. By pooling the collective risk, we offset our individual risk should either fire or earthquake occur; the bigger the pool, the more the risk is offset and spread. But the model only works while we're dealing with uncertainties; risks rather than certainties. This matters because certainties are not suitable for risk pooling: if I know that your house will burn down, when mine only might be shaken down, why should I pay more when it won't benefit me (because my risk remains low while yours increases to a certainty)? As risks become certainties, insurers deal with this either by increasing the pricing as the risk moves closer to certainty or by adding exclusions; they'll cover you for all the other risks but not, for instance, earthquake damage. And the uncertainty is what allows insurers to profit: they live off the margin between the predicted risk and the eventuated risk. When that margin is out of balance, insurers make losses or go out of business; think AMI after the Christchurch earthquake sequence.
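To make the pooling arithmetic concrete, here is a minimal sketch in Python. All the figures (loss probabilities, rebuild cost, margin) are invented for illustration, not drawn from any insurer's book; the point is only that a pooled premium is cheap while both losses are uncertain, and stops making sense for the low-risk member once one loss approaches certainty.

```python
def expected_loss(probability: float, loss: float) -> float:
    """Expected annual loss for a single peril."""
    return probability * loss

# Hypothetical figures: the wooden house faces fire, the brick house
# faces earthquake, and neither owner knows which peril will strike.
wood_fire = expected_loss(probability=0.01, loss=400_000)    # $4,000
brick_quake = expected_loss(probability=0.01, loss=400_000)  # $4,000

MARGIN = 1.2  # insurer's loading between predicted and eventuated risk
pool_premium = MARGIN * (wood_fire + brick_quake) / 2
print(f"Pooled premium per house: ${pool_premium:,.0f}")  # -> $4,800

# If the quake risk hardens into a near-certainty, the same pool asks the
# low-risk member to subsidise a known loss rather than share an uncertainty.
brick_quake_certain = expected_loss(probability=0.95, loss=400_000)
skewed_premium = MARGIN * (wood_fire + brick_quake_certain) / 2
print(f"Premium once one risk nears certainty: ${skewed_premium:,.0f}")  # -> $230,400
```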

Now, as insurers' abilities to obtain and apply data to their particular risk pools become better and better, we run the risk that uncertainty is minimised and the issue becomes: if the data allows a predictive risk assessment which says with a high degree of certainty that your house will burn down, why should the insurer (and the rest of the risk pool) pay to offset your known higher risk? And if your policy won't pay for the main risk to your house, fire, why would you bother paying premiums? Underwriting has historically been backwards looking: we know that X houses burn down every year on average, so we can spread that risk across the entire pool. The tools which allow this have been statistical and blunt; they are only predictive in a broad sense. Modern tools, however, let us look forward with far more certainty. This already means that, for instance, low-lying parts of Florida cannot obtain insurance against hurricane damage, and I think it is only a matter of time before we see similar issues in parts of New Zealand.

This leaves premium payers with a choice: pay premiums which won't cover the main threat to your house or business, or self-insure. There are also questions about whether insurers, which after all are private companies, should be more open with the data which drives their pricing. Wider society has a choice too: does the state step in, as it has with earthquake/landslip/etc. under the EQC legislation, or do we let insurability become a tool to prevent people from building or buying in risky places? Quite where this will leave those of us who live in places with climate, earthquake or volcanic risk (everywhere except Hamilton, if you believe the press) is all up in the air.
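To see how granular prediction can unravel a pool, here is a rough Python sketch with made-up probabilities (nothing here reflects real actuarial data): once each owner can compare their own risk-priced premium with the flat pooled rate, the low-risk members have an incentive to leave, the flat rate rises, and the pool collapses to the highest-risk properties; the classic adverse-selection spiral.

```python
# Hypothetical per-house annual loss probabilities and a common rebuild cost.
house_risks = [0.002, 0.003, 0.004, 0.05, 0.20]
LOSS = 400_000
MARGIN = 1.2  # insurer's loading over expected loss

def flat_premium(risks: list[float]) -> float:
    """Old-style pooled premium: everyone pays the pool average."""
    return MARGIN * LOSS * sum(risks) / len(risks)

def granular_premium(risk: float) -> float:
    """Risk-priced premium for an individual property."""
    return MARGIN * LOSS * risk

pool = list(house_risks)
while pool:
    flat = flat_premium(pool)
    # Anyone whose individually priced premium beats the flat rate
    # has an incentive to leave the pool.
    leavers = [r for r in pool if granular_premium(r) < flat]
    print(f"Pool of {len(pool)}: flat premium ${flat:,.0f}, {len(leavers)} would leave")
    if not leavers:
        break
    pool = [r for r in pool if granular_premium(r) >= flat]
```

Run it and the pool of five shrinks to the single riskiest house in one round, at which point the "pooled" premium is just that house's full risk price and there is nothing left to share.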