The McKinsey report details how underwriting excellence and pricing sophistication, fueled by next-generation data and analytics, can help leading insurers reduce loss ratios by three to five points and increase new premiums by 10 to 15 points.
This is just the latest evidence of the impact of advanced data and analytics in the P&C industry.
The continued digitization of core processes, the increasing migration to the cloud, and the explosion of insurtech companies have set the foundation for a new era in data and analytics. The accessibility and quality of data have grown dramatically, even as data has become far more cost-effective.
And now the market is seeing a clear separation between companies relying on legacy data and systems and those leveraging next-generation data resources.
To illustrate, just consider that more than 30% of legacy systems do not have data on the location of a property’s nearest fire station or fire hydrant. Yet, that data is readily available and has a significant impact on the estimated extent of potential fire damage to a property.
And when your data is sparse, outdated or inaccurate, you’re taking unnecessary risks.
There are thousands of data points available to increase an insurer's understanding of property risks. To give a small sampling of the unique data points available today that may not have been readily available a few years ago, consider that insurers now have immediate access to:

- a property's distance to the exact locations of fire stations and fire hydrants;
- data on which properties contain underground storage tanks;
- the distance to PFAS sites, superfund sites, or toxic release facilities;
- a specific property's hail risk score; and
- a specific property's lightning risk.
Further, consider that most property risk models rely heavily on evaluations based on a property’s ZIP code or census block, a practice that dates back to the 1980s.
Yet, a house in one area of a ZIP code may have a completely different wildfire risk, flood risk, or crime risk than a property in another area in the same ZIP code. Modern geospatial technologies enable much smaller, more precise, and more accurate evaluation zones – right down to the property level.
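To make the property-level idea concrete, here is a minimal sketch of the kind of geospatial calculation involved: computing a specific property's distance to its nearest fire station from latitude/longitude coordinates using the haversine formula. The function names and coordinates are illustrative assumptions, not any vendor's actual API.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 = mean Earth radius in miles

def nearest_station_miles(property_latlon, station_latlons):
    """Distance from one property to the closest fire station in a list."""
    lat, lon = property_latlon
    return min(haversine_miles(lat, lon, s_lat, s_lon)
               for s_lat, s_lon in station_latlons)
```

Two houses in the same ZIP code can produce very different results from a calculation like this, which is precisely why property-level evaluation beats ZIP-level averages.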
In sum: today, most insurers examine only a few data points based on a property's ZIP code. Yet the technology and data exist to look at 1,000+ property risk data points for every individual residential and commercial property in the U.S.
By leveraging next-generation data and analytics, insurers can gain deeper insight into risks and better inform decisions across the insurance lifecycle, from risk selection through pricing.
By integrating the right mix of internal and external data, insurers can better screen applicants during the risk selection process. Insurers can classify applicants using risk models based on their underwriting principles to decide whether to cover or renew a client and what premium to offer.
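A toy sketch of that classification step: blend several per-peril scores into one composite, then map the composite to an accept/decline decision and a risk-adjusted premium. The feature names, weights, and thresholds here are entirely hypothetical, standing in for a carrier's own underwriting principles.

```python
# Hypothetical per-peril weights on 0-100 scores; weights sum to 1.0.
RISK_WEIGHTS = {
    "wildfire_score": 0.40,
    "flood_score":    0.35,
    "hail_score":     0.25,
}

def composite_risk(features):
    """Weighted blend of per-peril scores into a single 0-100 risk score."""
    return sum(RISK_WEIGHTS[k] * features[k] for k in RISK_WEIGHTS)

def underwriting_decision(features, base_premium):
    """Map composite risk to a decision and a risk-adjusted premium (illustrative thresholds)."""
    score = composite_risk(features)
    if score >= 80:
        return ("decline", None)            # outside appetite
    if score >= 50:
        surcharge = (score - 50) / 100      # 1% extra premium per risk point over 50
        return ("accept", round(base_premium * (1 + surcharge), 2))
    return ("accept", base_premium)         # standard rate
```

A production model would of course use far more than three inputs; the point is that once property-level scores exist, the decision logic itself can be simple and auditable.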
With next-generation data and analytics, insurers can prefill data for prospective customers quickly and inexpensively, minimizing the questions a potential customer must answer and dramatically speeding the screening and sales process.
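The prefill mechanics can be sketched as follows: given an application's question list, pull whatever an external property-data source can supply and ask the applicant only for the rest. The question names and the `external_lookup` callable are assumptions for illustration, not a real vendor interface.

```python
# Hypothetical application questions an insurer would otherwise ask directly.
QUESTIONS = ["year_built", "roof_type", "square_feet", "occupancy"]

def prefill_application(address, external_lookup):
    """Split the question list into prefilled answers and questions still
    needed from the applicant, using a third-party lookup keyed by address."""
    record = external_lookup(address)  # e.g. a property-data vendor API
    prefilled = {q: record[q] for q in QUESTIONS if q in record}
    remaining = [q for q in QUESTIONS if q not in prefilled]
    return prefilled, remaining
```

Every question moved from `remaining` into `prefilled` is one less friction point in the quote flow, which is where the speed gains described above come from.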
And with quick access to and integration of data, insurers can more effectively make their case to regulators on pricing – and can more accurately and appropriately price policies to reflect the actual inherent risk.
There is tremendous value in closely integrating next-generation property risk data into the underwriting and pricing process. The market is starting to take advantage of these new sources of data and new risk models – and the evidence of positive impact is clear.
Source: Digital Insurance