Insurers Must Address Bias in AI
Recently, a group of U.S. senators wrote a letter urging the Equal Employment Opportunity Commission to address employers' use of Artificial Intelligence (AI), machine learning, and other hiring technologies that might result in discrimination.

President Biden’s new deputy science policy chief, Alondra Nelson, is an academic whose work stands at the intersection of technology, science, and social inequality. In her acceptance speech she shared, “There has never been a more important moment to get scientific development right or to situate that development in our values of equality, accountability, justice, and trustworthiness.”

For years now, data scientists, programmers, and researchers have been bringing attention to all the ways Artificial Intelligence (AI), big data, and machine-learning models are reinforcing inequality and discrimination in advertising, hiring, insurance pricing, and lending.

We can expect regulators to increasingly do the same under the Biden Administration.

There are several notable examples of AI and big data demonstrating bias. Apple Card's lending algorithms reportedly extended more credit to husbands than to their wives. Facebook's advertising algorithms skewed the delivery of employment and housing advertisements to audiences based on race and gender, according to a 2019 paper. Amazon discovered that its hiring algorithms favored men over women.

However, there is also an opportunity for big data, machine learning, and other AI techniques to help combat inequality and bias.

Leveraging smartphones, telematics, and sensors, insurers have an opportunity to generate data and insights that quantify risk in more accurate and personalized ways. Data of this type lets insurers increasingly price applicants as individuals, based on their own behavior. By contrast, reliance on demographic proxies like income, education level, occupation, zip code, and credit score can be perceived as unfairly penalizing immigrants, minorities, and disadvantaged groups.

State Farm’s Drive Safe & Save program uses a telematics device to collect real-time data on driving habits; drivers who drive less and more safely earn a discount.

Root Insurance has gone a step further. In an effort to remove bias and discrimination from insurance pricing, the Ohio-based insurtech plans to eliminate the use of credit scores as a factor in its auto insurance pricing model by 2025. According to Root, its pricing model assigns more weight to policyholders’ driving behaviors and less weight to demographic factors like gender, zip code, and age than other insurers’ models do. Root uses an app to measure safe-driving behaviors such as braking, speed of turns, driving times, and route consistency.
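
To make the idea concrete, here is a minimal sketch of a behavior-weighted risk score in the spirit of the telematics approaches described above. The feature names, weights, and score-to-discount mapping are illustrative assumptions, not State Farm's or Root's actual models.

```python
from dataclasses import dataclass

@dataclass
class DrivingSummary:
    """Aggregated telematics features for one policyholder (illustrative)."""
    hard_brakes_per_100mi: float   # frequency of hard-braking events
    sharp_turns_per_100mi: float   # frequency of high-lateral-acceleration turns
    night_miles_share: float       # fraction of miles driven late at night (0-1)
    route_consistency: float       # 0 (erratic routes) to 1 (highly consistent)

# Hypothetical weights: every risk signal comes from observed driving behavior,
# none from demographic proxies such as zip code, gender, or credit score.
WEIGHTS = {
    "hard_brakes_per_100mi": 0.05,
    "sharp_turns_per_100mi": 0.03,
    "night_miles_share": 0.40,
    "route_consistency": -0.30,   # consistent routes reduce the risk score
}

def behavior_risk_score(d: DrivingSummary) -> float:
    """Return a relative risk score; lower means safer driving."""
    return (
        WEIGHTS["hard_brakes_per_100mi"] * d.hard_brakes_per_100mi
        + WEIGHTS["sharp_turns_per_100mi"] * d.sharp_turns_per_100mi
        + WEIGHTS["night_miles_share"] * d.night_miles_share
        + WEIGHTS["route_consistency"] * d.route_consistency
    )

def discount_pct(score: float, baseline: float = 0.5) -> float:
    """Map a below-baseline risk score to a premium discount, capped at 30%."""
    return round(max(0.0, min(0.30, (baseline - score) / baseline * 0.30)) * 100, 1)

if __name__ == "__main__":
    driver = DrivingSummary(hard_brakes_per_100mi=2, sharp_turns_per_100mi=3,
                            night_miles_share=0.05, route_consistency=0.9)
    score = behavior_risk_score(driver)
    print(f"risk score: {score:.2f}, discount: {discount_pct(score)}%")
```

The key design choice is that the score is a function of driving data alone, so a policyholder's premium moves only when their behavior changes.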

AI, big data, and new digital technologies can also help remove or minimize bias and discrimination in the claims assessment process. Through implicit bias, people can act with prejudice without intending to: unconscious attitudes, reactions, or stereotypes toward a social group can affect a claims interaction and appraisal.

In January, property and casualty insurer The Hartford and tech startup Tractable announced a partnership to introduce an AI-powered claims process. Tractable’s computer vision assesses car-damage photos much as a human appraiser would, which should allow The Hartford to shorten claims processing from days to mere minutes.

Since 2019, USAA has been working with Google Cloud to move toward an end-to-end touchless claims process. The two organizations have developed machine-learning models that analyze digital images to estimate damage across a range of vehicles with a high degree of accuracy.
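
The details of The Hartford’s, Tractable’s, and USAA’s systems are not described here, but the general pattern, a vision model that classifies damage photos and feeds a repair estimate, can be sketched as follows. The severity classes, the cost table, and the idea of reusing an ImageNet-pretrained backbone are illustrative assumptions, not the insurers' actual pipelines.

```python
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

# Hypothetical severity classes; a production system would be far more granular
# (per-part damage, repair-vs-replace decisions, parts and labor pricing, etc.).
CLASSES = ["minor", "moderate", "severe"]
COST_ESTIMATE = {"minor": 800, "moderate": 3500, "severe": 9000}  # illustrative USD

def build_model(num_classes: int = len(CLASSES)) -> nn.Module:
    """Start from an ImageNet-pretrained backbone and swap in a damage head.
    In practice the head would be fine-tuned on labeled claim photos."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.eval()
    return model

PREPROCESS = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def estimate_damage(model: nn.Module, photo_path: str) -> tuple[str, int]:
    """Classify a single claim photo and look up a rough repair estimate."""
    image = Image.open(photo_path).convert("RGB")
    batch = PREPROCESS(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    severity = CLASSES[int(logits.argmax(dim=1))]
    return severity, COST_ESTIMATE[severity]

# Usage (hypothetical photo path):
# model = build_model()
# severity, estimate = estimate_damage(model, "claim_photo.jpg")
# print(severity, estimate)
```

Because the model scores only the photo, the appraisal does not depend on who filed the claim, which is what makes the approach attractive for reducing implicit bias.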

AI can also be used to detect racial and gender bias. According to French company Maathics, its algorithms can detect, quantify, and correct sources of discrimination in other algorithms. Similarly, IBM’s AI Fairness 360 provides data scientists with a set of metrics, models, and algorithms to detect and remove bias in machine-learning models.
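
As an illustration of the kind of checks toolkits like AI Fairness 360 automate, here is a minimal, from-scratch computation of two standard group-fairness metrics, statistical parity difference and disparate impact, on a model's decisions. The column names and toy data are assumptions for the sketch, not output from any particular toolkit.

```python
import pandas as pd

# Toy decisions table: one row per applicant, with the model's approve/deny
# decision and a protected attribute (column names are illustrative).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

def group_fairness(df: pd.DataFrame, protected: str, outcome: str,
                   privileged: str, unprivileged: str) -> dict:
    """Compare favorable-outcome rates between two groups.

    statistical parity difference = P(favorable | unprivileged) - P(favorable | privileged)
    disparate impact              = P(favorable | unprivileged) / P(favorable | privileged)
    A parity difference near 0 and a disparate-impact ratio near 1 suggest the
    decision rate does not depend on the protected attribute.
    """
    rate_priv = df.loc[df[protected] == privileged, outcome].mean()
    rate_unpriv = df.loc[df[protected] == unprivileged, outcome].mean()
    return {
        "statistical_parity_difference": rate_unpriv - rate_priv,
        "disparate_impact": rate_unpriv / rate_priv,
    }

print(group_fairness(decisions, protected="group", outcome="approved",
                     privileged="A", unprivileged="B"))
# {'statistical_parity_difference': -0.5, 'disparate_impact': 0.333...}
```

In this toy data, group B is approved far less often than group A, which is exactly the kind of gap such metrics are designed to surface before a model reaches production.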

AI has the potential to dramatically transform insurance by streamlining and bringing more precision to underwriting, claims processing, fraud detection, and marketing. With the explosion of computing power and the increasing availability of big data, AI systems have become viable as intelligent agents that interpret data, learn, and use that learning to make decisions at a scale impossible for humans.

Alongside its potential, insurers must also recognize the pitfalls of AI and big data and take active measures to ensure AI algorithms are fair. This includes building a deeper understanding of datasets and how they can introduce bias into algorithms; assembling a diverse machine-learning team; establishing processes to mitigate bias; and staying abreast of the latest developments in this area.

Read the full article: Digital Insurance
