An Eye on AI: How the Human Element Plays a Role in Today’s Tech
Artificial intelligence is becoming an increasingly important tool for commercial insurers, but ethical concerns remain.

Artificial intelligence has become integral to day-to-day operations across most industries. In great part, AI can be credited with condensing vast amounts of data into something more usable.

But as companies come under greater public scrutiny for how algorithms are influencing corporate behavior, the question of how to ethically apply artificial intelligence is top of mind for commercial insurance leaders.

Ethical use of technology is “not a problem that’s exclusive to AI,” said Anthony Habayeb, founding CEO of AI governance company Monitaur.

“Corporations have their corporate governance and need to have their opinion of what sort of ethics and practices they bring into the market as a company. And those principles should be implemented in their AI, software and practices overall,” he explained.

What Determines the Ethical Use of AI?

The volume, selection and ownership of available data is causing the wider business community to reflect on the decision-making capacity of algorithms. “These AI systems are making decisions of great consequence,” Habayeb said.

“As a result, they feel like more addressable and governable ‘decision makers’ than just nebulous software. And that’s causing our use of the word ‘ethical,’ because we’re humanizing AI in a way we have not approached with other software.”

But concerns around AI’s autonomy need to be measured against the actual amount of data available in the insurance space, according to Tom Warden, chief science and insurance officer for CLARA Analytics.

Unlike the massive pipeline of data that companies like Google have access to, the insurance industry is still dealing with static data, to some degree, Warden said. “We don’t have a continuous stream of information on a claim or even a policy, so there’s just not enough data to really rely on the models themselves.”

Even as the data capacity of the insurance industry continues to expand, and companies rely more heavily upon models to automate decisions, “it’s important to remember that AI doesn’t really reason; algorithms have no ethics; they are just simply algorithms,” Warden said.

“You can’t really program a model to be ethical, but you can make sure that as it’s applied to the decisions that it’s being asked to influence, that the outcomes maintain the ethical standards of the various constituents,” Warden said.
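Warden’s distinction points to a practical pattern: the ethics check lives around the model, not inside it. Below is a minimal sketch of what outcome-level checking might look like, assuming a hypothetical table of model decisions; the function, the column names and the 5% tolerance are illustrative, not details from the article:

```python
import pandas as pd

def outcome_disparity(decisions: pd.DataFrame,
                      group_col: str,
                      outcome_col: str,
                      tolerance: float = 0.05) -> pd.Series:
    """Compare favorable-outcome rates across groups of insureds.

    Flags any group whose rate deviates from the overall rate by
    more than `tolerance` (an illustrative threshold, not a standard).
    """
    overall_rate = decisions[outcome_col].mean()
    group_rates = decisions.groupby(group_col)[outcome_col].mean()
    return group_rates[(group_rates - overall_rate).abs() > tolerance]

# Hypothetical claim-triage decisions: 1 = fast-tracked, 0 = sent to manual review
claims = pd.DataFrame({
    "region":     ["urban", "urban", "rural", "rural", "urban", "rural"],
    "fast_track": [1, 1, 0, 0, 1, 0],
})
print(outcome_disparity(claims, group_col="region", outcome_col="fast_track"))
```

A check like this says nothing about the model’s internals; it simply verifies that the decisions being influenced stay within the standards the organization has set, which is exactly where Warden locates the ethical responsibility.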

Questions of ethical AI and digital privacy may feel weighty and philosophical, but Habayeb encouraged companies developing these new technologies to consider what their products are trying to accomplish and allow that to guide their practices.

“What is the brand promise that you’re making to the market, to the companies and the small business owners that you’re providing commercial coverage to?” Habayeb asked. “And then how do you effect those brand promises in your AI system?”

Quality Control

As more carriers, especially those with legacy systems, team up with Insurtechs to sharpen the story their data tells with machine learning and AI, regulators will have a larger role to play in encouraging accountability.

Carriers will have to think through their entire data ecosystem to achieve comprehensive AI governance, including the Insurtech vendors with whom they partner.

“Whether or not a carrier is the ultimate developer of some algorithm or AI system, they will be accountable for the results of that system. It’s important carriers develop strategies and processes to achieve assurances that extend into their vendor ecosystem as well,” Habayeb added.

To support state regulators in the formation of their frameworks for AI, in 2020 the National Association of Insurance Commissioners (NAIC) released guiding principles on artificial intelligence “emphasizing the importance of accountability, compliance, transparency and safe, secure and robust outputs.”

Since then, calls have been made to establish national standards in the U.S. for “acceptable design and behavior of AI-enabled systems,” as well as a certification system to verify that AI-enabled systems are developed in accordance with said standards, according to a 2021 publication of the Journal of Insurance Regulation.

And at the close of 2021, the NAIC voted to form a new high-priority H Committee on innovation, cybersecurity and technology to address the insurance implications of emerging technologies such as AI, as well as cybersecurity. It is only the eighth committee of this level in the history of the NAIC.

External validation is not just for the sake of regulatory compliance. For Warden, AI and data science teams must be willing to open their work to internal audit teams as well as external verifiers: “I’ve gone through that process in the past and it’s painful. But it’s necessary because you can uncover things that [are] slipping through the cracks.”

Take the Time to Be Thorough

Carriers and Insurtechs need not wait for AI regulations to dictate their accountability practices. For both Habayeb and Warden, continuously monitoring the output of algorithms should be standard protocol.

“There’s such a rush to develop and deploy in this space that many companies potentially are not paying as much attention to the ongoing monitoring of the model output that they need to,” Warden said. “Companies need well-documented processes for developing, testing and deploying AI.”
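The ongoing output monitoring Warden describes is often implemented as a distribution-drift check. Below is a minimal sketch using the Population Stability Index, a metric commonly used for score monitoring in insurance and credit; the thresholds and the simulated score data are illustrative assumptions, not details from the article:

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline score distribution (e.g., at deployment)
    and the model's current output.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparse bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, size=10_000)   # hypothetical scores at launch
current = rng.beta(2.5, 5, size=10_000)  # hypothetical scores this month

psi = population_stability_index(baseline, current)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift, escalate for review")
else:
    print(f"PSI={psi:.3f}: within tolerance")
```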

Documentation of how models are developed, along with the checks and balances implemented throughout development, is essential to making sure they’re being built properly. “And that goes beyond just the ethical use of data,” Warden added.
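One lightweight way to capture that documentation is a machine-readable model card kept alongside the model itself. The sketch below is a generic illustration; every field and value is hypothetical rather than a format prescribed by the article or any regulator:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelCard:
    """Minimal record of how a model was built and checked.
    Fields are illustrative, not an industry standard."""
    name: str
    version: str
    owner: str
    intended_use: str
    training_data: str
    approved_by: str
    approval_date: date
    validation_checks: list[str] = field(default_factory=list)

card = ModelCard(
    name="claim-severity-v2",
    version="2.3.1",
    owner="data-science@example-carrier.com",
    intended_use="Rank open claims for adjuster attention",
    training_data="Closed claims 2015-2021, de-identified",
    approved_by="Model Risk Committee",
    approval_date=date(2022, 1, 15),
    validation_checks=[
        "holdout performance reviewed",
        "outcome rates compared across claimant segments",
        "output drift monitor scheduled monthly",
    ],
)
```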

Checks and balances provide a sense of assurance for insurers and insured alike. “As you think about the impact on people and insured parties, these models are going to have great impact. Those stakeholders would like to know that, when a model has made a decision about them, someone has checked it, someone’s tested it,” Habayeb said.

No Need to Overcomplicate Things

There is no need to reinvent the wheel when it comes to AI risk management; carriers already have corporate governance and risk principles in place.

“We should not overly complicate what AI is [so] that it feels unapproachable to humans,” Habayeb said.

“Instead, a connecting thread can be maintained, recognizing humans built this, humans should be testing this, and humans should be able to establish confidence in the applications’ safety, transparency, trustworthiness,” Habayeb added.

Giving too much credit to AI alone is risky. “Then we’re disconnecting the accountability from the people who built it,” Habayeb said. And people make mistakes in the products that they build all the time.

“I think that we have a tolerance for mistakes,” Habayeb said. “We don’t have a tolerance for negligence.”

Source: Risk & Insurance
