The Looming AI Risks Insurtechs Must Prepare For

Artificial intelligence and machine learning are driving what McKinsey calls a “seismic, tech-driven shift” in the insurance industry. But even as insurtech companies find new ways to enable intelligent customer experiences, they must also become rigorous in applying what Gartner research calls “AI TRiSM” to achieve business success while avoiding AI’s pitfalls.

By Manasi Vartak, Founder and CEO of Verta

Gartner defines AI trust, risk and security management (AI TRiSM) as a “framework that supports AI model governance, trustworthiness, fairness, reliability, robustness, efficacy and privacy.” Supporting TRiSM requires a blend of technology and processes that enable AI explainability, ML governance and data protection.

The focus on TRiSM stems from AI’s very success in industries like insurance. The more companies turn to AI to drive decisions that impact consumers, the more opportunities arise for bad outcomes:

  • Consumers are concerned about the use or misuse of their personally identifiable information (PII) to make decisions that impact their financial well-being. These concerns affect consumers’ trust in companies employing AI to make these decisions.
  • Regulators are scrutinizing enterprises’ handling of PII and compliance with laws around protected classes. Proposed AI regulations like the ADPPA and the AI Bill of Rights create new risks for organizations that rely on AI/ML in revenue-driving applications.
  • IT experts worry that hackers could access not only consumers’ data but also algorithms that make decisions on underwriting, pricing and claims. These security threats, in turn, make it more difficult to foster consumer trust and mitigate regulatory risks.

Capabilities to Support TRiSM

From a technology perspective, the capabilities that support AI TRiSM should be embedded in the operational AI platform an enterprise uses to productionize, manage and monitor its machine learning models. This ensures that best practices around trust, risk and security are baked into the tools and processes used by the data scientists, ML engineers, DevOps staff and others involved in the ML lifecycle.

The capabilities that support AI trust, risk and security management include:

Trust Management

A key component of ensuring trust in AI is explainability — the ability to understand how a model arrived at an outcome. A central enterprise model management system provides a “single source of truth” where data scientists can publish all metadata, documentation, and artifacts related to a model, making explainability data easily accessible.

Companies should also set up explainability and bias checks as part of the release process for a model to ensure compliance with ethical AI standards. Finally, trust requires visibility into the data supply chain, and enterprises should ensure that they can trace model lineage back to the training data used in experimentation.
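
As an illustrative sketch of what a release-time bias check might look like, the function below compares approval rates across a protected attribute. The field names, toy data and the 0.8 "four-fifths" threshold are assumptions for this example, not part of any particular platform's API:

```python
# Illustrative release-gate bias check: demographic parity of model approvals.
# Group labels and the 0.8 ratio floor (a common "four-fifths" heuristic)
# are assumptions for this sketch.

def approval_rates(decisions, groups):
    """Approval rate per group, given parallel lists of 0/1 decisions and group labels."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def passes_parity_check(decisions, groups, ratio_floor=0.8):
    """True if the lowest group's approval rate is at least ratio_floor
    of the highest group's rate."""
    rates = approval_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()) >= ratio_floor

decisions = [1, 1, 0, 1, 1, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(passes_parity_check(decisions, groups))  # group a: 0.75, group b: 0.50 -> fails
```

A check like this would run alongside explainability reporting in the release pipeline, blocking promotion to production when the gate fails.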


Risk Management

Enterprises manage AI risks by applying rigorous governance to models throughout the ML lifecycle, from development and staging to production and archival. Governance and risk teams should be able to monitor model inputs, outputs and performance, administering rules that guard against bad outcomes.

An operational AI platform should enable model validation to ensure that the organization’s models are performing as designed, identify edge cases that require manual processing or further review, and monitor the overall health of models through metrics such as response time, latency, error rate and throughput.
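
One way to identify edge cases for manual processing is confidence-band routing, sketched below. The score bands are illustrative assumptions; a real platform would make them configurable per model:

```python
# Illustrative routing of ambiguous predictions to manual review.
# The 0.4 / 0.6 confidence bands are assumptions for this sketch.

def route_prediction(score, low=0.4, high=0.6):
    """Map a model score in [0, 1] to an action: confident scores flow
    through automatically; the ambiguous middle band is flagged as an
    edge case for human review."""
    if score >= high:
        return "auto_approve"
    if score <= low:
        return "auto_decline"
    return "manual_review"

for s in (0.92, 0.55, 0.12):
    print(s, route_prediction(s))
```

Routing decisions like these can themselves be logged as model metadata, feeding the audit and monitoring capabilities described below.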

Enterprises should also expect their model management system to automatically monitor data quality and drift, along with model performance metrics like accuracy, precision, recall and F1 score. Alerting that triggers on performance degradation or drift ensures that corrective action can be taken before bad outcomes occur.
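
As a minimal sketch of drift alerting, the example below computes the Population Stability Index (PSI) between a training-time score distribution and live traffic. The bucket proportions and the 0.2 alert threshold are common rules of thumb, assumed here for illustration:

```python
import math

# Illustrative drift alert using the Population Stability Index (PSI).
# The 0.2 threshold is a widely used rule of thumb, not a mandated value.

def psi(expected, actual, eps=1e-6):
    """PSI between two distributions given as lists of bucket proportions."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

def drift_alert(expected, actual, threshold=0.2):
    """Trigger an alert when PSI exceeds the threshold."""
    return psi(expected, actual) > threshold

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time score distribution
today = [0.10, 0.15, 0.25, 0.50]     # live traffic, shifted toward high scores
print(round(psi(baseline, today), 3), drift_alert(baseline, today))  # 0.362 True
```

In production, an alert like this would page the ML team or open a ticket, prompting investigation before degraded predictions reach customers.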

Security Management

Data protection and IT security across the ML process are essential to AI TRiSM. Security around data begins with granular access controls across the entire ML lifecycle, giving users the right access to the right information while preventing inadvertent or intentional breaches. Ideally, your operational AI platform should integrate easily with existing security policies and identity management systems rather than creating yet another system for IT to manage.
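
A granular access-control check can be as simple as the role-to-permission mapping below. The roles, asset types and permission strings are illustrative assumptions; in practice these would come from the organization's existing identity provider:

```python
# Minimal sketch of role-based access checks across ML assets.
# Roles and "asset:verb" permission strings are assumptions for this sketch.

PERMISSIONS = {
    "data_scientist": {"model:read", "model:write", "dataset:read"},
    "ml_engineer": {"model:read", "model:deploy", "dataset:read"},
    "auditor": {"model:read", "audit_log:read"},
}

def can(role, action):
    """True if the role is granted the given 'asset:verb' permission."""
    return action in PERMISSIONS.get(role, set())

print(can("auditor", "model:deploy"))     # -> False
print(can("ml_engineer", "model:deploy")) # -> True
```

Keeping the mapping centralized, rather than scattering checks across tools, is what makes the access controls auditable across the whole lifecycle.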

An effective operational AI platform should also allow companies to create separate workspaces that isolate environments, whether between teams or between development and production models. It should support standardized, safe release practices with release checklists and CI/CD automation, scan models for vulnerabilities as part of deployment, and maintain audit logs for compliance.
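
A release checklist can be enforced mechanically as a deployment gate, as in the sketch below. The specific required artifacts are assumptions for this example, not a standard:

```python
# Illustrative pre-deployment checklist gate. The required artifacts
# listed here are assumptions for this sketch, not a prescribed standard.

REQUIRED_ARTIFACTS = {
    "model_binary",
    "training_data_lineage",
    "bias_report",
    "vulnerability_scan",
    "approval_sign_off",
}

def release_gate(artifacts):
    """Return the set of missing artifacts; an empty set means clear to deploy."""
    return REQUIRED_ARTIFACTS - set(artifacts)

missing = release_gate({"model_binary", "bias_report"})
print(sorted(missing))  # the CI/CD pipeline would fail the build if non-empty
```

Wired into CI/CD, a gate like this turns the checklist from a document into an enforced control, with each gate result appended to the audit log.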

The Time to Start Is Now

The good news is that companies addressing the issues around AI trust, risk and security can expect significantly better business outcomes from their AI projects, according to Gartner. On the other hand, the same research finds that organizations that don't take steps to manage these risks are far more likely to see underperforming models, security failures, financial losses and reputational damage.

As the insurtech industry continues to expand and innovate in AI applications to better serve a broadening customer base, the time to deploy capabilities that support AI TRiSM in your ML operations is now — before bad outcomes occur that erode consumer trust or attract regulatory scrutiny.

About Verta

Verta enables enterprises to achieve the high-velocity data science and real-time machine learning required for the next generation of AI-enabled intelligent systems and devices. Based in Palo Alto, Verta is backed by Intel Capital and General Catalyst. For more information, visit the Verta website or follow @VertaAI.

About the Author

Manasi Vartak is the founder and CEO of Verta, a Palo Alto-based startup building tools for AI & ML model management and operations. Manasi is the creator of ModelDB, the first open-source model management system deployed at Fortune-500 companies. She previously worked on deep learning at Twitter and Google. She earned her MS/PhD in Computer Science from MIT.
