AI Regulation Divide Poses Growing Risks for Insurers, say Leaders
The UK and US's collective decision not to sign a global AI declaration underscores a growing divide over international regulation and raises concerns about the emerging risks of unregulated AI, say insurance business leaders. Insurtech Insights gets the lowdown from Mark Kirby, Professional Services Director at Intersys Ltd, and Risto Rossar, Founder and CEO of Insly.

As AI innovation accelerates, the lack of coordinated global safeguards is increasing uncertainty for businesses and the insurance sector.

With AI adoption expanding across industries, the absence of clear regulatory frameworks presents significant liability, compliance, and cybersecurity challenges. The insurance industry, which plays a critical role in risk mitigation, faces mounting pressure to assess AI-related exposures, including algorithmic bias, data security breaches, and operational failures.

What is the Global AI Declaration?

The Global AI Declaration is an international agreement aimed at promoting the ethical, inclusive, and sustainable development of artificial intelligence. Introduced at the AI Action Summit in Paris on February 11, 2025, the declaration has been endorsed by over 60 countries, including India and China. It emphasises the importance of ensuring that AI technologies are developed and deployed in ways that are open, ethical, secure, and sustainable.

Both the United States and the United Kingdom declined to sign the declaration, citing concerns over national security and a perceived lack of clarity regarding global AI governance. U.S. Vice President JD Vance also criticised Europe’s regulatory approach to technology and expressed apprehension about collaborating with China in this domain.

The refusal of the U.S. and the U.K. to endorse the declaration has drawn criticism from various quarters, including campaign groups and AI research organisations. Critics argue that this move could undermine the credibility of these nations as leaders in ethical AI innovation.

However, many believe the Global AI Declaration represents a significant step toward establishing international norms and standards for AI development. The differing stances of major global players highlight the ongoing challenges in achieving a unified approach to AI governance.

The Declaration aims to foster:

Ethical Development: Commitment to developing AI systems that adhere to ethical standards, ensuring respect for human rights and societal values.

Transparency: Advocacy for openness in AI algorithms and decision-making processes to foster trust and accountability.

Inclusivity: Ensuring that AI benefits are distributed equitably across different sectors of society, preventing biases and discrimination.

Sustainability: Promoting the use of AI in ways that support sustainable development goals and environmental stewardship.

International Collaboration: Encouraging cooperation among nations to establish common frameworks and standards for AI governance.

Mark Kirby, Professional Services Director at Intersys, warns that the current regulatory standoff could exacerbate risk volatility, making it more difficult for insurers to develop comprehensive AI-related policies. Without a global consensus, businesses may struggle with inconsistent regulations across jurisdictions, further complicating risk management strategies.

As the debate over AI governance continues, industry leaders and policymakers must navigate the balance between innovation and risk mitigation, ensuring that businesses and insurers are equipped to adapt to an evolving regulatory landscape.

Mark Kirby

He said: “The UK and US’s refusal to sign the global AI declaration is a clear signal that national and financial interests are being prioritised over collective security. While AI innovation continues at an astonishing pace, the absence of robust international safeguards poses serious risks – not just for businesses, but for the insurance industry that underpins them.”

Kirby explained that AI’s ability to process and generate vast amounts of data creates new exposure points, and noted that bias in training models can lead to unfair or inaccurate decision-making, presenting challenges in underwriting and claims assessments. He also said that the rise of AI-driven fraud – such as deepfake-enabled scams and hyper-personalised phishing attacks – demands urgent attention from insurers assessing cyber risk.

And the risks don’t stop there. “Compounding this is the threat of AI ‘data poisoning,’ where malicious actors manipulate datasets to distort AI outputs. Without proper oversight, we risk an environment where fraudulent claims become harder to detect, identity verification is undermined, and businesses face an evolving cyber threat landscape,” Kirby said.

He continued: “The insurance industry must prepare for these challenges now. The failure to establish international AI standards only increases exposure, making it imperative for insurers to integrate AI risk management into policies, fraud detection, and cyber liability coverage. With AI continuing to shape the business world, insurers cannot afford to wait for governments to catch up.”

Risto Rossar

Risto Rossar, Founder and CEO of Insly, which recently launched FormFlow, an AI-powered insurance tool, also expressed misgivings about the new AI Code of Practice.

He said: “Most of the new AI Code of Practice is unproblematic for insurance and insurtech companies, which already have strong security standards. However, the biggest red flag from an innovation perspective is the need for ‘human oversight’ of AI systems.”

Rossar pointed out that while this may not affect insurance companies in the near term, since AI is currently used to enhance human activities, there is a significant chance it will hinder progress and growth in the longer term as the technology becomes more powerful.

He continued: “I’m convinced that building a fully AI-powered insurance company will ultimately be possible, but it can’t happen if human oversight is always required. After all, a self-driving car isn’t self-driving if a person is still needed at the controls.”

Rossar added: “Insurance companies already operate within strict financial and security compliance frameworks, so I question the need for more rules at such an early stage in AI’s development. All it does is limit the scope for UK innovation and lay the path for future insurance leaders to be built elsewhere.”

Reporting by Joanna England


