Challenges facing the industry include the lack of loss experience data and of models that estimate the potential frequency and severity of risks, particularly given the potential for impacts to spread rapidly.
“As a result, there are few AI insurance policies commercially available and AI losses are not explicitly covered by traditional insurance products,” it says.
Issues in providing cover involve questions of who is ultimately responsible for an AI system’s problems and how fault should be identified and apportioned.
“Complications of causality may arise due to various contributors that are typically involved in the creation and operation of an AI system, including data providers, developers, programmers, users, and the AI system itself,” the paper says.
The white paper highlights a number of cases where use of AI has led to discriminatory outcomes or where it may have made incorrect associations.
“Even in instances where the data appears perfect to the human eye, AI can pick up patterns in the data that were not anticipated during the training process,” it says. “This can cause an AI system to draw inaccurate conclusions and thus, ultimately, generate incorrect or undesired outcomes.”
Despite the rapid rise of algorithm-related activities, the response of the US, Chinese and European legal systems so far is described as “rather cautious”, while understanding algorithms requires specialised knowledge.
“The complex and often opaque nature of algorithms, specifically ‘black box’ algorithms or deep learning applications, means that they lack transparency and can sometimes hardly be understood by experts,” the paper says.
Source: Insurance News