The AI black box problem - an adoption hurdle in insurance

APR 20, 2021

Emerging technologies are enabling companies across verticals to transform their processes and bring about a paradigm shift in the global business landscape. Even the traditionally slow-to-adopt-technology insurance sector has not been left untouched by this drive to disrupt and become more competitive. While the insurance space is fraught with inherent barriers that prevent any sweeping, rapid change — like tight regulatory and compliance requirements — change is inevitable. Propelled by advanced analytics, robotics, the Internet of Things, artificial intelligence (AI) and deep learning (DL), the insurance space is on the threshold of perhaps its most disruptive phase ever.

Insurance companies have turned to AI systems and predictive models after realizing the need for an additional tool to automate decisions where rules-based logic was not feasible. Many companies have already started their innovation programs by replacing existing rule sets with machine learning models, bringing automation to existing processes. These models can make automated decisions across vast quantities of data. Yet insurers remain skeptical about adopting AI for core processes, and their main concern is the 'black box' of AI-driven decisions.

Explaining AI decisions after they happen is a complex issue, and without being able to interpret how AI algorithms work, companies, including insurers, have no way to justify the decisions those algorithms make. They struggle to trust, understand and explain the decisions provided by AI, since the system only exposes its inputs and outputs, revealing nothing of the process in between. Owing to the self-learning nature of AI and machine learning systems, the rules are continually updated by the system itself. This was not the case with traditional rule-based systems: their rules did not update or change on their own, and explainability was never a challenge, since outcomes followed directly from rules written by users.
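The contrast above can be sketched in a few lines of code. The example below is a hypothetical illustration, not any insurer's actual logic: a transparent rule-based decision, where every outcome traces to a rule a human wrote, next to a stand-in for a learned scoring model whose weights offer no self-evident "why". A crude perturbation probe (nudge each input, watch the score move) hints at the kind of post-hoc explanation techniques insurers would need for the latter.

```python
import math

def rule_based_decision(age, claims_last_5y):
    """Transparent: every outcome traces to an explicit, human-written rule."""
    if age < 25 and claims_last_5y > 2:
        return "refer"      # rule 1: young applicant with frequent claims
    if claims_last_5y > 4:
        return "decline"    # rule 2: excessive claim history
    return "approve"        # default rule

def black_box_score(age, claims_last_5y):
    """Stand-in for a learned model: these weights would come from training
    data, so the reasoning behind any single score is not self-evident."""
    return 1 / (1 + math.exp(-(0.9 - 0.02 * age + 0.6 * claims_last_5y)))

def sensitivity(score_fn, inputs, delta=1.0):
    """Crude post-hoc explanation: nudge each input by `delta` and record
    how much the score moves. Real explainability tooling is far richer,
    but the idea is the same: probe the box from outside."""
    base = score_fn(*inputs)
    impacts = {}
    for i, name in enumerate(("age", "claims_last_5y")):
        bumped = list(inputs)
        bumped[i] += delta
        impacts[name] = score_fn(*bumped) - base
    return impacts

print(rule_based_decision(22, 3))               # fully traceable to rule 1
print(sensitivity(black_box_score, (22, 3)))    # only an indirect explanation
```

The rule-based path can be justified to a regulator by pointing at the rule that fired; the scored path can only be characterized indirectly, which is precisely the adoption hurdle the article describes.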

So, how can a heavily regulated industry, which has always been more inclined to conservatism than innovation, start trusting AI for core processes?