Fixing Commercial Property Insurance (Part 2)

Hopefully, the last post made it apparent that a disconnect exists in the commercial property underwriting world. There was certainly a lot of feedback on the first post over on LinkedIn. For example, I loved this in-depth response:

[Screenshot: LinkedIn response to Part 1]

As a quick summary of blog post 1:

Reviewing a manuscript policy takes too long because the underwriter must rationalize two sources of data -- the insurance policy and the underwriting checklist. The underwriter must look back and forth from the policy to the checklist, interpret each clause and rule, reconcile similarities and differences, and then determine which rule applies to which clause, which clause applies to which rule, or some combination of both.

To illustrate, let’s go back to a previous example:

Imagine you are reading through the commercial insurance policy that includes the following exclusion: “War or other violent events.”

An underwriter has to first look at the underwriting checklist and identify that there is a rule for “War Exclusion.” Then the underwriter looks back at the clause -- “War or other violent events” -- and confirms he has identified a war exclusion clause. Then the underwriter checks the box for “War Exclusion.”

In this blog post, we are going to look at an interim solution for better manuscript review. Then we will discuss the technological underpinnings of the best solution for manuscript review. 

 

An Interim Solution for Better Underwriting Checklists

Underwriting checklists almost always come in the form of a Microsoft Word document or PDF. Users then fill in check boxes either manually (after printing out the checklist) or on their computer.

I believe Microsoft Excel is a better approach for underwriting checklists.

Here’s how I would set up an underwriting checklist in Microsoft Excel for two rules:

| Clause Category | Sample Language | Must be included | Must not be included | Optional | Instruction | UW Analysis | Sublimit |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Exclusion – War | “War or other violent events.” | X | | | Confirm this includes violent events. | Exclusion included. | NA |
| Property – Alien Abduction | “Property taken as part of alien abduction.” | | | X | Sublimit if included. | Coverage included and sublimited. | $100,000 |


In the first rule, the underwriter must confirm that a war exclusion clause is included in the policy and that the clause contains the words “violent events.” In the second rule, the underwriter evaluates whether coverage for property taken during an alien abduction is included. If this coverage is requested, it must be sublimited (most likely on the declarations page). Here, the underwriter has confirmed the coverage exists in the policy and has been sublimited.
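
To make this concrete, here is a minimal sketch (my addition, not part of the original checklist) of how the two-rule spreadsheet above could be generated with Python and pandas; the file name and library choice are just illustrations.

```python
# Minimal sketch of the checklist above, built with pandas.
# Assumes pandas and openpyxl are installed; the file name is illustrative.
import pandas as pd

columns = [
    "Clause Category", "Sample Language", "Must be included",
    "Must not be included", "Optional", "Instruction", "UW Analysis", "Sublimit",
]

rows = [
    ["Exclusion – War", "War or other violent events.", "X", "", "",
     "Confirm this includes violent events.", "Exclusion included.", "NA"],
    ["Property – Alien Abduction", "Property taken as part of alien abduction.", "", "", "X",
     "Sublimit if included.", "Coverage included and sublimited.", "$100,000"],
]

checklist = pd.DataFrame(rows, columns=columns)
checklist.to_excel("manuscript_checklist.xlsx", index=False)
```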

I could see an argument that completing this analysis in an Excel spreadsheet would take longer than in a Word or PDF document. However, there are functions other than underwriting that must access and use this data. For example, most insurance carriers conduct audits, particularly when manuscript policies are completed. A spreadsheet like the one detailed above provides a standardized view into the policy review process, which makes the auditing function easier.

Additionally, using a spreadsheet automatically creates standardized data that can then be reused. In Part I, I mentioned that underwriting checklist data lies dormant when it is stored in a Word or PDF document. That data is unusable because it is not structured or categorized. However, if the checklist analysis is stored in a structured spreadsheet, it can be analyzed at the aggregate level. For example, an underwriting team could very quickly evaluate how many manuscript policies contain coverage for alien abductions (I’m imagining a rash of invasions catching carriers off guard!).
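
As a rough sketch of that aggregate analysis (again my own illustration, assuming each completed checklist is saved as a separate spreadsheet in a checklists/ folder with the columns shown above), the alien-abduction count could be pulled with a few lines of pandas:

```python
# Sketch only: assumes each reviewed policy has a completed checklist
# spreadsheet in ./checklists/ using the column layout shown above.
from pathlib import Path
import pandas as pd

count = 0
for path in Path("checklists").glob("*.xlsx"):
    checklist = pd.read_excel(path)
    abduction = checklist[checklist["Clause Category"] == "Property – Alien Abduction"]
    # Treat "included" in the UW Analysis column as confirmation of coverage.
    if abduction["UW Analysis"].str.contains("included", case=False, na=False).any():
        count += 1

print(f"Policies with alien abduction coverage: {count}")
```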

But using an Excel spreadsheet to do a manuscript policy review still isn’t good enough.

 

Envisioning the Ideal Solution for Manuscript Review

In a perfectly efficient world, the underwriter in the War Exclusion example would automatically be presented with this information in one clean user interface:

Clause: “War or other violent events.”

Category: Exclusion – War

Applicable Rule: This clause is mandatory.

Decision: [Accept or Reject]

Comment: [Explain why]
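
As an illustration of the kind of record such an interface could sit on top of (a hypothetical sketch, not RiskGenius’s actual data model), each clause-rule pairing might look something like this:

```python
# Hypothetical record behind the interface described above -- an
# illustration only, not RiskGenius's actual data model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClauseReview:
    clause_text: str                 # "War or other violent events."
    category: str                    # "Exclusion – War"
    applicable_rule: str             # "This clause is mandatory."
    decision: Optional[str] = None   # "Accept" or "Reject"
    comment: Optional[str] = None    # underwriter's explanation
```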

Here’s how we present this information in RiskGenius:

[Screenshot: RiskGenius clause review interface]

What if each underwriting rule was tied directly to a clause?

This is possible through RiskGenius. The key is to take every clause in a policy, categorize it, and then set up rules based on those categories.
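
A minimal sketch of the idea (my own simplification, with illustrative category names and rules): once every clause carries a category label, matching it against the checklist becomes a simple lookup.

```python
# Simplified sketch: once clauses carry category labels, rules become a lookup.
# Category names and rule text here are illustrative.
RULES = {
    "Exclusion – War": "Mandatory – must appear in every manuscript policy.",
    "Property – Alien Abduction": "Optional – sublimit if included.",
}

categorized_clauses = [
    ("War or other violent events.", "Exclusion – War"),
    ("Property taken as part of alien abduction.", "Property – Alien Abduction"),
]

for text, category in categorized_clauses:
    rule = RULES.get(category, "No rule on the checklist for this category.")
    print(f"{category}: {rule}\n  Clause: {text}")
```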

Categorizing every clause this way is possible through machine learning. But first, for the non-data scientists reading this, I will explain a manual approach to completing this type of analysis.

 

The Technical Hurdle: Consistently Categorizing Clauses

Categorizing clauses is very hard.

Let me repeat that: categorizing clauses is very, very hard.

Someday, RiskGenius will study the error rate in the underwriting industry for interpreting insurance policy language. Until then, I am going to simply point to statistics from the legal world because reviewing documents for litigation is similar to reviewing an insurance policy.

In the litigation context, a document reviewer skims each section to determine whether a document includes a relevant issue (e.g., is it attorney-client privileged? Does it cover a particular hot topic?). Reviewing an insurance policy against an underwriting checklist is similar in nature (e.g., does this document cover a particular topic, like a war exclusion?).

Humans consistently make mistakes when interpreting documents or clauses or any kind of text. I know this firsthand because I spent the first three years of my professional life doing document discovery for a large commercial insurance case. One American Bar Association article estimates that during litigation document review (e.g. determining what documents must be produced to the other side), the error rate can be 50 percent or higher:

“Studies and anecdotal evidence suggest that the error rate of document review can be 50 percent or higher, a troubling number indeed from the perspective of those who recognize the importance of having a quality discovery process.”

We have seen similar error rates play out when working with insurance carriers that have tried to undertake clause categorization across libraries of insurance policies. In one instance, underwriters copied clauses into a spreadsheet and then typed in clause categories. When we reviewed the spreadsheet created by the underwriters, it was full of errors.

A simple experiment demonstrates how hard it is to correctly categorize information, like a clause or paragraph. Look at the three paragraphs above this one. Read them and create a label to describe each one.

Are you done? Do it before reading on.

Here are my labels:

  • “Fixing Insurance – Machine Learning – Error Rate Citation”
  • “Fixing Insurance – Machine Learning – Carrier Experience”
  • “Fixing Insurance – Machine Learning – Experiment”

I guarantee you came up with labels that are different from mine. Now try to imagine doing this same exercise across millions of insurance clauses. This is the Herculean task that we have undertaken at RiskGenius. This is the Herculean task that no insurance carrier should undertake.

There is really only one way to consistently categorize insurance text: machine learning.
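
To give a flavor of what that looks like in practice, here is a deliberately simplified sketch of clause categorization framed as supervised text classification with scikit-learn; the training examples and model choice are illustrative, not the actual RiskGenius pipeline.

```python
# Deliberately simplified sketch of clause categorization as supervised
# text classification (scikit-learn); not the actual RiskGenius pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: (clause text, category label).
train_texts = [
    "War or other violent events.",
    "This policy does not cover loss arising from war or military action.",
    "Property taken as part of alien abduction.",
    "We will pay for property lost during an abduction by extraterrestrials.",
]
train_labels = [
    "Exclusion – War",
    "Exclusion – War",
    "Property – Alien Abduction",
    "Property – Alien Abduction",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Predict the category of a clause the model has not seen before.
print(model.predict(["Loss caused by war, invasion, or insurrection."]))
```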

In our next post, I will review a commercial insurance policy in our software with Rules in place.

p.s. If you have a sample manuscript policy you would like to submit, email me at chris@riskgenius.com.