Legal data is incredibly valuable. In the insurance field, the average similarity score of the clauses in a policy - a measure of how closely one clause tracks, or diverges from, a set of other clauses meant to accomplish the same function - can instantly show how one policy stacks up against another. Data about the composition of cyber policies can reveal which clauses or information a given policy might be missing.
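One way to picture a clause-level similarity score is as the mean of pairwise comparisons between a clause and its peers. This is only a minimal sketch using Python's standard-library `SequenceMatcher` as a stand-in similarity measure; the function name and the sample clauses are illustrative assumptions, not the scoring method any particular platform uses.

```python
from difflib import SequenceMatcher

def average_similarity(clause: str, peer_clauses: list[str]) -> float:
    """Score one clause against a set of peer clauses that are meant to
    serve the same function; return the mean pairwise similarity (0-1)."""
    if not peer_clauses:
        return 0.0
    scores = [SequenceMatcher(None, clause.lower(), peer.lower()).ratio()
              for peer in peer_clauses]
    return sum(scores) / len(scores)

# Illustrative example: a notice clause compared against two peer clauses.
clause = "The insured shall give written notice of any claim within 30 days."
peers = [
    "The insured must provide written notice of a claim within 30 days.",
    "Written notice of any claim shall be given by the insured promptly.",
]
score = average_similarity(clause, peers)
```

A clause whose score sits far below its peer group's is exactly the kind of outlier a reviewer would want flagged. Production systems typically use richer text representations than character-level matching, but the averaging idea is the same.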
The following series of posts surveys how the seemingly minor features of an insurance policy - policy metadata - can be leveraged to create artificial intelligence that improves the quality and efficiency of drafting these policies. By using even the most rudimentary AI tools, underwriters and legal professionals can begin to speed up their work by more effectively spotting areas that deviate from industry standards.
I. Reviewing documents
The way documents are reviewed is broken. Having been through law school and worked as a doc reviewer in a law firm, I thought I knew what document analysis entailed. It was not until I expanded my perception of what document review could be that I realized how wrong that thought was.
With advances in computing technology, documents can be broken down into discrete pieces - sections, clauses, even individual sentences. Those pieces can then be scored against one another to deliver deeper insights about a document, or a series of documents, that would otherwise be difficult to achieve. The above spreadsheet is actually one document, broken down into about 100 rows and 10 columns. Document review, now, is more than just reading words on a page.
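The "one document, many rows" idea can be sketched in a few lines: split a policy into clause-level records, each carrying a few columns of metadata. The section-heading rule, the sample text, and the column names here are assumptions made for illustration, not the actual pipeline described in this post.

```python
# Toy policy text; all-caps lines are treated as section headings
# (an assumption for this sketch, not a universal rule).
policy_text = """DEFINITIONS
"Claim" means a written demand for monetary damages.
"Insured" means the named insured and its subsidiaries.
EXCLUSIONS
This policy does not apply to bodily injury.
"""

rows = []
section = None
for line in policy_text.splitlines():
    line = line.strip()
    if not line:
        continue
    if line.isupper():  # section heading: remember it for following clauses
        section = line.title()
    else:               # clause: emit one row with a few metadata columns
        rows.append({
            "section": section,
            "clause_no": len(rows) + 1,
            "text": line,
            "word_count": len(line.split()),
        })
```

Once a policy is in this shape, every clause is a row that can be filtered, counted, and scored against rows from other policies, which is what makes the spreadsheet view of a document so useful.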
The machine learning algorithms incorporated into the RiskGenius platform have opened my eyes to the ability of technologically advanced tools to supplement document review, make the process easier, and empower lawyers, underwriters, and other professionals working with documents. Over the rest of this post, I will walk through an exercise that demonstrates the value of a modern review process - one that turns legal text into data points that can be scored against one another.
II. Documents --> Data
To show you one of the cool things that our cyber liability index (the above spreadsheet) actually produced, I decided to start creating a picture of the average cyber liability policy. After looking at 1994 Definitions, 1204 Exclusions, 915 Conditions, 378 Insuring Agreement sections, 95 Limits of Liability, and 71 Opening Statements*, we were able to determine that the average cyber liability policy has 1.5 Opening Statements, 8 Insuring Agreements, 2 Limits of Liability, 42 Definitions, 19 Conditions, and 26 Exclusions. Here is how that looks as a pie chart:
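The arithmetic behind these averages is just each section total divided by the number of policies in the index. The post does not state that policy count, so `N_POLICIES = 47` below is a hypothetical figure chosen only because it makes the computed averages land near the reported values; it is not a number from the source.

```python
# Section totals taken directly from the post.
totals = {
    "Opening Statements": 71,
    "Insuring Agreements": 378,
    "Limits of Liability": 95,
    "Definitions": 1994,
    "Conditions": 915,
    "Exclusions": 1204,
}

N_POLICIES = 47  # assumption: not stated in the post, chosen for illustration

# Per-policy averages: total occurrences / number of policies reviewed.
averages = {name: round(count / N_POLICIES, 1) for name, count in totals.items()}
```

With a hypothetical 47 policies, Opening Statements come out to roughly 1.5 per policy and Insuring Agreements to roughly 8, matching the figures reported above; the same division produces each slice of the pie chart.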