Risky business: why it’s so important to let the machines do the work
Staying ahead of the curve, especially when it comes to new technology, is critical for the insurance industry. With artificial intelligence (AI) becoming the new buzzword, Chris Downer, associate at XL Innovate, discusses why it makes business sense to start phasing it in. Downer is at InsurTech Rising US exploring what's next and how to prepare when it comes to emerging tech.
Man versus machine – a conflict which dates back to the invention of the wheel (“hey, this ‘wheel’ thing sure is going to put a lot of porters out of work”) – looms heavily over our world today, and the realm of risk management is no safe haven. Most pieces discussing the impact of AI, data, and machines focus on how algorithms can take over small human tasks – from crunching large data sets, to translating text in multiple languages instantaneously, to automating advertising.
However, as someone who backs start-ups that analyse risk for a living (i.e. insurance), I see an opportunity in AI and technology that is much larger than better algorithms and advertising. While machines today certainly have their limitations, we cannot escape the fact that in area after area, man's limitations vastly exceed those of our mechanical counterparts (look no further than Flippy, the burger-flipping robot).
In fact, anywhere from 75% to 94% of incidents in property and casualty insurance are due to human error. Insurers are in the business of covering risk, in large part, because of humans. Unfortunately, these incidents aren't just totalled cars or bruised backs, but lives lost. If any other single element triggered over 75% of losses, users and insurers would of course move immediately to eliminate it. Obviously, I'm not advocating for a Westworld or I, Robot-type future (although both make for very good entertainment at home), but why not move to reduce human error and let machines do more of the work?
There is no doubt that AI and machines can dramatically reduce these risks, saving lives and avoiding massive financial and productivity losses.
Here are three obvious places to start, where progress in AI will translate into big reductions in risk:
A National Highway Traffic Safety Administration (NHTSA) study looked at the major causes of accidents and found that a mere 2% were caused by the environment, another 2% by the vehicles themselves, and 2% by "unknown" factors. That means a full 94% were caused by human error. 94%! If bad driving were a school subject, we humans would be earning a solid A.
What does this mean? Well, statistics show over 3,000 people die every day due to road crashes and another 20-50 million are injured or disabled every year, globally. In financial terms, road crashes cost $518 billion globally, which is 1-2% of global annual GDP. That is a terrible track record and needs to change. Given Waymo has driven over five million miles at last count, with zero fatalities, it seems likely autonomous vehicles will be able to do better. Still, we’ll have to contend with AI ethics questions, like the infamous Trolley Problem.
The story isn't any better in the marine space. An analysis by Allianz shows that human error accounts for approximately 75% of the value of the almost 15,000 marine liability insurance claims studied over the five years from 2011 to 2016, equivalent to over $1.6 billion in losses. In 2016 alone, marine accidents killed 1,596 people and caused $2.5 billion in damage. In 2017, US Navy accidents, including a series of destroyer collisions, led to the deaths of 17 sailors.
Investigations have led many to believe the culprit is sleep deprivation; sleep, after all, is one of the most basic human needs. You would think the military, of all organisations, would seek to reduce risk related to human impairment. Unfortunately, incentive structures mean that in many cases humans are not even performing at their average cognitive capacity. Sleep deprivation, fortunately, is not a concern for AI.
Ok, but humans have to be better when they're not operating heavy machinery, right? Wrong. Cyberattacks and data breaches may not carry a death toll, but they can lead to sizeable financial losses. It turns out 91% of ransomware infections start with an employee clicking on a phishing email, and 95% of all security incidents involve human error. Those are ugly statistics, but they would matter less if they didn't lead to material business impact. How big is that impact? Well, according to IBM, the average cost of a data breach to US organisations is $7.35 million. Human culpability doesn't come cheap.
Better training from IT teams can help, but it doesn't work as well as the preventative and economical scans offered by up-and-coming cybersecurity companies.
So the solution to lowering risk is pretty simple: remove humans from the equation. Or, in the case of office workers and cyberattacks, make sure there is an AI security system that can augment human abilities. The fact is, humans are no longer our safest option for driving a car, captaining a boat, or avoiding cyber scams and phishing. We – as humans – have set the bar so low that machines could hardly do worse at this point.
But, if software is the solution, a challenging question arises:
Where should humans stay in the decision making process?
This is not a blue-collar or white-collar question: AI will impact every part of the global economy. For the most part, this will be a positive development, but we need to maintain a candid view of what humans aren't good at, and where it makes sense to cede control to machines.
So, where do you think AI ends and human judgment takes over?
Chris Downer is an associate at XL Innovate, where he focuses on insurtech investments in North America, Europe and Asia, and leads due diligence and deal sourcing. He is a board observer at Pillar Technologies, an end-to-end environmental monitoring solution for construction sites and Stonestep, which provides microinsurance as a service in emerging markets.