The issues in this article will be addressed in a keynote to the British Computer Society (BCS – The UK Chartered Institute for IT) at its legal specialist interest group seminar on January 22nd.
Cyber insurers are in the difficult position of seeking to cover risks that are constantly changing (as the law and the threat landscape evolve) and complex in nature, while offering cover simple enough that, in the majority of cases, it can be sold via non-specialist brokers to the mass market.
Policies themselves are still evolving rapidly. The recent Morrisons case, in which an employee posted the personal details of around 100,000 staff online, set a new legal precedent: the supermarket chain was found vicariously liable for the criminal actions of an employee, having failed to put adequate safeguards in place to prevent them. The High Court's finding was upheld on appeal and the case is now progressing to the Supreme Court. At the same time, there is an ongoing AI-powered cybersecurity arms race in which black hats and white hats battle to uncover vulnerabilities that they can either exploit or patch. The problem is that the black hats only need to be lucky occasionally, while the white hats have to be lucky all the time.
While awareness of cyber insurance is high – it probably features in more column inches of press coverage than any other line of insurance – and general privacy awareness has risen following the introduction of the GDPR in the EU and the CCPA in California, this has yet to translate into take-up of cover. The UK market still has only a 10% penetration rate for cyber insurance – well below the US, where the rate is around 40%. And while cover has been offered for a number of years now, most insurers do not yet have a large enough book of business for effective risk mitigation, or a large enough claims history for detailed claims analytics.
There have been some cases where cyber claims have not been paid in full. Many insurers failed to pay out at all on WannaCry and NotPetya claims when those widespread incidents occurred. For example, Zurich Insurance Group declined to pay Mondelez's $100m claim on the grounds that NotPetya fell under the policy exclusion for "hostile or warlike action in time of peace or war".
And while Norsk Hydro had cyber insurance cover, it received a payout of only $3.6 million – roughly 5–6% of the estimated $60 million to $71 million incident cost – because the cover was restricted to the cost of the fix; business interruption and consequential damage were not included.
At the mass-market end of the spectrum, where small and medium-sized businesses (SMBs) seek cover, it simply isn't possible to carry out a detailed cyber risk audit or evaluation for every single client. The credit risk sector addressed an analogous problem through rating agencies such as Equifax, which developed credit-scoring mechanisms derived from readily available data, allowing their clients (the banks) to provide risk-based services matched to a customer's risk position.
Likewise in cyber risk, cyber security rating agencies are emerging that provide organisations with a cyber risk rating or score. Many of these firms calculate risk using web crawlers that test all externally facing endpoints for known vulnerabilities. This is probably as effective as an automated mass-market service of this kind can be, but the approach has a number of flaws.
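To make the approach concrete, here is a minimal sketch of the kind of outside-in probing such tools rely on: connect to a host's public ports, collect whatever service banners come back, and match them against a feed of known vulnerabilities. The hostname, port list and banner handling here are illustrative assumptions rather than any rating firm's actual methodology, and you should only probe hosts you are authorised to test.

```python
# Illustrative sketch of outside-in scanning, not a real rating methodology.
# Probes a host's common public ports and collects service banners; a real
# tool would match banners (e.g. "Apache/2.4.29") against a vulnerability
# feed such as the NVD.
import socket
import ssl

COMMON_PORTS = [21, 22, 25, 80, 443]  # FTP, SSH, SMTP, HTTP, HTTPS

def probe(host, port, timeout=3.0):
    """Return a service banner if the port answers, else None."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.settimeout(timeout)
        if port == 443:
            # HTTPS endpoints only respond after a TLS handshake.
            sock = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
        if port in (80, 443):
            # HTTP-style services need a request before they respond.
            sock.sendall(b"HEAD / HTTP/1.0\r\nHost: %b\r\n\r\n" % host.encode())
        banner = sock.recv(256)
        sock.close()
        return banner.decode(errors="replace").strip() or None
    except OSError:
        return None  # closed, filtered, or unreachable

if __name__ == "__main__":
    host = "example.com"  # placeholder target
    for port in COMMON_PORTS:
        banner = probe(host, port)
        if banner:
            print(f"{host}:{port} open -> {banner!r}")
```

The key point is that everything such a scanner learns comes from what the target happens to expose publicly – which is precisely where the flaws come in.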
If a company uses an external marketing agency to run all its marketing campaigns, then that agency will hold much of the company's most critical personal data – but on systems the web crawlers won't associate with the company. Likewise, if all a firm's data is held in the cloud with AWS or one of its rivals, the web crawlers won't know which buckets the firm uses to hold its data. And misconfigured AWS S3 buckets are one of the most common causes of leaked data.
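That particular weakness is visible only from inside the AWS account, not to any web crawler. As a hedged sketch (assuming the boto3 library, valid AWS credentials with the relevant read permissions, and a hypothetical bucket name), an inside-out check for a publicly exposed S3 bucket might look something like this:

```python
# Hedged sketch of an inside-out S3 exposure check. The bucket name is
# hypothetical; credentials with s3:GetBucketPublicAccessBlock and
# s3:GetBucketAcl permissions are assumed.
import boto3
from botocore.exceptions import ClientError

def bucket_exposure(bucket):
    """Return findings suggesting the bucket may be publicly readable."""
    s3 = boto3.client("s3")
    findings = []
    try:
        block = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        if not all(block.values()):
            findings.append("public access block is not fully enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            findings.append("no public access block configured at all")
        else:
            raise
    # ACL grants to AllUsers or AuthenticatedUsers are the classic
    # "leaky bucket" misconfiguration.
    for grant in s3.get_bucket_acl(Bucket=bucket)["Grants"]:
        uri = grant["Grantee"].get("URI", "")
        if uri.endswith("/AllUsers") or uri.endswith("/AuthenticatedUsers"):
            findings.append(f"ACL grants {grant['Permission']} to {uri.rsplit('/', 1)[-1]}")
    return findings

if __name__ == "__main__":
    for finding in bucket_exposure("example-marketing-data"):  # hypothetical bucket
        print("WARNING:", finding)
```

A check like this is trivial for the bucket's owner and impossible for an outside-in crawler that doesn't even know the bucket exists – which is the heart of the objection quoted below.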
Indeed, the CTO of security firm RedSeal recently commented that this kind of score, "taken from the outside looking in, is similar to rating the fire risk of a building based on a photograph [taken] from across the street. You can, of course, establish some important things about the quality of a building from a photograph, but it's no substitute for really being able to inspect it from the inside."
Outside-in scoring may be the only effective approach for the mass-market end of the spectrum serving SMBs, but it simply cannot accurately assess the risk position of larger, more complex organisations – whatever the risk-rating firms would have you believe.
Looking again at the finance sector for comparison: the Equifax-style system of personal credit scoring works well at the mass-market end of the spectrum, while larger, more complex organisations are rated by the likes of Moody's and Standard & Poor's. The problems are twofold. At the mass-market end, the web-crawler approach is a fairly crude way of measuring cyber security risk. At the large, complex end of the scale, more detailed cyber security audits can give a more accurate picture of real risk – but how do you ensure that such audits are accurate and independent when they are paid for by the firm being audited? This is exactly the problem the finance sector faced in the global financial crisis, when ratings agencies such as Moody's and Standard & Poor's were criticised for not being objective enough because they were reviewing their own clients.
This is the second in a three-part blog series. In the first part we looked at 'Why traditional crisis management techniques don't work with a cyber incident, and may even make things worse', and in the final part we will look at 'crisis preparedness', which is rapidly becoming a source of competitive advantage.