This is the first session of a RISCS/NCSC workshop, held in June 2017, that explored how to support boards in making decisions about cyber security. It led directly to the research call that closes on December 1, 2017. The day, which is summarised in full here, was broken into three sessions; the other two are The problem with vendors and How do boards assess and manage cyber risk?.
Rachel C set the scene by discussing the mutual misunderstandings between boards and security practitioners, each side failing to grasp the other’s priorities and requirements. The two sides have to collaborate to work out the critical elements of cyber risk and make the best decisions. Practitioners complain that boards don’t ask the right questions or don’t fund the right things; boards complain that they are given an avalanche of technical detail they don’t understand, with insufficient information about how those details relate to the level of risk. In some cases the issue may be language; the RISCS cSALSA project is studying the different ways people talk about cyber security at different stages of their lifespan.
A key theme that emerged from the last Practitioners Panel was metrics, which can be an effective way of communicating. However, the fact that cyber risk can’t be reduced to a simple 1 to 5 measure leads to many questions about what can be measured and how, and what those measurements mean.
Rachel set four questions for the day:
- What would help the board to better manage cyber risk?
- What disciplines and approaches could help us solve this problem?
- What’s wrong with current approaches?
- What information does the board need, and what information do security practitioners want to report?
To further set the scene, three presentations tackled the question, “What’s wrong with metrics?”
Tim Roberts, managing director of the IBM subsidiary Promontory, offered the board’s point of view. A day earlier, Roberts had met with a company whose business is supplying sensitive data to clients. A severe data incident had led to the discovery that the company may have been delivering corrupted data for the last ten years, which means it will face substantial compensation payments plus regulatory fines. The root cause will prove to be a combination of failed processes, human error, negligence, neglect, and no one acting on things that didn’t look right. So one problem Roberts sees is too narrow a focus on a single definition, leading to his first question: should cyber metrics cover just external attacks, or a wider set of risks to a business?
Boards, he says, want the latter: what are the risks to my business that concern systems and data? What do I need to know to manage and steer the business?
He has observed some common problems:
- Metrics are absent or scarcely visible. He cited the example of a bank’s presentation to its risk committee; after myriad pages on individual loans, there was one page on operational risk, on which was a box discussing cyber risk.
- Presentations are not proportionate or clear, or offer only a partial picture.
However, he also sees good examples:
- There is a lot of business literature that explains what good metrics look like. Central banks, for example, believe they have a clear sense of best practice. However, setting a “risk appetite” is not common outside of financial services.
- Different dashboards and metrics can be used to provide separate information at board and practitioner levels. In a major UK bank, Roberts has seen a cascade of 300 operational metrics across 100 units that feed up into 12 master metrics the board uses to set its risk appetite. The board can look at a cross-section at any level in between if they want to focus on a particular area. Businesses need systems suitable to their scale.
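The cascade Roberts describes can be pictured as a simple rollup, in which each board-level master metric surfaces the worst status among the operational metrics feeding it. Here is a minimal sketch of that idea; the metric names, thresholds, and worst-status aggregation rule are illustrative assumptions, not the bank’s actual scheme:

```python
# Illustrative sketch of a metrics cascade: operational metrics roll up
# into a small set of master metrics a board can review.
# All names, values, and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float   # current measurement
    amber: float   # threshold for "watch" status
    red: float     # threshold for "breach" status

    def status(self) -> str:
        if self.value >= self.red:
            return "red"
        if self.value >= self.amber:
            return "amber"
        return "green"

def rollup(master_name: str, operational: list) -> str:
    """A master metric takes the worst status of its feeder metrics,
    so a single red operational metric surfaces at board level."""
    order = {"green": 0, "amber": 1, "red": 2}
    worst = max(operational, key=lambda m: order[m.status()])
    return f"{master_name}: {worst.status()} (driven by {worst.name})"

feeders = [
    Metric("unpatched critical servers (%)", 3.0, amber=5, red=10),
    Metric("privileged accounts unreviewed (%)", 12.0, amber=8, red=15),
]
print(rollup("Access & patch hygiene", feeders))
# → Access & patch hygiene: amber (driven by privileged accounts unreviewed (%))
```

The board sees only the master line; drilling into a particular area means inspecting the feeder list behind it, as in the cross-sections Roberts describes.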
- Metrics have to be updated in response to changes in the business and its risk environment. If a company has engaged in a major technology transformation, the risks associated with that programme need to be identified and presented.
- Risk data is helpful, but presenting information that highlights potential conclusions and/or actions is more so. That approach leaves scope for manipulation, but highlighting what has changed, what isn’t in line with expectations, and what’s deteriorating is valuable. When the bank mentioned above redesigned its information pack, it began with a single page of all risks, highlighting the ones deemed to need focus, while retaining the ability to roam freely through the supporting information.
Andy Jones, the CISO of Maersk Transport and Logistics, previously held the same position at Unilever and Sainsbury’s, and before that spent six years as a researcher; he set out a practitioner’s point of view. It appeals to him that “cyber” derives from “cybernetics”, coined in 1948 by Norbert Wiener from the Greek word for “steersman”, because Maersk is the world’s largest container ship operator. The hype and FUD (fear, uncertainty, and doubt) we live in have been necessary to get cyber security taken seriously, but security practitioners have few other cards to play, and boards tire of hearing the same old scare stories while reading newspapers that say otherwise. Businesses want to embrace the digital market for obvious financial reasons but are mired in threats and hype.
A crucial question is: who’s your audience? Generally speaking, CISOs are unpopular because they are a source of bad news and because board members don’t want to ask questions that make them look stupid. Board members are nonetheless smart, however, and – particularly non-executive members who sit on other boards – often ask very good questions.
They are generally:
- Interested in return on investment;
- Uncertain how real the threat is;
- Unhappy to hear that it can’t just be “fixed”;
- Uncertain how to judge cyber risks against other risks;
- Uncomfortable with the topic;
- Concerned about both organisational and personal reputational risk.
Practitioners face a number of problems in trying to communicate with boards, such as different linguistic dialects and the varying levels of maturity across a company, although to Jones’s surprise cyber risk turns out to be one of the more advanced risk disciplines. A more complex problem is that many metrics can be read any way you like: if a high number of viruses is detected, does that mean detection is effective or that patching is poor? If the number is low, is it rising or falling? The correlation between any of these figures and actual risk is hazy.
Boards do respond to the following, which unfortunately are all poor risk indicators:
- Peer benchmarking;
- Compliance measures;
- Graphs and colours;
- Legal and regulatory drivers.
Often, cyber risk translates badly into standard risk templates. For example, a cyber attack causing £10 million of damage is financially insignificant to Maersk, a £25 billion company whose bigger risks lie elsewhere, such as the risk that political upheaval will close a port. That said, many risks, such as the chance of a cyber incident, are certainties, not probabilities, and should be treated as such. However, it is often not clear where to report new risks as they emerge.
Communicating with any board has to take into account company culture, the language of the organisation, and the personal outlook and agenda of individual board members. Analogies and storytelling may help. Inspiration may come from other industries such as gambling, aviation, military intelligence (although Jones felt it’s been too dominant in security to date), polling, and advertising, plus new areas such as big data, fuzzy logic, quantum theory, and chaos theory. Above all, Jones concluded, challenge the assumptions. For example, it’s wrong to think that if you can’t measure something you can’t manage it.
Angela Sasse, director of RISCS, offered an academic’s overview of the frustrations of trying to collect data of sufficient quality to develop theories and models – and then more data to test them. In her career, Sasse has conducted hundreds of interviews with organisations and surveys with thousands of employees.
The two questions RISCS began with in its first phase require data to answer with scientific rigour:
- How secure is my organisation?
- How do we make better security decisions?
In Sasse’s experience, many organisations measure things that are easy to measure. The success of companies like PhishMe is based on the fact that it’s easy to compel people to take their training and easy to measure the results. But, Sasse asks, what does it really mean and what are the unwanted side effects? What if clicking on fewer links means losing customers?
Practitioners rarely understand that academics wanting to test a new metric need a baseline measurement against which to gauge effectiveness. Instead, practitioners respond to new ideas by taking a single measurement, presenting the resulting data at a conference, and saying it has been validated by academics. New metrics must be put into context, and often this isn’t done.
In addition, it is essential to take enough measurements to see whether the costs and benefits are in proportion, and to measure the cost and benefit of the measurement itself. One organisation, obsessed with password strength, kept logs it never looked at – and so never spotted three hacking attempts on 100,000 accounts. In secure organisations access has to be audited, yet organisations refuse to buy the tools that would make auditing humanly possible; in one organisation, eight years’ worth of logs had never been examined.
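The baseline argument can be sketched as a simple before-and-after comparison: measure, say, the phishing click rate before an intervention and again afterwards, then test whether the change is larger than chance variation. A minimal illustration – the figures and the choice of a two-proportion z-test are assumptions for the example, not data from the research described here:

```python
# Illustrative sketch: why a baseline matters when testing a
# security intervention. All figures are hypothetical.
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is the change in click rate
    bigger than random variation would explain?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)        # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

# Baseline: 120 of 1000 staff clicked a simulated phish.
# After training: 85 of 1000 clicked.
z, p_value = two_proportion_z(120, 1000, 85, 1000)
print(f"z = {z:.2f}, p = {p_value:.4f}")
# Without the baseline (the first measurement), the 8.5% figure
# alone says nothing about whether the training changed anything.
```

A single post-intervention measurement, presented in isolation, is exactly the practice Sasse criticises: it leaves nothing to compare against.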
In one case, a high street bank, wanting to improve the security awareness of its corporate and SME customers, conducted roadshow events and webinars. But its only metric was the number of people who attended – nothing about whether attendees took the advice, changed anything, or felt better equipped. When Sasse’s group proposed asking attendees to fill out a short questionnaire, the bank refused even to let them ask the industry sector and size of company, saying it would breach client confidentiality. The researchers attended seven in-person events, and found they were probably too long and that the opening, in which a former police officer condemned many common practices, caused younger people to tune out. They tried to set up an anonymous interview system, but found the bank always had reasons why a particular event couldn’t be followed up, and it would not allow the researchers to write an explanation that could be shown to the clients.

Meanwhile, businesses were being hit by payment scams in which a criminal would call a company on a Friday afternoon, use information obtained by social engineering, and get an authorisation code for payments ranging from £50,000 to seven-figure sums. The bank had taken a recording of such a call and had actors re-enact it. Everyone was gripped by the re-enactment and its subsequent deconstruction – but the bank then lost the audience again by following it with generic recommendations that bore no relation to preventing the scam. The researchers’ conclusion: despite the resources and time being spent, the bank’s approach was mostly knee-jerk actions. Sasse also noted that groups like SANS and The Analogies Project are prone to sharing “craft knowledge” that has no scientific basis or evaluation, leading the field around in a circle.