Peter Davies: Forward security for emerging problems

“We are trying to solve a cyber security problem in a system that’s going away and ignoring the one that’s emerging. It’s a really big problem.” Practitioner panel member Peter Davies, director for security concepts “and head of cunning plans” for Thales e-Security, a company within Thales, could probably have a very successful second career doing stage shows scaring people, though he says that’s never his intention. His job is not merely to protect his own company from attacks but to do the same for the many customer companies that depend on Thales products for their security and that of their millions of customers all over the world. “As a practitioner,” he says, “my job is trying to solve their problem going forward.”

What he’s getting at with that opening statement is that he sees a significant problem with security as we typically conceive it and researchers study it: too much of both theory and practice is looking backwards and solving the convenient or the irrelevant. We are always trying to solve the last problem or trying to defend against the last attack, but rarely looking ahead to coming risks.

Because of Thales’ particular position in the industry – its products are used to secure more than 54 governments, every major Western bank, every major IT supplier, and many other security suppliers and corporations – Davies is often holding the canary in the coal mine when new attacks are conceived and launched.

One of his favourite examples was his early discovery of the potential for individual targeting. In the case of supply chain companies, the attackers’ goal is often not to steal data, as it would be in most enterprises, but to infiltrate and use the company’s products as vectors for distributing targeted attacks based on autonomous bots. Many of these are designed to have their effect even behind air gaps. The 12 to 15 significant – that is, industrial-scale, custom-built – attacks Davies sees each year often create the effect of an insider attack by precisely targeting specific individuals based on their roles within companies. In one such attack a few years ago, the goal was to corrupt code destined for a particular customer. It used information the attackers had gathered on six company employees from four different networks in four different places, which included the targeted individuals’ home computers.

“When an attack is potentially business-terminating – not a matter of compliance but of survival – it seriously requires you to bring your A game,” says Davies. “When it’s simply a matter of compliance then security is generally done by a company, organisation, or department, but this had nothing to do with that. My attackers were breaching those boundaries that we’d conveniently put up.”

The first thing you notice about Davies is his elliptical way of speaking: he describes what he’s seen while leaving out all the names and labels. “It’s very difficult and not helpful to discuss some of the more specific things that are there,” he explains, “because the fact that you know these things becomes a vector of attack in itself. One of the things that makes you most exposed is what you know.” He often talks about knowing his attackers, and in some cases he means it literally. The serious attackers he talks about – organised like ordinary businesses – are often led and directed by social scientists, not techies.

That level of carefulness has been around him all his life: his father was Geoffrey Davies, one of the pioneers of miniature implantable pacemakers, and he grew up surrounded by safety-critical electronics. By the time Davies was 16, he knew the electrical routes through the human body that can be used to implant a pacemaker and was helping his father make circuit boards. At university in Wales he studied philosophy and logic – “I always picked things I couldn’t do”. Both proved good choices, as philosophy taught him to grasp and understand new concepts rapidly. Also good training was the state of the department at the time: the logic half was at war with the ethics half. Making the wrong argument in the wrong place to the wrong person could lead to failure. “It was very good training for working in business.” For his post-graduate degree, he did computer science and mathematical statistics, also in Wales. Oddly to modern ears, computer science was part of the English department, essentially because the professor there had come from GCHQ and Bletchley Park, where his work was applying computer science to language.

By the time he left in the 1980s, his choices were AI or security, and he picked security. “That was when people were talking about expert systems, and it seemed to me there wasn’t enough processing power to make them work. Or data.” In hindsight, he was correct. Most modern systems are based on ideas that were current then but that required time for Moore’s Law to do its work and provide the necessary processing power and the rise of companies like Google and Facebook to provide large enough databases. By contrast, security looked like a field where the complexity of attacks was increasing exponentially while defensive techniques were not. “According to Moore’s Law I’m on a linear improvement in my computing power. I will never catch up with the threat,” he says. “That means you have to be smart.” An important element of that is reducing the attack surface up front, “bringing you somewhat back into line with things you can do.”

Davies’ particular emphasis at the moment is on resilience. “You need to anticipate that it will go wrong and build so that it’s a manageable, rather than catastrophic, failure.” Some of today’s difficulty for enterprises in understanding their own systems, he argues, is directly traceable to the loss of the historical knowledge people had of how and when the system made mistakes. “The original knowledge of what they were trying to do is often not present any more,” he says.

This is the reason for his emphasis on thinking about resilience and planning how to recover when things go wrong. When asked what his hardest problem is, this is what he names: “Getting people to look at digital survivability and resilience as opposed to compliance.” It is, he says, hard enough to get them to do the things that NCSC talks about, but it’s much harder to get them to look ahead to understand where developments will take us over the next decade and what they need to do to prepare.

Even at RISCS meetings, Davies finds the cyber attacks he worries about are not well represented. “What I’m looking at is, if I’ve got personally identifiable information and a safety-critical system coexisting then they need starkly different security mechanisms. One may absolutely need to be encrypted, and one needs not to be so that I can monitor it. And where that’s 103 exabytes of data a day sitting on the same processors in the same system at the same time, that’s an inherently contradictory set of things they’re doing. Whichever one I choose there’s going to be a hole I can’t magically get rid of that will result in opportunities for novel attacks that will succeed.” In line with that, “I’m particularly interested at the moment in the use of legitimate data in ways that the system can’t combat, resulting in unexpected and unanalysed system consequences – for example, data that won’t run in the accelerators of a real-time system.” Attacks of this kind, like every attack, have their antecedent analogues, but essentially involve finding ways to tie up a system’s resources so that it can’t function effectively. Such attacks are a frightening prospect because they don’t rely on errors or vulnerabilities but on using knowledge the attacker has gained about how a system works. “There’s no way to avoid this.” These are the more sophisticated attacks he finds coming his way and they are particularly aimed at the hyper-connected hybrid cyber-physical systems now being built.
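The point about legitimate data tying up a system is the family of attacks usually called algorithmic-complexity attacks. A minimal, hypothetical sketch (not drawn from any Thales system): a naive deduplication routine that is correct on all input, but whose worst case an attacker can force simply by sending well-formed data with no duplicates.

```python
# Illustrative sketch of an algorithmic-complexity attack: every input
# here is perfectly legitimate, yet crafted to hit the O(n^2) worst case
# of a naive routine, exhausting the system's resources without exploiting
# any bug. The routine and data are hypothetical, for illustration only.

def dedupe_naive(items):
    """Naive O(n^2) deduplication: compare each item to every kept one."""
    kept, comparisons = [], 0
    for item in items:
        duplicate = False
        for k in kept:
            comparisons += 1
            if k == item:
                duplicate = True
                break
        if not duplicate:
            kept.append(item)
    return kept, comparisons

# "Legitimate" input with all-distinct values is exactly the worst case:
# 10x more input costs roughly 100x more work.
_, c_small = dedupe_naive(list(range(100)))    # 4,950 comparisons
_, c_large = dedupe_naive(list(range(1000)))   # 499,500 comparisons
print(c_small, c_large)
```

The defence is not patching a vulnerability (there is none) but choosing algorithms whose worst case an outsider cannot steer, which is a design-time, not compliance-time, decision.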

Davies, therefore, takes the view that “I expect to operate in a failed system.” He already has some ideas of protective strategies. “One thing I know is you should be extremely careful when you invest in monocultures. I can find nowhere that a monoculture has survived, and when it fails it will almost certainly fail catastrophically.” In his view, it’s essential to consider carefully how you might introduce “randomisation” into a system to limit the extent and consequence of a compromise. Think 2 million autonomous cars, or railroads, or the electrical grid.

In addition, security can’t go on being a specialist subject in a silo. Where safety of life is involved, companies have to sign off that their product is safe. “You can’t say to the court, it’s safe as long as it isn’t a cyber attack.” Instead, it will become essential to be able to argue that it’s safe even after being attacked. “That’s worlds away from waving my hands and talking about how to get people to do passwords the right way.”

Also on his agenda is helping people understand that they may be collateral damage rather than the object of the attack, and that they don’t understand what a compromise of their machine learning systems might look like because they don’t understand what those systems are doing. “Those are the types of cyber attacks that I think are to do with the resilience of systems going forward, and you can’t just address them by coding correctly.” As he asked at a RISCS meeting in 2016 to illustrate the mismatch between today’s thinking about security and what will be needed tomorrow, “Will it be acceptable to a court that you drove into a wall simply because the instruction was correctly signed?”
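Davies’ courtroom question has a simple technical core: a correct signature proves who issued an instruction, not that obeying it is safe. A minimal sketch, using a hypothetical command format and key of my own invention, shows the two checks a resilient system has to keep separate.

```python
# Hypothetical sketch: signature verification vs. plausibility checking.
# The command syntax ("steer=<degrees>") and the shared key are invented
# for illustration; nothing here describes a real vehicle protocol.
import hmac
import hashlib

KEY = b"demo-fleet-key"  # assumed shared secret, illustration only

def sign(cmd: str) -> bytes:
    return hmac.new(KEY, cmd.encode(), hashlib.sha256).digest()

def verify(cmd: str, sig: bytes) -> bool:
    """Authenticity: was this command issued by a key holder?"""
    return hmac.compare_digest(sign(cmd), sig)

def plausible(cmd: str) -> bool:
    """Safety: is obeying this command physically sane at all?"""
    kind, _, value = cmd.partition("=")
    return kind == "steer" and -45 <= float(value) <= 45

cmd = "steer=90"          # authentic but dangerous instruction
sig = sign(cmd)
print(verify(cmd, sig))   # True: cryptographically valid
print(plausible(cmd))     # False: refused anyway
```

The signature check and the sanity check answer different questions; a system that drives into the wall has conflated them.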

So these, for Davies, are the emerging problems. Many security practitioners until now have been working in environments where “security” largely meant “compliance”. Look at cars: you think of the machine in your driveway as a “car” but in fact it’s a heterogeneous network that generates 103EB of data per day, according to IBM, and just happens to have wheels and a chassis.

“I think the cyber attacks we’re looking at now are just a moment of time. We’ve known about this stuff for 30 years, and we need to spend most of our efforts on leapfrogging it. Otherwise we’re in a situation where our improvement is linear but the attacks are escalating exponentially,” he says.

About Wendy M. Grossman

Freelance writer specialising in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.