27.3.18

Critical Infrastructure Interview with David Harley


WeLiveSecurity sat down with David Harley (ESET) to get a better understanding of Critical Infrastructure and the role he has played in the area throughout his career.
[Editor] How did you come to be involved with Critical Infrastructure?
[David Harley] I first took an interest in Critical Infrastructure (CI) when I started working on the security side of medical informatics. Not that what I was doing for the Imperial Cancer Research Fund (now Cancer Research UK) would have been formally considered to impact directly on CNI (Critical National Infrastructure) at that time. While healthcare has long been considered to play a part in CNI in the US and in the UK, it probably wouldn’t have been considered to include a charitable organization focused on medical research in the 1990s. Certainly I had no direct contact at that time with the relevant government agency.
However, I had a pretty wide range of responsibilities during the 11 years I was there: while I was already deeply involved with malware management issues (and had been since 1989) and therefore with security more generally, I was also deployed in first- and second-line customer support, even when my primary role was system administration. And since security came up time and time again, I found myself doing a lot of background research, subscribing to mailing lists that covered topics like national security, SCADA, and so on.
When I moved to the UK’s National Health Service as a senior manager in the NHS Information Authority (NHSIA) in 2001, I was initially involved in the dissemination of threat information distributed by a government agency that was specifically concerned with the maintenance of the CNI, among other things. Later my role expanded to management of the Threat Assessment Centre (TAC), which had a somewhat similar function lying somewhere between a WARP and a CERT. We don’t seem to hear so much about them now, but a WARP (Warning, Advice and Reporting Point) is a sort of small-scale CERT intended to support a small-ish community (20-100), and is more about information sharing than dealing directly with critical incidents. Since the NHS had around 1.25 million users and the TAC was plugged directly into a community of network managers who really had to solve issues rather than just flag and discuss them, the TAC was, functionally speaking, rather more than a WARP, but wasn’t large enough to be considered a CERT or CSIRT in its own right.
And I never quite escaped being the go-to person on any issues related to malware. That got me liaising with providers of messaging security, for instance, as well as with the Department of Health and other government agencies, so CNI was of more than academic interest, not least because I now had access to data and structured information that weren’t in the public domain at that time.
[E] So you always regarded security and Critical Infrastructure as related?
[DH] Sure. Security is and always was implicit in ‘critical’, suggesting that CI components are crucial to the security of the State and/or the population (not always the same thing, even in a democracy). You only have to look at the components that are typically considered to be ‘critical’ where CI or CNI is formally defined.
In the US, for instance, the critical sectors are formally defined by the Department of Homeland Security, and the UK publishes its own definitions. The UK’s Security Service administers ‘protective security’ through the Centre for the Protection of National Infrastructure (CPNI).
Of course, as governments and government policies change, so do the services for which the agencies they regulate are responsible, so both the security services and the healthcare services have changed dramatically since I left the NHS, but I doubt if CPNI or other state agencies have stopped thinking of healthcare or emergency response as being critical or in need of security. Well, we can hope…
[E] What do you see as the main difference between CI as it was regarded back in the day and as it is today, apart from the obvious improvements in technology?
[DH] When I first became interested in potential risks to CI (and SCADA – Supervisory Control And Data Acquisition – which is by no means unique to any given CNI framework, but is so often associated with CNI components such as energy utilities), most of the discussion 20-30 years ago was largely hypothetical. There was comparatively little understanding outside the Corridors of Power of the concept of a formalized CNI, and where damage to infrastructural components was actually sustained, it was rarely the result of specific targeting of those components. Or, at any rate, it wasn’t usually reported as such.
There are claims that the Siberian pipeline explosion of 1982 was caused by a CIA logic bomb, and a story about the NSA reprogramming a printer microchip in order to compromise Iraqi air defence computers (the so-called ‘Desert Storm virus’ of the early 1990s). However, these claims – at least, in the forms in which they were circulated – suggest a sophistication (or sheer luck) redolent of science fiction rather than historical fact. My friend and sometime co-author Robert Slade analysed the Desert Storm story rather effectively in his Guide to Computer Viruses, and we revisited the topic in Viruses Revealed. There’s rather less solid analysis (let alone evidence) to be found when it comes to the story of cyber-sabotage and the Siberian pipeline explosion.
Even before the turn of the century, however, there was recognition (if you looked in the right place) of state-sponsored or state-initiated cyber-espionage activity.
In the early noughties, reports began to become publicly available on allegedly state-sponsored hacker groups such as NCPH (the subject of extensive analysis in the AVIEN book), while there were less public reports of more ‘official’ groups. Targeting by various groups going into the second decade of the 21st century has included petrochemical companies, military and government agencies, research labs, utilities, and major IT players in the US, putting them all well into the CI bracket. Other targets included diplomats, the offices of the Tibetan government in exile, human rights activists, and government/media/finance organizations in Estonia and Georgia. The aims seem to have gone beyond disruption of services to the theft of sensitive correspondence and documents, proprietary code, and so on.
Stuxnet still came as something of a game-changer, though, as you might gather from the sheer volume of research it generated (including ESET’s, in the extensive Stuxnet Under the Microscope). This is less because of its sophistication, though its payload pushed anti-malware specialists into areas of research with which they were unfamiliar. Rather, it’s because Stuxnet was an attack apparently targeting a specific state rather than the world at large, even though the malware was generalized enough to be found far beyond the borders of Iran.

There have been many subsequent attacks that have been described as targeting specific organizations or sectors – for instance, the ransomware Win32/Filecoder.WannaCryptor.D (often referred to as WannaCry) is sometimes talked about as if it only affected the UK’s National Health Service, which is nonsense – but targeting is rarely as clear-cut as it was in the case of Stuxnet. Yet, ironically enough, much of the speculation about Stuxnet following its discovery focused on the possibility of its re-use in very different engineering contexts.

Nonetheless, Stuxnet made it clear to the media and the general public that there are resources and expertise available and already deployed against utilities considered vital by the states that maintain or at least rely on them. That’s very different from the (mostly) speculative discussions of the 1990s, and even of the years around the turn of the century, when I was directly engaged in such areas.