19.12.17

Adventures in cybersecurity research: risk, cultural theory, and the white male effect


The digital technologies that enable much of what we think of as modern life have introduced new risks into the world and amplified some old ones. Attitudes towards risks arising from our use of both digital and non-digital technologies vary considerably, creating challenges for people who seek to manage risk. This article tells the story of research that explores such challenges, particularly with respect to digital technology risks such as the theft of valuable data, unauthorized exposure of sensitive personal information, and unwanted monitoring of private communications; in other words, threats that cybersecurity professionals have been working hard to mitigate.
The story turned out to be longer than expected, so it is delivered in two parts, but here is the TL;DR version of the whole story:
· The security of digital systems (cybersecurity) is undermined by vulnerabilities in products and systems.
· Failure to heed experts is a major source of vulnerability.
· Failure to heed experts is a known problem in technology.
· The cultural theory of risk perception helps explain this problem.
· Cultural theory exposes the tendency of some white males to underestimate risk (the White Male Effect, or WME).
· Researchers have assessed the public’s perceptions of a range of technology risks (digital and non-digital).
· Their findings provide the first-ever assessment of the WME in the digital or cyber realm.
· Additional findings indicate that cyber-related risks are now firmly embedded in public consciousness.
· Practical benefits from the research include pointers to improved risk communication strategies and a novel take on the need for greater diversity in technology leadership roles.
Of course, I am hopeful a lot of people will find time to read all of both parts of the article, but if you only have time to read a few sections then the headings should guide you to items of interest. I am also hopeful that my use of the word cyber will not put you off – I know some people don’t like it, but I find it to be a useful stand-in for digital technologies and information systems; for example, the term cyber risk is now used by organizations such as the Institute of Risk Management to mean “any risk of financial loss, disruption, or damage to the reputation of an organization from some sort of failure of its information technology systems”. (I think it is reasonable to use cyber risk in reference to individuals as well, for example, the possibility that my online banking credentials are hijacked is a cyber risk to me.)
The sources of cyber risk
Like most research projects, this one began with questions. Why do some organizations seem to “get” security while others apparently do not? Why is it that, several decades into the digital revolution, some companies still ship digital products with serious “holes” in them, vulnerabilities that leak sensitive data or act as conduits to unauthorized system access? Why do some people engage in risky behavior – like opening “phishy” email attachments – while others do not?
These questions can be particularly vexing for people who have been working in cybersecurity for a long time, people like me and my fellow ESET security researcher Lysa Myers, who worked on this project with me. Again and again we have seen security breaches occur because people did not heed advice that we and other security experts have been disseminating for years: advice about secure system design, secure system operation, and appropriate security strategy.
When Lysa and I presented our research in this area at the 2017 (ISC)2 Security Congress, we used three sources of vulnerability in information systems as examples:
1. People and companies that sell products with holes in them (e.g. the 1.4 million Jeeps and other FCA vehicles found to be seriously hackable and hard to patch, or the hundreds of thousands of webcams and DVRs with hardcoded passwords that were harnessed for the Mirai DDoS attack on DNS provider Dyn – see the sketch after this list)
2. People who don’t practice proper cyber hygiene (e.g. using weak passwords, overriding security warnings, clicking on dodgy email attachments)
3. Organizations that don’t do security properly (e.g. obvious errors at Target, Equifax, JPMorgan Chase, Trump Hotel Collection)
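To make concrete why hardcoded or factory-default passwords are such a potent source of vulnerability, here is a minimal Python sketch of the defender’s side of the problem: auditing your own devices for logins that still accept shipped defaults. This is an illustration only – the credential pairs are hypothetical rather than Mirai’s actual dictionary, and check_login stands in for whatever login-attempt mechanism your environment provides.

# Hypothetical sketch: flag devices on your own network that still
# accept factory-default credentials, the weakness Mirai exploited
# at scale. The credential pairs below are illustrative only.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
]

def uses_default_credentials(check_login, host):
    """Return the first default username/password pair that the device
    at `host` accepts, or None. `check_login` stands in for whatever
    login-attempt mechanism (telnet, HTTP, etc.) your tooling provides."""
    for username, password in DEFAULT_CREDENTIALS:
        if check_login(host, username, password):
            return (username, password)
    return None

def audit(check_login, hosts):
    """Report every host that still has a factory-default login enabled."""
    for host in hosts:
        pair = uses_default_credentials(check_login, host)
        if pair:
            print(f"{host}: default login still enabled: {pair[0]}/{pair[1]}")

The point of the sketch is how little an attacker needs: Mirai’s scanner essentially ran the loop above in reverse, trying a short list of shipped defaults against every device it could reach.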
Could it simply be that some percentage of people don’t accept that digital technology is as risky as experts say? Fortunately, the phenomenon of “failure to heed experts” has already been researched quite extensively, often in the context of technology risks. Some of that research was used in the project described here. (A good place to start reading about this research is CulturalCognition.net).
Technology risks in general
Risk is a surprisingly modern concept. For example, risk is not a word that Shakespeare would have used (it does not appear in any of his writings). The notion of risk seems to have gained prominence only with the widespread use of technology. Advances in maritime technology, for instance, enabled transoceanic commerce, which created risks for merchants shipping goods, which led to the development of financial instruments based on risk calculations, namely insurance policies (for more on the history of risk and risk management see: The New Religion of Risk Management by Peter L. Bernstein, author of Against the Gods: The Remarkable Story of Risk).
Over time, risks arising from complex and widespread technologies and behaviors became matters of public concern and debate. For example, the widespread use of fossil fuels created risks to human health from air pollution. The development of “cleaner” nuclear energy caused heated debate about the hazards of nuclear waste disposal. In Figure 1 below you can see how 1,500 American adults rated the risks from seven technology-related hazards in a landmark 1994 survey, broken down into four demographic groups: