Tricks that cybercriminals use to hide in your phone

While analysts figure out new methodologies for analyzing malware and users begin to understand how all this works, cybercriminals are seeking new ways to hide in phones and compromise devices.
The convoluted tricks used to increase the effectiveness of their attacks can be grouped into two distinct categories: First, Social Engineering strategies that seek to confuse users; and second, sophisticated technical mechanisms that try to obstruct malware detection and analysis.
This article summarizes some of the common behaviors of malicious Android code over the last few years.
Deceit based on Social Engineering
Use fraudulent accounts in the Play Store to distribute malware
Malware never stops appearing in the official Google store. For cybercriminals, sneaking their malicious applications into the marketplace of genuine apps is a huge victory: they can reach many more potential victims, which all but guarantees a higher number of infections.
What’s more, the fake developer accounts used to spread insecure or malicious apps try to look as similar as possible to the real ones, in order to dupe unsuspecting users. In a recent example of this, researchers discovered a fake WhatsApp update app that used a Unicode character trick to give the impression of being distributed through the official account.
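To see why such impersonation can slip past a quick visual check, here is a tiny Python sketch showing two developer names that render identically on screen but compare as different strings. Reports on that incident describe an invisible Unicode space appended to the developer name; U+00A0 is used below purely as an illustrative character.

```python
# A minimal sketch of the invisible-character trick: two developer names that
# look the same but are different strings, so a store treats them as distinct.
import unicodedata

legit = "WhatsApp Inc."
spoofed = "WhatsApp Inc.\u00a0"   # trailing no-break space, invisible when rendered

print(legit == spoofed)           # False: not the same account name
print(len(legit), len(spoofed))   # 13 14

# Naming the non-ASCII code points exposes the trick immediately.
for ch in spoofed:
    if ord(ch) > 127:
        print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
```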
Take advantage of commemorative dates and scheduled app release dates
A common practice in the world of cybercrime is to make malware look like versions of apps – games, mostly – that have gained sudden popularity, are scheduled for release, or are not available in official stores in certain countries. This happened with Pokémon GO, Prisma and Dubsmash, racking up hundreds of thousands of infections worldwide.
Tapjacking and overlay windows
Tapjacking is a technique that involves capturing a user’s screen taps by displaying two superimposed apps. So the user believes that they are tapping on the app that they are seeing, but they are actually tapping on the underlying app, which remains hidden from view.
Another similar strategy, widely used by Android spyware for credential theft, is the overlay window. In this scam, the malware continually tracks which app the user has in the foreground and, when it matches one of its target apps, displays its own dialog box – one that looks just like the legitimate app’s – requesting the user’s credentials.
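A starting point for spotting apps positioned to pull off either trick is to check which third-party packages can draw over other apps. The rough sketch below is triage rather than detection: it assumes adb is on PATH with a device connected, and dumpsys output varies by Android version, so the text matching is deliberately simple.

```python
# List sideloaded/third-party apps that request or hold
# android.permission.SYSTEM_ALERT_WINDOW, a prerequisite for most overlay
# and tapjacking abuse. Parsing dumpsys output is brittle; treat results
# as a starting point for manual review.
import subprocess

def adb(*args: str) -> str:
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=True).stdout

packages = [line.split(":", 1)[1].strip()
            for line in adb("pm", "list", "packages", "-3").splitlines()
            if line.startswith("package:")]

for pkg in packages:
    dump = adb("dumpsys", "package", pkg)
    if "android.permission.SYSTEM_ALERT_WINDOW" in dump:
        print(f"[!] {pkg} requests/holds SYSTEM_ALERT_WINDOW")
```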
Camouflaged among system apps
By far, the easiest way for malicious code to hide on a device is to pass itself off as a system app and go as unnoticed as possible. Tactics such as deleting the app icon once installation is complete, or reusing the names, package identifiers and icons of system apps and other popular apps, have appeared in threats such as a banking Trojan that passed itself off as Adobe Flash Player in order to steal credentials.
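One trait analysts look for when hunting this kind of camouflage is an app that installs without any launcher entry. As a loose illustration – assuming the Android SDK’s aapt tool is on PATH, and with placeholder paths – the following sketch flags APK files that declare no launchable activity:

```python
# Heuristic sketch: flag APK files that declare no launchable activity, i.e.
# apps that would install without a launcher icon. "aapt dump badging" prints a
# "launchable-activity:" line when one exists.
import subprocess, sys
from pathlib import Path

def has_launcher_activity(apk: Path) -> bool:
    out = subprocess.run(["aapt", "dump", "badging", str(apk)],
                         capture_output=True, text=True).stdout
    return "launchable-activity:" in out

for apk in Path(sys.argv[1] if len(sys.argv) > 1 else ".").glob("*.apk"):
    if not has_launcher_activity(apk):
        print(f"[!] {apk.name}: no launcher activity declared (icon-less install)")
```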
Simulating system and security apps to request administrator permissions
Since Android is designed to limit app permissions, a lot of malicious code needs to request device administrator privileges in order to carry out its functionality, and once that permission has been granted, the malware becomes much more difficult to uninstall.
Camouflaging themselves as security tools or system updates gives cybercriminals certain advantages. In particular, it allows them to hide behind the appearance of a trusted developer, so users do not hesitate to grant the app administrative access.
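As a quick way to spot this on a device you control, here is a rough, hedged sketch that lists the device-admin components reported by `dumpsys device_policy` over adb. The output of dumpsys is not a stable interface and differs between Android versions, so the line matching below is deliberately loose and may need adjusting.

```python
# Rough check: surface lines from "dumpsys device_policy" that look like
# registered device-admin components, so an unexpected "security tool" or
# "system update" admin stands out. Assumes adb is on PATH and a device is
# attached; adjust the matching for your Android version.
import subprocess

dump = subprocess.run(["adb", "shell", "dumpsys", "device_policy"],
                      capture_output=True, text=True, check=True).stdout

for line in dump.splitlines():
    stripped = line.strip()
    # Admin components are typically printed as package/receiver pairs,
    # e.g. "com.example.app/com.example.app.AdminReceiver:".
    if "/" in stripped and stripped.count(".") >= 2 and " " not in stripped:
        print("possible device admin:", stripped.rstrip(":"))
```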
Signing certificates that imitate genuine data
The certificate used to sign an APK can also be used to determine whether an app has been altered. And while most cybercriminals use generic text strings when issuing a certificate, many go to the trouble of forging details that mimic those of the legitimate developer, going one step further in their efforts to confuse users who carry out these checks.
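For readers who want to perform this check themselves, here is a minimal sketch – assuming the Android SDK’s apksigner tool is on PATH, and with a placeholder APK path – that prints the signer certificate details so they can be compared against the data the legitimate developer normally uses:

```python
# Print the signing certificate details of an APK so the DN fields (owner,
# organization, etc.) can be compared with the legitimate developer's.
# apksigner also fails if the signature does not verify, i.e. the APK was altered.
import subprocess, sys

apk = sys.argv[1] if len(sys.argv) > 1 else "suspicious.apk"   # placeholder path
result = subprocess.run(["apksigner", "verify", "--print-certs", apk],
                        capture_output=True, text=True)

print(result.stdout)
if result.returncode != 0:
    print("Verification failed:", result.stderr.strip())
```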
Techniques for complicating analysis
Multiple functionalities in the same code
A trend that has been gaining ground in the mobile world in recent years is to combine what used to be different types of malware into a single executable. One example is LokiBot, a banking Trojan that tries to go unnoticed for as long as possible while it steals information from the device; however, if the user tries to revoke its administrator privileges in order to uninstall it, it activates its ransomware functionality and encrypts the device’s files.
Hidden apps
The use of droppers and downloaders – that is, embedding malicious code inside another APK or downloading it from the internet – is not limited to desktop malware; it is also used extensively by writers of malicious mobile code.
As Google Bouncer (as the Play Store’s vetting system was then known) made it harder for cybercriminals to upload malware to the official store, attackers adopted this kind of behavior to try to bypass its controls … and it worked! Well, for a while at least!
Since then, these two approaches have earned a fixed place in the portfolio of most-used malicious techniques.
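From the analyst’s side, one simple heuristic for spotting dropper-style packaging is to look inside an APK – which is just a ZIP archive – for embedded payloads. The sketch below is only an illustration: the file name and the scanned directories are placeholders, and plenty of legitimate apps bundle secondary archives, so every hit needs manual review.

```python
# Look inside an APK for entries in common hiding spots (assets/, res/raw/)
# that begin with the ZIP magic ("PK\x03\x04", possibly a second APK/JAR) or
# the DEX magic (b"dex\n"). Hits are leads, not verdicts.
import sys, zipfile

SUSPECT_DIRS = ("assets/", "res/raw/")
MAGICS = {b"PK\x03\x04": "embedded ZIP/APK/JAR", b"dex\n": "embedded DEX"}

apk_path = sys.argv[1] if len(sys.argv) > 1 else "sample.apk"   # placeholder
with zipfile.ZipFile(apk_path) as apk:
    for entry in apk.namelist():
        if not entry.startswith(SUSPECT_DIRS):
            continue
        with apk.open(entry) as fh:
            head = fh.read(4)
        for magic, label in MAGICS.items():
            if head.startswith(magic):
                print(f"[!] {entry}: {label}")
```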
Multiple programming languages and volatile code
New cross-platform development frameworks and programming languages are emerging all the time. What better way to mislead a malware analyst than to mix languages and development environments – for example, building apps with Xamarin or using Lua code to execute malicious commands? This strategy changes the final architecture of the executable and adds layers of complexity.
Some attackers add to this combination by using dynamically loaded scripts, or portions of code that are downloaded from remote servers and deleted after use. Once the cybercriminal takes the server down, it is no longer possible to know exactly what actions that code performed on the device.
Samples with these characteristics began to appear towards the end of 2014, when researchers published analyses of some particularly complex malware.
Synergistic malware
An alternative way of complicating the analysis of a sample is to split the malicious functionality across a set of apps that are capable of interacting with each other. Each app then holds only a subset of the permissions and malicious functionality, and they cooperate to fulfill a larger purpose. What’s more, to understand the malware’s true function, an analyst must have access to all of the individual apps, as if they were pieces of a puzzle.
And while this is not a commonly used strategy, there have already been samples that exhibit this type of behavior, as a publication on Virus Bulletin recently demonstrated.
Covert channels and new communication mechanisms
To communicate with a C&C server or with other malicious apps, malware needs to transfer information. This can be done via traditional open channels or via covert channels (custom communication protocols, screen brightness, wake locks, CPU utilization, free memory, sound or vibration levels, and accelerometer readings, among others).
Furthermore, in recent months we have seen how cybercriminals are using social networks to transfer C&C messages, such as Twitoor, the botnet that uses Twitter accounts to send commands.
Other anti-analysis techniques
The use of packing, anti-emulation, anti-debugging, encryption and obfuscation, among other evasion techniques, is very common in Android malware. To get around these kinds of protections, analysts can hook functions, for example with tools such as Frida.
It is also possible to use analysis environments that try to dodge these controls by default, such as MobSF (which can counter some anti-emulation checks), AppMon, or Inspeckage, where, for example, plaintext strings can be seen before and after being encrypted, together with the keys used.
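As a concrete, hedged illustration of the hooking approach, here is a minimal sketch using Frida’s Python bindings to intercept javax.crypto.Cipher.doFinal(byte[]) in a running app and print the buffer before it is encrypted. The package name is a placeholder, the device is assumed to be USB-connected with frida-server running, and real apps may use other doFinal() overloads or different crypto APIs entirely.

```python
# Hook Cipher.doFinal([B) with Frida and print the input buffer (best effort).
import sys
import frida  # pip install frida

JS = """
Java.perform(function () {
    var Cipher = Java.use('javax.crypto.Cipher');
    Cipher.doFinal.overload('[B').implementation = function (buf) {
        send('doFinal() called with ' + buf.length + ' input bytes');
        try {
            var Str = Java.use('java.lang.String');
            send('plaintext (best effort): ' + Str.$new(buf).toString());
        } catch (e) {}
        return this.doFinal(buf);
    };
});
"""

def on_message(message, data):
    print(message.get("payload", message))

device = frida.get_usb_device()
session = device.attach("com.example.targetapp")   # placeholder package/process name
script = session.create_script(JS)
script.on("message", on_message)
script.load()
sys.stdin.read()   # keep the hooks alive until interrupted
```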
To prevent infections, don’t forget to check out these potentially malicious behaviors and find out how to check if your phone has been compromised.


How diversity in cybersecurity contributes to your company


If you’re a security practitioner or long-time reader of this blog, you may be all-too-familiar with the dangers of practicing “checkbox security”. By blindly following rules and directives without appreciating why they’re important, you may make short-term gains while ultimately dooming your long-term goals. That being the case, you may intuitively understand why “checkbox diversity” measures are doomed to fail.
Fairness vs. learning
Much as the purpose of securing a network is not simply to play by arbitrary rules, including a wider variety of people in security positions is not just about trying to hire an assortment of people that represents the population at large. In other words, security and diversity are not just about being compliant and fair. They are also about helping businesses get the widest possible range of perspectives, to help them take considered steps instead of leaping blindly without adequate information.
Taking the time to identify cost-effective measures that will protect your digital assets can help you identify potential problems earlier on, when they can be fixed at a lower cost in terms of both money and public goodwill. Likewise, ensuring that you’re finding – and retaining – people with a wider variety of life and work experiences will help ensure that you have the opportunity to learn from people with a broad range of perspectives from the outset, rather than after unforeseen missteps cause serious public relations problems.
Diversity in security perspectives
As my esteemed colleague Stephen Cobb discussed in a series of posts late last year, cyber-related risks are now firmly embedded in public consciousness, but the specifics of the ways in which risk is perceived may differ depending on a number of factors. Relative levels of perceived risk for security-related problems were assessed differently depending on a respondent’s age, income, gender, ethnicity and cultural alignment: there was no one source or type of risk that all groups identified as the most troubling.
In order to prepare for the widest variety of vulnerabilities, we need people who are attuned to all types of risks to participate in all levels of the discussion about risk assessment and mitigation.
Not just a pipeline problem
While the dearth of women and people of color in the pipeline for tech is a well-documented phenomenon that is beginning to change for the better, both recruitment and retention rates are very poor for people within these demographics. At every point, from middle school to mid-career, the pipeline has sprung a series of leaks and is periodically catching fire.
The good news is that the ways to improve this situation are not only beneficial for people in underrepresented demographics. By seeking new sources of qualified applicants and increasing psychological safety for employees, you can potentially decrease the time it takes to fill positions, and improve both retention and effectiveness of the people already in your employ. Improving your company culture is simply good business-sense.
Moving towards the future
To ensure an increasing supply of high-quality applicants and keep the pipeline flowing, we need to get kids excited about the idea of pursuing cybersecurity careers, we must identify people who could use mentorship and training to excel in this industry, and it’s imperative to include a wider variety of people in our recruitment practices. Here are a few ways that you can help:
1- Volunteer
There are a lot of national tech education groups such as TEALS, Girls Who Code, Women’s Society of Cyberjutsu, and CoderDojo, as well as local STEM events, hackathons and boot camps that are in need of expert support. Each year many of ESET’s own researchers join a team of mentors who help teach kids during Securing Our eCity’s yearly Cyber Boot Camp in the San Diego area – this is a fun event that can always use more help from the community.
2- Scholarships
The cost of formal education is growing at a rapid pace, which may keep interested people from trying to get the necessary training and credentials that are helpful in getting a job in this industry. There are a lot of scholarships out there that have been set up to encourage people to pursue an education in security. The Women in Cyber Security (WiCYS) website maintains lists of resources for students seeking scholarships and internships.
ESET’s own Women in Cybersecurity scholarship is now open for submissions by students nationwide. Applications for this are being accepted until April 1, 2018.
3- Reaching underrepresented groups
There are a growing number of groups that are focused on the inclusion of a wider variety of people in cybersecurity and technology careers. National groups like Code2040 and Black Girls Code are helping to cultivate the next generation of developers. You may also be able to find local groups in your area, especially through sites like MeetUp.
4- Improving psychological safety
Even if you’ve not yet started efforts to improve diversity and inclusion within your organization, you can start looking at your company’s culture and see where you can improve conditions for psychological safety. Your employees are the eyes and ears of your organization; if they don’t feel comfortable speaking up about what they’re seeing and hearing, or discussing creative or unusual ideas, you are not getting their full value. This is especially true of people who may feel they are outside the majority of your company’s demographic.
5- Help your employees find support
Do you help pair your employees with peers, mentors and (especially) sponsorship within your organization? Ensuring that people have someone to call on for support and advocacy can have dramatic effects on people’s job satisfaction. As competition for cybersecurity talent can be especially stiff, investing in your existing employees is especially important.
The success of a company relies on that of its employees. By setting individual employees up for success, you’re also setting your business up for success. Populating your company with people who have different backgrounds and life experiences gives them a chance to learn from each other, and to be more effective in their jobs and careers.


OceanLotus uses old tricks to deliver a new backdoor

ESET researchers have dissected some of the latest additions to the malicious toolkit of the Advanced Persistent Threat (APT) group known as OceanLotus, also known as APT32 and APT-C-00.
A prolific purveyor of malware, OceanLotus targets high-profile corporate and government targets in Southeast Asia, particularly in Vietnam, the Philippines, Laos and Cambodia. The group, believed by many to be Vietnamese in origin, appears determined and well resourced, and is known for combining its custom-built creations with techniques that have long proven successful.
OceanLotus is certainly not resting on its laurels, and continues to pursue its goals of cyberespionage, reconnaissance and intellectual property theft. One of the group’s latest backdoors is a full-fledged malicious tool that gives its operators remote access to a compromised machine. The backdoor comes with a suite of functionality, including a number of tools for file, registry and process manipulation, as well as for loading additional components.
To smuggle the backdoor onto a targeted machine, the group uses a two-stage attack in which a dropper first gains a foothold on the system in order to set the stage for the backdoor itself. This process involves a few tricks commonly associated with targeted operations of this kind.
The attack usually begins with an attempt – most likely via a spearphishing email – to lure the intended victim into executing the malicious dropper attached to the message. To increase the odds that the unsuspecting victim will click on it, the malicious executable masquerades as a document or spreadsheet by displaying a fake icon.
When the victim clicks on the attachment, the dropper opens a password-protected document that acts as a decoy, diverting the victim’s attention while the dropper goes about its dark business. No software exploit is required.
The attackers use a number of decoy documents. To bolster its air of authenticity, each file has a rather carefully crafted – and usually English – name. ESET detects these files as Win32/TrojanDropper.Agent.RUI.
In addition, OceanLotus is also known to use watering hole attacks, which involve compromising a website the victim is likely to visit. In this scenario, the “prey” is tricked into downloading and executing a fake installer, or a fake update, for popular software from the compromised website. Whichever method of compromise is used, the same backdoor is ultimately deployed.
The watering hole technique was probably used to distribute the dropper named RobotFontUpdate.exe, which poses as an ordinary font update. This dropper is described in more detail below.
Under the hood
The components of the dropper package are executed in several stages, each involving a heavy dose of code obfuscation intended to shield the malware from detection. To further confound researchers and anti-malware software, a helping of garbage code is thrown in as well.
If executed with administrator privileges, the dropper creates a Windows service that establishes persistence on the system (so the malware will survive a reboot). Otherwise, it achieves the same goal by tampering with the operating system’s registry.
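Purely as a defender-side illustration of the registry route mentioned above, here is a minimal, Windows-only Python sketch that enumerates the classic autorun (“Run”) keys. It simply lists whatever is registered there for review; it makes no assumptions about the value names any particular dropper might use.

```python
# Enumerate the HKLM/HKCU "Run" keys commonly abused for persistence when a
# service cannot be installed, so unfamiliar entries can be reviewed.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    hive_name = "HKLM" if hive == winreg.HKEY_LOCAL_MACHINE else "HKCU"
    try:
        with winreg.OpenKey(hive, path) as key:
            print(f"[{hive_name}\\{path}]")
            i = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, i)
                    print(f"  {name} = {value}")
                    i += 1
                except OSError:
                    break
    except OSError:
        print(f"[{hive_name}\\{path}] not accessible")
```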
In addition, the package drops an application whose sole purpose is to delete the decoy document once it has served its purpose.
Importantly, two more files are dropped and come into play during this stage – an executable carrying the digital signature of a major, legitimate software developer, and a malicious dynamic-link library (DLL) named after one used by the legitimate executable.
Both files take part in a tried-and-tested trick known as “DLL side-loading”, which involves hijacking a legitimate application’s library-loading process by placing a malicious DLL in the same folder as the signed executable. This is a way of staying under the radar, since a trusted application with a valid signature is less likely to arouse suspicion.
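To make the mechanics easier to picture, here is a small, hedged sketch of the reviewer-side check: it lists the DLLs sitting in the same folder as a given (ideally signed) executable – the very location the default Windows search order favors – and flags any that shadow a System32 DLL name. It cannot tell a planted library from a benign vendor one, and the example path is a placeholder.

```python
# List DLLs co-located with an executable for manual review, and flag names
# that shadow a DLL shipped in System32 (one common flavor of side-loading).
import sys
from pathlib import Path

exe = Path(sys.argv[1] if len(sys.argv) > 1 else r"C:\Example\App\signed_app.exe")
system32 = {p.name.lower() for p in Path(r"C:\Windows\System32").glob("*.dll")}

for dll in sorted(exe.parent.glob("*.dll")):
    note = "shadows a System32 name" if dll.name.lower() in system32 else ""
    print(f"{dll.name:32} {dll.stat().st_size:>10} bytes  {note}")
```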
In campaigns using these new OceanLotus tools, we have seen the deployment of, among others, the genuinely signed executables RasTlsc.exe from Symantec and mcoemcpy.exe from McAfee. When executed, these programs call rastls.dll (detected by ESET as Win32/Salgorea.BD) and McUtil.dll (detected as Win32/Korplug.MK), respectively.
The backdoor opens
Once decrypted, the backdoor fingerprints the system. It sends back various data, such as the computer and user names and the operating system version, before awaiting commands to carry out its main mission.
A number of domain names and IP addresses are used for the command-and-control (C&C) server infrastructure. All communications with the C&C servers are encrypted. They can, however, easily be decrypted, as the decryption key is stored along with the data.
Our deep dive (see the link below) into the latest OceanLotus campaigns shows that the group is not letting up, blending legitimate code and publicly available tools with its own harmful creations. Clearly, the group goes to considerable lengths to evade detection of its malware and, ultimately, to cover its tracks from researchers.
A detailed analysis can be found in the white paper: OceanLotus: Old techniques, new backdoor.


In the US, one in five healthcare employees willing to sell patient data, study finds

Almost one in five (18%) employees in the healthcare industry in the United States and Canada said that they would be willing to give access to confidential medical data about patients to an unauthorized outsider for financial gain, a survey for Accenture has revealed.
They would expect no more than $500 to $1,000 for their login credentials or for deliberately installing tracking software or downloading the data to a portable drive.
The remaining 82% said that no amount of money would make them sell the records, according to the survey, called Losing the Cyber Culture War in Healthcare: Accenture 2018 Healthcare Workforce Survey on Cybersecurity.
The problem was particularly acute among provider organizations, as opposed to payer organizations (21% vs. 12%). Also, and perhaps counterintuitively, staff with more frequent cybersecurity training were more inclined to such practices.
In addition, this way of compromising patient data is not a purely hypothetical phenomenon. Roughly one in four (24%) respondents said that they were actually aware of a co-worker who had made a profit by providing a third party with access to such information.
Accenture noted that such conduct contributes to the fact that healthcare organizations in seven countries spent an estimated $12.5 million each, on average, dealing with impacts of cybercrime in 2017. The figure comes from the firm’s report called 2017 Cost of Cyber Crime Study.
Meanwhile, there was an almost universal (99%) sense of responsibility for data security among the respondents. Nearly all (97%) also claimed that they understand the data security and privacy standards of their organization. And yet there is some disconnect, as one in five (21%) of the healthcare workforce admitted to writing down their login credentials near their computers.
A total of 912 employees of provider and payer organizations in the US and Canada were polled for the survey, which was conducted online in November. All of the respondents have access to electronic health data such as personally identifiable information (PII), payment card information (PCI), and protected health information (PHI).
In another study by Accenture in 2017, 88% of patients in the US said that they trust their physicians or other healthcare providers to ensure security for their electronic medical data. A quarter said that they had experienced a breach of such data.
Author Tomáš Foltýn, ESET


Trends 2018: The ransomware revolution

This is actually where I came in, nearly 30 years ago. The first malware outbreak for which I provided consultancy was Dr. Popp’s extraordinary AIDS Trojan, which rendered a victim’s data inaccessible until a ‘software lease renewal’ payment was made. And for a long time afterwards, there was not much else that could be called ransomware, unless you count threats of persistent DDoS (Distributed Denial of Service) attacks made against organizations.
All-too-plausible deniability
While Denial of Service attacks amplified by the use of networks of bot-compromised PCs were becoming a notable problem by the turn of the century, DDoS extortion threats have accelerated in parallel (if less dramatically) with the rise in ransomware in the past few years. However, statistics may be obscured by a reluctance on the part of some victim organizations to speak out, and a concurrent rise in DDoS attacks with a political dimension rather than a simple profit motive. There are other complex interactions between malware types, though: there have been instances of ransomware variants that incorporated a DDoS bot, while more recently the charmers behind the Mirai botnet chose to DDoS the WannaCryptor (a.k.a. WannaCry) “kill switch” in order to allow dormant copies of the malware to reactivate.
The worm turns
Of course, there’s a great deal more to the malware ESET calls Win32/Filecoder.WannaCryptor than the Mirai factor. The combination of ransomware and worm accelerated the spread of the malware, though not as dramatically in terms of sheer volume as some of the worm attacks we saw in the first decade of the millennium, partly because its spread was reliant on a vulnerability that was already widely patched. However, its financial impact on major organizations caught the attention of the media worldwide.
Pay up! and play our game*
One of the quirks of WannaCryptor was that it was never very likely that someone who paid the ransom would get all their data decrypted. That’s not unique, of course: there are all too many examples of ransomware where the criminals were unable to recover some or any data because of incompetent coding, or never intended to enable recovery. Ranscam and Hitler, for example, simply deleted files: no encryption, and no likely way the criminal can help recover them. Fortunately, these don’t seem to have been particularly widespread. Perhaps the most notorious example, though, is the Petya semi-clone ESET detects as DiskCoder.C, which does encrypt data. Given how competently the malware is executed, the absence of a recovery mechanism doesn’t seem accidental. Rather, a case of ‘take the money and run’.
Wiper hyper
While the DiskCoder.C malware sometimes referred to as NotPetya clearly doesn’t eschew making some profit by passing itself off as ransomware, other ‘wipers’ evidently have a different agenda, such as the (fairly) recently revived Shamoon malware. Malware with wiper functionality aimed at Ukraine includes KillDisk (associated with BlackEnergy) and, more recently, one of the payloads deployed by Industroyer.
What can you learn from these trends?
Holding your data to ransom is an easy way for an attacker to make a dishonest profit, and destroying data for other reasons such as a political agenda seems to be on the rise. Rather than speculate about all the possible variations on the theme of data mangling, let’s look at some measures that reduce the risk across the board.
1.     We understand that people choose to pay in the hope of getting their data back, even though they know that this encourages the criminals. Before paying up, though, check with your security software vendor (a) in case recovery is possible without paying the ransom, and (b) in case it’s known that paying the ransom won’t or can’t result in recovery for that particular ransomware variant.
2.     Protecting your data proactively is safer than relying on the competence and good faith of the criminal. Back up everything that matters to you, often, by keeping at least some backups offline – to media that aren’t routinely exposed to corruption by ransomware and other malware – in a physically secure location (preferably more than one location). And, obviously, backups defend against risks to data apart from ransomware and other malware, so should already be part of a disaster recovery plan.
3.     Many people and organizations nowadays don’t think of backup in terms of physical media like optical disks and flash storage, so much as in terms of some form of cloud storage. Which are very likely to be offsite, of course. Remember, however, where such storage is ‘always on’, its contents may be vulnerable to compromise by ransomware in the same way that local and other network-connected storage is. It’s important that offsite storage:
1.     Is not routinely and permanently online
2.     Protects backed-up data from automatic and silent modification or overwriting by malware when the remote facility is online
3.     Protects earlier generations of backed-up data from compromise so that even if disaster strikes the very latest backups, you can at least retrieve some data, including earlier versions of current data.
4.     Protects the customer by spelling out the provider’s legal/contractual responsibilities, what happens if the provider goes out of business, and so on.
4.     Don’t underestimate the usefulness of backup media that aren’t rewriteable/reusable. If you can’t modify what’s been written there, then neither can ransomware. Check every so often that your backup/recovery operation is (still) working properly, that your media (read-only, write-disabled, or write-enabled) are still readable, and that write-enabled media aren’t left routinely writeable – a minimal verification sketch follows this list. And back up your backups.
5.     I’m certainly not going to say that you should rely on backups instead of using security software, but bear in mind that removing active ransomware with security software that detects ransomware is by no means the same as recovering data: removing the ransomware and then deciding to pay up means that the data may no longer be recoverable even with the cooperation of the criminals, because the decryption mechanism is part of the malware. On the other hand, you certainly don’t want to restore your data to a system on which the ransomware is still active. Fortunately, safe backups can save your data if/when something malicious slips past your security software.
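Purely by way of illustration of that “check your backups still work” advice, here is a minimal Python sketch – paths and the manifest file name are placeholders – that records a SHA-256 manifest when a backup is made and later re-checks the backup copy against it. It complements, rather than replaces, periodically test-restoring real files.

```python
# Verify a backup copy against a previously saved SHA-256 manifest: report
# files that are missing or whose contents have changed since backup time.
import hashlib, json, sys
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict:
    return {str(p.relative_to(root)): sha256(p)
            for p in root.rglob("*") if p.is_file()}

if __name__ == "__main__":
    backup_root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("/mnt/backup")      # placeholder
    manifest_file = Path(sys.argv[2]) if len(sys.argv) > 2 else Path("manifest.json")  # placeholder
    expected = json.loads(manifest_file.read_text())
    actual = build_manifest(backup_root)
    missing = [f for f in expected if f not in actual]
    changed = [f for f in expected if f in actual and actual[f] != expected[f]]
    print(f"{len(expected)} files expected, {len(missing)} missing, {len(changed)} altered")
```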
And the future?
“Don’t make predictions about computing that can be checked in your lifetime” – wise words from Daniel Delbert McCracken. Still, we can risk some extrapolation from the recent evolution of ransomware in order to offer some cautious thoughts about its future evolution.
The AIDS Trojan was pretty specific in its targeting. Even then, not many people were interested in the minutiae of AIDS research, distribution of the Trojan by floppy disk was relatively expensive, and the mechanism for paying the ransom didn’t really work to the attacker’s advantage. (Of course, in 1989 Dr. Popp didn’t have the advantage of access to cryptocurrency or the Dark Web, or easy ways to use Western Union (the 419 scammer’s favorite) or to monetize nude photographs.)
The attack itself was ‘classic’ ransomware, in that it deprived the victim of his or her data. Later, DoS and DDoS attacks deprived companies of the ability to benefit from the services they provided: while customers were deprived of those services, it was the provider who was expected to pay. However, as the non-corporate, individual use of the Internet has exploded, the attack surface and the range of potential targets have also widened. Which probably has an influence on the promiscuous distribution of most modern ransomware.
While the media and security product marketers tend to get excited when a highly visible or high-value victim is disclosed – healthcare sites, academic institutions, telephony service providers, ISPs – it’s inappropriate to assume that these institutions are always being specifically targeted. Since we don’t always know what vector of compromise was used by a specific campaign, we can’t say ‘It never happens!’. But it looks as if ransomware gangs are doing quite nicely out of payments made by large institutions compromised via lateral attacks from employees who have been successfully attacked when using their work accounts. The UK’s NHS Digital, for example, denies that healthcare is being specifically targeted – a view I happen to share, in general – while acknowledging that healthcare sites have ‘often fallen victim’.
Could this change?
At the moment, there still seem to be organizations that are prepared to spend relatively large sums in ransom payment. In some cases, this is a reasonable ‘backup strategy’, acknowledging that it’s sensible to keep a (ransom)war(e) chest topped up in case technical defences fail. In other cases, companies may be hoping that paying up will be more cost-effective than building up complex additional defences that cannot always be fully effective. That in itself may attract targeting of companies perceived to be a soft touch or especially able to pay (financial organizations, casinos). The increased volume of wiper attacks and ransomware attacks where payment does not result in recovery may mitigate this unhealthy trend, but companies that are still perceived as unlikely to harden their defences to the best of their abilities might then be more specifically targeted. It is, after all, likely that a successful attack on a large organization will pay better and more promptly than widespread attacks on random computer users and email addresses.
Data versus Devices
Looking at attacks on smartphones and other mobile devices, these tend to be less focused on data and more on denying the use of the device and the services it facilitates. That’s bad enough where the alternative to paying the ransom may be to lose settings and other data, especially as more people use mobile devices in preference to personal computers and even laptops, so that a wider range of data might be threatened. As the Internet of Unnecessarily Networked Things becomes less avoidable, the attack surface increases, with networked devices and sensors embedded into unexpected items and contexts: from routers to fridges to smart meters, from TVs to toys, from power stations to petrol stations and pacemakers. As everything gets ‘smarter’, the number of services that might be disrupted by malware (whether or not a ransom is demanded) becomes greater. In previous years we’ve discussed the possibilities of what my colleague Stephen Cobb calls the Ransomware of Things. There are fewer in-the-wild examples to date of such threats than you might expect, given the attention they attract. That could easily change, though, especially if more conventional ransomware becomes less effective as a means of making a quick buck. Though I’m not sure that’s going to happen for a while…
On the other hand, there’s not much indication that Internet of Things security is keeping pace with IoT growth. We are already seeing plenty of hacker interest in the monetization of IoT insecurity. It’s not as simple as the media sometimes assume to write and distribute malware that will affect a wide range of IoT devices and beyond, so there’s no cause for panic, but we shouldn’t underestimate the digital underworld’s tenacity and ability to come up with surprising twists.
* Apologies to the shade of Henry Newbolt who wrote Vitai Lampada, from which I’ve misquoted: https://en.wikipedia.org/wiki/Henry_Newbolt