16.2.17

Gmail starts blocking JavaScript attachments: Alternative infection vectors to be expected?


As of February 13th, 2017, Gmail has begun rolling out its new restrictive policy on .js file attachments, extending the list of file types blocked for security reasons. Once the rollout is complete, Gmail users won’t be able to send or receive mail containing .js attachments, even in compressed or archived form.
Seeing that JavaScript attachments have proven to be one of the most popular ways for cybercriminals to spread their malicious work worldwide, this is good news. In the past six months alone, ESET LiveGrid® has recorded tens of millions of detections of JS/Danger.ScriptAttachment, ESET’s detection name for malicious .js scripts spread via email attachments.
This detection covers malicious scripts designed to infect the device with whatever malware the attackers choose. Apart from various ad-clickers and banking malware, the most prevalent payload among recent detections has been the much-feared crypto-ransomware, including the notorious Locky, TorrentLocker and Crysis families.
Although the change is likely to positively affect the safety of online communication worldwide, cybercriminals are known to be inventive when it comes to finding loopholes in security measures. With .js attachments blocked by one of the dominant webmail providers, attackers will most likely start looking for alternative ways into devices of their potential victims.
Google advises Gmail users to share legitimate .js files via its storage solutions instead; cybercriminals may well start abusing those services more frequently too, luring users into clicking on links rather than opening attachments (as they did, for instance, when spreading another infamous ransomware, Petya).
So while this is essentially good news, the update should also prompt users to consider these potential alternatives and pay extra attention to emails linking to third-party storage services.

15.2.17

Proof-of-concept ransomware to poison the water supply


Ransomware is a big problem.
Home users and organisations around the world have found themselves at the sharp end of high profile attacks that have encrypted their files, and demanded substantial amounts of money for their data’s safe recovery.
The extortionists are earning themselves a fortune, as computer users and businesses that failed to take adequate preventative steps before an attack feel compelled to pay up.
This is the present we’re living in. But what might the future of ransomware look like?
Researchers at Georgia Institute of Technology painted one picture this week, presenting their exploration of how ransomware could potentially attack industrial control systems (ICS), and demonstrating how new malware threats might target core infrastructure, holding entire cities hostage.
In their paper, “Out of Control: Ransomware for Industrial Control Systems”, the researchers describe how they developed their own proof-of-concept ransomware that was able to hijack control of a simulated water treatment plant, and poison the water supply.
“We were able to simulate a hacker who had gained access to this part of the system and is holding it hostage by threatening to dump large amounts of chlorine into the water unless the operator pays a ransom. In the right amount, chlorine disinfects the water and makes it safe to drink. But too much chlorine can create a bad reaction that would make the water unsafe.”
The threat of such an attack (which would, of course, put the public’s safety at risk) could justify demanding a much higher ransom than those typically requested from businesses and home users.
Even if there is little prospect of danger to human life, the risk of an industrial ransomware attack causing downtime and putting equipment health and worker safety at risk could make such systems an attractive target for some criminals.
History suggests that ICS networks, like schools and hospitals, have struggled to keep pace with modern security practices to combat digital attacks. In the case of educational and medical facilities that has often been because of a lack of funding, but with industrial control system networks it is more likely due to the relative rarity of real-world attacks and the perception that there are few threats out there.
But if criminals perceive that ICS systems could be a big cash cow then that could change very quickly, and key services may wake up to the fact that it may not be only state-sponsored attackers from another country who are interested in hacking into their networks.
As ESET security specialist Mark James explains, the right response is not to panic but to take sensible steps to reduce threat exposure by adopting a layered defence:
“Usually targeted malware is configured and aimed at a particular industry or sector. With so much of our industry digitally operated or maintained this could prove in its worst case scenario very bad indeed. But the same rules apply to any area that may be the target of ransomware, it has to be installed and it has to be able to gain complete control. With the right levels of security we can limit its attack vector and have mechanical failsafes to override anything software can instigate.”
“All environments in our digital world are susceptible to attack and need to be protected. Making sure operating systems, applications and security programs are kept up-to-date is one of the first lines of defence and one that often is overlooked or just not possible on bespoke systems designed to do a single task or job.”

Ransomware attacks against water treatment systems aren’t happening yet. It’s important to note that what the researchers achieved was just a simulation, not a real world exercise. But by painting a worrying picture of a potential future, they may have helped raise awareness amongst those who protect critical infrastructure to take the threat seriously.

UK government to roll out cybersecurity clubs for teens to address skills shortage


Thousands of teenagers across the UK are set to be given intensive training at cybersecurity clubs in a bid to minimize the skills shortage predicted for the near future.
The Cyber Schools Programme aims to offer support and encouragement to youngsters aged between 14 and 18 who demonstrate an early talent for the skills needed to help safeguard businesses against online threats in an increasingly digital economy.
The programme, which will be led by the Department for Culture, Media and Sport (DCMS), will benefit from up to £20 million of funding, with attendees to be guided by numerous expert instructors.
The aim is for 14-year-olds to get involved in the scheme, commit to four hours a week and subsequently stay on the programme for four years, during which time they will complete various modules.
Older students would be able to join later in the course, providing they meet the right criteria.
The overall goal of the scheme is to train at least 5,700 teenagers by 2021.
“This forward-thinking programme will see thousands of the best and brightest young minds given the opportunity to learn cutting-edge cyber security skills alongside their secondary school studies,” said Matt Hancock, the minister of state responsible for digital and culture.
“We are determined to prepare Britain for the challenges it faces now and in the future and these extracurricular clubs will help identify and inspire future talent.”
The pilot for the scheme, which forms part of the government’s National Cyber Security Programme, is set to begin in September 2017. Its success will be reviewed following the first year.
The performance of the scheme is likely to be of interest to various other digital economies around the world, particularly the US, where the cybersecurity skills gap is also proving to be a problem.
Figures from CyberSeek recently showed around 128,000 openings for information security analysts in the US, but only 88,000 workers currently employed in those positions, a shortfall of around 40,000.


ILOVEYOU: The wrong kind of LoveLetter

By Editor

It was partly through taking advantage of our emotional rather than technical vulnerabilities that VBS/LoveLetter – also known as the Love Bug virus – caused such a trail of destruction when it hit the inboxes of its first victims on the morning of May 5th, 2000.
“Kindly check the attached LOVELETTER coming from me.”
With ILOVEYOU in the subject line, the email was immediately effective. It included the following body message: “Kindly check the attached LOVELETTER coming from me.” The attachment, a file named LOVE-LETTER-FOR-YOU.TXT.VBS, contained the virus’s code.
According to David Harley, Senior Research Fellow at ESET, much of the virus’s success was a result of “unusually successful social engineering”. He explains: “It was unusual enough to persuade a victim to open it out of curiosity or in the expectation of reading some kind of joke.”
As its victims would find out, there was very little to laugh about.
Write me a letter
Originating in the Philippines, the Love Bug was the brainchild of two computer programmers, Reonel Ramones and Onel de Guzman. Although they were arrested, they were never prosecuted due to a lack of anti-malware legislation in the country at the time.  
From there, the virus spread to Hong Kong, then to Europe, and finally reached the US just as offices were opening in the morning, as Lysa Myers, Security Researcher at ESET, remembers:
“My day of the outbreak started at 5AM, when I was called in to help with the unprecedented number of reports we got from people who’d been affected. A huge variety of people wrote in with tales of woe; everyone from government offices whose email servers had been kneecapped by the load of virus-laden messages, to grandparents who were heartbroken to find that pictures of grandchildren had been irreparably destroyed by the virus.”
“Much of the virus’s success was a result of ‘unusually successful social engineering’.”
Adding to its seemingly innocent façade, the email appeared to come from a known contact – the worm would infiltrate a victim’s address book, sending replicas of itself to personal and business contacts.
In this way, LoveLetter was more harmful than its predecessor Melissa, which also took advantage of mass-mailing on its release in 1999.
Toxic
One (double) click on the attachment was all it took. Once released, the virus began its attack by overwriting files within the computer system (as well as mailing itself to contacts).
And its damage was widespread: it is estimated to have infected over 55 million computers around the world, causing between US$5 billion and US$10 billion in damage.
“Many of the same vulnerabilities are [exploited] by today’s ransomware, as those used by LoveLetter.”
To counter its spread, Chey Cobb, head of INFOSEC in the US “advised all US government agencies to disconnect from the internet until the thing was contained”.
Many large corporations followed suit, with the British Parliament, the Pentagon and the CIA shutting down their internet connections to avoid damage to their systems.
Reach out
So, what came of this? For one, it led businesses to explore alternative ways of alerting users to potential inbox viruses. Some companies reverted to old-fashioned methods and stuck paper notices on people’s doors; others left urgent voicemails; and, around the world, bosses did everything they could to ensure the first email in their employees’ inboxes was a warning about LoveLetter.
Bruce P. Burrell, also a Security Researcher at ESET, explains the importance of establishing contact via any available medium in the event of an inbox virus: “When one medium is bogged down [we need to] use whatever other channels available to reach people … Today that would include using social media, putting up a blurb on the company home page, on the internal network, etc.”
Additionally, as Myers explains, it helped security professionals “refine policies and procedures that were put in place to help us respond quickly and consistently even in the most overwhelming emergencies”.
Finally, whilst both computer security and methods of infiltration have evolved, security systems are often only as effective as their human users – many of us still fail to protect our systems with security software or to back up our data.
This Valentine’s … back up your data
Rather than letting our emotions sway our decisions, the advisable precaution is to always double-check attachments before opening them: (a) never open attachments or click on links in unsolicited email (or on Facebook, in IMs, etc.), even when they appear to come from people you know and trust; and (b) before opening anything, contact the purported sender to confirm that they actually did send you something and, if so, exactly what it is.
No matter how enticing the subject matter may seem, the risk is never worth it.

14.2.17

Netsmart uses InterSystems Data Platform to give behavioral health professionals better insight into patient data


Mental health center in Denver deploys Netsmart electronic record to access and analyze unstructured clinical data

Netsmart, a supplier of technology for US behavioral health organizations, has selected the InterSystems data platform to unlock previously hidden information about patient behavior. Netsmart markets myAvatar™, an information management system for clinicians who specialize in behavioral healthcare and want to exchange information with other disciplines in the healthcare sector. Part of the platform supplied by InterSystems is the iKnow product, which distills relevant information from unstructured, largely textual data sources.

InterSystems worked closely with Netsmart on the implementation of the platform at the Mental Health Center of Denver (MHCD), an organization known in the United States for its innovative, high-quality treatment of psychiatric patients. Previously, MHCD struggled to identify improvements in the recovery process from diagnostic and session notes, both from the patient’s perspective and from the clinician’s. Patients typically record the progress of their treatment via a brief questionnaire, from which physicians draw conclusions based on various criteria.

The Netsmart solution gives clinicians an intelligible view of the rich potential of information hidden in clinical records. Unstructured information in the form of psychiatric evaluations, notes from facility management, notes from case managers or guardians, intake reports and treatment plans is difficult to analyze from a traditional, structured electronic health record. Using InterSystems technology, which can extract conceptual data from both structured and unstructured sources, MHCD was able to capitalize further on the data in its Netsmart electronic health record. While continuing to follow their usual clinical workflow, physicians can bring their clients’/patients’ stories back into the formal data used for decision-making.

InterSystems and Netsmart worked very closely with clinicians to learn which data, and in what form, matters to them. This approach is also recommended by the American College of Physicians (ACP) in a paper entitled "Clinical Documentation in the 21st Century," published in Annals of Internal Medicine.

“We introduced the Netsmart solution at MHCD 16 months ago, replacing an electronic health record we had worked with for 12 years,” said Wes Williams, a physician and also CIO at MHCD. “We were constantly adapting the application to new requirements, and at some point that has to stop. For the sake of patient safety, we chose a new, stable platform, so we can be sure that no information is lost, regardless of where data is recorded. We want broad insight into a person’s health record so that we can provide the best possible care.”

By applying the InterSystems solution for analyzing unstructured data within the Netsmart system, MHCD can now run extensive analyses across all patient data without complex changes to its existing IT infrastructure. That makes it easier for physicians to assess a patient’s progress and decide on next steps, which benefits the quality of care while also improving efficiency. In the Netherlands, too, the healthcare sector has achieved spectacular results with iKnow’s textual-analysis functionality, including the early recognition of indicators in psychiatric reports and nursing logs.

About Netsmart
Netsmart’s technology platform is ideally suited to delivering outcome-oriented care services. In the United States, more than 25 million people use it through 25,000 healthcare organizations providing services in behavioral health, addiction treatment, intellectual disabilities, child and family services, social care, home care and hospice care.


Next-gen security software: Myths and marketing

The Age of Dinosaurs
There is a view of the current security market that is often recycled by the media these days. It assumes a split between ‘first-gen(eration)’ or ‘traditional’ (or even ‘fossil’ or ‘dinosaur’) malware detection technology – which is invariably claimed to rely on reactive signature detection – and (allegedly) superior technologies using ‘next-gen(eration)’ signature-less detection. This picture is much favored by some ‘next-gen’ companies in their marketing, but it doesn’t reflect reality.
The Theory of Evolution
First of all, I’d take issue with that term ‘first-generation’. A modern mainstream security suite can no more be lumped in with early ‘single layer’ technologies – such as static signature scanners, change detection and vaccines – than Microsoft Word can be with ed or edlin. They may have the same fundamental purpose as those long-gone applications – be it detection and/or blocking of malicious software, or the creation and processing of text – but they have a much wider range of functionality. A modern word processor incorporates elements that decades ago would have been considered purely the domains of desktop publishing, spreadsheets and databases.
The Origin of Species
A modern anti-malware-focused security suite isn’t quite so wide-ranging in the programmatic elements it incorporates. Nevertheless, it includes layers of generic protection that go far beyond signatures (even generic signatures). Such suites have evolved into very different generations of product, incorporating technologies that didn’t exist when the first security products were launched. To talk about newcomers to the market as if they alone are ‘the next generation’, going beyond primitive signature-specific technology, is misconceived and utterly misleading.
Signatures? What signatures?
Nowadays, even modern, commercial single-layer anti-malware scanners go far beyond looking for specific samples and simple static signatures. They augment detection of known, hash-specific families of malware with elements of whitelisting, behavior analysis, behavior blocking, and change detection (for instance) that were once considered pure ‘generic’ technologies. Not that I recommend, in general, that people rely totally on a single-layer scanner such as those often offered for free by mainstream companies: they should be using other ‘layers’ of protection as well, either by using a commercial-grade security suite, or by replicating the multi-layered functionality of such a suite with components drawn from a variety of sources, including a single-layer anti-malware scanner. However, the latter approach requires a level of understanding of threat and security technologies that most individuals don’t have. Come to that, not all organizations have access to such a knowledgeable resource in-house, which leaves them potentially at the mercy of marketing masquerading as technical advice.
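That layering can be illustrated with a toy sketch, not taken from any real product: a hash lookup stands in for classic signatures, and a crude content heuristic stands in for one of the generic layers. The only real value below is the published SHA-256 of the harmless EICAR test file; the heuristic markers are invented.

```python
import hashlib

# Toy "signature" database: SHA-256 hashes of known files.
# The entry below is the well-known hash of the EICAR test file.
KNOWN_BAD_HASHES = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

# Toy generic layer: invented markers of suspicious script content.
SUSPICIOUS_MARKERS = (b"eval(", b"WScript.Shell", b"powershell -enc")

def scan(content: bytes) -> list[str]:
    """Return the names of the layers that flagged the content; empty means clean."""
    verdicts = []
    if hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES:
        verdicts.append("signature")
    if any(marker in content for marker in SUSPICIOUS_MARKERS):
        verdicts.append("heuristic")
    return verdicts
```

The point of the sketch is structural: a sample unknown to the hash database can still be caught by the generic layer, which is exactly the relationship between signatures and the additional layers discussed above.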
Back to basics
“It’s clear that the distinctions between ‘fossilized’ and ‘next-gen’ products are often terminological rather than technological.”
Although some next-gen products are so secretive about how their technology actually works that they make mainstream anti-malware products look like open source, it’s clear that the distinctions between ‘fossilized’ and ‘next-gen’ products are often terminological rather than technological. I don’t consider that ‘next-gen’ products have gone further beyond these basic approaches to defeating malware, defined long ago by Fred Cohen (whose introduction and definition of the term ‘computer virus’ in 1984 to all intents and purposes jump-started the anti-malware industry), than have ‘traditional’ solutions:
·         Identifying and blocking malicious behavior
·         Detecting unexpected and inappropriate changes
·         Detecting patterns that indicate the presence of known or unknown malware
The ways of implementing those approaches have, of course, become immeasurably more advanced, but that progression is not the exclusive property of recently-launched products. For example, what we generally see described as ‘Indicators of Compromise’ could also be described as (rather weak) signatures. More than one vendor has failed to differentiate convincingly between mainstream anti-malware use of behavior analysis and blocking, between its own use of (for instance) behavioral analysis/monitoring/blocking, traffic analysis (and so on) and the use of the same technologies by mainstream anti-malware. Instead, they’ve chosen to promote a deceptive view of ‘fossil technology’ and peppered their marketing with a hailstorm of technological buzzwords.
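The point that Indicators of Compromise amount to (weak) signatures can be made concrete: an IoC feed is typically a list of artifacts (hashes, filenames, domains), and matching observed artifacts against it is structurally the same lookup a signature scanner performs. A minimal sketch, with invented IoC values:

```python
# Invented IoC feed: structurally just a lookup table, i.e. a weak signature set.
IOC_FEED = {
    "file_name": {"love-letter-for-you.txt.vbs"},
    "domain": {"malicious-c2.example"},
}

def match_iocs(observed: dict[str, set[str]]) -> set[tuple[str, str]]:
    """Intersect observed artifacts with the IoC feed, exactly like a signature scan."""
    hits = set()
    for kind, values in observed.items():
        for value in values & IOC_FEED.get(kind, set()):
            hits.add((kind, value))
    return hits
```

An attacker who renames the file or registers a new domain evades this check entirely, which is why IoCs are weaker than the behavioral and generic layers described earlier.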
Welcome to the machine
Consider, for instance, the frequent lauding of ‘behavior analysis’ and ‘pure’ Machine Learning (ML) as technologies that set next-gen apart from first-gen. In the real world, Machine Learning isn’t unique to one market sector. Progress in areas like neural networking and parallel processing is as useful in mainstream security as in other areas of computing: for example, without some degree of automation in the sample classification process, we couldn’t begin to cope with the daily avalanche of hundreds of thousands of threat samples that must be examined in order to generate accurate detection.
“The use of terms like ‘pure ML’ in next-gen marketing is oratorical, not technological.”
However, the use of terms like ‘pure ML’ in next-gen marketing is oratorical, not technological. It implies not only that ML alone somehow provides better detection than any other technology, but also that it is so effective that there is no need for human oversight. In fact, while ML approaches have long been well-known and well-used in the mainstream anti-malware industry, they have their pros and cons like any other approach. Not least, in that the creators of malware are often as aware of ML as the security vendors who detect malware, and devote much effort to finding ways of evading it, as is the case with other anti-malware technologies.
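For illustration only, automated sample triage of the kind described above can be sketched as a toy nearest-centroid classifier over two invented features (byte entropy and a packed-section ratio). Real pipelines use far richer features and far more capable models, with human analysts overseeing the results:

```python
# Invented training data: (entropy, packed_ratio) pairs per label.
TRAINING = {
    "clean":   [(3.1, 0.0), (4.0, 0.1), (3.5, 0.05)],
    "malware": [(7.6, 0.9), (7.9, 0.8), (7.2, 0.95)],
}

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(sample):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return (sample[0] - c[0]) ** 2 + (sample[1] - c[1]) ** 2
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))
```

The sketch also hints at the evasion problem mentioned above: an attacker who knows the features can craft samples that sit close to the “clean” centroid, which is one reason unattended ML is not the silver bullet the marketing implies.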
On your best behavior
Similarly, when next-gen vendors talk about behavioral analysis as their exclusive discovery, they’re at best misinformed: the term behavioral analysis and the technologies taking that approach have both been used in mainstream anti-malware for decades. In fact, almost any detection method that goes beyond static signatures can be defined as behavior analysis.
Natural and unnatural selection
Journalist Kevin Townsend asked me recently:
Is there any way that the industry can help the user compare and choose between 1st […] and 2nd generation […] for the detection of malware?
Leaving aside the totally misleading 1st versus 2nd-generation terminology, yes, of course there is. In fact, some of the companies self-promoted as ‘2nd-generation’ and claiming that their technology is too advanced to test have nevertheless pushed an already open door even wider by their own attempts to compare the effectiveness of their own products and those of ‘first-gen’ vendors. For example, at least one next-gen vendor has taken to using malware samples in its own public demonstrations: if different generations of product can’t be compared in an independent test environment, how can such demonstrations be claimed to be accurate in a public relations exercise? Other misleading marketing from next-gen vendors includes claims that “1st-gen products don’t detect ‘file-less’ malware in memory” (which we’ve done for decades). One particularly inept example used a poorly constructed survey based on Freedom of Information requests to ‘prove’ ‘traditional’ anti-malware’s ‘abject failure’ without attempting to distinguish between attacks and successful attacks.
Testing and Pseudo-testing
More commonly, VirusTotal (VT) is misused by misrepresenting its reports as if VT and similar services are suitable for use as ‘multi-engine AV testing services’, which is not the case. As VT puts it:
VirusTotal should not be used to generate comparative metrics between different antivirus products. Antivirus engines can be sophisticated tools that have additional detection features that may not function within the VirusTotal scanning environment. Because of this, VirusTotal scan results aren’t intended to be used for the comparison of the effectiveness of antivirus products.
VT can be said to ‘test’ a file by exposing it to a batch of malware detection engines. But it doesn’t use the full range of detection technologies incorporated into those products, so it doesn’t accurately test or represent product effectiveness. One next-gen vendor talked up its own detection of a specific ransomware sample a month before the same sample was submitted to VirusTotal. However, at least one mainstream/traditional vendor was detecting that hash a month before that next-gen detection was announced. You simply can’t measure a product’s effectiveness from VirusTotal reports, because VT is not a tester and its reports only reflect part of the functionality of the products it makes use of. Otherwise, there’d be no need for reputable mainstream testers like Virus Bulletin, SE Labs, AV-Comparatives and AV-Test, who go to enormous lengths to make their tests as accurate and representative as possible.
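VT’s caveat is easy to see in a toy sketch. The function below counts per-engine detections in a report shaped loosely like VT’s public v2 file reports (a "scans" map with per-engine "detected" flags); as the comment notes, such a ratio reflects only the scan engines as VT runs them, not each product’s full protection stack.

```python
def detection_ratio(report: dict) -> tuple[int, int]:
    """Count engines flagging the sample in a VT-style report.

    This measures only the scan engines as the multi-engine service runs
    them; it says nothing about each product's other protection layers,
    which is why such ratios must never be read as a comparative test.
    """
    scans = report["scans"]
    detected = sum(1 for result in scans.values() if result["detected"])
    return detected, len(scans)
```

A product whose detection in the wild comes from a cloud reputation layer or a behavioral blocker can show "detected: false" here while blocking the same sample on a real endpoint, which is precisely VT’s point.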
Towards cooperation
One of the more dramatic turnarounds in 2016 took place when VirusTotal changed its terms of engagement in order to make it harder for next-gen companies to benefit from access to samples submitted by “1st-gen” companies to VirusTotal without contributing to VT themselves. To quote VirusTotal’s blog:
…all scanning companies will now be required to integrate their detection scanner in the public VT interface, in order to be eligible to receive antivirus results as part of their VirusTotal API services. Additionally, new scanners joining the community will need to prove a certification and/or independent reviews from security testers according to best practices of Anti-Malware Testing Standards Organization (AMTSO).
While many vendors in the next-gen space initially responded along the lines of “It’s not fair”, “The dinosaurs are ganging up on us”, and “We don’t use signatures so we don’t need VT and we don’t care”, it seems that several big names were subsequently prepared to meet those requirements by joining AMTSO and thus opening themselves up to independent testing. (By that I mean real testing, not pseudo-testing with VirusTotal.) Since next-gen vendors have tended in the past to protest that their own products cannot be tested, especially by the ‘biased’ testers represented in AMTSO, perhaps this suggests the possibility of an encouraging realization that not all customers rely purely on marketing when they make purchasing decisions.
Share and share alike
“Vendors (of any generation) benefit from access to VirusTotal’s resources and that huge sample pool.”
Why have (some) next-gen vendors now decided that they do need to work with VirusTotal? Well, VT shares the samples it receives with vendors and provides an API that can be used to check files automatically against all the engines VT uses. This allows vendors not only to access a common pool of samples shared by mainstream vendors, but to check them against indeterminate samples and their own detections, thereby training their machine learning algorithms (where applicable).
And why not? That’s not dissimilar to the way in which longer-established vendors use VirusTotal. The difference lies in the fact that under the updated terms of engagement the benefit is three-way. Vendors (of any generation) benefit from access to VirusTotal’s resources and that huge sample pool. VirusTotal benefits as an aggregator of information as well as in its role as a provider of premium services. And the rest of the world benefits from the existence of a free service that allows them to check individual suspect files with a wide range of products. Widening that range of products to include less-traditional technologies should improve the accuracy of that service, while the newer participants will, perhaps, be more scrupulous about not misusing VT reports for pseudo-testing and marketing when they themselves are exposed to that kind of manipulation.
Whole-product testing
The way that AMTSO-aligned testers have moved towards ‘whole-product testing’ in recent years is exactly the direction in which testers need to go in order to evaluate those less ‘traditional’ products fairly. (Or, at any rate, as fairly as they do mainstream products.) It can be argued, though, that testers can be conservative in their methodology. It’s not so long ago that static testing was the order of the day (and to some extent still is among testers not aligned to AMTSO, which has discouraged it since the organization’s inception). AMTSO, despite all its faults, is greater (and more disinterested) than the sum of its parts because it includes a range of researchers both from vendors and from testing organizations, and marketing people aren’t strongly represented. Thus, individual companies on either side of the divide are less able to exert undue influence on the organization as a whole in pursuit of their own self-interest. If the next-gen companies can grit their teeth and engage with that culture, we’ll all benefit. AMTSO has suffered in the past from the presence of organizations whose agenda seemed to have been overly-focused on manipulation or worse, but a better balance of ‘old and new’ vendors and testers within the organization stands a good chance of surviving any such shenanigans.
Into the Cenozoic
Several years ago I concluded an article for Virus Bulletin with these words:
But can we imagine a world without AV, since apparently the last rites are being read already? … Would the same companies currently dissing AV while piggybacking its research be able to match the expertise of the people currently working in anti-malware labs?
I think perhaps we have an answer to that. But if the self-styled next generation can come to terms with its own limitations, moderate its aggressive marketing, and learn the benefits of cooperation between companies with differing strengths and capabilities, we may yet all benefit from the détente.