1.9.17

Google removes 300 Android apps following DDoS attack


Google has been forced to remove almost 300 apps from its Play Store after learning that they were being hijacked for DDoS attacks, a threat that ESET warned its followers about on social media in early August.
The botnet, named WireX, is estimated to have infected as many as 70,000 devices before Google took action.
Once it became aware of the attack, Google began the process of removing the offending apps. “We identified approximately 300 apps associated with the issue, blocked them from the Play Store, and we’re in the process of removing them from all affected devices,” said a Google spokesperson. “The researchers’ findings, combined with our own analysis, have enabled us to better protect Android users, everywhere.”
ESET detection engineer Lukas Stefanko first noticed the threat 20 days before it was removed from the store and published technical details to keep users up to date. “We detected this infiltration as Android/HiddenApp and Android/Clicker, and we were one of the first to disclose this threat and how to get rid of it,” said Stefanko.
After discovering the issue and the new malicious apps, he immediately reported his findings to the Google Security team and warned users. “Once I discovered this threat, we immediately informed users through our social networks to be aware of these malicious apps, with instructions on how to uninstall them,” he said.

If you are worried about inadvertently crossing paths with one of these nasty apps, Lukas has some helpful words to guide you in the right direction: “For people that only recently removed one of these infiltrators, or for people that could stumble upon them in the Play Store, my advice would be to read comments and app reviews. You should mainly focus on the negative ones, make sure you have up-to-date security software installed, and be aware when applications that you’ve installed change their name or app icon.”

31.8.17

Don’t fall for Hurricane Harvey charity scams!


Hurricane Harvey is leaving plenty of destruction in its wake, but it could also impact the finances and computer security of internet users thousands of miles away from the danger zone.
As we’ve seen on far too many occasions before, scammers think nothing of capitalising off human misery. And the natural disaster striking parts of Texas is no different, as online scammers target the charitable and concerned. The sad truth is that scammers will often view a disaster like Hurricane Harvey as nothing more than a money-making opportunity.
The United States Computer Emergency Readiness Team (US-CERT) issued an advisory this week warning the public of the danger of falling for Hurricane Harvey-related charity scams.
Merciless scammers have no qualms about exploiting people’s kind-hearted nature by spreading their attacks via social networks and email, linking to counterfeit charity websites designed to steal the public’s payment card details.
There are already reports that Hurricane Harvey-related domains have been registered for the purposes of collecting funds. Even if these have been set up by well-meaning individuals intent on collecting donations for the relief of Harvey’s victims, rather than by enterprising criminals, it would be better if everyone dealt with established charities instead.
Meanwhile, online criminals could take advantage of the understandable keen interest in the breaking news story to spread malicious links and attachments in an attempt to infect unsuspecting users’ computers with malware.
BuzzFeed has collected details of some of the questionable Hurricane Harvey-related posts making the rounds, including some where people are sharing an “emergency” number which actually directs to an insurance firm.
Of course, it’s possible that people are simply sharing the number with good intentions – not realising that it’s not really appropriate for anyone in a genuine state of emergency.
Once again, the human race is finding it much easier to share information (even if it’s inaccurate) than spend a few minutes considering whether it might be truthful. This is the way that bogus news and advice can spread so rapidly online.
Many of us are shocked by the TV news reports from Texas and shocking personal stories of those affected. Which makes it all the more important to exercise caution about what we believe, and think before we act rashly.
My advice?
·         Visit legitimate news outlets if you want to keep up-to-date with developing news stories. There are many bogus fly-by-night news sites on the web that will publish anything in the hope of earning some advertising revenue – some will even own up in their small print that they are a satirical site – but how many of us bother to read that?
·         Always be wary of links that offer you dramatic video footage of a news story. Malicious hackers and scammers know that the public finds it hard to resist clicking on such links, and might have planted malicious content.

Author: Graham Cluley, We Live Security

30.8.17

The ESET Technology Alliance welcomes network security company GREYCORTEX

ESET announces that GREYCORTEX, a provider of network security solutions, has become a member of the ESET Technology Alliance, extending the Alliance’s scope with MENDEL, GREYCORTEX’s advanced network performance monitoring, management and security solution. MENDEL adds a sophisticated layer of protection for businesses, public services and the critical infrastructure sector, while enabling rapid detection of and response to security breaches and other types of incidents.

Businesses today face threats that are far more sophisticated than those of just a few years ago. The massive malware campaigns unleashed in 2017 are proof enough: they show how quickly we can lose all control over our most precious asset, namely our data. The critical importance of cybersecurity was recently highlighted first by the outbreak of the WannaCry ransomware, which paralyzed many organizations in the healthcare sector, and then by the arrival of the Petya ransomware, a targeted supply-chain attack that spread seemingly out of control.

Drawing on some ten years of rich academic and industry experience, GREYCORTEX uses advanced machine learning and data analysis techniques to help customers protect their sensitive data, networks, trade and industrial secrets, and reputation, assets they may have left unprotected without realizing it. “To protect their IT networks, companies need to be able to react quickly and effectively,” says Petr Chaloupka, CEO of GREYCORTEX. “MENDEL was built on specialized algorithms and extensive academic research. The solution provides a fine-grained understanding of network traffic, making the detection of advanced threats such as WannaCry, and of other behavioral anomalies, sharper and more reliable, while reducing operational costs.”

GREYCORTEX’s MENDEL solution can detect malware, ransomware, trojans (whether implanted or remotely triggered), zero-day attacks and more, known or unknown, as well as targeted threats against infrastructure. All of these attack vehicles can blend into a network for long periods without being detected by other network security solutions. What is more, MENDEL can identify insider threats and network performance problems, and provides full visibility into every device on the network, right down to the application layer. IT security teams gain not only better threat detection capability but also maximum network visibility for conducting incident investigations.

“You can never have too many layers of security in your network infrastructure,” says Marc Mutelet, CEO of MGK Technologies, exclusive distributor of ESET products in Belgium and Luxembourg. “The GREYCORTEX solution analyzes the slightest behavioral anomaly that might otherwise go unnoticed. What’s more, the solution integrates easily into companies’ infrastructure, whatever their size, and can serve not only as a monitoring and detection tool but can also provide the necessary visibility into the workings of complementary security components.”

Launched in 2013, the ESET Technology Alliance is an integration partnership that aims to better protect businesses by providing a range of complementary IT security solutions. All members of the ESET Technology Alliance are carefully selected and must satisfy a set of pre-defined criteria, so as to deliver optimal business-grade protection across diverse IT environments.
For more details about GREYCORTEX’s MENDEL solution, please follow this link.
More information about the ESET Technology Alliance is available at ESET Technology Alliance.
About GREYCORTEX

GREYCORTEX, which has received investment from YSoft Ventures, draws on ten years of rich academic and industry expertise. The company uses advanced machine learning and data analysis techniques to help customers protect their sensitive data, networks, trade and industrial secrets, and reputation. In addition to its membership of the ESET Technology Alliance, GREYCORTEX serves customers in more than 14 countries through its own network of distributors.

Security and Education

Journalist Kevin Townsend asked me a few months ago for commentary on phishing, for an article he was researching. He said:
Phishing really comes down to 2 basic questions:
1.     Can technology ever solve the problem & what are the best approaches?
2.     Can awareness training ever solve the problem? How?
If the answer is ‘no’ to both, then should we accept that it will succeed, and concentrate on discovering and mitigating the effects of a successful phish?
The question is this: are phishing and other manifestations of cybercrime purely technological problems? Even if this were the case, does it follow that they could therefore be solved by technology alone?
To some extent, the security software industry relies on the idea that there is always a technological answer to a tech problem (as, indeed, it has persuaded many of its customers to expect), but ‘always’ is a big word.
In general, when we address an attack vector technologically, the bad guys start working on finding ways round the roadblock. That doesn’t mean we shouldn’t look for technical solutions, but it does mean that we can’t usually find a once-and-for-all-time fix. Sometimes we eventually abandon an approach altogether; more often we keep recalibrating as the nature of the threats changes.
It may be broke, but can you fix it?
There’s more to surviving in a threat and counter-threat ecology than technological thrust and parry, though. To expect the security industry to fix everything is about as realistic as expecting medical technology to eradicate disease, or forensic technology to eradicate crime in the physical world. The online world doesn’t have a single choke point where a single security solution can be applied and everyone will be protected, even if such a solution existed.
Perhaps we need a better word than solution. Something that sounds less like ‘this is the glorious victory at the end of the war’ and more like ‘this might win us this skirmish.’ To quote myself (in an article for Heimdal Security to which I contributed):
The security industry is pretty good at providing a wide range of partial solutions to a wide range of technological attacks, but technology continuously evolves on both sides of the white-hat/black-hat divide, so – marketing claims notwithstanding – there is never 100 percent security across the board. Least of all from a single product. In most cases, organizations and individuals choose what defensive measures they take, and indeed whether to protect themselves at all.
Unfortunately, those choices will not always be the choices that security experts would consider to be the best.
Technology versus people
Phishing isn’t (just) a technical problem, and nor is cybercrime in general. (I’ll mostly be speaking about generic cybercrime in this article rather than just phishing.) In fact, cybercrime, like its pre-digital sibling, is primarily a social problem, or rather a cluster of interconnecting social problems:
·         Criminal behaviour (online or offline), and the economic, educational and psychological factors behind it. To quote myself further: “Society can actually cause deviant behaviour where the individual must subscribe to more than one code, yet elements of one code are incompatible with another, leading to an uncomfortable state of cognitive dissonance, which might lead to ‘irrational or maladaptive behaviour’. In other cases, perhaps it’s just that in an era where fake news dressed up as satire is the common currency of the social media, the evolution of technology has far outstripped the average person’s ability to apply the common precepts of everyday socialization to the online world.”
·         Victim behaviour, and similar underlying factors. By which I don’t just mean victims recklessly failing to take reasonable precautions, but banks and other institutions contributing to the problem by failing to meet a sufficient standard of security when communicating legitimately with customers. Every time a bank sends out an email addressed to ‘Dear valued customer’ or including a multiply-redirected ‘click here’ link, they make it harder for potential victims to distinguish between phishing mails and legitimate mails. If they don’t even know your name, how can you be sure that it’s really your bank mailing you? If you can’t tell where a link is pointing to, or if it goes to a site whose name appears unconnected with the bank, how on earth do you know it’s safe?
·         Legislation and law enforcement issues. Even where there is appropriate legislation, the will and the resources aren’t there to enforce it in a better-than-piecemeal fashion.
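The bank-email problem described in the second point above can, in part, be checked mechanically: does a link’s target actually belong to the domain the sender claims to be? The sketch below is illustrative only; the `suspicious_links` helper and the domains used are invented for the example, and a real mail filter would need far more robust URL and redirect handling.

```python
import re
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html_body, claimed_domain):
    """Flag links whose target domain does not contain the claimed domain."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        m = re.match(r"https?://([^/]+)", href)
        domain = m.group(1).lower() if m else ""
        if claimed_domain not in domain:
            flagged.append((href, text))
    return flagged

email = '<p>Dear valued customer, <a href="http://evil.example.net/login">click here</a></p>'
print(suspicious_links(email, "mybank.com"))
# [('http://evil.example.net/login', 'click here')]
```

Of course, this only catches the crudest mismatches; attackers use look-alike domains and open redirects precisely to defeat such checks, which is part of why legitimate senders should make their own mails easy to verify.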
Awareness, training, education
So can awareness training/education ever solve the problem? Well, we’ll probably never know for sure. Many times over the years, I’ve said something like ‘we don’t know whether user education works because no-one’s ever done it yet.’ That’s a rather glib and simplistic way of putting it, to be honest, though it will do as a response to the equally glib assertion that ‘if user education was going to work, it would have worked by now’. A great deal of work has been done in raising the general level of security awareness and self-protection through some form of education, and I like to think I’ve made some contribution myself, as in this paper by Sebastian Bortnik and myself from 2014: Lemming Aid and Kool Aid: Helping the Community to help itself through Education. In that paper we asked:
How can we strike a balance when it comes to teaching of computer hygiene in an increasingly complex threatscape to audiences with very mixed experience and technical knowledge? Can user-friendly approaches to security be integrated into a formal, even national defensive framework?
And we made some suggestions as to how that could be done.
Education, Education, Education
Since I first drifted into the security field, I’ve generally seen myself as more of an educator (by intent, anyway) than a researcher. I realized long ago that there are hordes of people who are much better than I am at disassembling malware and writing code to detect malicious activity. I consider it a privilege to be able to work with some of those people (not only at ESET, but in the security industry as a whole), and I’m honoured that they put up with me to the extent of reading my blogs and listening to my presentations.
So while I couldn’t do my job if I didn’t have a reasonable grasp of malicious technology and the technologies that we have evolved to address them, my interest and abilities lie less in bits and bytes than in the psychosocial aspects of criminology and victimology. After all, my academic background is in social sciences as well as computer science, which is perhaps why I sometimes see things a little differently to my more technically gifted peers in the security industry, and have more faith that people who are not particularly IT-knowledgeable can, to some extent, be educated into being less vulnerable, certainly to attacks that are at least partially psychological rather than purely technological. I’m afraid I’m going to quote myself again.
Very, very often… a threat is less dependent on the effectiveness of its technology than it is on how effectively it manipulates the psychology of the victim.
Psychological manipulation of the intended victim is a core component of what we often call social engineering. Susceptibility to social engineering can sometimes be reduced by technical measures – the textual analysis of email messages with the aim of detecting text that is characteristic of a certain type of criminally-motivated communication, for example. However, educationalists favour a complementary, longer-term approach that involves making individuals more difficult to manipulate. 
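As a toy illustration of the textual analysis mentioned above, the sketch below scores a message against a handful of phrases often seen in social-engineering emails. The phrase list and weights are invented for this example; a real filter would use statistical or machine-learned models rather than a hand-made table.

```python
# Toy textual analysis: score a message on phrases characteristic of
# social-engineering emails. Phrases and weights are illustrative only.
PHISHING_MARKERS = {
    "verify your account": 3,
    "dear valued customer": 2,
    "urgent": 2,
    "click here": 1,
    "suspended": 2,
}

def phishing_score(message: str) -> int:
    """Sum the weights of all marker phrases found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in PHISHING_MARKERS.items() if phrase in text)

msg = ("Dear valued customer, your account is suspended. "
       "Click here urgently to verify your account.")
print(phishing_score(msg))  # 10: all five markers match
```

The obvious weakness is the same one discussed throughout this article: the moment such a table becomes known, attackers simply reword their messages, which is why technical measures need the complementary, longer-term educational approach.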
Threat Recognition
One step towards achieving this is through relatively simplistic training in threat recognition: for example, the ‘phishing quizzes’ that Andrew Lee and I looked at in 2007 in a paper for Virus Bulletin (Phish Phodder: is User Education Helping or Hindering?). But the KISS principle is not always enough. What works in engineering design doesn’t always work in education. There’s a perpetual tension between keeping communication within the bounds of an audience’s understanding yet accurate and comprehensive enough to go beyond soundbites. (The Eleventh Law of Data Smog: ‘Beware stories that dissolve all complexity.’)
Even a poorly designed quiz raises awareness of the problem, but may be worse than useless if it reinforces wrong assumptions on the part of the quiz participant. Some quizzes seem to promote a service: ‘Discrimination is too difficult for your tiny brain; buy our product, or even use our free toolbar/site verification service/whatever’. That’s not wrong in itself; a vendor is in the business of selling products or services. If the product or service in question is free, it seems even more churlish to criticize, but there is a problem in that this message fosters dependence, not awareness; worse, that dependence is on a technical solution that is likely to rely on detecting specific instances of malice, rather than a generic class of detection.
Clearly, there are other limitations in the effectiveness of a paternalistic ‘Gods and ants’ approach. By showing potential victims a few example threats, it may sometimes be that they’ll be able to extrapolate from those when faced with different examples in the same class. But not often enough. Yet, however desirable it might be in theory to provide everyone with the analytical skills of an effective security expert, that clearly isn’t a realistic possibility in the workplace, let alone at home.
Not all advice is good advice
The implementation of a scheme that stands half a chance of educating everyone who needs educating would require resources, understanding and coordination that make it highly improbable that such an implementation will be achieved in our lifetime, or that of our children. And not all advice is good advice.
There’s certainly plenty of free information available, from many sources: the media, security vendors, government agencies, law enforcement, and more-or-less altruistically minded individuals offering advice, product reviews and so on. Unfortunately, the quality of these resources is highly variable, and they’re aimed at the sector of the community that may be least able to discriminate between good and bad advice. Especially advice that is in some sense competitive with other sources of advice.
People Patching
But I’m not very hopeful that education could ever change human nature so dramatically that X would never dream of scamming Y, even if Y was naïve enough to fall for a scam anyway. Until education does achieve the impossible, scammers will continue to scam, and in a technological age they’ll use technology to achieve their crooked aims; laws and law enforcement will have only partial success; and victims will behave in the ways that cause them to become victims. However, education and training can help everyone living in the digital age to behave less like victims.
User education is also an essential part of sociological evolution. The threats we face on the internet are not new in concept: only in technological implementation. Social engineering attacks have been around since well before Helen of Troy. However, the economy of scale in the execution of such attacks was so relatively small that widespread education in recognition of the techniques used was not deemed necessary. The story of the Trojan horse has been taught for centuries as history and as a metaphor, but not seen as an illustration of one of the integral risks of everyday life. The Internet has resulted in an exponential increase in the use of social engineering attacks to the point where knowledge of how these attacks are perpetrated is a required life skill in contemporary society.
(That’s from a paper by myself and Randy Abrams: People Patching: Is User Education Of Any Use At All?)
Defense and self-defense
While the proper use of multi-layered defensive technology goes a long way towards protecting people without requiring them to be security experts, technology can be deployed more effectively to supplement and implement the education of those who use it, as discussed long ago by Jeff Debrosse and myself in the paper Malice Through the Looking Glass: Behaviour Analysis for the Next Decade.
After much research, it has become clear that taking game theory to the next level – determining the most likely action that a user will take in a given situation, enabling the reinforcement of ‘safe’ decisions and the sanctioning (or at least monitoring) of ‘unsafe’ decisions – can make for a much more secure computing environment for the end-user because their security software would be able to more accurately determine the outcome of their actions.
These measures can help institutions to move away from grooming potential victims into accepting phishing messages uncritically by improving their own messages, as well as continually working towards improving their own security and that of their customers.
Teach your children well
Here is an extract from another article – Internet Safety for Kids: 17 Cyber Safety Experts Share Tips for Keeping Children Safe Online – to which I contributed, having been asked for ‘The most important internet safety tip I can share with parents’. As you’ll have gathered from the title, the focus of Erin Raub, who compiled that article, was on advice to parents. However, it doesn’t take a long acquaintance with Facebook and other social media sites to realize that many, many adults have never been educated in terms of critical thinking and healthy scepticism, and they too need help in order ‘to teach them to trust their own judgement rather than rely entirely on technical solutions and conflicting ‘official’ information resources …[and] direct them towards strategies for developing sound analysis and judgement—what educationalists call critical thinking. But it’s too critical a task to leave to educationalists…’
It’s important for everyone to recognize how unsafe the internet is, not only as a vector for direct attacks, but also as a source of information. So we shouldn’t abandon security education for adults or for children, and we should continue to use and improve technology so that it becomes harder for the bad guys to misuse. We should, of course, acknowledge that phishing and other elements of cybercrime will continue to find victims, and do whatever we can to minimize the impact on victims before as well as after the fact.

27.8.17

Malware coded into synthetic genomes

By Raphael Labaca Castro


When I began researching this topic towards the end of 2013, I sensed a certain skepticism from the scientific community, particularly when people with different backgrounds started experimenting across disciplines, something that can reveal new IT security attack vectors.
In late 2015, when I presented my Master’s thesis (in IT security) on “Malware that infects genomes,” I experienced that skepticism up close. During the revision process, one of the professors, who was a specialist in molecular biology, branded it as “erudite nonsense.” In his opinion, it was obvious that a DNA sequence could be modified for malicious purposes and that it was the researcher’s duty to verify that what was sequenced matched the originally published sequence. I do not disagree with this point of view, but beyond the many scenarios that open up in terms of security, it is difficult to explain how easy it would be for some of the checks to fail, particularly if the problem lies in the software. The simple fact that this could occur warranted further study, in my opinion.
Nevertheless, his perspective was not without grounds. My biological scenarios were merely theoretical, given that I did not have the resources to synthesize/sequence a modified genome and demonstrate a real case. Without this, it was difficult to verify the feasibility of a genome being compromised with malicious information in such a way that, if synthesized, it could be passed into the biological realm, carrying an arbitrary sequence, and then be sequenced and compromise the system. Furthermore, it wasn’t something we could see ‘in-the-wild’, but technically that didn’t mean it couldn’t happen one day.
And then, that day came.
Professor Tadayoshi Kohno and his team from the University of Washington managed to demonstrate it in their article published last week: “Computer Security, Privacy, and DNA Sequencing: Compromising Computers with Synthesized DNA, Privacy Leaks, and More.”
Kohno and his team carried out in-depth, detailed research into the subject, where they put into practice this theoretical scenario which I was wondering about too: “maliciously” modified DNA can be synthesized and sequenced, giving rise to the execution of arbitrary code. In this case, they created a vulnerability in an application called fqzcomp to demonstrate the code’s execution.
However, there are many different possibilities. In my work, for example, there was a simple script that parsed the FASTA file (which contains the genome’s information and is written using the four nucleotides: adenine, cytosine, thymine, and guanine) to decrypt and execute the “payload.” It wasn’t an elegant solution, and also it required the victim to be vulnerable in order to execute the script; therefore I wasn’t fully satisfied, but it did the job. To encode the string into the sequence, the procedure was similar to the biological process, whereby these four bases (A, C, T, and G) are grouped into triplets forming what are referred to as codons (which represent amino acids and are then translated into proteins).
This means you can take the groups of three as a basis and then code a symbol for each triplet, forming a “hidden” alphabet. In this case, ASCII was used, and the coding took the following form: ACA = “A”, ACC = “B”, ACG = “C,” and so on successively (there are various ways to code the message; this is just one example). As you can see, we have 4^3 combinations, so we can quite easily code the entire alphabet in uppercase, lowercase, numbers, and symbols, and we still have spares after covering the 64 possibilities. This system offers a way to “write” arbitrary code inside a genome. Naturally, you could write quotes, as J. Craig Venter did when he created a cell controlled by a synthesized genome, or inject malware or arbitrary code.
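The triplet scheme described above is easy to sketch in code. The thesis’s exact codon table isn’t reproduced here, so the mapping below is purely illustrative: the 64 codons in simple base-4 order are paired with 64 printable symbols.

```python
from itertools import product

BASES = "ACGT"
# 64 printable symbols, one per codon. The thesis's actual table isn't
# published here; this ordering is purely illustrative.
ALPHABET = (
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789 ."
)
CODONS = ["".join(c) for c in product(BASES, repeat=3)]  # AAA, AAC, ... TTT
ENCODE = dict(zip(ALPHABET, CODONS))
DECODE = dict(zip(CODONS, ALPHABET))

def to_dna(message: str) -> str:
    """Encode a message as a nucleotide string, three bases per symbol."""
    return "".join(ENCODE[ch] for ch in message)

def from_dna(sequence: str) -> str:
    """Decode a nucleotide string back into text, codon by codon."""
    return "".join(DECODE[sequence[i:i + 3]] for i in range(0, len(sequence), 3))

payload = "Hello world"
dna = to_dna(payload)
print(dna)
print(from_dna(dna) == payload)  # True: the round trip is lossless
```

Any such table works as long as encoder and decoder agree; that arbitrariness is exactly what makes a hidden alphabet hard to spot in a sequence file.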
What kind of impact could this cause?
Below, I include a portion of my thesis that analyzes the potential scenarios that could be discussed.
“The impact of this type of attack could be classified as: digital, digital-biological, and biological.
1.     Digital impact: The fact that a malicious payload can be injected into a DNA sequence does not imply that this methodology aggravates the infection, but rather it would aggravate the complexity of identifying it and subsequently detecting it using traditional protection methodologies such as hashes to ensure integrity and solutions to detect corrupted files. For this reason, it has been demonstrated how this scenario would work in order to warn of the possible use of genome sequences as alternative vectors.
2.     Digital-biological impact: In the event that a genome sequence is maliciously modified, and that genome is successfully synthesized, the malicious code could remain in the cell without impacting it. It should be clarified that this was not verified by the author as it falls outside the objectives of this work. If this were to happen, this organism would load some malicious code, whose DNA could then be sequenced in a laboratory and generate a sequence file that would contain, for example, a portion of malicious code. An attacker would then just need to extract it and execute it in order to activate a digital attack. (This point is similar to the one demonstrated by the University of Washington.)
3.     Biological impact: This would be the case where a maliciously inclined person has the ability to cause a mutation in a sequence, which would have no malicious impact on the system but could set in motion a functional problem at the biological level, if it were synthesized without adequate checkpoints. (This would be a hypothetical case whose feasibility is more difficult to verify.)”
As we saw with Professor Kohno’s publication last week, Scenario Two has already been addressed and demonstrated to be “feasible” under certain circumstances. Undoubtedly, it remains far from being a real threat, but it is no longer a merely theoretical problem as we imagined in the past.
In the future, could a bacterium infected with malware replicate itself?
In the hypothetical case that a piece of modified DNA has been successfully synthesized, then the malicious code could form a part of a synthetic cell capable of replicating itself autonomously in the biological realm. The malware could even be “propagated” biologically, given that bacteria inherently have all the equipment needed for reproduction. Furthermore, the malicious code would not affect the carrier cell accommodating it, but would use it to stay “alive” until its genome was sequenced in a laboratory and regained its digital form in order to then activate itself on a computer or device. However, pinpointing the correct location for this code is a complex matter if biological propagation is to succeed. Here are some of the areas where a malicious string could be inserted:
1.     Irrelevant area: the malicious code enters an area of little importance; it is likely to have no significant impact.
2.     Area of a gene: if it enters a gene sequence and produces a mutation, two possibilities arise: The mutation is lethal, in which case it may disappear from nature without propagating itself. Or, the mutation is beneficial or neutral, in which case the added portion may continue its propagation.
3.     Regulating area: In this case, it could alter a gene, as in the second scenario, or it could do nothing, as in the first.
As such, in the event that it does not produce a lethal mutation, the malware and the synthetic carrier cell could form a kind of “cybernetic commensalism,” to make a simple comparison to the kind of symbiosis by which one participant obtains a benefit while the other one is neither harmed nor benefits.
In the University of Washington’s research, more emphasis is placed on sequencing a piece of DNA without any biological objective, but it is not clear [to me] whether that scenario was dismissed on grounds of feasibility or complexity. I believe that this, as much as it sounds like science fiction, could be another point to consider in the future.
Detecting malicious strings
As the information is coded into the sequence, detecting malicious strings could be a complicated procedure. This is because, regardless of whether an application is capable of identifying them, establishing whether or not they belong to the structure of the sequence may be no trivial matter, if the DNA in question has a biological objective (and has not been published) or is used to store information or for other purposes.
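To make the difficulty concrete, here is a naive signature scanner that decodes a sequence in all three reading frames and searches for known suspicious substrings. It reuses the same illustrative codon table sketched earlier (an assumption for the example, not the thesis’s real table), and it shows why detection is no trivial matter: every codon decodes to something, so benign DNA always yields some text, and the scanner can only match strings it already knows about.

```python
from itertools import product

# Illustrative codon table (same assumption as the earlier sketch).
CODONS = ["".join(c) for c in product("ACGT", repeat=3)]
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789 .")
ENCODE = dict(zip(ALPHABET, CODONS))
DECODE = dict(zip(CODONS, ALPHABET))

SIGNATURES = ["eval", "exec", "system"]  # invented payload markers

def scan(sequence: str):
    """Decode all three reading frames and report signature hits."""
    hits = []
    for frame in range(3):  # codons may start at offset 0, 1 or 2
        decoded = "".join(
            DECODE.get(sequence[i:i + 3], "?")
            for i in range(frame, len(sequence) - 2, 3)
        )
        for sig in SIGNATURES:
            if sig in decoded:
                hits.append((frame, sig))
    return hits

# Hide a payload at a 1-base offset so only reading frame 1 reveals it.
suspect = "GGGG" + "".join(ENCODE[c] for c in "runs exec now")
print(scan(suspect))  # frame 1 should report 'exec'
```

Even this toy already hints at the real problem: without knowing the attacker’s table, the defender would have to try many candidate alphabets, and distinguishing a deliberate payload from coincidental decodings of legitimate sequence data is anything but trivial.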
Conclusion
It is interesting to see that this topic is finally gaining more attention in the media and, possibly, among researchers and specialists thanks to the research done by Tadayoshi Kohno and his team. Despite the debatable elegance of the implementation — creating a vulnerability in an application — we can observe that one of the most important points from a security perspective is gaining ground: the notion of subjecting this topic to greater scrutiny in order to spark an interdisciplinary discussion of it, in which IT and bioinformatics specialists, security experts, equipment manufacturers, governments, and specialists in molecular and synthetic biology come together.
In my opinion, given how rapidly sequencing devices are developing, and the dramatic reduction in costs, successfully achieving security in DNA sequences will require a lot more work than can be done by one research group and a few enthusiasts. Unfortunately, until there are real-life cases or economic losses, it is likely that we will not see anything more in the media than sensationalist articles predicting the “genome-alypse.”
It is true that the feasibility is still low and there is no reason to be alarmed, but we should also remember that with IT security, waiting for an attack to happen before finding a solution has never been a good strategy.
Disclaimer: Everything presented here makes no claim to be exhaustive and may contain errors, considering the interdisciplinary nature of the research and my background as a technician and not as a biologist. Therefore comments, suggestions, and improvements are welcome in order to keep deepening and expanding this fascinating topic.