13.1.17

Windows 10 anniversary update: Security and privacy, hope and change?


You may recall that last year WeLiveSecurity presented a detailed white paper examining Windows 10 from a privacy and security perspective. Apparently, many readers found this helpful, particularly IT professionals contemplating enterprise upgrades to Windows 10 from earlier versions. With a number of analysts now predicting that 2017 will be the year most enterprises make the move to Windows 10, ESET is publishing a new Microsoft Windows 10 white paper that covers changes to security and privacy features in Windows 10 Anniversary Update (aka Build 14393, Redstone 1, Version 1607).
Microsoft originally delivered Windows 10 Anniversary Update last August, celebrating the first anniversary of its flagship desktop operating system with new features and functionality, and designating it as the Current Branch (CB) for consumers. In late November, Microsoft announced that the Windows 10 Anniversary Update was designated as the Current Branch for Business (CBB). That means this build of Windows 10 is the one Microsoft expects its business customers to run on the majority of their desktop computers.
As Microsoft has promised, this build of Windows 10 contains improvements over Threshold 2 (aka Build 10586), the previous build of Windows that was released in November 2015. However, it also removes some features, makes changes to others, and has some issues which may impact its users’ security.
Our new white paper looks at the removal of security and privacy-related features such as Wi-Fi Sense and Kid's Corner, changes to Group Policy and PIN-based login, and the latest versions of Windows Defender and Microsoft Edge. We also look at some of the issues affecting Windows 10 Anniversary Update, and how they may impact security for consumers and businesses.
The new white paper can be downloaded here: Windows 10 Anniversary Update Security and Privacy (PDF).
If you have any security or privacy questions about the Windows 10 Anniversary Update, please feel free to ask them, below.
The previous white paper can be downloaded from ESET’s We Live Security blog here: Windows 10 Anniversary Update Security and Privacy.

11.1.17

Amazon Echo and the Alexa dollhouses: Security tips and takeaways

Warning: if you plan to read this article out loud in the vicinity of an Amazon Echo device you may want to turn off its microphone before doing so (for reasons that will become clear in a moment).
This article offers tips on securing the Alexa service on Amazon Echo devices; it is not about the security of dollhouses, although dollhouses do come into the picture, so to speak. The shorter version goes like this:
1.     The default Alexa settings allow anyone within hearing distance of your Echo device to order goods and services on your Amazon account;
2.     This includes children and voices on the radio or television;
3.     Alexa will offer to sell you things even if you are not looking to buy them, for example if you or your child were to say “Alexa, what’s a popular drone?” it will offer to sell you one;
4.     You cannot tell Alexa to cancel a purchase. You have to use the app or Amazon website;
5.     You can protect Alexa’s voice purchasing feature by adding a confirmation code;
6.     You can turn off the voice purchasing feature completely;
7.     You can turn off the microphone on the Echo, for example if you want to have a discussion about Alexa without it interrupting you;
8.     You can stop Alexa talking by saying: “Alexa stop”;
9.     You can change the trigger or wake word from "Alexa" to "Amazon" or "Echo";
10.   The Amazon Echo has been around for a while, but because it was such a big seller this past holiday season, a lot more people are being exposed to this technology for the first time, exposing certain misconceptions about how it works.
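The interplay between points 1, 5, and 6 above can be sketched as a simple decision flow. To be clear, this is a hypothetical model written for illustration; the function name, setting names, and defaults are invented and do not describe Amazon's actual implementation:

```python
# Hypothetical model of the voice-purchasing decision flow.
# Setting names and defaults are illustrative, not Amazon's actual code.

def handle_purchase_request(settings, spoken_code):
    """Decide what happens when any voice within earshot asks to buy something."""
    if not settings.get("voice_purchasing", True):    # ships with purchasing On
        return "declined: voice purchasing disabled"
    required = settings.get("confirmation_code")      # ships with no code set
    if required is None:
        return "ordered via 1-Click"                  # any voice can buy
    if spoken_code == required:
        return "ordered via 1-Click"
    return "declined: wrong or missing confirmation code"

# Out-of-the-box defaults: any voice, even one on a TV broadcast, can order.
print(handle_purchase_request({}, None))              # ordered via 1-Click

# Hardened settings: a spoken code is required before an order goes through.
safe = {"voice_purchasing": True, "confirmation_code": "4321"}
print(handle_purchase_request(safe, None))            # declined
```

The point of the sketch is that the two protections are independent: adding a confirmation code keeps the convenience of voice ordering while blocking voices that don't know the code, and turning voice purchasing off closes the door entirely.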
The dollhouse connection
The longer version of this story began last week, in San Diego, California, which is where I live. A local TV station did a piece about a six-year-old girl who ordered a $160 dollhouse from Amazon, via Alexa, without her parents’ knowledge or permission. At the end of the story, when the anchorman repeated what that little girl was reported to have said – Alexa, order me a dollhouse – people in San Diego started calling the TV station to complain. Why? Because the Alexas in their homes and offices had started to respond to that request.
So how could this happen? Amazon Echo devices connect to your smartphone, and your internet connection and, if you have one, to your Amazon Prime account (with its streamlined 1-Click ordering capability). That means they have a lot of information and processing power at their virtual fingertips, as well as extensive digital communication capabilities, not to mention financial resources (your preferred method of payment).
And the Echo is designed to respond to the human voice. If you say "Alexa, what is the weather?" within 20-30 feet of the device it will answer. It can speak to you through its own speaker or one you connect to it, either wired or wireless. Let's be clear about what is meant by "respond to the human voice." At this point in time, pending changes to the product, it means "responds to any human's voice" and not just the voice of the person who installed it or whose account is linked to the device. That means it could be the voice of a guest, a child, or a roommate. All of them could potentially buy things on your account if you're the one who set up the device and you didn't change the default settings – about which there will be more in a moment. So a lot of people have been learning what XETV in San Diego discovered: the list of potential users of your Alexa includes people on television (see "News anchor sets off Alexa devices around San Diego ordering unwanted dollhouses").
How can this be? Well, the standard settings on a freshly installed Amazon Echo make this all very easy. Consider this scenario: you and your friends are discussing drones and you decide to ask your newly installed Amazon Echo which drone is the most popular; you say “Alexa, what is the most popular drone?” Alexa will respond by telling you the make and model and price of the most popular drone sold on Amazon.
In one sense that’s pretty cool. The technology is impressive. But immediately after giving you those details, and I mean without even taking a breath, Alexa will say: “do you want to order?” If you say yes, tada! The item is ordered, charged to the card you listed in your 1-Click settings at Amazon.com, and shipped to your designated 1-Click shipping address. And get this: you can’t tell Alexa you have changed your mind. If you ordered in error you have to use the Alexa app or Amazon website to cancel the order.
Alexa, stop!
At this point you might be thinking: “just say no!” But here’s what happens in that scenario. If you say no to Alexa’s offer to ship you that first drone suggestion, then it will proceed to tell you about a different drone and ask if you want to buy that one instead. Based on my own research, I think that’s how you end up with a $160 dollhouse. Alexa’s first pick for a dollhouse costs about $80, but the second pick costs twice that. Basically, your child or roommate doesn’t need to know the make and model of the thing they want; Alexa is more than happy to supply multiple suggestions.
So how do you say no? How do you make this stop? In a moment I will get into changing the default settings for Alexa, but even before you get to that point you might want to know how to cut Alexa off when she is talking and pitching products.
I don’t recall seeing this addressed in the stylish but minimalist documentation that came with the Echo Dot device I bought. So I asked one of my ESET colleagues, a family man who installed an Echo at home some months ago. He replied: “I talk to Alexa like she is one of my children. I say ‘Alexa stop’ and that seems to work.”
I tried this on the test device in my office and it works, but it would be nice if the product came with clearer instructions about how to control it at such a basic level. I found you can also say “Alexa cancel” and that will stop the current activity, but bear in mind that phrase does not work to cancel an order after it has been placed.
It also bothers me that the default settings of the Alexa Echo system are Voice Purchasing On and Confirmation Code Off. Changing these settings is easy enough using the Alexa app that you installed on your phone during installation of your Echo, as shown in the above screenshot. When I have mentioned this concern in conversations with friends and colleagues the almost universal response has been: “Well, it’s in Amazon’s best interest to make it as easy as possible for people to buy stuff.”
What is not easy is having a conversation about Alexa within earshot of the device. There are a couple of ways around this. One is to turn off Alexa’s microphone – that’s what is happening in the picture above where Alexa is glowing orange instead of blue. Another option is to change the trigger word from Alexa to Echo or Amazon. However, both of those alternatives could easily come up in conversation. I would not be surprised to see Amazon upgrade the Alexa software at some point to enable you to choose your own trigger word.
The security takeaways
At this point you may be thinking that this is all very interesting, but in terms of cybersecurity it’s no big deal. After all, an unexpected dollhouse on the doorstep might be a tad inconvenient, but it pales in comparison with something like a ransomware attack that encrypts all of your family photos and holds them for ransom. In many respects I agree, but I do see some potential security lessons in the Alexa dollhouse story.
1.     Products should never ship with “insecure” default settings. Security professionals have been through this discussion many times in the past. If the default install is “allow all” rather than “deny all” you are likely to get some amount of unexpected or unwanted allowing, like a TV broadcast ordering a dollhouse.
2.     Technology purchasing decisions, even domestic ones, should be preceded, or at least accompanied, by a risk-benefit analysis.
3.     Consumers can do risk analysis, but they can’t do good risk analysis if they don’t have all the facts. Just to be clear, at this point in time I have no knowledge that Amazon is holding back facts. What I’m saying here is that the company could be more upfront about how the technology works and what its limitations might be.
4.     Risk tolerance varies from person to person. For example, some people stopped using the internet after the Snowden revelations. A certain percentage of people don’t bank online because they don’t think it is safe. And in the survey ESET did a few months ago, 40% of consumers were “not confident at all” that IoT devices are “safe, secure, and able to protect personal information” (see Internet of Stranger Things).
5.     The security of any given technology depends on the environment in which it is deployed, and unfortunate realities can impose limitations. An open microphone to an artificial intelligence with the power to make things happen in the real world offers many benefits, and I have not yet seen any evidence that Alexa is being abused for malice or gain; but I am sure some people somewhere are thinking about doing just that.
6.     The potential for unexpected and unwanted consequences from deploying technology tends to increase in step with the capability and complexity of that technology. I don’t think Amazon contemplated the TV news story scenario. Some of my colleagues think Amazon did, but shipped anyway, perhaps figuring it was no big deal, or maybe Mr. Bezos decided there is no such thing as bad publicity.
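The first lesson above – ship with "deny all" rather than "allow all" – can be made concrete with a small sketch. This is a generic illustration of secure-by-default settings design; the class and method names are hypothetical, not drawn from any real product:

```python
# Sketch of "secure by default": a device settings object where every risky
# capability starts disabled and must be opted into explicitly.
# All names are hypothetical, for illustration only.

class DeviceSettings:
    def __init__(self):
        # Deny-all defaults: nothing risky is enabled until the owner opts in.
        self.voice_purchasing = False
        self.confirmation_code = None

    def enable_voice_purchasing(self, confirmation_code):
        # Opting in forces the owner to set a code at the same time, so there
        # is never a window where any voice at all can place an order.
        if not confirmation_code:
            raise ValueError("a confirmation code is required to enable purchasing")
        self.voice_purchasing = True
        self.confirmation_code = confirmation_code

s = DeviceSettings()
assert not s.voice_purchasing       # safe out of the box
s.enable_voice_purchasing("4321")   # explicit, coupled opt-in
assert s.voice_purchasing
```

The design choice worth noting is the coupling: the risky feature cannot be enabled without its safeguard, which is the opposite of shipping with Voice Purchasing On and Confirmation Code Off.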
One other topic that frequently comes up in discussions of Alexa and other voice-enabled technology is privacy. Sadly, I have run out of room and time to discuss that aspect here. Fortunately, I did make some time over the holidays to explore more than one voice-activated IoT device and will discuss what I see as the privacy implications in another article.


10.1.17

Security scare over hackable heart implants


A US government probe into claims that certain heart implants are vulnerable to hacking attacks has resulted in emergency security patches being issued for devices that cardiac patients have in their homes.
The medical devices under the microscope come from St Jude Medical, recently acquired by Abbott Laboratories, which was informed by researchers last year that its devices could be forced to malfunction – administering a mild electric shock or pacing at a potentially dangerous rate – or tricked into suffering a high-risk battery drain.
Controversially, research company MedSec Holdings and hedge fund Muddy Waters reportedly profited by short selling stock in St Jude Medical, before telling the manufacturer about the serious vulnerabilities.
The St Jude Medical Merlin@home Transmitter connects the tiny computer inside a patient’s implanted cardiac pacemaker to a doctor’s surgery or clinic, using a telephone line, internet connection or 3G cellular network to communicate critical information about a patient’s heart activity.
The good news for patients is that they don’t have to make as many trips to the clinic, and don’t have to see their doctor in person so often. Remote monitoring allows a doctor to both monitor how a heart is behaving, and see if the implanted device is behaving unusually.
From this point of view, the technological advance can be seen as a good thing. But there is a genuine concern – as we have described before – that the rush to embrace technology to improve and save patients’ lives could introduce high-tech risks.
Perhaps most memorably, security researcher Barnaby Jack demonstrated in 2012 how he reverse-engineered a device to deliver a deadly 830 volt shock to a pacemaker from a distance of 30 feet, and discovered a method to scan insulin pumps wirelessly and configure them to deliver more or less insulin than patients required, sending patients into a hypoglycaemic shock.
In a press release announcing its security updates, St Jude Medical emphasised that it was “not aware of any cyber security incidents related to a St Jude Medical device.”
“We’ve partnered with agencies such as the U.S. Food and Drug Administration (FDA) and the U.S. Department of Homeland Security Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) unit and are continuously reassessing and updating our devices and systems, as appropriate,” said Phil Ebeling, vice president and chief technology officer at St. Jude Medical.
Carson Block, CEO of Muddy Waters, meanwhile believes that going public about the vulnerabilities forced St Jude Medical to take swifter action to fix them, and feels that the fixes do not go far enough:
“…had we not gone public, St. Jude would not have remediated the vulnerabilities. Regardless, the announced fixes do not appear to address many of the larger problems, including the existence of a universal code that could allow hackers to control the implants.”
Researchers claim that the St Jude Medical devices use very weak authentication, opening up potential opportunities for non-hospital staff to hack a home device into sending electrical shocks and malicious firmware updates to vulnerable implanted devices.
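The gap between a universal static code and stronger per-device authentication can be sketched in a few lines. This is a generic illustration of challenge-response authentication with HMAC, not a description of St Jude Medical's actual protocol or of the specific weakness the researchers found:

```python
import hashlib
import hmac
import os

# Weak: one universal code baked into every unit. Anyone who extracts it
# from a single device can authenticate to all of them, forever.
UNIVERSAL_CODE = b"0000"

def weak_auth(presented_code):
    return presented_code == UNIVERSAL_CODE

# Stronger: each implant/transmitter pair shares a unique secret, and the
# implant issues a fresh random challenge per session, so a recorded
# exchange cannot be replayed against this device or any other.
def issue_challenge():
    return os.urandom(16)

def respond(secret, challenge):
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def strong_auth(secret, challenge, response):
    return hmac.compare_digest(respond(secret, challenge), response)

device_secret = os.urandom(32)   # provisioned per device at manufacture
challenge = issue_challenge()
response = respond(device_secret, challenge)
assert strong_auth(device_secret, challenge, response)
assert not strong_auth(os.urandom(32), challenge, response)  # wrong device
```

The point is simply that per-device secrets bound to fresh challenges limit the blast radius of a compromise to one unit, whereas a universal code turns one extracted secret into a master key.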
While more investigation is conducted into how the implanted devices themselves might be made more secure, patients are urged to make sure that their Merlin@home units are plugged in, and connected to a phone line or cellular adapter, to receive current and future security updates automatically.


Connected car hacking: Who’s to blame?

I’ve just about recovered from the sensory overload that is CES to gather my thoughts from what was another fascinating event. This blog, on connected car hacking, is the first of two posts.
New cars are networked computers with an engine attached. Yours doesn’t sync with your phone when it detects you driving? That’s so 2016. At this year’s CES, we saw cars that attempt to connect all the dots along your morning commute, including suggesting routes with less congestion, reminding you of appointments and such. But when this complex ecosystem has issues, who do you call? Auto manufacturers point to the third party computer systems, and they, in turn, point to upstream providers. You’re now driving a tech mashup that just happens to be mobile.
Recently, I bought a new car, and the sales guy told me I needed the extended warranty because the computer replacement cost more than any other single component on the car, including the engine. Try explaining that to classic car collectors. The car won’t skid on slippery surfaces, tries to park itself, and does a host of other distracting things I haven’t quite figured out. The manual is a big thick book, but who reads the manual?
“It’s becoming clear to the folks at CES that your engine is really an accessory.”
It’s becoming clear to the folks at CES that your engine is really an accessory, which can be replaced by a very large electric one very soon, and your computer needs to keep track of voltage to that accessory and let you know about it, probably on an app on your smartphone, which seamlessly appears on your in-dash monitor when you get close to the car.
So we’ve come full circle. While years back you had an office computer where you sat at a chair and did a task, now you sit in a chair with a seatbelt surrounded by a computer that happens to be moving. But in the same way we’ve been fighting attacks for years on desktop computers (which still have issues), we’ll increasingly see issues with that whole mobile experience. But I’m just not sure who to call anymore.
I put that question to one of the booth staff. He had no idea. Apparently, the connectivity to the car is handled by a bulk communication company as a partnership with the folks who make the car, who also partner with the computer people at the booth I was visiting.
I have a colleague in the industry who tried to hack his car for performance with some software he got online. He managed to brick his car, or at least it dropped into limp mode with very limited functionality. He basically could only minimally drive it, and wound up going to the dealer and just saying something was broken and he didn’t know what. They couldn’t understand it either, and eventually replaced the computer. They didn’t charge him. He was very lucky.
“Dealers will become more sophisticated in spotting hack attempts, even as the hacking market for performance modifications increases.”
Dealers will become more sophisticated in spotting hack attempts, even as the hacking market for performance modifications increases. There are a host of new doodads here that allow you to interface with your car more easily, and every year at DefCon there is a larger area devoted to the subject.
Manufacturers are at least working on better firewalls to keep all the onboard computers protected, but those won’t hit the showroom floors for years, meaning there are millions of cars on the road (basically all of them) that hackers will try to exploit.
If a vulnerability is found, they will have millions of vehicles to target that have no effective way of being updated, since few would heed the warning to take it to the dealer for a fix.
It’s not hopeless. There are lots of startups looking at building anti-hacking equipment for modern cars. It remains to be seen whether manufacturers will let you use any of it without voiding the warranty or bricking a very expensive car. If they learn to work together with the security community, however, we can bring to bear lessons learned over years of sitting in chairs in front of computers on desks, and keep us all a little safer.