I’ve been fairly consistent in my opinions, and I hope my viewpoints are well known by now. People, process and policy are the ways to win the cyber war and secure your organization against the threats it faces in cyberspace. I truly believe it’s more a people problem than a technology problem. But today I am going to contradict myself – I’m going to say it’s all about technology. Specifically, basic technology. In this column, I’m going to cover two basic technologies that, if done right, can significantly improve your security profile.
The first one is patches – and patch management. We need to do better at patching our machines, both end-user computers and back-end systems. WannaCry, using the EternalBlue exploit, hit the world stage in mid-May 2017. Microsoft had released a patch for the vulnerability exploited by WannaCry in March, two months before the attack. For properly patched systems, the attack was a non-event. For unpatched systems, it was devastating.
Then, to add insult to injury, NotPetya struck a month later, using a slightly modified version of the same EternalBlue exploit. Unpatched systems were again ravaged – and in a shockingly malicious manner. NotPetya seemed interested only in sowing discord: the means to pay the ransom and retrieve your files appeared to have been only half-heartedly implemented, without any real expectation that it would be used. The must-learn lesson from these attacks should be: Patch management is critical – patch your systems early and often.
Now I’ve been around a bit. I managed a network of Windows for Workgroups systems in the mid-90s, and I remember the midnight release of Windows 95. I remember patches that brought my systems to their knees, and reinstalling entire operating systems after a patch went sideways. I remember subscribing to patch newsletters, which gave the rundown on each aspect of a patch, and assessing how we needed to test before applying it to our environment. I know why we wait and test patches before releasing them to our users; I get it. I also know that I’ve been running Windows 7 and Windows 10 for years now on auto-update and haven’t had a single issue with a patch. It comes down to this: there’s a very slight risk that a modern patch will cause problems, weighed against the very real and present danger posed by malicious software and cybercrime.
If your organization is ultra risk-averse, then perhaps you should wait a day or two after a patch is released and monitor news feeds to see if there are any major issues with it before installing. That said, if your organization is truly ultra risk-averse, I imagine you are simply nodding your head right now, agreeing that patch management and quickly deploying patches is critical.
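To make that waiting period concrete, here is a minimal sketch of what a hold-off gate could look like. It is illustrative only: the patch records and IDs are hypothetical, and in practice the list would come from whatever patch management tool your organization already runs. Anything that has been public longer than the hold-off window gets approved for deployment.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical pending-patch records; in practice these would come from
# your patch management tooling rather than a hard-coded list.
pending_patches = [
    {"id": "KB-EXAMPLE-001", "released": datetime(2017, 6, 13, tzinfo=timezone.utc)},
    {"id": "KB-EXAMPLE-002", "released": datetime(2017, 6, 15, tzinfo=timezone.utc)},
]

HOLD_OFF = timedelta(days=2)  # the "wait a day or two" window


def approve_for_deployment(patches, now=None):
    """Return the patches that have been public long enough to deploy."""
    now = now or datetime.now(timezone.utc)
    return [p for p in patches if now - p["released"] >= HOLD_OFF]


if __name__ == "__main__":
    for patch in approve_for_deployment(pending_patches):
        # Here you would hand the patch ID off to your deployment tooling.
        print(f"Approved for deployment: {patch['id']}")
```

The point isn’t the code; it’s the policy it encodes: the delay is measured in days, not weeks or months.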
Does rapid deployment apply equally to all devices? Probably not. End-user machines can likely be patched immediately, while servers and key infrastructure may need to undergo swift testing before deployment. Other devices? One of the things that terrifies me about Internet of Things devices, and their staggering growth, is the complete lack of any decent patch management strategy.
The second basic technology I want to discuss is backups. If patch management is all about not getting hit, backups are all about what happens if you do get hit. Ransomware can be destructive, but a good backup can be your saving grace.
What defines a good backup? There are a couple of key aspects to consider.
First, you need to be able to restore the data. You would think it goes without saying, but you would be surprised how many times we’ve gone on-site, been asked to restore files, and found that no one had ever done a restore before. If you haven’t tested restoring files, confirming that the data is being backed up properly and can be recovered in an emergency, then your backups are less than worthless: they give you a false sense of security. Testing allows you to a) verify that the backups cover the data you need to retain; and b) make sure that the process of restoring files is understood and can be performed under pressure with short deadlines.
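As one illustration of that kind of test, here is a minimal sketch of a restore drill. It assumes the restored backup is a simple directory mirror of the live data, and the paths and sample size are hypothetical. It pulls a random sample of live files, finds their counterparts in the restored copy, and compares SHA-256 hashes. A mismatch may simply mean the file changed since the last backup, so treat this as a drill aid rather than a strict pass/fail gate.

```python
import hashlib
import random
from pathlib import Path

# Hypothetical locations; substitute your real source and restore targets.
SOURCE_ROOT = Path("/data/live")
BACKUP_ROOT = Path("/mnt/restore-test")   # backup restored or mounted here
SAMPLE_SIZE = 25


def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def restore_drill() -> bool:
    live_files = [p for p in SOURCE_ROOT.rglob("*") if p.is_file()]
    sample = random.sample(live_files, min(SAMPLE_SIZE, len(live_files)))
    ok = True
    for live in sample:
        restored = BACKUP_ROOT / live.relative_to(SOURCE_ROOT)
        if not restored.exists():
            print(f"MISSING from backup: {live}")
            ok = False
        elif sha256(live) != sha256(restored):
            print(f"MISMATCH (changed or corrupted): {live}")
            ok = False
    return ok


if __name__ == "__main__":
    print("Restore drill passed" if restore_drill() else "Restore drill FAILED")
```

Running something like this on a schedule also forces the team to practice the restore procedure itself, which is half the value.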
Second, and equally important when combating vicious malware like ransomware, is that the backups are offline and read-only. We’ve seen instances where online and nearline backups, including live cloud and Apple Time Machine backups, were targeted by malware for destruction. Not only did the malware encrypt every live file, but it specifically sought out the backups, deleting and wiping them.
I wrote previously about using Amazon’s S3 cloud storage with compliance options to create a Write Once Read Many (WORM) backup solution. Old school tape backups can offer the same functionality. The key here is that once the backup has been created, it needs to be locked down so that the malicious software can’t delete or overwrite it. I like S3 because it allows us to do this in an automated fashion and puts expiration dates on that lockdown so that the storage space can be reused. That said, something as simple as an external drive, disconnected except when backups are being made, works just as well.
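For the S3 route mentioned above, a minimal sketch of that kind of locked-down upload might look like the following. This is not necessarily the exact setup from that earlier column: it uses S3 Object Lock in compliance mode, which gives the lock-until-a-date behavior described, and it assumes a bucket that was created with Object Lock enabled. The bucket name, object key, retention period and local path are all hypothetical.

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

BUCKET = "example-worm-backups"          # hypothetical; must have Object Lock enabled
KEY = "nightly/2017-07-01/files.tar.gz"  # hypothetical backup object key
RETAIN_FOR = timedelta(days=90)          # the "expiration date on that lockdown"


def upload_worm_backup(local_path: str) -> None:
    """Upload a backup object that can't be overwritten or deleted until the retention date."""
    retain_until = datetime.now(timezone.utc) + RETAIN_FOR
    with open(local_path, "rb") as fh:
        s3.put_object(
            Bucket=BUCKET,
            Key=KEY,
            Body=fh,
            # Compliance mode: no account, not even root, can shorten the retention.
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=retain_until,
            # S3 requires an integrity checksum when Object Lock parameters are set.
            ChecksumAlgorithm="SHA256",
        )


if __name__ == "__main__":
    upload_worm_backup("/backups/files.tar.gz")
```

Once the retention date passes, normal lifecycle rules can expire the object and the storage can be reused.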
There, I did it. I came down off my it’s-not-about-technology high horse and admitted it – technology plays an important role in cybersecurity. I could have said that patch management and backup policies are critical to success, but that’s not it. The work must be done. Policies are necessary, and they should be well thought out and documented. But the real answer is: We just need to do it.