Now that the dust has settled, it’s time to do a post-mortem and see what lessons can be learned from the WannaCry outbreak.
Do we really understand the risk?
If you categorize WannaCry as simple ransomware, the impact wasn’t as bad as other outbreaks. Yes, it spread quickly, but anti-virus vendors were quick to respond, and its progress was abruptly halted when a researcher registered its kill-switch domain.
But WannaCry isn’t just a simple ransomware outbreak. It’s the first of something new.
Hackers and their ilk have been growing more mature and sophisticated for a long time, but groups like the Shadow Brokers are taking it to a much scarier level. Their recent announcement of a 0-day vulnerability subscription service should raise your assessment of cyber-risk to an all-new level.
The good news
For all the hair-on-fire panic and running-around-in-circles caused by WannaCry, when you take a breath and look back, it’s clearly an opportunity for companies and MSPs to make space in the budget and put the right protection in place. Sometimes it takes a big scare to drive change.
The fear and pain are still fresh. It’s the right time to review security controls with your client and make recommendations to reduce the risk.
What we need to do better
It’s instructive to look at the statistics for this attack and break down where we need to improve.
Better road-mapping and retirement of old operating systems? Not quite: 98% of the infections were on Windows 7 systems, which Microsoft still supports.
Improve antivirus? Anti-spam? If you’re using a tier 1 vendor, it was catching WannaCry very quickly. Symantec alone blocked more than 22,000,000 infection attempts across 300,000 endpoints.
The answer is not one of technology. Technology wasn’t the root cause of the spread, and it isn’t where some of us fell down.
We need to improve our security processes and better train people.
Incident response planning
What do you do when an incident like WannaCry hits a client? How do you respond quickly and effectively?
Even if you didn’t get infected, as soon as WannaCry started to spread, your risk level should have gone to 11. What did you do to ensure that your clients were safe? How did you communicate that to them?
If your client(s) did get infected, what did you do to lock down the issue quickly and get them back up and running? How quick was the recovery? How did you resolve the root cause of the issue?
Incident response planning is vital for MSPs. Develop clear process maps for your incident response, covering the technical, communications, and human elements, and write SOPs for every stage (store it all in IT Glue, of course).
Just writing SOPs isn’t enough, though – run practice sessions regularly so your team knows what to do. If you’re really keen, do some tabletop gaming of incidents. Done right, tabletop gaming is both incredibly effective and a ton of fun for your team.
Security operations
There are many key security operations that MSPs need to be doing very well to ensure their clients are safe.
Patching needs to be at the forefront. With WannaCry, Microsoft had released patches for most affected operating systems months before the outbreak. Yet I heard of late-night emergency patching scrambles at several companies. Patching doesn’t just mean workstations and servers – all systems need to be included: firewalls, websites, productivity applications, IoT devices, and anything with an external IP. PCI-DSS requires that critical patches be installed within one month of release. We should aim for better to ensure our clients stay safe.
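An SLA like that is only enforceable if you track patch age per system. The idea can be sketched in a few lines; the inventory below is hypothetical (in practice your RMM or patch-management tool is the real source of this data), and the 30-day window is just the PCI-style baseline:

```python
from datetime import date, timedelta

# Hypothetical inventory mapping each managed system to the date its
# last patch cycle completed; real data would come from your RMM tool.
INVENTORY = {
    "dc01": date(2017, 3, 20),
    "web01": date(2017, 5, 10),
    "hr-laptop-07": date(2017, 1, 4),
}

def overdue_systems(inventory, today, sla_days=30):
    """Return, sorted by name, the systems whose last patch date
    falls outside the patching SLA window (default: 30 days)."""
    cutoff = today - timedelta(days=sla_days)
    return sorted(name for name, patched in inventory.items()
                  if patched < cutoff)

# Anything returned here is an SLA breach to chase down:
# overdue_systems(INVENTORY, date(2017, 5, 12)) -> ["dc01", "hr-laptop-07"]
```

A report like this, run daily, turns “we patch regularly” into a number you can show a client at the next review.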
Are you sure the front door is closed and locked? If you’re not doing external vulnerability scanning for your clients, you should be. There are many vendors that provide this service – it’s even built into Network Detective, which many MSPs already use.
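A dedicated scanner like Network Detective checks far more than open ports, but the core of an external exposure check can be sketched in a few lines. This is an illustrative snippet, not a substitute for a proper scanning service; the host and port list in the usage comment are hypothetical (445 is the SMB port WannaCry exploited):

```python
import socket

def check_open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    Only run this against hosts you are authorized to scan.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Hypothetical client perimeter check: SMB (445) or RDP (3389) open
# to the internet is a red flag to raise at the next security review.
# exposed = check_open_ports("203.0.113.10", [21, 445, 3389])
```

The point of even a crude check like this is cadence: run it on a schedule and alert on changes, rather than scanning once and filing the result away.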
For many businesses, even those that didn’t get infected, confidence is shaken. This is when having good security conversations is vitally important. If you’re already doing quarterly reviews and they don’t have a strong security component, it’s time to develop one.
Security Awareness Training
Forewarned is forearmed.
The bad guys focus on what gets results – and people are the weakest link. Why put in the effort to breach a firewall when 30% of phishing emails are opened and 12% of links are clicked?
It’s vital that we train staff at all levels to recognize cybersecurity threats and understand their role in the security puzzle.
Do a post-mortem yourself
Now that the incident has passed, run an internal post-mortem of your own response.
The normal parts of a post-mortem include:
- Summarize what happened.
  - Include an impact analysis where possible.
  - Do not blame-storm! This is about being self-reflective and improving process. If it becomes about whose fault it was, the team will disengage completely.
- Determine the root cause.
  - “Uhh, WannaCry?” is not a root cause. Look at each impact and determine why that impact occurred.
- Review actions.
  - What was done during the incident?
  - Include every part of the incident response: communications with clients, processes, and technological factors.
- What could we do better? How do we ensure this doesn’t happen again?
  - Be open to ideas and inclusive throughout this process.
Post-mortems are a vital part of incident response. Doing them well can massively reduce the impact of incidents and improve the effectiveness of your whole team.
About the Author
Mike Knapp is an IT Project Superhero and Cyber-Security Simplifier focused on helping businesses increase success through technology and reduce the risk of cyber-attacks. He is a partner with Incrementa Consulting and the founder of Simple Security.