This article was originally published in The Guardian and is reproduced with permission.

After a ransomware attack swept the world in 2017, organisations everywhere were caught off guard – yet, today, many are still unwilling to invest in resilient IT systems. Companies put off by the cost could face huge consequences.

In May 2017, the WannaCry ransomware swept through organisations around the world, locking computer screens, encrypting files and demanding a ransom paid in Bitcoin. Nearly 18 months on, businesses are still battling to protect their networks and stave off the next cyber meltdown.

The malware exploited a vulnerability in the Windows operating system – one Microsoft had patched two months before the attack – forcing critical infrastructure offline for days as it spread through 150 countries, hitting an estimated 45,000 organisations. Despite those numbers, WannaCry was quickly stopped in its tracks when a researcher registered a “kill switch” domain embedded in its code, and only a few businesses paid the ransom. However, the attack exposed weaknesses in basic security processes, with many organisations leaving their networks unprotected and failing to update, or “patch”, security vulnerabilities. Ever since, the cybersecurity industry has thought long and hard about the lessons of WannaCry.

Scott Walker, senior solutions engineer at Bomgar, says: “It’s been a wake-up call in every industry that basic security practices need to be prioritised to reduce cyber-risk.”

Peter Usherwood, UKI & MEA head of security consulting, integration and compliance at DXC Technology, agrees, adding: “Dealing with these threats requires us to look well beyond typical mechanisms to protect, detect and respond. Those on the front line need to be doing security better and fighting the adversaries with more sophisticated toolsets.”

Some in the security industry believe WannaCry has given them a tool to argue for greater budgets and support. IT directors have long struggled to persuade boards and chief executives to take the threat of a cyber-attack seriously, and organisations have been reluctant to make the heavy investment in security systems – and the constant updates – needed to defend their networks.

WannaCry was a tipping point that made the threat real, the first global cyber event that most people had experienced. It was a painful way to bring home the dangers of ignoring cybersecurity.

In the UK, the NHS was one of the worst-hit organisations. Over a third of trusts and nearly 600 doctors’ surgeries were affected, resulting in almost 7,000 patient appointments being cancelled.

Yet the cause was seemingly trivial. Faced with legacy technology and the large cost of updating systems at a time when budgets were being cut, the NHS was still widely running the vulnerable Windows XP operating system, even though commercial support for it had been discontinued in 2014. The government paid for custom support until April 2015 to allow extra time to migrate away from XP. Two years after that deadline, the NHS paid a heavy price: the Department of Health recently calculated that the WannaCry attack cost the service £92m.

In the aftermath of the attack, the NHS has taken steps to significantly improve its network security, such as moving to Microsoft’s Windows 10 operating system. Weekly threat intelligence alerts identifying new threats are sent out across health and care services, and text messages are sent if a major incident emerges. “Since WannaCry, there has been a collective focus across the NHS on strengthening resilience against cyber-attacks,” says NHS Digital. The service is now focusing on improving the speed of response, communication and knowledge-sharing in the event of an attack.

The organisation has learned the importance of swift and effective patching when new security updates are released, and others are now following suit. Greg Day, chief security officer for EMEA at cybersecurity business Palo Alto Networks, sees three priorities: creating an effective patching strategy; checking that systems thought to have been patched have, in fact, been patched and rebooted; and holding accurate, trustworthy data about security updates – something many IT departments struggle to achieve.

“The reality is that patching is hard,” says Etienne Greeff, chief executive at SecureData. Patching means downtime: it can take an x-ray machine, MRI scanner or piece of industrial machinery offline for hours, delaying vital treatments and holding up production lines – one reason the NHS seemed so unprepared. He estimates it takes 60 days on average from identifying a vulnerability in code to completing the patching process.

“When a new vulnerability is released, attackers try to exploit it quickly, knowing that their window of opportunity is brief as vendors seek to patch it. Time matters, and automated systems can help with that,” says Usherwood.

As new attacks emerge, Usherwood believes that continual monitoring – detecting and reacting fast to events – is crucial. Recent advances in AI, machine learning and active cyber defence mean organisations will be better able to stay one step ahead. However, the ultimate aim should be to design resilience in from the start: building a system that can be patched, updated and tested with minimal business impact.

Although he agrees, Greeff believes there is still a long way to go in building resilient networks that limit how quickly viruses can spread. “What should surprise us about WannaCry is not that an MRI scanner was taken offline, but that it was connected to the same network as the PC in the doctors’ surgery – and that you can take out both with one attack,” he says.

He points to the NotPetya malware attack of June 2017 – which started in Ukraine and soon spread all over the world – as an event that shows how quickly interconnected systems can be taken down. But he believes it is beyond the means of most organisations to segment their systems effectively enough to create resilient networks that cannot be brought down by a single attack.

While the language of resilience has become dominant in discussions of security risk, “post-WannaCry, our challenge is to distil the complexities of security into risk-oriented language the board can digest and take decisions upon,” says Usherwood. He explains that there are some fundamentals leaders can use to drive resilience conversations with their teams. “Firstly, do you know where your digital assets are and their value to the organisation? Are sensors positioned to monitor the health of those assets? And do you have a security team that is capable of detecting, responding to and recovering from any breaches? Getting these basics right provides a stable foundation for best-practice cyber resilience.”