For the second of our two features focusing on data and cyber security, we spoke to Michael Abtar and Dr Bright Mawudor from IG Smart.
Michael and Bright discussed the key data and cyber security threats facing healthcare in 2023; embedding security into solutions by design; the importance of adhering to standards; how IG Smart supports healthcare and technology organisations to avoid risks; and more.
Michael: I’m the founder and CEO of IG Smart and also principal consultant for the data and information governance and cyber security side of the business. Since inception in 2009 I’ve been heavily involved in healthcare. It was where we started out, from working with primary care trusts to clinical commissioning groups to commissioning support units, right up to the national centre with NHSX, NHS England, the Department of Health and Social Care, and so on.
There are a lot of security threats facing healthcare and Bright is going to highlight some of those. We’ve been at the helm, as it were, with threats involving things like new models of care, test beds, wearable devices, use of IoT (Internet of Things) in clinical and non-clinical settings. We’ve designed the national information governance support package and helped deliver it for the projects and programmes involved in that area. We’re also involved in setting the framework for information governance and security.
Bright: My role at IG Smart is to help organisations improve their cyber security resilience. Security is about trying to protect your systems, whilst resilience is being able to understand what could happen before it happens, and how you can continue from there. Currently I do a lot of security assessments and incident response. For example, if somebody has been hacked or feels they have been hacked, I would work with them to see how we can stop the spread of that hack before it goes further, and stop it from becoming a bigger problem.
Cyber security in 2023
Bright: If we’re looking at current threats, whether that’s worldwide or in specific areas, the main thing to say here is that those threats are forever increasing. The numbers are shocking. Hackers will never stop. Now, there are AI tools that can do half the work – people don’t have to be really skilled in cyber security, or even in hacking, to write hacking scripts really quickly with the help of these tools. I’ve tried it myself, and it’s quite scary.
Looking specifically at the healthcare industry, emerging threats will keep on growing as long as we have new technologies impacting how patients are seen. I’ve seen, for example, an app that can call the hospital for you telling them that you have been in an accident because it registered an impact. The technology advances will always be there and they shape the future of healthcare and also of security.
In the last few years, one report suggests, data breaches have increased by around 32 percent compared with 2020. COVID has had an impact here; the pandemic gave hackers a lot of opportunities to find new ways to scam people and retrieve their data. Everyone was so focused on COVID that it distracted people from how they could be affected in the cyber world. With the hospitals so busy, more people were wanting to contact them, more people were wanting to pay for medication to be delivered, for example – there were so many ways for information or data to be given out.
There are websites that can show you how many data breaches are happening. Commonly in healthcare breaches you will see that hackers are often looking for names, addresses – physical or email – date of birth, credit card numbers with expiration date, and demographic, clinical or health insurance information. They want this information because it can be pivoted into so many different uses and can be used for further attacks. There are so many phishing pages out to get your details and so many people can end up giving their information away.
Security vulnerabilities in healthcare
Bright: We are in an age where critical infrastructure, in healthcare especially, has become part of a national infrastructure. People cannot afford to lose any data that is important to them. GDPR has helped with this by shaping up security and changing people’s thought processes around it, to think about how they can start to take security very seriously. The thing with GDPR is that the standards actually affect them – people can get fined if they don’t conform to the data privacy rules. So I think there’s a steady growth here, but a lot more awareness is needed in general.
I’ve found that the main reason why most people get hacked or experience a breach is not because they don’t have solutions. The solutions are there, but people might not be exactly sure how to use them. Something I often see is that a business won’t have a dedicated security team. There will be someone who used to work in IT who has been placed in a security role instead.
Even if there is a security team, often they won’t have upskilled or learned anything new in the last three years. They’ll have done the same job every day in the same department – applying firewall rules, writing some policies. But they’re not learning anything new, and as I’ve said threats are always changing.
Another thing with the existing teams is that they often don’t work with other departments – IT, HR, finance. They tend to work more in siloes.
An example of a typical attack that I’ve seen happen for a very long time now is using a fake identity to send someone an email telling them that they need to pay an invoice for something. The finance department might not have the right processes in place to tell that this request isn’t right, and this leads them to pay the wrong bank account. It’s easy for hackers. They can send emails from people’s accounts and the email will arrive with the name and the profile picture – it’s a matter of seconds.
I’ve seen this happen in manufacturing, travel, housing and healthcare, from Africa to Europe. It comes down to social engineering, which means human interactions masking malicious activities. It’s becoming a very easy way to compromise people and it will remain among the most exploited vulnerabilities in the entire world. People don’t realise the extent of what can happen in the security space.
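One defence against the invoice scam described above is checking the email authentication results that receiving mail servers record. As a minimal illustrative sketch (not something from the interview – the header values and sample message are hypothetical), a finance team's tooling could refuse to trust payment-change requests whose SPF and DKIM checks did not pass:

```python
# Hypothetical sketch: flag emails whose Authentication-Results header
# does not show passing SPF and DKIM checks. Real deployments should rely
# on DMARC enforcement at the mail gateway rather than ad hoc parsing.
from email import message_from_string

def auth_results_pass(raw_message: str) -> bool:
    """Return True only if SPF and DKIM both report 'pass'."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    return "spf=pass" in results and "dkim=pass" in results

# A spoofed invoice email typically fails these checks:
sample = """\
From: finance@trusted-supplier.example
Authentication-Results: mx.example; spf=fail; dkim=none
Subject: Urgent: unpaid invoice

Please pay the attached invoice to our new bank account.
"""

print(auth_results_pass(sample))  # False -> treat the request as suspect
```

The point is process, not code: a failing check should route the request into a manual verification step before any payment details are changed.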
Bright: There are two main types of cyber threats – accidental ones, where somebody or something has been compromised and hackers use them to work their way into the entire network of systems, and malicious bad actors who are insider threats giving access to external parties.
In healthcare, there will be malicious bad actors who will give access to patient data to competitors, and they get paid a lot of money for that. That has been going on for a long time and it is increasing, with healthcare seen as a major target.
Looking at current advances such as wearable devices or IoT technology, we have to think about potential ways in here too – for example, it’s really easy to find an IoT device on the internet. There are websites set up for that specific purpose. Hackers can use those sites to view publicly available technologies and devices, say a hospital’s heating system, and log into them to make changes.
A prediction for 2023 would be that IoT devices are going to get exploited more and more, and the same will be true for wearables, if they are not taken care of.
One of the biggest threats apart from social engineering is around applications – all the platforms that require people to enter confidential information. They often link to other systems; for example, a hospital might want to take a patient’s data such as their care plans and link it to an insurance company to make sure the patient is getting the right insurance. The insurance company will then be linking to the patient’s bank account. All of that is linked up using APIs (application programming interfaces). Here’s the problem – APIs are often not designed properly so they can leak out so much data. I’ve been doing a lot of security testing for some time now and the past two or three years I’ve seen more issues with API security than ever before.
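The API flaw described above is often "excessive data exposure": an endpoint serialises the whole internal record rather than the fields a given consumer needs. A minimal sketch, with invented field names, of the difference between a leaky response and one built from an explicit allow-list:

```python
# Illustrative only: the record fields and consumer names are hypothetical.
FULL_RECORD = {
    "patient_id": "p-001",
    "name": "A. Example",
    "care_plan": "physio, weekly",
    "insurance_number": "INS-9921",    # must not leave this system
    "bank_account": "12-34-56 000123", # must not leave this system
}

def leaky_response(record: dict) -> dict:
    # Anti-pattern: returns everything, trusting the caller to ignore
    # the fields it shouldn't see.
    return dict(record)

# Safer: an explicit allow-list per API consumer.
ALLOWED_FOR_INSURER = {"patient_id", "care_plan"}

def safe_response(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in ALLOWED_FOR_INSURER}

print(safe_response(FULL_RECORD))  # only patient_id and care_plan are exposed
```

Designing the allow-list per consumer, rather than filtering on the client side, is what keeps a compromised downstream system from ever holding data it was never sent.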
Michael: It links back to the new models of care programme that I mentioned at the start – this is where security by design is so important, making sure that security is embedded in the design of the solutions. It extends beyond that now as well, to people’s homes. If they are using their own WiFi router, for example, and they haven’t got a secure WPA2 or 3 compliant router, if they haven’t changed their default password, then they are potentially exposing themselves to threats. We’re now seeing the sort of risks that we try to avoid by designing security into the system from the beginning.
Bright: Security by design is something a lot of people don’t take into consideration. People leave default credentials in systems and don’t change them because they think that the system or application already has security. But there is no security to a platform that just ‘arrives secure’ and is just plugged in and set off. There’s a model for this – zero trust: never trust, always verify. If someone logs in from a new place, verify that they are a real person who is supposed to be logging in at that time from that place.
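The "verify every login" idea can be sketched very simply: compare the context of each login against contexts previously verified for that user, and trigger step-up verification (for example MFA) on anything new. This is a toy model with a hypothetical data structure, not a description of any product:

```python
# Hypothetical sketch of zero-trust-style login checking: the users,
# locations and device IDs are invented for illustration.
KNOWN_CONTEXTS = {
    "alice": {("London", "laptop-01")},
}

def requires_step_up(user: str, location: str, device: str) -> bool:
    """True if this login should trigger extra verification (e.g. MFA)."""
    return (location, device) not in KNOWN_CONTEXTS.get(user, set())

print(requires_step_up("alice", "London", "laptop-01"))  # False: known context
print(requires_step_up("alice", "Lagos", "phone-07"))    # True: verify first
```

Real systems weigh many more signals (time of day, impossible travel, device health), but the shape is the same: no context is trusted by default.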
When it comes to working with vendors, I tell organisations that they need to be able to do a lot of threat modelling. You need to understand what threat level you have, what potential avenues are hackers going to take, what you should spend your money on. Sometimes we can save money by doing this – you might just need an open source solution, for example, not the expensive solution you thought you needed. People can be suspicious of open source solutions but I would say they are likely to have been developed by people who have properly thought about the issues you might face and want to share ways to help.
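One common output of threat modelling is exactly the budgeting aid described above: a ranked list of threats. As a toy sketch (the threat names and 1–5 scores are invented), ranking by likelihood times impact makes clear where the first pound of spend should go:

```python
# Illustrative risk-scoring pass; all entries and scores are hypothetical.
threats = [
    {"name": "phishing of staff",   "likelihood": 5, "impact": 4},
    {"name": "exposed IoT device",  "likelihood": 3, "impact": 4},
    {"name": "stolen backup tapes", "likelihood": 1, "impact": 5},
]

for t in threats:
    t["score"] = t["likelihood"] * t["impact"]

ranked = sorted(threats, key=lambda t: t["score"], reverse=True)
print([t["name"] for t in ranked])
# Highest-scoring threats get budget and effort first.
```

Formal methodologies such as STRIDE structure the identification step; the prioritisation step usually reduces to something like this.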
At the moment, I’m seeing a lot of organisations moving over to cloud. With this, I tell a lot of organisations to go back to the basics. What does your environment look like? What devices, systems and data do you have? Develop your understanding of the processes you have to follow and then you can know what kind of technology to put in place, and how to design the right network or systems for you.
Another key aspect here is continuous monitoring, where we come to talk about threat intelligence. Monitoring threats can be a challenge for organisations because first of all they don’t know exactly what to monitor. And then there’s alert fatigue. If every day you get 10,000 alerts and you have a team of two or three people, you’ll spend a lot of time combing through trying to know what is real and what is a false positive.
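One standard mitigation for the alert fatigue described above is aggregation: collapse raw alerts into grouped signatures so a team of two or three reviews a handful of buckets rather than thousands of lines. A minimal sketch, with invented alert fields:

```python
# Hypothetical sketch: group raw alerts by (rule, source) signature.
from collections import Counter

raw_alerts = [
    {"rule": "port-scan", "src": "10.0.0.5"},
    {"rule": "port-scan", "src": "10.0.0.5"},
    {"rule": "ransomware-beacon", "src": "10.0.0.9"},
    {"rule": "port-scan", "src": "10.0.0.5"},
]

buckets = Counter((a["rule"], a["src"]) for a in raw_alerts)
for (rule, src), count in buckets.most_common():
    print(f"{rule} from {src}: {count} alert(s)")
```

Production SIEM/SOAR tools add suppression windows and correlation rules on top, but the principle is the same: triage signatures, not individual events.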
Michael: We’re currently involved in advising and supporting health and social care organisations and those that manufacture and deploy software in these environments to implement the core standards. From a cyber security perspective, that is beginning with the Data Security and Protection Toolkit, Cyber Essentials and Cyber Essentials Plus as a bare minimum. ISO27701 is not mandated by the NHS but it could give people a competitive edge and looking into broader markets it could give you national recognition.
Additional NHS-specific standards are the Digital Technology Assessment Criteria, DCB0129 and DCB0160. DCB0129 is for healthcare technology manufacturers – we work with them to essentially meet all of the criteria, covering data protection, technical security, interoperability, usability and accessibility, but also clinical system safety and clinical risk management with our team of clinicians acting as clinical safety officers. We work across the cyber security and privacy sides, whether it’s completing data protection impact assessments, advising on what specific security vulnerabilities are in place, or working out what penetration tests might be appropriate.
We’ve got a few global pharmaceutical companies that we advise, managing their data privacy and governance programmes for them. We also chair committees and governance groups – we’re involved in a lot of advisory work as well as hands-on implementation of standards.
Bright: One framework that we see a lot of government and individual companies adopt is the NIST framework (National Institute of Standards and Technology). It helps people know how to identify a threat so they know how to protect, detect, respond and most importantly, recover. Ultimately, the recovery process is the difference between being able to run a business or not. This is key for resilience.
Bright: We help organisations by coming up with use cases regarding what to do if threats do occur. We will look at a particular scenario; for example, within a hospital. We’ll identify the problems, look at the protection that is in place, check your detection and monitoring tools, look at your response team, and then say: okay, this has happened. This is how you can recover or stop the attack from spreading.
It’s important to understand what you’ve got and how it works. People can wonder why they keep getting hit by ransomware because they’ve got a good firewall, but ask the question: does the firewall work? Has it been stress tested enough for you to know the answer? Do you know if your antivirus software is working?
We are moving away from traditional antivirus software to what we call EDR (endpoint detection and response). They work more intelligently – they can see anomalies, they can see ransomware. For example, if an email arrives with an attachment that is zipped with a password and contains a macro, something isn’t normal there – why would somebody send patient data with macros for calculations? The macro could mean that there is potential malware that has been embedded.
I’ve shown this to clients in demonstrations, as a potential way that they could be exploited. It was something they had never thought about. An EDR solution can notice things like this and automatically kick them out from the network so that it doesn’t spread.
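The macro-in-encrypted-zip pattern Bright describes can be expressed as a simple heuristic. This is a toy illustration of the kind of rule an EDR product evaluates, with a hypothetical attachment-metadata structure, not a real product's detection logic:

```python
# Hypothetical sketch: flag a password-protected zip that contains
# macro-enabled Office files, which is unusual for clinical data.
MACRO_EXTENSIONS = (".docm", ".xlsm", ".pptm")

def is_suspicious(attachment: dict) -> bool:
    inner = attachment.get("contained_files", [])
    has_macro_file = any(name.endswith(MACRO_EXTENSIONS) for name in inner)
    return attachment.get("encrypted_zip", False) and has_macro_file

mail_attachment = {
    "filename": "patient_data.zip",
    "encrypted_zip": True,               # scanner cannot inspect the contents
    "contained_files": ["figures.xlsm"], # macro-enabled spreadsheet inside
}

print(is_suspicious(mail_attachment))  # True -> quarantine and alert
```

Real EDR combines hundreds of such signals with behavioural analysis; the value of even one rule like this is that it turns "why would patient data need macros?" into an automatic question.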
Michael: It’s a very real threat – we’ve seen situations where things like this have led to the whole network crashing. In a clinical setting, the consequences could be devastating. From a clinical perspective, it’s not just the loss of access to systems. If the integrity of that data is compromised, or if data is missing, we’re talking about patients’ lives. At best, we’re talking about their quality of care being diminished.
How to avoid risks
Bright: Follow the standards – for application security, web applications, networks, mobile applications. OWASP (Open Web Application Security Project) provides guidelines on what needs to be followed.
Start upskilling teams. New learnings are coming up every day.
There is a need for continuous monitoring of every single application, especially when it comes to API. Teams need to work on continuous patch management. Sometimes systems are not supposed to be hacked but just because a new vulnerability has been introduced and a new security patch hasn’t been applied, the system is exposed.
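The patch-gap problem above reduces to a simple comparison: for each system, is the installed version at least the minimum patched one? A sketch with made-up product names and versions:

```python
# Hypothetical patch-gap check: product names and versions are invented.
def version_tuple(v: str) -> tuple:
    """'2.4.1' -> (2, 4, 1), so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

minimum_patched = {"imaging-portal": "2.4.1", "booking-api": "1.9.0"}
installed = {"imaging-portal": "2.3.9", "booking-api": "1.9.0"}

exposed = [
    name for name, v in installed.items()
    if version_tuple(v) < version_tuple(minimum_patched[name])
]
print(exposed)  # ['imaging-portal'] is missing a security patch
```

In practice this feed comes from a vulnerability scanner matched against advisories; the continuous part is running it every day, not the arithmetic.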
Going back to the basics, people leave out so many security keys, tools and critical information on the public web. It’s so easy to use Google to find those things. Again, follow the guidelines and processes, they are there for a reason.
Work on threat modelling, again and again, so that you know which areas need your effort. Otherwise, you’ll end up with a lot of fatigue. This also helps with budgeting.
Michael: Upskilling is a key point. It’s often lacking, I think, and training goes alongside that – both training your existing staff to raise their awareness, which is something we provide, and helping people to build and augment their teams. If you can offer your people the services of a chief information security officer, you’re giving them access to a high level of advice.
Bright: There has to be frequent awareness training for staff, and also you have to make sure that people are really paying attention. There are so many tools out there for awareness training where you can show people a video, but at IG Smart we like to go a bit deeper. We do a live demonstration, we show people the threat and what the attack looks like, and we do an interactive session to make sure that people get to really see what can happen. We pull up sample emails, for example, and ask people to tell us which is the fake one. We demonstrate how easy it is to hack into an email account by showing people how we can send emails on their behalf. It changes their perspective.
Then we do an official campaign and nobody knows when that is going to happen. We’ll send an email from HR out to staff, for example, and tell people that we’ve changed a portal and they need to fill in their details on a link. Then we collect the statistics and hold another training session following that to show how many people clicked, how many people filled it in, how easy it is to fall for these scams – and underline that we would now be able to hack into their accounts if we were hackers.
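The statistics gathered from such a campaign are straightforward to compute. As a sketch with invented per-user outcome records, the numbers presented back to staff are essentially click and credential-submission rates:

```python
# Hypothetical campaign results; users and outcomes are invented.
outcomes = [
    {"user": "a", "clicked": True,  "submitted": True},
    {"user": "b", "clicked": True,  "submitted": False},
    {"user": "c", "clicked": False, "submitted": False},
    {"user": "d", "clicked": True,  "submitted": True},
]

total = len(outcomes)
clicked = sum(o["clicked"] for o in outcomes)      # booleans sum as 0/1
submitted = sum(o["submitted"] for o in outcomes)

print(f"clicked the link: {clicked}/{total} ({100 * clicked / total:.0f}%)")
print(f"submitted credentials: {submitted}/{total} ({100 * submitted / total:.0f}%)")
```

Reporting the rates rather than naming individuals keeps the follow-up session about awareness, not blame.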
We also talk to technical teams and run through the tools they have but also how they are using them, how those tools are working for the organisation, how they can work with automation and orchestration.
Hopes for the future
Bright: In five years’ time I’d like to see a lot more policies being pushed, and also their actualisation – people showing exactly what they are doing with those policies.
There’s also a need for collaboration between entities. A lot of people don’t share information. I think we need a lot more workshops, and organisations need to talk about how they have been compromised. People tend to focus on the fact that they got hacked and they lost a certain amount of data rather than how it happened. If people talked about this, other organisations would know what to look out for. There’s a vulnerability in not sharing – if hackers have got into hospitals A, B and C, hospitals D, E and F are going to get hacked in the same way because hackers will have realised a way in.
Michael: I would agree with that, especially on the need for a collaborative approach. I think the NHS actually tends to be quite good at this with the umbrella organisations that people work through. But also, there’s something to be said for looking beyond the industry. Healthcare can learn from banking, defence, aerospace, in terms of technical advancements.
The NHS and healthcare in general hasn’t necessarily been at the forefront of cyber security; if anything it has tended to be on the receiving end of a lot of attacks. I think people’s fear of the reputation damage and regulatory fines that can be associated with a data breach really puts people into panic mode. But I think it will help to be proactive and prepared – to think of breaches as ‘when’ rather than ‘if’. Focus on resilience, essentially.
If data is encrypted at rest and in transit, it really makes a hacker’s job a lot more challenging. So in the future I’d hope to see a return to those basics too.
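For data in transit, "back to basics" largely means leaving standard TLS defaults switched on rather than disabling them when something fails. A small sketch using Python's standard `ssl` module shows the defaults that matter:

```python
# Python's default client TLS context already enforces the basics;
# the anti-pattern is turning these off to "make the error go away".
import ssl

context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: peer must present a valid certificate
print(context.check_hostname)                    # True: certificate must match the hostname
```

Encryption at rest is the same story in a different place: use the platform's standard mechanisms (full-disk or database-level encryption) with defaults intact, rather than home-grown schemes.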
Bright: I would say: be aware, and beware. The cyber space is always going to be a daunting space for people to secure and there’s never going to be a one-size-fits-all solution. We just need to make sure that we can be adaptive.
Michael: In terms of what IG Smart can do, from starting with a single NHS client, today we have clients operating in over 150 different countries, across all sectors and most industries. We still work heavily in healthcare and healthcare tech, and we have a multidisciplinary team of consultants operating across four continents. From a security and data perspective, the team includes certified cloud, cyber and information security professionals, ethical hackers, and ISO27001 lead auditors and implementers. We can holistically provide solutions to the real-world challenges that health, social care and tech organisations face.
Bright: Finally, I’d say people need to consult. People try to solve everything and you can’t have all skillsets. None of us know everything.