Human is the strongest link! Cyber security story #3

30 July 2020

After having put some technical practices in place to reduce exposure to phishing attacks, only one of our two security experts focused on a human-centric approach to security. Her company understood how essential people are to its security, and this human investment will make the difference during the attack... Fortunately, she had called Manu beforehand ;-)

Discover below, in our third chapter, "how humans can be the strongest link to face phishing threats"!

Do you want to read the full story and get some advice on reducing phishing threats? Download our cyber security story here.

A story written by Emmanuel Nicaise, our Human-Centric Cyber Security Expert, our storyteller and the famous Manu of the story.

1. Human-Centric Security


Alice knows that Ben’s technical choices are excellent. She has served as a captain in the army, however, and knows full well that wars are won by people, not just with technology. Alice often quotes Helmuth von Moltke: “No plan survives first contact with the enemy”. Cyber threats are evolving too fast nowadays. We cannot expect any technology to be ahead of the hackers. On the other hand, humans can adapt to any new situation if properly trained. This means that we need our staff to change their behaviour accordingly so as to keep them, and us, safe and secure.

Alice has read many books on human management and company transformation. There are so many theories about behavioural change that she did not know which one to trust. Fortunately, one of her consultancy firms had developed a framework for managing security while keeping humans centre stage. Their expert, Manu, comes from an academic background and has years of experience in IT, risk and psychology. He and Alice got together and drew up a plan. Manu used the COM-B model, albeit in a somewhat simplified form, to explain one of the foundations of his framework.

 


According to the COM-B model, behavioural change is the result of three different factors:

  • the Capability to perform the expected behaviour (necessary knowledge, training, physical or cognitive capabilities and limitations);
  • the Opportunity to enact the expected behaviour (we will not put our seatbelt on if we stay in bed);
  • the Motivation to act. This motivation is itself mediated by the amount of effort required to perform the new behaviour (efficacy, self-perception, stress level) and by the motivation to change (personal goals, personal values, meaning, norms, social/peer pressure, fear, rewards).

Unfortunately, there is no “one-size-fits-all” solution to educate people, let alone to change behaviours. As highlighted in the 2018 ENISA report entitled “Cybersecurity Culture Guidelines: Behavioural Aspects of Cybersecurity”, we do not yet have a reliable model that can predict cultural change.

Most of the current models do not take the context of the intervention into account. As culture has a significant impact on the way we can foster change, a specific framework is necessary. Alice also wanted the framework to include the classical Deming wheel (Plan-Do-Check-Act), as in ISO 27001, to make sure objectives are achieved. Manu’s framework meets her expectations.

Alice can use this Human-Centric Security Framework to: 

  • Define security controls based on relevant risks,
  • Ensure they are useful in mitigating the risks,
  • Ensure controls are efficient or at least cost-effective.

The framework can be depicted as follows in diagram form: 

The goal of the framework is to reduce resistance and to improve people’s capability and motivation to act securely. Alice and Manu used this framework to define multiple possible remediation actions against the aforementioned phishing threats. Beyond the technological solution, they also worked on the human aspects from two angles: training and usability.

All users should be able to differentiate between a legitimate email and a malicious one. Not all users have the same knowledge of computer systems, however, so they are not all able to recognise all the signs of deception. Furthermore, even when they can, they are often in a hurry and forget to think about it. Sometimes, they click on a link simply out of habit.

 

2. Security Education


One of the most frequent reasons why people do not follow security rules is the lack of the necessary knowledge to do so. All too often, security professionals take users’ basic internet knowledge for granted. That is a mistake. A large proportion of the population uses the Internet daily. Nevertheless, technical knowledge, such as the format of a URL or what exactly a domain name is, is not something every user has. When we ask our users to check the domain name or the URL before clicking on a link, they may not understand what we are referring to.
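As a purely illustrative aside (the link below is hypothetical, not taken from the story), a short Python sketch shows what “checking the domain name” actually means, and why the real domain of a link can differ from what its beginning suggests:

```python
from urllib.parse import urlparse

# Hypothetical link, as it might sit behind a "Click here to log in" button.
link = "https://portal.a-corp.com.secure-login.example/reset?user=alice"

parts = urlparse(link)
print(parts.scheme)    # https
print(parts.hostname)  # portal.a-corp.com.secure-login.example

# What matters is the registrable domain at the right-hand end of the hostname:
# here it is "secure-login.example", not "a-corp.com", even though the hostname
# starts with a familiar-looking prefix.
registrable_domain = ".".join(parts.hostname.split(".")[-2:])
print(registrable_domain)  # secure-login.example
```

This is exactly the kind of basic vocabulary (URL, hostname, domain) a training course needs to establish before asking users to “check the link”.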

Alice therefore started by providing training to ensure her users have a basic knowledge of the environment in which they operate. A common and straightforward vocabulary was used to ensure all users could understand the material. In such situations, we often have a tendency to say something like “send a continuous flow of compressed air into the nasal cavity to remove any organic blocking material” instead of saying “blow your nose”. She wanted to avoid such gobbledygook. 


The consultants designed the training to make it adaptable to the user’s technical knowledge. In this way, people with existing knowledge just had to answer a few questions and were thus spared the frustration of being forced to listen to what they knew all too well already. 

They also prepared a second training course to teach her people how to recognise signs of deception in a phishing email:

  • incorrect sender’s domain,
  • suspicious link,
  • unexpected file type in attachment,
  • time pressure,
  • vague content.

At first, she wanted to add other clues like grammatical errors or bad graphical design. These are less and less relevant, however: legitimate senders now tend to make more mistakes than phishers do.
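As a hedged illustration of how some of the signs listed above translate into concrete checks, here is a minimal Python sketch; the trusted domain, the risky file extensions and the time-pressure keywords are assumptions made for the example, not part of the training described in the story:

```python
from email.message import EmailMessage

# Illustrative assumptions only.
TRUSTED_DOMAIN = "a-corp.com"
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".xlsm"}
PRESSURE_WORDS = ("urgent", "immediately", "within 24 hours", "last warning")

def suspicious_signs(msg: EmailMessage) -> list[str]:
    """Return the deception signs found in an email message."""
    signs = []

    # 1. Incorrect sender's domain.
    sender = msg.get("From", "").lower()
    if TRUSTED_DOMAIN not in sender:
        signs.append(f"sender outside {TRUSTED_DOMAIN}: {sender}")

    # 2. Unexpected file type in attachment.
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in RISKY_EXTENSIONS):
            signs.append(f"risky attachment: {name}")

    # 3. Time pressure in the body text (a crude keyword check).
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content().lower() if body else ""
    if any(word in text for word in PRESSURE_WORDS):
        signs.append("time-pressure wording")

    return signs
```

A script like this only catches the mechanical signs; vague content and well-crafted pressure are exactly what still requires a trained, attentive human.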

As most employees are already quite busy, Manu suggested keeping the training courses short. Attention span, the capacity to stay focused on one subject, is often less than 15 minutes.

They also used the human-centric security framework to make sure they addressed the right problems and that the training was effective. 

The consulting company also drafted communications and short videos to repeat critical messages. After all, repetition, repetition and repetition are the three fundamental rules of education, aren’t they? To fight habituation, noise, boredom and a potential lack of interest, however, they used empathy, creativity, humour and surprise to reshape the messages.

Ben also invested in some online training for his users. It was not always relevant, however, and lacked the personal touch needed to make it enjoyable and make people care. For Alice, knowledge can help improve her users’ detection skills, but would it be enough?

3. Measuring success

 

Alice decided to ask her Red team to perform a phishing simulation. They prepared an email that looked like a press release, with the catchy title “Press Release – Embargo until Monday”. They attached a PDF with a small script, similar to the one used by hackers to download malware, to gauge how many people opened the file. They sent it to the entire company on a Friday afternoon. By Monday noon, around 60% of the company had opened the file. Alice was surprised by the results. Was the training useless?


She called the security expert who was helping her with human aspects of cybersecurity.

  • Hello Manu, I’ve just sent you the report of a red team exercise we performed to measure our progress on phishing. The result is catastrophic.

  • I see that. It is alarming, but we cannot draw any conclusion yet.

  • Why? It is not good at all!

  • We have only just started the training. We finished the CBT and we will start the phishing exercises this week. You cannot compare apples and pears.

  • I don’t understand. That’s the result of a phishing email. How can it be different from another one?

  • Our own research, and that of others, shows that the variance between the results of two phishing emails sent to the same population at the same time can be as high as 60%. That’s why we have a quarterly process to measure our progress. Let me share my screen with you.

  • What is this?

  • In order to compare apples with apples, we use four scenarios that we send every quarter to different groups at A.com. The scenarios have different levels of complexity. We thus have a reliable measure of our training’s efficacy. If we used the same scenarios, but one per quarter, we would get a different view:

  • Indeed, with this view, I wouldn’t have seen any significant progress before the end of the year.

  • Exactly! There are also other issues with the metrics associated with phishing.

  • Such as?

  • When they measure the click or failure ratio, companies often use the number of people to whom they sent the email as the denominator.

  • That makes sense, doesn’t it?

  • Yes, except that some people have rules for deleting external emails or for archiving them automatically. So, it is better to use the number of people opening the email as the denominator.

  • Can we rely on this number? How can we know?

  • We added a tiny image to the email, and then added the server hosting these images to the list of trusted servers. When the email is opened, the mail client contacts the server to display the image, and we can track the opening of the email.

  • OK, that makes sense too.

  • Yes. Furthermore, we track people reporting the phishing email to the security team. That’s the behaviour we want to see in our people: detect and report.

  • That’s why we have added the Phishing Alert Button in the mail client.

  • Indeed. It makes it easier to report phishing emails, and we can easily count the number of people alerting the teams during our phishing exercises. We also combine it with the ticketing system of the Security Incident Response Team.

  • Why?

  • So we can have a good idea of how many people report phishing and legitimate emails to them. We want our people to become smarter. They must be able to differentiate a phishing email from a legitimate one. If we only measure the people alerting the security teams during our phishing exercises, we miss the false positives.

  • What do you mean?

  • Sometimes we put people under so much stress that they start behaving irrationally. If some people get paranoid, they will send a lot of legitimate emails to the security team, flooding it with unnecessary work. It also means we did not achieve our goal, as they are not able to differentiate between phishing, spam and a legitimate email. That’s not a success!

  • OK. I understand now. So, I have to wait to see the results of the quarterly tests.

  • Yes, you do.

  • OK. Are you sure the phishing exercises will help that much?
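Before the story continues, the metrics discussed in this conversation can be summarised in a short sketch. The figures below are invented purely for illustration; the point is the choice of denominators: clicks are counted against opens (tracked through the tiny embedded image), and reports of the simulation are weighed against false positives on legitimate emails.

```python
# Hypothetical results of one phishing exercise; the numbers are made up
# solely to illustrate the ratios discussed above.
sent = 1000               # emails sent by the red team
opened = 620              # opens detected via the tracked image
clicked = 180             # recipients who opened the attachment or link
reported_simulation = 95  # reports of the simulated phishing email
reported_legitimate = 40  # legitimate emails wrongly reported (false positives)

# Using "sent" as denominator understates the risk when mail rules delete or
# archive external emails before anyone ever sees them.
click_rate_vs_sent = clicked / sent        # 18%
click_rate_vs_opened = clicked / opened    # ~29%, the more honest figure

# The behaviour we want to see: detect and report.
report_rate = reported_simulation / opened  # ~15%

# How well people tell phishing from legitimate email.
report_precision = reported_simulation / (reported_simulation + reported_legitimate)

print(f"click rate (vs opened): {click_rate_vs_opened:.0%}")
print(f"report rate: {report_rate:.0%}")
print(f"report precision: {report_precision:.0%}")
```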

4. Phishing exercises

 

  • Have you ever driven your car back home and had difficulty remembering how you got there? You were enjoying the music, listening to the news or talking with your passenger while driving at the same time. Having a discussion, understanding the content of an interview and driving are complex actions. However, as we become seasoned drivers, our brain develops automatisms. They enable us to perform this complex task with minimum effort, almost involuntarily. That is what we call a heuristic. We can develop heuristics for many activities: speaking, reading, playing the guitar, running, drawing or kicking a ball. An action or behaviour that we perform regularly can become a heuristic.

  • OK, but what’s the point?


  • Heuristics are fast and require little attention, but they produce stereotypical responses and are susceptible to a lot of cognitive biases.

  • You mean hackers can use these biases like vulnerabilities in software?

  • Exactly! On the other hand, if we want to solve a complex issue, like a puzzle, a riddle or a math problem, we will use a systematic process. We will gather and analyse information, connect the dots, summarise the content and find a solution. This process requires much focus, energy, and motivation. However, when we use a systematic process, we tend to be more vigilant, more cautious.

  • So we need our people to use the systematic process when they read their emails?

  • That’s the spirit. But that’s not so easy. They can’t use the systematic process all the time. It would be exhausting, and they would lose a lot of time.

  • OK. So, what can we do?

  • Our vigilance, our interest and our cognitive workload influence whether we use heuristic or systematic processing of information. So, when we are tired or stressed, we are more likely to fall into the “automated” (heuristic) mode. We are then more likely to click on a link in a phishing email or open a malicious attachment.

  • So?

  • So, it’s better to take the time to read emails, maybe twice a day. When we read emails on our mobile devices, between or during meetings, we increase the risk by 50%.

  • That’s a lot!

  • Yes, it is. Email habits seem to be an essential factor in our susceptibility to phishing. These emails try to make us react automatically. Sometimes, we are tricked by certain words, shapes or colours. We think that an email belongs to some category of messages that we know and trust (from our IT department, a newspaper, a store or a delivery service). We fall back into the heuristic mode. And we click.

  • How can we change that? That makes for many pitfalls, doesn’t it?

  • Yes! And that’s where the phishing exercises come into play. If we have encountered these pitfalls in the past, we are more likely to remain vigilant the next time and to avoid the trap. That is why monthly training keeps us vigilant and tends to reduce the number of phishing incidents.

  • So, we can do a lot of these exercises, and it will solve the problem?

  • Not really. At some point, it stops being effective and can even become counterproductive. People get frustrated.

  • How often, then?

  • At least every month. Our research has confirmed the findings of other studies. The effect of phishing exercises fades out after around 30 days. 12 to 16 exercises a year is a good number.

  • Can we drop the training then?

  • Not really, we still need to explain how to spot the phishing emails. And...

  • Yes, I know: Repetition, repetition and repetition.

  • Yes. However, we must also make frequent changes to the training. Otherwise, our minds will start ignoring it. It will be like those objects in our home that we no longer notice, until we leave for a few weeks and come back. This phenomenon, called habituation, is a normal function of the brain that ignores repeated signals. It is what allows us to wear our clothes without feeling them all the time. That’s practical, but not for training.

  • OK. If I understand correctly, we continue the training and we stick to our plan of 14 exercises per year.

  • Yes. It is the most common and most efficient way to train our users to detect phishing attempts. It works like a vaccine: it enables our users to face attacks, to learn how to detect them and how to react, without the risks of a real attack. As it happens in the users’ familiar context, it is the optimal situation in which to train them.

After having used technology and training to react to a phishing attack, it is time to discover how our cyber security experts will act to facilitate detection, in our fourth chapter, coming next week.

Need our support to implement a Human-Centric Security Framework?
Discover our security awareness solution and contact us!