How to facilitate the detection of phishing emails? Cyber security story #4

White Papers & Publications
11 August 2020

Our security experts have established a set of technical practices, one of which focuses on a human-centric approach to reducing exposure to phishing attacks. Alice, our expert, is taking a step further by looking for other ways to facilitate detection, as she knows that humans have limited resources and energy when it comes to threat detection.

Discover more in our fourth chapter below, "Usability and other ways to facilitate detection".

Do you want to read the full story and get some advice on reducing phishing threats? Download our cyber security story here

A story written by Emmanuel Nicaise, our Human-centric Cyber Security Expert.


Usability and other ways to facilitate detection


Alice received a lot of positive feedback from the business about the training. It is efficient, and it does not take much time. They like it. Still, some users are confused: they have difficulty recognising some phishing emails. A human being’s resources and energy are limited, and Alice would rather have them spent on business-related tasks than on security. Furthermore, the easier a task is, the less motivation it requires. She planned a meeting with the expert to find a solution.

  • Why is it so difficult for some users to spot a phishing email?

  • There could be many reasons. Humans are not machines. We all have different ways of looking at things, of doing things. And, for most users, security is not their primary concern. We must make it easy to see what they need to look out for.

  • How can we do that?

  • First, I would work on the lack of pattern.

  • What do you mean?
     

1. Patterns and toxicity

  • Our brain is a remarkable pattern recognition system. When I say the sequence “2, 4, 8…”, you probably already have the number 16 coming to mind. You recognised the pattern. So, when all the emails sent by our colleagues end with the same domain name (like @a.com), we create a pattern, a rule, in our mind, linking this pattern to a group we trust.

  • Yes, that makes sense.

  • At the same time, we will perceive any other pattern as less trustworthy. We will see it as coming from outside our group. That’s why we ask our users to check the sender’s domain name.

  • Well, what’s wrong with that? 
  • Nothing! Except that the world evolves, and this simple check might not be so easy to perform anymore. Nowadays, with the move to the cloud, A.com uses multiple external third parties to deliver internal services such as the service desk, training, human resources, timesheets and so on. Each of these services sends emails to our users from its own domain (like ticket@myservicedesksoftware.com or noreply@mycloudsoftwaresolution.com). It becomes challenging for the human brain to recognise a clear pattern. Consequently, when hackers send an email from a seemingly legitimate domain like helpdesk@itdepartment.xyz, it does not trigger the vigilance we would expect. This inconsistency becomes toxic for our security, as it prevents our users from differentiating suspicious emails from legitimate ones. Worse, it might create unnecessary paranoia in some people, as we tend to be uncomfortable when we do not recognise a pattern.

  • OK. I get that. What should we do then?

  • The best solution is probably to ask our SaaS providers to send their emails from a subdomain of ours (like info@provider1.a.com). Our users will then be able to recognise a pattern again.
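The subdomain rule the expert describes can be expressed as a simple check. The sketch below is purely illustrative: the domain names come from the story, and a real mail gateway would of course work on parsed message headers rather than raw strings.

```python
# Minimal sketch: classify a sender address against the trusted "a.com" pattern.
# Subdomains of a.com (e.g. provider1.a.com) match; look-alikes do not.

TRUSTED_ROOT = "a.com"

def is_trusted_sender(address: str) -> bool:
    """Return True if the sender's domain is a.com or one of its subdomains."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain == TRUSTED_ROOT or domain.endswith("." + TRUSTED_ROOT)

print(is_trusted_sender("info@provider1.a.com"))      # subdomain of a.com
print(is_trusted_sender("helpdesk@itdepartment.xyz")) # unknown domain
```

Note that a look-alike such as `evil@nota.com` does not match, because the check requires the dot before `a.com`, not just the suffix.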

  • OK. I will change our policy and ask procurement to contact our providers. What else?
     

2. Email client

  • We can facilitate the detection of suspicious emails by configuring or improving the user interface of our email clients, so users can spot suspicious emails with limited technical knowledge and effort. We have already started to work on that:

    • We have improved the mail client to ensure the sender’s domain is visible in the list of emails. The full email address is displayed only when the sender is external to our organisation and untrusted.
    • We have added a banner in the body of the message (at the top). It informs the user that this email comes from a sender external to our organisation. We made it stand out, in red, boldface. We are working on how to vary it every month so as to avoid habituation.
    • We highlight emails from an external source in red boldface in the list of emails.
    • We are looking for a system that categorises emails based on their level of threat: Internal emails in one folder, external emails from a known and trusted source in a second one and a third folder for emails from an untrusted external source.
    • We also configured a system to display a warning when a user clicks on a link in an email from an untrusted source. The user needs to confirm the action before the client actually opens the URL or the attachment.
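The measures above amount to a three-way triage of senders plus a banner for external mail. A minimal sketch follows; the domain lists and banner text are assumptions for illustration, not A.com's actual configuration.

```python
# Illustrative three-way triage of senders: internal, trusted external
# (known SaaS providers), or untrusted. External mail gets a warning banner.

INTERNAL_DOMAIN = "a.com"
TRUSTED_EXTERNAL = {"myservicedesksoftware.com", "mycloudsoftwaresolution.com"}

BANNER = "*** CAUTION: this email comes from a sender external to our organisation ***"

def classify_sender(address: str) -> str:
    domain = address.rsplit("@", 1)[-1].lower()
    if domain == INTERNAL_DOMAIN or domain.endswith("." + INTERNAL_DOMAIN):
        return "internal"
    if domain in TRUSTED_EXTERNAL:
        return "trusted-external"
    return "untrusted"

def add_banner(body: str, sender: str) -> str:
    """Prepend the warning banner for any non-internal sender."""
    if classify_sender(sender) != "internal":
        return BANNER + "\n\n" + body
    return body
```

The three categories map directly onto the three folders mentioned above; in practice this logic would live in a mail flow rule or transport agent, not client-side code.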
  • OK. It does not look like major changes are involved, but I guess it makes a difference.

  • Indeed, it does. All the feedback has been very positive. Also, it changes the way people perceive security teams. They are seen as facilitators, business enablers, and not the people blaming others all the time.

  • Did you implement these measures in all environments?

  • Yes, we ensured the consistency of these features across all available environments: PC, Mac, web and mobile devices.
     

3. Web browser

  • Is there something else we can do?

  • Phishing emails often contain links to malicious websites used to deliver a payload. Sometimes they trick the user into giving away his or her credentials or other sensitive data.

  • Yes, but our web browsers highlight the domain name and show a padlock when HTTPS is used.

  • True, but people do not notice this anymore.

  • Habituation, again?

  • Yes. We are working on an alert to warn the user when the website uses HTTP. It should help raise vigilance. Still, 80% of phishing websites use HTTPS nowadays.

  • OK. So, that’s not the panacea.

  • No, it’s not. The most effective change we have implemented is probably on the web filter: we changed the settings to block uncategorised websites. Nowadays, websites used for phishing campaigns exist for 8 to 24 hours before they disappear, so we cannot expect them to be flagged as malicious before the attack hits us. We need to block most of them by default and allow access only to identified legitimate servers.
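The default-deny policy described here can be sketched in a few lines. The category names and the lookup table are hypothetical stand-ins for a commercial web filter's categorisation database.

```python
# Default-deny sketch of the web-filter policy: only websites with a known,
# allowed category pass; brand-new, uncategorised domains are blocked.

ALLOWED_CATEGORIES = {"business", "news", "saas"}

# Hypothetical categorisation database; real filters query a vendor feed.
CATEGORY_DB = {
    "a.com": "business",
    "myservicedesksoftware.com": "saas",
}

def filter_decision(domain: str) -> str:
    category = CATEGORY_DB.get(domain)  # None for uncategorised domains
    if category in ALLOWED_CATEGORIES:
        return "allow"
    return "block"  # uncategorised or disallowed -> blocked by default
```

The manual exception process mentioned next then amounts to the security team adding a vetted entry to `CATEGORY_DB`, after which the site is allowed.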

  • But people will complain if they cannot access some website they need for their work.

  • That’s why we have implemented a smooth process to categorise a website manually in case of emergency. Security teams perform a quick assessment and update the list if the site is safe enough.

  • Good, I won’t receive too many complaints then.

  • No, you shouldn’t. On top of that, security engineers have reconfigured the web security gateways to detect forms requesting a password on unknown websites. When they do, the gateway displays an additional alert on the user’s web page.
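The gateway check just described boils down to spotting a password field on a page served from an unknown domain. Here is a standard-library sketch; the known-domain list is an assumption, and a real gateway inspects traffic inline rather than parsing pages after the fact.

```python
# Sketch: warn when a page from an unknown domain contains a password field.

from html.parser import HTMLParser

KNOWN_DOMAINS = {"a.com", "myservicedesksoftware.com"}  # illustrative allow-list

class PasswordFieldFinder(HTMLParser):
    """Sets self.found if the page contains an <input type="password">."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "password":
            self.found = True

def should_warn(domain: str, html: str) -> bool:
    if domain in KNOWN_DOMAINS:
        return False
    finder = PasswordFieldFinder()
    finder.feed(html)
    return finder.found
```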

  • That’s excellent. So, we are all set?

  • Yes, for the moment. Let’s see how it goes. If needed, we still have a few cards to play, like gamification.

Need our support to reduce exposure and impact of phishing attacks?
Discover our security awareness solution and contact us!