Browser Warnings for Phishing Websites.
Phishing is the term used to describe the process by which fraudsters use fake websites, usually in conjunction with fake emails that lure victims to those websites, to elicit personal details from the victim. The personal details are then typically used as part of a scam to steal from the victim. For example, if the fraudsters have managed to ‘phish’ a victim’s bank login details, these will be used to steal money from the victim’s account.
Browser warnings are supposed to help users spot fake (phishing) websites and distinguish them from real sites: that is, they should enable users both to spot phishing sites and to have confidence that the sites they believe to be genuine really are genuine.
Browser phishing warnings differ from browser to browser, and also evolve and change with newer versions of the same browser. They can also be augmented by browser extensions that can give additional and/or more detailed warning information.
Examples of the types of design implementations of phishing warnings used by various browsers are:
- Highlighting (for example in bold text) the domain name of the URL in the address bar. This is because a common tactic used by phishers is to disguise the domain name of their phishing site to look like that of the reputable organisation they are trying to imitate. By highlighting the actual domain name of the website the user has connected to, the idea is that users will be able to spot more easily if it is not the website they believed they were connecting to.
- Showing (for example by displaying a padlock icon and adding the ‘s’ onto ‘http’) whether the connection to the website is an encrypted TLS connection with a certificate signed by a certificate authority (CA) recognised by the browser. The idea here is that a phishing website will either not be using a TLS connection and will have no security certificate, or, if it is and does, the certificate will not be signed by one of the reputable CAs recognised by the browser (in which case the browser will display a warning). Therefore, if a user is connecting to something like their bank’s website, which they should expect to run over a TLS connection with a certificate signed by a CA recognised by the browser, and it is not, the user should suspect the site.
- More explicit warning systems are provided by browser extensions. These are generally offered by companies like Norton and McAfee, and work by different combinations of site analytics. One common approach is to crawl the web, in a similar way to search engines, testing each site encountered, and then maintaining whitelists/blacklists of all tested sites. If a user then tries to connect to a website that is on a blacklist, a warning is flagged up.
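The blacklist approach in the last bullet can be sketched in a few lines. This is a minimal illustration only: the blacklist entries and the function name are hypothetical, and a real extension would query a continuously updated service rather than a hard-coded set.

```python
# Illustrative sketch of a blacklist lookup, as described above.
# The blacklist contents and threshold of what counts as "phishing" are
# purely hypothetical examples, not real data.
from urllib.parse import urlparse

PHISHING_BLACKLIST = {
    "bankofthevvest.com",   # lookalike domain (two 'v's imitating a 'w')
    "paypa1-login.example", # digit '1' imitating a letter 'l'
}

def warn_if_blacklisted(url: str) -> bool:
    """Return True (i.e. the browser should warn) if the URL's host is blacklisted."""
    host = (urlparse(url).hostname or "").lower()
    return host in PHISHING_BLACKLIST

print(warn_if_blacklisted("http://bankofthevvest.com/login"))  # True
print(warn_if_blacklisted("https://bankofthewest.com/"))       # False
```

In practice the lookup would also have to handle subdomains and newly registered sites that no crawler has tested yet, which is one reason blacklists alone are incomplete.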
An important early study of users’ interaction with browser phishing warnings was that of Dhamija et al. (2006). They conducted their research by studying how users went about assessing whether a website was a phishing site or a genuine one.
The study had 22 participants, who were asked in a lab setting to visit 20 websites (approximately half genuine and half phishing) by clicking links presented in random order. The participants were then asked to determine whether each site was genuine or not, and to talk through the factors affecting their choice.
Amongst the study’s interesting findings were:
1) Nearly a quarter of the participants did not look at the browser cues at all (primarily because they did not know what these cues meant), concentrating instead on the content of the website – which of course is completely within the control of the website author who could be a ‘phisher’. Unsurprisingly, these users were wrong in their determination 40% of the time.
2) Even amongst participants who did pay attention to the browser cues, their understanding of what the cues meant was poor. For example, most thought that a padlock sign displayed by the browser was less significant than one displayed within the page content.
3) All participants were susceptible (even the more knowledgeable and security conscious) to visual trickery. The ‘best’ phishing website of the study fooled 90% of the participants into thinking it was genuine. A more careful examination of the URL in the address bar would have revealed the domain name of the website to be bankofthevvest.com (with two ‘v’s), instead of bankofthewest.com (with a w).
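The ‘vv’-for-‘w’ trick in finding 3 is easy to demonstrate. The sketch below normalises a few visually confusable character sequences before comparing domains; the substitution list and function names are illustrative assumptions, not an exhaustive or real-world detector.

```python
# Illustrative check for visually confusable (lookalike) domains,
# such as the bankofthevvest.com example from the study.
# The substitution list here is a tiny, hypothetical sample.
def normalise_confusables(domain: str) -> str:
    """Collapse a few visually confusable sequences (not exhaustive)."""
    return (domain.lower()
            .replace("vv", "w")   # two 'v's look like a 'w'
            .replace("rn", "m")   # 'r' + 'n' can look like 'm'
            .replace("0", "o"))   # digit zero looks like letter 'o'

def looks_like(candidate: str, genuine: str) -> bool:
    """Return True if the candidate domain visually imitates the genuine one."""
    return normalise_confusables(candidate) == normalise_confusables(genuine)

print(looks_like("bankofthevvest.com", "bankofthewest.com"))  # True
print(looks_like("example.com", "bankofthewest.com"))         # False
```

Real detectors use much larger confusable tables (including Unicode homoglyphs), but even this toy version shows why users scanning the address bar by eye are at a disadvantage.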
More recent studies suggest that not much has changed. For example, Wang et al. (2015) implemented a Chrome browser extension that flagged likely phishing websites based on their Alexa rank, the logic being that phishing websites tend to be short-lived and should therefore have a high Alexa rank (high rank numbers correspond to unpopular sites). The user-study part of this research found that users tended simply to click away the browser warnings and proceed to the site anyway.
A study by Kumaraguru et al. (2010), along with some other research, indicates that more educational approaches to the problem of phishing, and to other HCI security issues, might work better than the predominantly automated, prescriptive approach that currently prevails.