Countermeasures against Social Engineering Attacks

Posted on March 30, 2019 in Cybersecurity

This blog post explains the basics of social engineering attacks and corresponding ways to mitigate or prevent them. The motivation behind this article is to strengthen IT security by providing a collection of countermeasures that can offer some level of protection. At the very least, I want to raise awareness of this topic and try to explain why it is so difficult to protect an organization from social engineering. The topic is approached from a (semi-)academic perspective, and only aspects in the context of information security will be covered.

Motivation

According to a study conducted by Ponemon Institute LLC and jointly developed with Accenture, 69 percent of the participating companies experienced phishing and other social engineering attacks.1 This is an alarmingly high number.

The study provides insight into how often participating companies experienced various types of attack methods in 2017. For the survey, 254 companies around the world were interviewed. Unfortunately, neither the questions nor the survey methodology were published. I also have no clue why phishing is explicitly named alongside the category 'social engineering'. It's conceivable that the interviewed people were asked about both methods and the results were grouped together later on.

Aside from the tremendously high number of social engineering attacks (69% affected), almost all of the respondents claimed to be affected by malware. This is noteworthy because, in many cases, malware infections occur through phishing emails, a social engineering technique. Social engineering must therefore be considered a common and effective attack vector, and hence demands adequate protection.

Cost of cyber crime study 2017 by Accenture & Ponemon Institute

Definition

The best definition I have found comes from Christopher Hadnagy, who is well known in the field of professional social engineering. Hadnagy defines social engineering as follows:2

Social engineering is the act of manipulating a person to take an action that may or may not be in the “target’s” best interest. This may include obtaining information, gaining access, or getting the target to take certain action.

What we can learn from this definition: First, social engineering includes a wide range of techniques. Second, social engineering in general isn't about harming people; it is a neutral practice that can be used for good or evil. This is very important to keep in mind when we talk about the term outside a specific context.

To nudge the article more into the infosec field, I would describe social engineering - in the context of cyber security - as a set of very effective attack vectors that rely on the art of persuasion and psychological manipulation. The trick is not to bypass a technical security mechanism, such as a firewall, directly. Instead, you exploit the human operating that technical system to ultimately bypass it.

Academic disciplines

The topic is highly interdisciplinary, which means that more than one academic discipline plays a big part in understanding social engineering. The fields of sociology and psychology play an extremely important role. Since we address the topic in the context of cybercrime, knowledge of computer security - which can be considered a sub-discipline of computer science - is essential as well.

  • Sociology
    • Sociology is the study of social interaction and social relationships between individuals (normally a larger group).
  • Psychology
Psychology is the study of the behavior, feelings and thoughts of individuals or small groups of people.
  • Computer security
    • Computer security is the protection of computer systems from theft or damage to their components or digital data.

Because we need to address both psychology and sociology to understand social engineering, it would be more precise to speak of social psychology.

Although I have expert knowledge in computer science and cyber security, I am not trained in the social psychology field ;)

Sociotechnical systems

An interesting term I briefly want to introduce is the sociotechnical system, a concept established by the Tavistock Institute of Human Relations in the 1950s. It states that every sociotechnical system consists of two components: a technical sub-component (e.g., machines, computers or networks) and a social sub-component (the humans who operate and use the technical subsystem).

What we learn from this: every IT infrastructure must be considered such a sociotechnical system, in which humans use and administrate the technical subsystem. As a consequence, we have two subsystems that can potentially be exploited in terms of security issues. However, it is common to implement proper security measures only for the technical subsystem.

6 Principles of Persuasion

The psychological fundamentals of social engineering rely strongly on six key principles (meanwhile extended by a seventh) established by Robert Cialdini.3 Cialdini is a professor of psychology and marketing, and he is very well known for his book "Influence: The Psychology of Persuasion", published in 1984. According to Cialdini, there are thousands of different tactics that direct human behavior; however, these can be categorized into the six psychological principles of consistency, reciprocation, social proof, authority, liking, and scarcity.

Reciprocity

Reciprocity is a fundamental principle of human behavior. It states that almost every human feels obliged to return a favor of equal value after receiving something. This includes, among other things, actions, objects and compliments. Whether a person receives something wanted or unwanted is not relevant - we always feel the urge to give something in return. This social norm is very old and is also known as the "Golden Rule", or, in its negative framing, as the "eye for an eye" principle.

Disclosure reciprocity, a specific kind of reciprocity, is particularly helpful to social engineers. It means that people mutually and equally share personally meaningful details about themselves.

Commitment and consistency

The consistency principle states that people will change their attitudes, beliefs, perceptions and actions to be consistent with their prior acts and statements. This holds even if the original motivation to commit is removed after the agreement. The need to be consistent can be so strong that we act in ways that are clearly against our own best interests.

Psychologist Thomas Moriarty demonstrated the power of consistency through a field experiment conducted in the summer of 1972.4 Overall, 56 beach-goers were selected for the study. The procedure: a research accomplice (the "victim") placed his blanket within five feet of another beach blanket (the experimental subject's) and turned on a portable radio at relatively high volume. After a few minutes, the victim left his blanket, briefly telling the subject either a) to watch the portable radio (the subject invariably agreed to this request) or b) that he was alone at the beach, asking whether the subject had a cigarette lighter. The victim then left the beach. A few minutes later, a second researcher (the "thief") picked up the portable radio (still playing music) and quickly walked away in the opposite direction from the one the victim had taken. The results clearly show what we would expect: 19 out of the 20 subjects who had committed to watching the portable radio tried to stop the thief (sometimes by force), whereas only 4 of the 20 subjects who had made no such agreement tried to prevent the theft. In the post-experimental interview, 16 people said they hadn't noticed the staged theft.

Social proof

Social proof describes a social phenomenon wherein people adapt their behavior to conform with a larger group. Usually, the person completely adopts the behavior of others, whether or not that behavior makes sense (a similar effect is known as herd behavior).

The phenomenon was originally demonstrated by the Asch conformity experiments in the 1950s. In a series of studies, the psychologist Solomon Asch showed how social pressure influences the opinion of individuals. In sum, the experiments showed that study participants rated an obviously false statement as correct, as long as the larger group did.

Authority

The term authority refers to the reputation attributed to an institution or person, which can potentially influence the thinking and behavior of another person. Instructions issued by a person of authority are often blindly followed. Besides the status of a person, symbols representing authority, such as a police uniform, seem to be very powerful in influencing people. Statements of so-called experts seem final and unfalsifiable to us. Authority is the most commonly used persuasion principle according to a meta-analysis in which 74 scenarios were considered.5

The phenomenon is fairly well researched thanks to the Milgram experiment on obedience to authority figures, conducted by the psychologist Stanley Milgram in 1961. In Milgram's experiment, the study participants (the "teachers") followed instructions to administer electric shocks of gradually increasing voltage to an innocent person (the "learner"). The participants were told that the experiment was researching the relation between learning success and punishment. Of course, the electric shocks were faked, and the shocked person was an actor pretending to be tortured; had the shocks been real, they could have killed him. The majority of participants (26 out of 40 people) continued to administer this form of punishment even when they had to assume that the punished person might already be dead. Follow-up experiments by other researchers showed that the willingness to punish the learner decreased significantly as the authority of the experimenter decreased.

Liking

The main statement of this principle is that one is easily influenced by a person one likes or is attracted to. Key elements include compliments, physical attractiveness and trust. Examples are the advice among friends invoked in television commercials ("n percent of your friends would recommend this product") or the halo effect, a classic cognitive bias.

Scarcity

Whenever something is barely available or totally unavailable, it seems more valuable to us. For instance, labeling something a limited edition or on sale for a limited time encourages you to buy it, even though you would never buy it if its availability were unlimited. According to Cialdini, people seem to be more motivated by the thought of losing something than by the thought of gaining something of equal value. He illustrates this with an example: he would prioritize a call from an unknown number even while having an interesting face-to-face conversation. The reason is simple: potential unavailability.

Interestingly, when you restrict information, people are more motivated to obtain it and also tend to assign a higher value to it.

Unity

This principle was proposed by Cialdini quite recently. It says that we are highly influenced by social groups. People want to belong to such social structures and identify with the group (or with more than one). The more a person identifies with a group, the more that person can potentially be persuaded by the group.

Taxonomy of attack vectors

Due to the lack of an accepted taxonomy, I decided to classify the attack vectors (avs) of social engineering myself. The following dendrogram shows all the attack vectors covered in this article, albeit without any claim to completeness.

Basically, the avs are divided into technical-based social engineering (avs carried out with the help of hardware or other technical equipment) and human-based social engineering (no tools). This differentiation is simple but also powerful, particularly with regard to the countermeasures (technical and non-technical). It is also similar to the classification schemes of other researchers.6 7

Taxonomy

The following two sections will be extended soon:

Attack vectors and specific countermeasures

| Attack vector | Countermeasure | Category |
| --- | --- | --- |
| Phishing | Email filtering | detection, prevention |
| Phishing | Sender Policy Framework | prevention |
| Phishing | DomainKeys Identified Mail | prevention |
| Phishing | Verification via 2nd medium | prevention |
| Phishing | Email whitelisting | detection, prevention |
| Phishing | Email blacklisting | detection, prevention |
| Phishing | Digital signature | prevention |
| Phishing | Antivirus software | detection, prevention, incident response |
| Baiting | Mobile device management | prevention |
| Baiting | Malware scanning kiosk | prevention |
| Baiting | Antivirus software | detection, prevention |
| Baiting | Disabled autorun | prevention |
| Baiting | Disabled interfaces | prevention |
| Baiting | Policy on using external devices | detection, prevention, incident response |
| Baiting | USB whitelisting | detection, prevention |
| Forensic analysis | Data erasure | prevention |
| Forensic analysis | Physical destruction of devices | prevention |
| Forensic analysis | Professional disposal of e-waste | prevention |
| Forensic analysis | Disc encryption | prevention |
| Forensic analysis | Crypto shredding | prevention |
| Electronic badge surveillance | Chip authentication | detection, prevention |
| Electronic badge surveillance | Time measurement | detection, prevention |
| Electronic badge surveillance | On-chip private key / chip authentication | detection, prevention |
| Electronic badge surveillance | Certificate revocation list | prevention, incident response |
| Evil-Twin | TOFU | prevention |
| Evil-Twin | VPN | prevention |
| Evil-Twin | Wired network access | prevention |
| Dumpster diving | Access control | detection, prevention |
| Dumpster diving | Mechanical access control systems | prevention |
| Dumpster diving | CCTV / security alarm | detection, prevention |
| Dumpster diving | Paper shredder | prevention |
| Dumpster diving | Data erasure | prevention |
| Dumpster diving | Professional disposal | prevention |
| Dumpster diving | Confidential waste bins | prevention |
| Shoulder surfing | Monitor filter | prevention |
| Tailgating | CCTV / security alarm | detection, prevention |
| Tailgating | Security personnel | detection, prevention, incident response |
| Tailgating | Access control | detection, prevention |
| Badge surveillance | Document security features | detection, prevention |
| Waterholing | Website whitelisting | prevention |
| Impersonation / pretexting | Social events | prevention |
| Impersonation / pretexting | Organizational chart | prevention |
| Impersonation / pretexting | Visit control software | detection, prevention, incident response |
| Malware | Antivirus software | detection, prevention, incident response |
| Malware | Content Disarm & Reconstruction | prevention, incident response |
| Malware | Sandboxed execution | prevention |
| Malware | Air gap | prevention |
| Malware | Backup | incident response |
| Malware | Patch management | prevention |
| Malware | Principle of least privilege | prevention |

Malware can be a consequence of phishing or baiting attacks. No feasible specific countermeasures exist for quid pro quo and people watching.
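To make the "email filtering" row above a bit more concrete, here is a toy sketch of one common filtering heuristic: flagging sender domains that look almost, but not exactly, like a trusted domain. The trusted-domain list and the similarity threshold are made-up assumptions; a real filter combines many more signals (SPF/DKIM results, reputation, content analysis).

```python
from difflib import SequenceMatcher

# Hypothetical list of domains the organization trusts.
TRUSTED_DOMAINS = ["example.com", "mail.example.com", "paypal.com"]

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between a sender domain and a trusted domain (0..1)."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def is_suspicious(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains similar to, but not identical with, a trusted domain.

    Exact matches pass; near-matches (e.g. 'paypa1.com' with a digit one)
    are the classic phishing lookalike pattern and get flagged.
    """
    for trusted in TRUSTED_DOMAINS:
        if sender_domain.lower() == trusted:
            return False
        if lookalike_score(sender_domain, trusted) >= threshold:
            return True
    return False

print(is_suspicious("paypa1.com"))    # lookalike of paypal.com -> flagged
print(is_suspicious("paypal.com"))    # exact trusted match -> not flagged
print(is_suspicious("unrelated.org"))
```

The threshold trades false positives against missed lookalikes; in practice it would be tuned against the organization's real mail traffic.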

General countermeasures

  • (Social Engineering-)Penetration tests
  • User awareness / training
  • Patch management
  • Multi-factor authentication
  • Document Control
  • Business Continuity Management
  • Incident Response Plan
  • Risk Management
  • Logging-, Monitoring- and Reporting-Tools
  • Two-man rule
  • Identity Access Management
  • Policies
    • password policy
    • information classification
    • clean desk policy
    • social media policy
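As a small illustration of how a policy from the list above can be enforced technically, here is a minimal password-policy check. The concrete rules (minimum length, required character classes) are assumptions for the sake of the example; every organization defines its own policy.

```python
import string

# Hypothetical policy: at least 12 characters, with lower, upper and digit.
MIN_LENGTH = 12

def violates_policy(password: str) -> list[str]:
    """Return a list of human-readable policy violations (empty = compliant)."""
    violations = []
    if len(password) < MIN_LENGTH:
        violations.append(f"shorter than {MIN_LENGTH} characters")
    if not any(c in string.ascii_lowercase for c in password):
        violations.append("no lowercase letter")
    if not any(c in string.ascii_uppercase for c in password):
        violations.append("no uppercase letter")
    if not any(c in string.digits for c in password):
        violations.append("no digit")
    return violations

print(violates_policy("correct horse"))      # long enough, but missing classes
print(violates_policy("Tr0ub4dor&Example"))  # compliant under these toy rules
```

Returning the full list of violations, rather than a bare boolean, lets the sign-up form tell the user exactly which rule was broken.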

Acknowledgement

I would like to thank my friend and fellow student Saed Alavi for his contribution to this article.


  1. Accenture and Ponemon Institute LLC. (2017). "2017 Cost of Cyber Crime Study" 

  2. Christopher Hadnagy (2010). Social Engineering: The Art of Human Hacking. Indianapolis: Wiley Publishing, Inc. ISBN-13: 978-0470639535 

  3. Robert Cialdini (2009). Influence: Science and Practice. Boston, MA: Pearson Education. ISBN-13: 978-0-06-189990-4 

  4. Thomas Moriarty (1975). Crime, commitment, and the responsive bystander: Two field experiments. Journal of Personality and Social Psychology, 31(2), 370-376. http://dx.doi.org/10.1037/h0076288 

  5. Jan-Willem Hendrik Bullée et al. (2017). On the anatomy of social engineering attacks - A literature-based dissection of successful attacks. J Investig Psychol Offender Profil. 2018;15:20–45. DOI: 10.1002/jip.1482 

  6. Cik Feresa Mohd Foozy et al. (2011). Generic taxonomy of social engineering attack. Malaysian Technical Universities International Conference on Engineering & Technology. 1-7. 

  7. Koteswara Ivaturi and Lech Janczewski. (2011). A Taxonomy for Social Engineering attacks. CONF-IRM 2011 Proceedings. http://aisel.aisnet.org/confirm2011/15