A Comprehensive Guide to Social Engineering Prevention
Let’s start by defining social engineering. It’s the art of manipulating, influencing, or deceiving you to gain control over your computer. Hackers use various means to gain access – phone, email, snail mail, or even direct contact. Calling it prevalent is an understatement: 98% of cyber attacks rely on social engineering.
Phishing is the most common form of social engineering attack. Seventy-five percent of organizations worldwide, for example, have experienced a phishing attack. More importantly, these attacks succeed at a high rate, with 74% of organizations in the United States suffering a successful attack. Verizon reported that 43% of breaches involved phishing in 2020.
But phishing isn’t the sole method used by hackers. Other common social engineering attacks include vishing, baiting, scareware, quid pro quo, piggybacking, and pretexting. And there’s even more.
IBM reports the average cost of a cyberattack at $3.86 million. Even worse, it takes organizations more than 200 days to detect a breach. It’s little wonder cyberattacks are projected to hit $6 trillion in annual losses this year, double the 2015 figure.
How to Avoid Social Engineering Attacks
The good news is that there’s hope. You can avoid social engineering attacks. This guide presents some of the most popular ways to mount a defense against social engineering attacks.
Defense-in-depth is your best starting point. In short, no single approach will stem cyber-attacks. As a result, you need to implement multiple, overlapping defenses. If one defense fails, another defense may block the attack.
Just as you should layer multiple defenses, you should place them in various locations – the network edge, between networks, at ingress and egress points, on individual hosts and devices, and in the cloud. An administrator can deploy defenses on a permanent, 24/7 basis or on the fly as needs change.
Network security boundaries help stem exploitations by limiting unnecessary network traffic and connections. Firewalls, for instance, block or allow network traffic based on IP addresses, ports, or other network packet data characteristics. Unfortunately, network security boundary devices have little impact on social engineering prevention. Most social engineering attacks take place over familiar, allowed network pathways.
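To make the firewall idea concrete, here is a minimal sketch of the IP/port matching a stateless firewall performs. The rule set, addresses, and ports are purely illustrative, not from any real product:

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule set: first matching rule wins, default deny.
RULES = [
    ("allow", ip_network("10.0.0.0/8"), 443),   # internal HTTPS
    ("deny",  ip_network("0.0.0.0/0"), 23),     # block Telnet from anywhere
    ("allow", ip_network("0.0.0.0/0"), 80),     # allow inbound HTTP
]

def evaluate(src_ip, dst_port, rules=RULES, default="deny"):
    """Return the action of the first rule matching this packet's
    source address and destination port."""
    addr = ip_address(src_ip)
    for action, net, port in rules:
        if addr in net and dst_port == port:
            return action
    return default
```

Note that nothing here inspects *content* – which is exactly why, as described above, boundary devices do little against phishing that arrives over an allowed pathway like port 443.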
So, let’s look at some of the more reliable options for preventing phishing and social engineering attacks.
Content-filtering services that block spam and phishing content present a first line of defense in many cases. However, even the best content filters allow some malware and phishing content to get through to the user. In addition, all content filters need to be tuned to block as much actual, malicious content as possible while minimizing false positives that block legitimate content.
Most major email services and browsers include built-in content filtering. Network perimeter devices, intrusion detection devices, antivirus inspection services (both host and network), and email servers often do content filtering as well.
Many anti-phishing software programs and services automatically identify phishing, spam, and other related threats. Once a threat is identified, the software takes one of three actions:
- Blocks the threat from reaching the end-user altogether
- Quarantines the identified content for additional analysis and treatment
- Passes the threat to the end-user with a warning label appended to the subject line, such as “SPAM” or “LIKELY PHISHING”
Many programs automate some or all of these actions via rules and artificial intelligence.
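The three routes above can be sketched as a toy rule-based dispatcher. The trigger phrases and score thresholds are illustrative assumptions; real products use machine-learning models and threat feeds rather than a hand-written phrase list:

```python
# Hypothetical trigger phrases; a real filter would use far richer signals.
SUSPECT_PHRASES = ("verify your account", "urgent wire transfer", "password expired")

def score(message_body):
    """Count how many suspect phrases appear in the body."""
    body = message_body.lower()
    return sum(phrase in body for phrase in SUSPECT_PHRASES)

def route(subject, body, block_at=3, quarantine_at=2, label_at=1):
    """Dispatch a message down one of the three routes described above."""
    s = score(body)
    if s >= block_at:
        return ("block", subject)             # never reaches the end-user
    if s >= quarantine_at:
        return ("quarantine", subject)        # held for additional analysis
    if s >= label_at:
        return ("deliver", "[LIKELY PHISHING] " + subject)  # warn the user
    return ("deliver", subject)
```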
A relatively new IT solution, detonation sandboxes intercept potentially malicious content (mostly file attachments and Internet content from clicked URLs) and temporarily prevent it from executing in the user’s current security context, where it could do harm.
They then open the content in virtual environments that mimic the core components of the environment where it would otherwise have executed. Once opened in that safe location, the content is analyzed for legitimacy versus malicious intent.
If the content is deemed safe, it executes on the user’s device in the original, intended manner. Detonation sandboxes have gained widespread use but are still not as ubiquitous as other types of more common defenses, such as firewalls and content filters. Although more accurate than antivirus software, they are not failproof.
Reputation services advise, block, or allow content based on its origination URL pathway, domain name, or IP address. The earliest (and still popularly used) reputation services were crude blacklisting services that contained lists of domains previously reported as malicious. These lists or databases could be downloaded or referenced by other services, such as email servers or browsers, to help allow or deny content.
Early, popular blacklisting services include Spamhaus, DNSBL, Ospam, and Google Safe Browsing. Another, known as the Blacklist Master, contains pointers to more than 100 individual blacklists.
At an extreme, some organizations deny all content and network traffic originating from entire countries like Russia or China by using IP addresses, Border Gateway Protocol autonomous system numbers, or the top-level domain names assigned to countries (such as .ru and .cn).
Today, many vendors offer sophisticated reputation services that use frequently updated dynamic whitelists and blacklists as a starting point. On top of that, they add content filtering, dynamic “intelligent” rules, and machine-learning engines that inspect dozens to hundreds of attributes to determine intent. Users can often submit new links and content for inspection, and the resulting reputation check puts the content or link on a permanent blacklist or whitelist.
With all implementation types, crude or sophisticated, legitimate content and URLs may get incorrectly flagged as malicious. When that happens, it can take extraordinary effort to get a wrongly listed piece of content or URL delisted.
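As a concrete example of how the classic DNS-based blacklists above are queried: a client reverses the octets of the sender’s IP address and looks that name up inside the blocklist zone; if the name resolves, the address is listed. A minimal sketch (the zone name is Spamhaus’s public zone, mentioned above; whether a given IP is actually listed depends on live data, so only the name construction is shown deterministically):

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the reversed-octet DNS name used for a DNSBL lookup.
    Listing of 1.2.3.4 in zone Z is checked by resolving 4.3.2.1.Z."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone="zen.spamhaus.org"):
    """True if the address resolves within the blocklist zone,
    i.e. the reputation service considers it malicious."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```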
Some parties have noticed that phishing emails often (but not always) come from newly created domains. So, they create services or scripts that analyze the domain name of an incoming email or Internet URL and block those that are suspiciously young or exhibit other highly suspicious traits (such as originating from a dynamic DNS service).
These types of DNS checks do an excellent job at blocking malicious content originating from anomalous domains, with a reasonably low incidence of wrongly blocking legitimate content. But a large portion of phishing attacks originates from legitimate and long-established domains.
For example, phishers often use Google’s Gmail service to create fraudulent email addresses. Google’s gmail.com domain is one of the most famous and legitimate domains possible, and as such, domain-reputation checks fail to block any phishing emails originating from it.
Malware mitigation services, known as antivirus (AV), detect and block malicious URLs, content, and file attachments. More sophisticated versions, known as Endpoint Detection & Response (EDR), are becoming more popular.
Although AV/EDR vendors often self-report high accuracy rates (some claiming 100%), the millions of new malware programs created every week challenge those claims.
Malware creators often monitor Google’s VirusTotal (https://virustotal.com) service, which runs over 70 different AV/EDR engines, to see when their malicious creation starts to get identified. The malware program then updates itself to a new, less-detected variant. Using this method, a malware program can go days to months without reliable, widespread AV/EDR detection.
Indeed, many ransomware programs exist for months to more than a year in a compromised environment, without any detection, before they execute their malicious behavior. Even with these accuracy issues, most organizations still feel obligated to run AV/EDR. Even if they don’t always catch malware, whatever they detect and block represents a win for the protected environment.
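The variant-update trick described above works because the simplest detection mechanism, a hash signature, breaks as soon as a single byte changes. A minimal sketch (the “feed” and sample bytes here are invented for illustration; real AV/EDR combines signatures with behavioral detection for exactly this reason):

```python
import hashlib

# Hypothetical feed of known-bad SHA-256 digests.
KNOWN_BAD = {
    hashlib.sha256(b"EVIL-SAMPLE").hexdigest(),
}

def sha256_of(data: bytes) -> str:
    """Fingerprint a file's contents."""
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes) -> bool:
    """True only if this exact byte sequence has been seen and listed."""
    return sha256_of(data) in KNOWN_BAD
```

Flip one byte of the sample and the digest no longer matches, which is precisely how a “new, less-detected variant” evades signature-based detection.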
Least Privilege Permissions
The least-privilege permissions concept assigns the bare minimum security permissions needed to accomplish a task to a security principal (i.e., user, computer, device, group, service, daemon, network, etc.). As a result, any abuse of that security principal’s security context, either by the principal or some other malicious actor in the principal’s security context, can do the least amount of harm.
Hackers and malware, including socially engineered Trojan Horse programs, always want to operate in the highest security context possible. If they can get access to a user’s desktop or programs, at the very least, they get the security context (and whatever privileges and permissions) the user has. On the other hand, if they can access or take over an elevated program or service, they inherit the security context of that program or service.
For example, a successful buffer overflow lets the attacker or malware take over the security context of the exploited program. So, suppose a Windows service is running in the Local System security context. In that case, the malware or attacker captures the security context of the all-powerful Local System built-in account.
However, suppose an attacker or malware doesn’t get an elevated security context during the initial stages of the attack. In that case, they will often attempt secondary “escalation of privilege” (EoP) attacks to get elevated access. Still, an EoP attack method is not always available or guaranteed to succeed.
Administrators and users can frustrate these attempts by denying hackers and malware elevated access in the first place. Least privilege accomplishes that by tightly controlling high-security contexts.
Practicing least privilege permissions includes:
- Giving each security principal the minimum permissions and privileges needed to do its assigned task
- Not allowing administrators or users to be logged in with elevated security contexts while performing tasks not needing high-level access
- Minimizing the number of permanent members of any elevated group
- Requiring admins to “check out” privileged accounts when needed and time-limiting the account’s use
- Protecting elevated logins using multifactor authentication or other secure authentication mechanisms if possible
- Ensuring passwords for high-security-context accounts are long and complex (at least 16 characters) and changed at least annually
- Monitoring elevated accounts for appropriate use
- Removing unnecessary members from elevated groups
- Periodically auditing accounts with high-security contexts to ensure they are still needed and used
- Using elevated security context on less trusted devices and workstations sparingly
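The first practice in the list above can be sketched as code: each principal carries only the permissions it needs, and every sensitive operation checks before acting. The principal names, permission strings, and decorator are all hypothetical, invented to illustrate the concept:

```python
import functools

# Hypothetical principals and their (deliberately minimal) grants.
PERMISSIONS = {
    "backup-service": {"read:files"},          # read-only: no write, no admin
    "alice-admin":    {"read:files", "write:files", "manage:users"},
}

class PermissionDenied(Exception):
    pass

def requires(permission):
    """Guard a sensitive operation with a least-privilege check."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(principal, *args, **kwargs):
            if permission not in PERMISSIONS.get(principal, set()):
                raise PermissionDenied(f"{principal} lacks {permission}")
            return fn(principal, *args, **kwargs)
        return wrapper
    return decorator

@requires("write:files")
def overwrite_file(principal, path, data):
    return f"{principal} wrote {len(data)} bytes to {path}"
```

If the backup service is ever compromised, the attacker holds a read-only context; the damage is capped by the grant, which is the whole point of least privilege.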
The concept of least privilege permissions is one that all organizations should follow and apply whenever possible. Doing so decreases the chances that hackers or malware will succeed and reduces overall cybersecurity risk.
Email Client Protection
Today, most email clients come with strongly configured default security settings, including many anti-phishing features. For example, most email clients will not automatically download externally linked content when an email is opened or allow the opening of a potentially malicious file attachment. Instead, they display placeholders and prompt the user to click an additional button to download the potentially malicious content or file attachments.
Most of the time, the best protection that an admin or a user can implement is not to weaken the already solid and secure security settings enabled by default.
Like most email clients, most browser clients have strong, default security settings. Internet browsers have been popular attack targets for decades. Those attacks have forced browsers to become extraordinarily strong at defeating known attacks.
Most major browsers include content filtering, reputation services, and almost an obnoxious number of warning prompts if a user goes to download or execute potentially malicious content. With that said, browsers routinely get dozens of newly found bugs patched each month. As a result, browsers always contain exploitable vulnerabilities waiting to be discovered.
Malware writers and phishers are constantly looking for, finding, and exploiting newly discovered vulnerabilities. It is truly a war of constant attrition between the browser vendors and malicious actors, and the browser vendors are often playing catch up. But like email clients, the best thing most admins and users can do is keep their browser patched and up to date and not weaken the already relatively strong security configuration settings.
Global Phishing Protection Standards
There are three global email security standards you should be using:
- Sender Policy Framework (SPF)
- Domain Keys Identified Mail (DKIM)
- Domain-Based Message Authentication, Reporting, and Conformance (DMARC)
They’ve been around for many years and are used and trusted by millions of people. SPF, DKIM, and DMARC allow organizations to prevent malicious third parties from spoofing the organization’s legitimate email domains to others who might rely on it. They don’t work perfectly, but they will cut down on some forms of email maliciousness when enabled.
Each works by having the sender’s email domain administrator enable them in DNS using TXT records (or, alternately, by enabling them in their email host provider’s administrative console). When enabled, receivers of emails from activated domains can check the additional information to verify whether or not a particular email came from the email domain presented.
Sending domains enable these protocols to verify that emails that claim to be from the sender’s domain really are from the sender’s domain. Senders will allow it so other people can’t claim to be them. And receivers enable it to verify whether or not a particular email is from where it says it’s from.
Both sides must enable these protocols for the checks to work. Enabling them can’t hurt anything unless you decide to take the draconian step of rejecting all emails that fail any of the checks.
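For illustration, here is roughly what the TXT records mentioned above look like in a DNS zone file. The domain, host names, selector, key, and reporting address are all placeholder values, not a recommended configuration:

```dns
; Hypothetical TXT records for example.com (all values illustrative)

; SPF: only the listed servers may send mail as example.com
example.com.                IN TXT  "v=spf1 mx include:_spf.example-mailhost.com -all"

; DKIM: public key receivers use to verify the signature
; ("s1" is the selector chosen when the key pair was generated)
s1._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0G...AB"

; DMARC: quarantine mail failing SPF/DKIM alignment, send aggregate reports
_dmarc.example.com.         IN TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```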
Network Traffic Analysis
Malware and hackers often establish unusual network connections within a compromised network or outbound to the Internet to destinations that the originating network would never connect to during the ordinary course of business. One of the best methods for detecting hard-to-find malware or hacker exploitation is through network traffic flow analysis.
Here’s the basic idea: most servers don’t talk to other servers. Most servers don’t connect to most workstations, and workstations rarely talk to another workstation. By the same token, most workstations don’t talk to every server. Most workstations don’t connect to the Internet using server-to-server protocols (e.g., SMTP, POP, IMAP, etc.).
Malware and hackers don’t appreciate the subtlety of what typically connects with what and how. They are usually unaware and uncaring of the legitimate, expected traffic flows, and in any case, don’t expect anyone to be looking for unusual connections. Thus, if you understand the legitimate, expected network traffic flows in your environment, you can discover threats with a tool that detects abnormal network flows and generates alerts.
To do this, you need a good NetFlow analysis tool. Many general network packet analysis programs do a decent job of collecting and representing flow data even though they aren’t dedicated to NetFlow analysis. There are also open-source and commercial tools dedicated to NetFlow analysis, which you can find by simply searching on ‘NetFlow analysis.’
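The baseline-versus-anomaly idea above can be sketched in a few lines. Here a “flow” is simplified to a (source, destination, destination-port) tuple; real NetFlow records carry much more, and the host names below are invented for illustration:

```python
def build_baseline(observed_flows):
    """Learn the set of flows seen during a known-good period."""
    return set(observed_flows)

def find_anomalies(baseline, new_flows):
    """Return flows never seen during the baseline period --
    candidates for an alert, e.g. workstation-to-workstation traffic."""
    return [flow for flow in new_flows if flow not in baseline]
```

A workstation suddenly talking to another workstation on port 445, or a server opening an outbound SMTP connection it never made before, falls straight out of the anomaly list.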
Data-Leak Monitoring and Prevention
Data-leak monitoring and prevention tools keep critical data from leaving the safe confines of an organization’s network. Any device you can use to prevent data leaks, however they occur, should be considered part of your defense.
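At its simplest, data-leak detection is pattern matching on outbound content. A minimal sketch, with deliberately naive patterns (production DLP uses validated detectors, such as Luhn checksum verification for card numbers, rather than bare regular expressions):

```python
import re

# Illustrative patterns only.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_outbound(text):
    """Return the labels of sensitive-data patterns found in an
    outbound message; a non-empty result would block or flag it."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]
```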
Honeypots
A honeypot is a computer device or resource that exists solely to detect hackers and malware. Many vendors offer “deception technology” devices and software to mimic different operating systems and servers. You can also take a production device or server you are preparing to decommission because it is becoming aged or no longer needed and turn it into a honeypot.
Since it’s not a production asset, no one should be trying to log into it. If someone tries to log into a honeypot, it almost always indicates unauthorized activity and potential maliciousness. Honeypots are low-cost, high-value, early warning assets that should be a part of anyone’s environment.
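Because any connection to a honeypot is suspicious by definition, a bare-bones one is just a listener that records every connection attempt. A minimal sketch (commercial deception products do far more, such as emulating real services and fingerprinting the attacker):

```python
import socket
import threading
from datetime import datetime, timezone

def run_honeypot(host="127.0.0.1", port=0, max_events=1, events=None):
    """Listen on an unused port and record every connection attempt.
    Nothing legitimate should ever talk to this service, so each
    recorded event is a candidate alert."""
    if events is None:
        events = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    bound_port = srv.getsockname()[1]  # the OS-assigned port when port=0

    def serve():
        for _ in range(max_events):
            conn, addr = srv.accept()
            events.append({"time": datetime.now(timezone.utc), "peer": addr})
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, events, t
```

In practice you would forward each recorded event to your alerting or SIEM pipeline rather than keep it in a list.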
Extreme Control: Red/Green Systems
In some environments with low risk tolerances and significantly high-value assets, senior management has decided to provide all users with two systems, colloquially known as red/green systems.
The red system is highly secured and only contains mission-critical business software and services. Users can only do business tasks on their red system. The green system is less secured and used by the employee to do Internet surfing, personal tasks, and email.
The idea is to take the highest-risk tasks (such as surfing the web and picking up email) and physically separate them from the mission-critical assets and data. Early on, organizations implementing red/green systems used two physical computers. Today, it is more likely to be accomplished logically using two different virtual machines.
Some organizations even use highly locked-down virtual computers or desktops that still appear to the end-user as a single desktop. But the different icons and applications they click on belong to, and execute in, highly separated, secured areas. The exploitation of one side (usually green) does not impact the other side (usually red).
Red/green implementations significantly reduce risk, but they don’t eliminate it. Some tasks will always cross over between the red and green systems, and wherever the user bridges the two, social engineering of the human is still possible.
The downside is that providing two different physical systems to one person nearly doubles operational costs. It’s two pieces of hardware to buy and support, plus two network connections. It also requires additional licenses. Using virtual machines significantly reduces those costs and licenses but still results in higher operational costs. But for some organizations, it is the right solution for their level of risk and asset value.
Don’t Forget Cyber Awareness Training
Despite all the technical recommendations covered, the human element remains your weakest link. More than 85% of data breaches stem from human mistakes. So, any initiative to prevent social engineering attacks requires cyber awareness training.
Unfortunately, many companies (upwards of 50%) lack formal cyber awareness training. Training familiarizes your employees with what to look for in social engineering attacks.
Dollar for dollar, training may be your most effective defense when it comes to social engineering prevention.
Looking to Reduce Your Risks?
If you’re looking for an IT company near you that helps prevent social engineering attacks, give us a call. We have a host of cybersecurity services, including cyber awareness training, that substantially reduce your risks.
As a managed service provider, we provide IT solutions to small and mid-sized businesses throughout Pennsylvania. Find out more about how we can help drive your IT success now.