Threat Intelligence Best Practices – Unite.AI
Many people say threat intelligence (TI) tastes good, but few know how to cook it. Even fewer know which processes to follow to make TI work and deliver value. And only a negligible number know how to choose a feed provider, where to check an indicator for false positives, and whether it's worth blocking a domain that a colleague sent you via WhatsApp.
We had two commercial APT subscriptions, ten threat information exchanges, a dozen free feeds, and a long list of TOR exit nodes. We also had a few powerful reverse engineers, masterful PowerShell scripts, a Loki scanner, and a paid VirusTotal subscription. Not that a security incident response center can't function without all of that, but once you've set out to catch complex attacks, you have to go all in.
What concerned me most was the potential automation of Indicator of Compromise (IOC) checking. There is nothing as immoral as artificial intelligence replacing a human in a job that requires thinking. Still, I realized that my business would face this challenge sooner or later as the number of our customers grew.
Over several years of full-time TI work, I have stepped on plenty of rakes, and I'd like to share some tips that will help newcomers avoid common mistakes.
Tip 1. Don’t expect to catch much by hashes: most malware is polymorphic these days
Threat intelligence data comes in many different formats and manifestations. It may include the IP addresses of botnet command-and-control servers, email addresses used in phishing campaigns, or articles on evasion techniques that APT groups are beginning to adopt. In short, it can be many different things.
To sort out all this mess, David Bianco suggested using what he called the Pyramid of Pain. It describes the relationship between the types of indicators you use to detect an attacker and how much “pain” you cause the attacker when you identify a specific IOC.
For example, if you know the MD5 hash of a malicious file, it can be detected easily and accurately. However, this won't cause the attacker much pain, because flipping a single bit of the file completely changes its hash.
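This is easy to see in practice. The following Python snippet (the byte string standing in for an executable is, of course, made up) flips one bit of a file's contents and shows that the MD5 digest bears no resemblance to the original:

```python
import hashlib

def md5_hex(data: bytes) -> str:
    """Return the MD5 digest of a byte string as hex."""
    return hashlib.md5(data).hexdigest()

# A stand-in for an executable's contents.
original = b"MZ\x90\x00 pretend this is an executable"
# Flip a single bit in the last byte.
modified = original[:-1] + bytes([original[-1] ^ 0x01])

h1 = md5_hex(original)
h2 = md5_hex(modified)
print(h1)
print(h2)
# The two digests differ completely, so a hash-based IOC no longer matches.
```

That one-bit change costs the attacker nothing, which is why hash indicators sit at the bottom of the Pyramid of Pain.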
Tip 2. Try to use indicators that the attacker will find technically complicated or expensive to modify
Anticipating the question of how to find out whether a file with a given hash exists on your corporate network, I'll say this: there are different ways. One of the simplest is to use a solution that maintains a database of MD5 hashes of all executable files in the company.
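The single-host core of such an inventory can be sketched in a few lines of Python. This is a simplified illustration, not a production tool (a real solution would also record file metadata, handle locked files, and ship results to a central database); the extension list is an assumption:

```python
import hashlib
import os

def hash_executables(root: str) -> dict:
    """Walk a directory tree and return {path: md5} for executable-like files."""
    wanted = {".exe", ".dll", ".sys"}  # illustrative; extend for your environment
    result = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in wanted:
                path = os.path.join(dirpath, name)
                h = hashlib.md5()
                with open(path, "rb") as f:
                    # Read in 1 MiB chunks so large binaries don't exhaust memory.
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                result[path] = h.hexdigest()
    return result

def find_ioc(inventory: dict, ioc_md5: str) -> list:
    """Return paths whose MD5 matches a known-bad indicator."""
    return [p for p, h in inventory.items() if h == ioc_md5.lower()]
```

With the inventory in hand, answering "does this hash exist anywhere?" becomes a single lookup instead of a network-wide scan.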
Back to the Pyramid of Pain. Unlike detection by hash value, detecting the attacker's TTPs (tactics, techniques, and procedures) is far more productive. It's harder to do and takes more effort, but you'll inflict much more pain on the adversary.
For example, if you know that an APT group targeting your sector of the economy sends phishing emails with *.HTA files attached, writing a detection rule that looks for such e-mail attachments will hit the attacker below the belt. They will have to change their spamming tactics and perhaps even spend money on 0-day or 1-day exploits, which are not cheap.
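A minimal sketch of such an attachment check in Python, using the standard library's email parsing (the extension watch list here is an illustrative assumption; tune it to the TTPs you actually track):

```python
import os
from email.message import Message

# Illustrative watch list of attachment extensions abused in phishing.
SUSPICIOUS_EXTENSIONS = {".hta", ".js", ".vbs", ".scr"}

def suspicious_attachments(msg: Message) -> list:
    """Return attachment filenames whose extension is on the watch list."""
    hits = []
    for part in msg.walk():
        filename = part.get_filename()
        if filename:
            ext = os.path.splitext(filename)[1].lower()
            if ext in SUSPICIOUS_EXTENSIONS:
                hits.append(filename)
    return hits
```

In practice a rule like this would live in your mail gateway or SIEM, but the logic is the same: match on what the attacker has to change, not on a hash they can regenerate for free.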
Tip 3. Don’t place excessive hope in detection rules created by someone else – you still need to check those rules for false positives and refine them
When creating detection rules, you are always tempted to use ones that are readily available. Sigma is one example: a free repository built around a SIEM-independent format for describing detection methods, with converters that translate Sigma rules into ElasticSearch queries or Splunk and ArcSight rules. The repository contains hundreds of rules. It sounds great, but the devil, as always, is in the details.
Let’s take a look at one of the mimikatz detection rules. It flags processes that have attempted to read the memory of the lsass.exe process. Mimikatz does this when extracting NTLM hashes, so the rule will identify the tool.
However, for those of us who not only detect but also respond to incidents, it is essential to confirm that it is indeed a malicious actor. Unfortunately, many legitimate processes also read lsass.exe memory (some anti-virus tools, for example). In a real environment, therefore, such a rule will produce more false positives than benefits.
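One common refinement is to post-filter such alerts against an allowlist of known benign lsass readers. The sketch below is illustrative only: the event field name (`source_image`) and the allowlisted paths are assumptions, and a real allowlist must be built from your own environment's telemetry, not copied from an article:

```python
# Hypothetical allowlist of processes legitimately reading lsass.exe memory.
KNOWN_BENIGN_READERS = {
    r"c:\program files\windows defender\msmpeng.exe",
    r"c:\windows\system32\taskmgr.exe",
}

def is_probably_malicious(event: dict) -> bool:
    """Keep an lsass-access alert only if the source process is not allowlisted."""
    source = event.get("source_image", "").lower()
    return source not in KNOWN_BENIGN_READERS

# Two example alerts: one allowlisted, one from a suspicious temp-folder binary.
alerts = [
    {"source_image": r"C:\Windows\System32\taskmgr.exe"},
    {"source_image": r"C:\Users\bob\AppData\Local\Temp\mk.exe"},
]
suspicious = [a for a in alerts if is_probably_malicious(a)]
# Only the temp-folder binary survives the filter.
```

Path-based allowlisting is itself bypassable (an attacker can masquerade under a trusted name), so treat it as noise reduction, not as a verdict.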
I don’t want to blame anyone for this – all solutions generate false positives; it’s normal. Nevertheless, threat intelligence specialists must understand that there is always a need to double-check and refine rules obtained from open and closed sources.
Tip 4. Check domain names and IP addresses for malicious behavior not only in proxy server and firewall logs but also in DNS server logs – and be sure to look at both successful and failed resolution attempts
Malicious domains and IP addresses are the optimal indicators in terms of both ease of detection and the amount of pain you inflict on the attacker. However, they only seem easy to handle at first glance. At the very least, you should ask yourself where to get the domain logs from.
If you limit yourself to checking only proxy server logs, you may miss malicious code that connects to the network directly or requests a non-existent domain name generated by a DGA, not to mention DNS tunneling – none of these will show up in the logs of a corporate proxy server. Criminals can also use VPN services with advanced features or build custom tunnels.
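As a simple illustration of why failed resolutions matter, here is a toy Python heuristic over DNS log records. It flags NXDOMAIN responses whose first label looks random (high character entropy), a rough but common tell of DGA traffic. The length and entropy thresholds are illustrative assumptions, and entropy alone is a weak signal (CDN hostnames can score high too):

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Per-character entropy of a domain label; DGA names tend to score high."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def flag_dns_records(records) -> list:
    """records: iterable of (qname, rcode) tuples parsed from DNS server logs.
    Flags failed resolutions (NXDOMAIN) with a long, high-entropy first label."""
    flagged = []
    for qname, rcode in records:
        label = qname.split(".")[0]
        if rcode == "NXDOMAIN" and len(label) >= 12 and shannon_entropy(label) > 3.5:
            flagged.append(qname)
    return flagged
```

A proxy log would never show these queries at all, which is exactly the point of Tip 4.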
Tip 5. Monitor or block – decide which to choose only after finding out what type of indicator you have discovered and recognizing the possible consequences of blocking
Every IT security expert has faced this non-trivial dilemma: block a threat, or monitor it and launch an investigation once it triggers alerts? Some guidelines unequivocally recommend blocking, but sometimes that's a mistake.
If the indicator of compromise is a domain name used by an APT group, don't block it – monitor it instead. Modern targeted-attack tradecraft assumes an additional covert communication channel that can only be discovered through in-depth analysis. Automatic blocking will prevent you from finding that channel; moreover, the adversaries will quickly realize that you have noticed their shenanigans.
On the other hand, if the IOC is a domain used by crypto-ransomware, it should be blocked immediately. But remember to monitor all failed attempts to query blocked domains – the ransomware's configuration may include multiple command-and-control server URLs. Some of them may not appear in your feeds and therefore will not be blocked. Sooner or later the malware will reach one of them, obtain the encryption key, and instantly encrypt the host. The only reliable way to make sure you have blocked every C&C is to reverse-engineer the sample.
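A minimal sketch of that monitoring step, assuming you can export blocked-query logs as (source host, domain) pairs; the domain names and the three-hit threshold are made up for illustration:

```python
from collections import Counter

# Illustrative blocked C&C domains; in reality this set comes from your feeds.
BLOCKED_C2 = {"pay-decrypt.example", "unlock-files.example"}

def infected_hosts(deny_log) -> list:
    """deny_log: iterable of (src_host, domain) pairs from blocked-query logs.
    Hosts repeatedly querying blocked C&C domains likely carry the malware
    and may still reach C&C servers that are missing from your feeds."""
    hits = Counter(src for src, dom in deny_log if dom in BLOCKED_C2)
    return [host for host, n in hits.most_common() if n >= 3]
```

Hosts this surfaces are candidates for immediate isolation and sample collection, which in turn feeds the reverse-engineering step mentioned above.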
Tip 6. Check the relevance of all new indicators before monitoring or blocking them
Keep in mind that threat data is generated either by error-prone humans or by machine-learning algorithms that are not error-proof either. I've seen paid providers of APT group activity reports accidentally add legitimate samples to lists of malicious MD5 hashes. If even paid threat reports contain low-quality IOCs, indicators obtained through open-source intelligence should definitely be checked for relevance. TI analysts don't always check their indicators for false positives, which means clients have to do that work themselves.
For example, if you got an IP address used by a new iteration of TrickBot, before feeding it into your detection systems you must make sure it doesn't belong to a hosting provider or some other service that serves many sites from a single IP address. Otherwise, you will struggle with a flood of false positives every time users visit completely benign web pages hosted on that platform.
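One cheap pre-check is testing whether an IOC IP falls inside known shared-hosting or cloud ranges before auto-blocking it. The CIDR ranges below are placeholders (they are reserved documentation networks); real lists are published by the providers themselves, such as AWS's ip-ranges.json, and change frequently:

```python
import ipaddress

# Placeholder ranges standing in for shared-hosting / cloud provider CIDRs.
SHARED_HOSTING_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def safe_to_block(ip: str) -> bool:
    """An IOC IP inside a shared-hosting range would generate false positives
    for every benign site on the same infrastructure, so don't auto-block it."""
    addr = ipaddress.ip_address(ip)
    return not any(addr in net for net in SHARED_HOSTING_RANGES)
```

An IP that fails this check isn't necessarily clean; it just needs human review (or a switch to a more specific indicator, such as a URL) instead of automatic blocking.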
Tip 7. Automate all threat data workflows as much as possible. Start by fully automating false-positive checks via a warning list, while having the SIEM monitor only the IOCs that don’t trigger false positives
To avoid a large number of false positives from indicators obtained from open sources, first search for those indicators in warning lists. To build the lists, you can use the top 1,000 websites by traffic, internal subnet addresses, and the domains of major service providers such as Google, Amazon AWS, and MS Azure. It's also a great idea to implement a mechanism that dynamically rebuilds warning lists from the top domains/IP addresses that company employees have accessed over the past week or month.
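The pre-filtering step itself is straightforward. A sketch, assuming the warning list is a set of lowercase domains and that a feed indicator should be held back when it equals, or is a subdomain of, a warning-list entry:

```python
def prefilter_iocs(ioc_domains, warning_list):
    """Split feed indicators into (likely_noisy, actionable) using a warning
    list of popular/legitimate domains, matching on exact name or any parent
    domain (so "s3.amazonaws.com" matches a warning entry "amazonaws.com")."""
    noisy, actionable = [], []
    for d in ioc_domains:
        d = d.lower().rstrip(".")
        parts = d.split(".")
        # All parent domains of d, including d itself.
        parents = {".".join(parts[i:]) for i in range(len(parts))}
        (noisy if parents & warning_list else actionable).append(d)
    return noisy, actionable
```

Only the "actionable" bucket goes to the SIEM for monitoring; the "noisy" bucket goes to an analyst queue for manual review rather than being silently dropped.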
Creating these warning lists can be problematic for a mid-sized SOC, so it makes sense to consider adopting so-called threat intelligence platforms.
Tip 8. Scan the entire enterprise for host-based indicators, not just the hosts connected to the SIEM
Typically, not all of a company's hosts are connected to the SIEM, so you can't find a malicious file with a specific name or path using standard SIEM functionality alone. You can work around this as follows:
- Use IOC scanners such as Loki. You can launch it on all of the company's hosts via SCCM and then send the results to a shared network folder.
- Use vulnerability scanners. Some of them have compliance modes that allow you to check the network for a specific file at a specific path.
- Write a PowerShell script and run it through WinRM.
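Whichever delivery mechanism you choose, the single-host core of a filename/path IOC scan is simple. A Python sketch (in an enterprise this would be pushed to each host via SCCM or WinRM and the results collected centrally; the pattern shown is illustrative):

```python
import fnmatch
import os

def scan_for_file_iocs(root: str, patterns: list) -> list:
    """Walk a filesystem tree and report files whose names match IOC patterns
    (glob-style, case-insensitive). Returns the full paths of matches."""
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if any(fnmatch.fnmatch(name.lower(), p.lower()) for p in patterns):
                matches.append(os.path.join(dirpath, name))
    return matches

# Example: look for mimikatz driver artifacts anywhere under a root directory.
# scan_for_file_iocs(r"C:\\", ["mimi*.sys", "mimikatz.exe"])
```

Name-based indicators sit low on the Pyramid of Pain, so treat a hit as a lead for investigation, not as proof of compromise.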
As mentioned above, this article is not meant to be a comprehensive knowledge base on managing threat intelligence. In our experience, however, following these simple rules will help beginners avoid critical errors when handling different indicators of compromise.