In the digital age, mobile banking has become an integral part of our lives. The convenience of accessing your bank accounts, transferring money, and paying bills from your smartphone has made banking more accessible than ever. However, this convenience comes with its own set of risks, including the presence of fake banker Android apps that can compromise your financial security. In this blog post, we will explore what fake banker Android apps are, how they work, and most importantly, how to protect yourself from falling victim to them.
What Are Fake Banker Android Apps?
Fake banker Android apps are malicious applications designed to impersonate legitimate banking apps. These apps are created by cybercriminals with the intent of stealing your sensitive financial information, such as login credentials, credit card details, and personal identification numbers (PINs). These fake apps often closely mimic the appearance and functionality of real banking apps, making it difficult for users to distinguish between the two.
How Do Fake Banker Android Apps Work?
Impersonation: Cybercriminals typically create counterfeit versions of popular banking apps. These counterfeit apps may have names and icons that closely resemble the real ones, making it easier to deceive users.
Phishing: Once a user installs a fake banker Android app, it often prompts the user to enter their login credentials and other sensitive information. This information is then sent directly to the cybercriminals, who can use it for fraudulent activities.
Keylogging: Some fake banker apps use keyloggers to record every keystroke made on your device. This means that even if you access your bank’s website through a browser, your login information can still be captured.
Overlay Attacks: Fake banker apps may display convincing overlays on top of legitimate banking apps. When you enter your information, it’s captured by the malicious app instead of the real one.
Data Theft: Beyond login credentials, these apps can also access your personal information, contacts, and other sensitive data stored on your device.
Recently, we received a smishing (SMS phishing) message from the sender ID JX-REWRDL.
The message contained a malicious URL leading to the download of a malicious APK file: 7856754rewards.apk (SHA-256: de06f6ddf2345607333060fee3896719b767661260c822d8aa64ff69a3c773a0).
Finally, after a couple of follow-ups with the respective hosting provider, the domain was suspended.
Please beware of downloading Android apps from unknown sources.
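If you do receive a suspicious file, one simple precaution is to compute its SHA-256 hash and compare it against published indicators of compromise such as the one above. Below is a minimal Python sketch; the local file path is a placeholder, and the hash is the IOC listed earlier:

```python
import hashlib
import sys

# SHA-256 of the malicious APK referenced above (known-bad IOC)
KNOWN_BAD_SHA256 = "de06f6ddf2345607333060fee3896719b767661260c822d8aa64ff69a3c773a0"

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hash of a file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Hypothetical local path; pass the real file as the first argument.
    apk_path = sys.argv[1] if len(sys.argv) > 1 else "7856754rewards.apk"
    digest = sha256_of(apk_path)
    if digest == KNOWN_BAD_SHA256:
        print(f"MATCH: {apk_path} matches the known-bad hash. Do not install.")
    else:
        print(f"No match for the known IOC. Computed hash: {digest}")
```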
Here are a few ways to protect yourself from fake banker Android apps:
Download Apps from Official Sources: Only download banking apps from official sources such as the Google Play Store or the bank’s official website. Avoid third-party app stores or links from suspicious sources.
Check App Reviews and Ratings: Read user reviews and check the app’s ratings before downloading. Legitimate banking apps typically have a high number of downloads and positive reviews.
Review App Permissions: Pay attention to the permissions an app requests during installation. If a banking app asks for unnecessary permissions, it could be a red flag.
Keep Your Device Updated: Regularly update your Android operating system and apps. These updates often include security patches that protect against vulnerabilities exploited by fake banker apps.
Install a Mobile Security App: Consider using a reputable mobile security app that can detect and block malicious apps.
Enable Two-Factor Authentication (2FA): Whenever possible, enable 2FA for your banking apps. This adds an extra layer of security and makes it more difficult for cybercriminals to gain access to your accounts.
Educate Yourself: Stay informed about the latest cybersecurity threats and best practices for protecting your financial information. Knowledge is a powerful defense against scams.
This post examines Microsoft’s latest Storm-0558 findings and summarizes the key lessons cloud customers should take away from the incident.
On September 6th, 2023, Microsoft published a follow-up to their initial investigative report from July 11th about Storm-0558 — a threat actor attributed to China who managed to acquire a signing key that allowed them to gain illicit access to Exchange and Outlook accounts. Microsoft should be applauded for the high level of transparency they have shown, and their willingness to share this information with the community. However, we feel that the latest blog post raises as many questions as it answers.
Estimated attack flow leading to MSA signing key capture by Storm-0558
Newly revealed information
The following is a summary of the new information provided in Microsoft’s latest report about how the signing key may have been compromised by the threat actor (see the diagram above for a visual representation of the attack flow as we currently understand it):
There is evidence that a Microsoft engineer’s corporate account was compromised by Storm-0558 “[at some point] after April 2021”, using an access token obtained from a machine infected with malware.
This engineer had permission to access a debugging server in Microsoft’s corporate network.
This debugging server contained a crash dump that originated in a signing system located in Microsoft’s isolated production network.
This crash dump, which was the result of a crash that occurred in April 2021, contained the aforementioned MSA signing key.
The inclusion of the signing key in this crash dump was the result of a bug, and a separate bug caused the signing key to remain undetected on the debugging server.
Based on the events described above, Microsoft has concluded that the most likely method by which Storm-0558 acquired the MSA signing key was through this compromised account, by accessing the debugging server and exfiltrating the crash dump that contained the key material.
Besides providing the above information about how the key was most likely to have been compromised, Microsoft’s latest report also publicly corroborates our own conclusions (published July 21st) about the contributing factors to this incident, namely:
Prior to the discovery of this threat actor in June 2023, the Azure AD SDK (described in the report as a “library of documentation and helper APIs”) did not include functionality to properly validate an authentication token’s issuer ID. In other words, as we explained in our previous blog post, any application relying solely on the SDK for implementing authentication would have been at risk of accepting tokens signed by the wrong key type (see the validation sketch after this list).
As mentioned in Microsoft’s original report, Exchange was affected by a vulnerability that caused it to accept Azure AD authentication tokens as valid even though they were signed by an MSA signing key – this vulnerability was ultimately exploited by Storm-0558 to gain access to enterprise accounts. In their latest report, Microsoft clarified that this issue was in fact a result of the missing validation functionality in the SDK: at some point in 2022, the development team in charge of authentication in Exchange incorrectly assumed that the Azure AD SDK performed issuer validation by default. This caused validation to be implemented incorrectly, leading to a vulnerability.
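To illustrate the kind of check that was missing, here is a minimal sketch of explicit issuer validation using the PyJWT library. This is not Microsoft's SDK code; the tenant ID, audience, and key handling are placeholders, and the point is simply that the iss claim must be validated explicitly rather than left to defaults:

```python
import jwt  # PyJWT
from jwt import InvalidIssuerError, InvalidTokenError

# Placeholder values; replace with your application's real configuration.
EXPECTED_ISSUER = "https://login.microsoftonline.com/<your-tenant-id>/v2.0"
EXPECTED_AUDIENCE = "<your-application-client-id>"

def validate_token(token: str, signing_key) -> dict:
    """Decode a token and explicitly verify its signature, audience, and issuer."""
    try:
        claims = jwt.decode(
            token,
            signing_key,               # public key fetched from your trusted JWKS endpoint
            algorithms=["RS256"],
            audience=EXPECTED_AUDIENCE,
            issuer=EXPECTED_ISSUER,    # the check that must not be left to library defaults
        )
    except InvalidIssuerError:
        raise PermissionError("Token was issued by an unexpected identity provider")
    except InvalidTokenError as exc:
        raise PermissionError(f"Token validation failed: {exc}")
    return claims
```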
What does this mean?
The timeline that can be deduced from the latest report seems to indicate that due to log retention policies (understandable, given that the activity might have stretched over two years), Microsoft can only partially account for all of this threat actor’s activity within their network between April 2021 and May 2023. Additionally, the report does not explicitly state when the crash dump was transferred to the debugging environment or when the engineer’s account was compromised; only that each of these events occurred sometime after April 2021. If we assume that they both happened at the earliest possible point on the timeline — let’s say May 2021 — then that would mean that the threat actor might have been in possession of the signing key for over two years prior to being discovered in June 2023. Furthermore, while Microsoft have reviewed their logs and definitively identified the use of forged authentication tokens for Exchange and Outlook accounts throughout May 2023, we are nevertheless led to the conclusion that the threat actor might have been forging authentication tokens for other services during this two-year period.
As we explained in our last blog post on the subject, someone in possession of this MSA signing key was not limited to forging authentication tokens for just Exchange and Outlook – they could have forged tokens that would have allowed them to impersonate consumer accounts in any consumer or mixed-audience application, and enterprise accounts in any application that implemented validation incorrectly, such as Exchange. In other words, Storm-0558 was in a position to gain access to a wide range of accounts in applications operated by Microsoft (such as SharePoint) or their customers. As we explained in our previous blog post, this was a very powerful key.
Key takeaways from the key takeaway
Based on what we can learn from Microsoft’s latest report, cloud customers should have the following takeaways from this incident:
Organizations should scan their logs for evidence related to this activity in a time window spanning the period between April 2021 and June 2023 (Microsoft could narrow this window by stating precisely when the engineer’s account was compromised).
Organizations should use a hardware security module (HSM) for key storage whenever possible — this will ensure that key material is never included in crash dumps. As others have noted, the scale at which Microsoft operates might have made this impossible for them to do, but smaller organizations should certainly make it a priority.
As a precautionary defense-in-depth measure, debugging and crash dump data should be purged on a regular basis, since they can contain decrypted information which might be a gold mine for threat actors once they gain access to the environment. In general, sensitive secrets can often be found in unexpected places, such as bash history, hidden image layers, etc.
Additionally, organizations should maintain an inventory of assets in which debugging and crash dump data is collected, stored, or catalogued, and ensure that access controls are in place to limit these assets’ exposure.
Sensitive production environments should be properly isolated from corporate environments which are at higher risk of compromise. While there is no evidence to indicate that the threat actor managed to break through Microsoft’s security boundaries or reach the production environment itself, the root cause here was a failure of data hygiene when transferring potentially sensitive data between the two environments.
Signing keys should be rotated on a regular basis, ideally every few weeks. In this case, the acquired signing key was issued in April 2016 and expired in April 2021, but remained valid until it was finally rotated in July 2023 following Microsoft’s investigation of this incident. This means the key was very long-lived and in use for over 7 years. While Microsoft rotated their signing keys following this incident, at least one (key id -KI3Q9nNR7bRofxmeZoXqbHZGew) appears in both a current key list and in the same list where it appeared in October 2022. If this key remains in use, it should be rotated as well, if only to limit the impact of any (admittedly unlikely) similar potential incident.
Secret scanning mechanisms — particularly those put in place to mitigate the risk of keys leaking from high-to-low trust environments — should be regularly monitored and tested for effectiveness (a minimal scanning sketch follows this list).
Defaults are powerful, and documentation alone isn’t good enough for shaping developer behavior. SDKs should either implement critical functionality by default, or warn users if and when they’ve missed a vital implementation step that must be performed manually. If developers at Microsoft misunderstood their own documentation and made this critical mistake, it stands to reason that any one of their customers might have done the same.
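As a small illustration of the secret-scanning takeaway above, the sketch below looks through a directory of crash dumps or debug artifacts for PEM private-key markers and unusually high-entropy strings. Real secret scanners and real key material formats are far more varied, so treat this only as a starting point for testing whether such a control fires at all:

```python
import math
import re
import sys
from collections import Counter
from pathlib import Path

PEM_MARKER = re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----")
CANDIDATE = re.compile(rb"[A-Za-z0-9+/=_\-]{40,}")  # long base64-ish blobs

def shannon_entropy(data: bytes) -> float:
    """Approximate randomness of a byte string in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def scan_file(path: Path, entropy_threshold: float = 4.5) -> None:
    blob = path.read_bytes()
    if PEM_MARKER.search(blob):
        print(f"[!] {path}: PEM private key marker found")
    for match in CANDIDATE.finditer(blob):
        if shannon_entropy(match.group()) >= entropy_threshold:
            print(f"[?] {path}: high-entropy string at offset {match.start()}")

if __name__ == "__main__":
    # Argument: a directory of crash dumps / debug artifacts to scan.
    for p in Path(sys.argv[1]).rglob("*"):
        if p.is_file():
            scan_file(p)
```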
Unanswered questions
Although Microsoft’s report answers some of the burning questions related to this case, there remain several unanswered questions:
Was this, in fact, how Storm-0558 acquired the signing key? Microsoft have stated that their investigation has concluded, meaning that they have exhausted all evidence available to them. Therefore, we will probably never have a definitive answer to this question.
How likely is it that other signing keys that were valid during the two-year period were compromised in the same way? Is there evidence to the contrary? (This would obviously be very hard to prove.)
When exactly was the engineer’s account compromised? Most importantly, what is the earliest possible point in time at which Storm-0558 could have acquired the signing key?
Was the threat actor targeting this engineer specifically because of their access to the debugging environment, or did they have other goals in mind?
Were the engineer’s account and the malware-infected machine the only known compromised entities within Microsoft’s corporate environment during this period? Did the investigation identify other compromised users or systems? When (and how) did the attacker establish their initial foothold in the environment?
When Microsoft says that they haven’t observed the threat actor targeting the users of any applications other than Exchange and Outlook, does this mean that they have definitively proven that the threat actor did not forge access tokens for other services? In other words, do they actually have the necessary logs (going back far enough in time and containing the required data) to reasonably verify this?
At what point did the threat actor identify the vulnerability in Exchange that allowed them to use forged authentication tokens signed by an MSA signing key to impersonate AAD users? Could they have somehow discovered it independently of acquiring the signing key? Might they have discovered the same vulnerability affecting other applications before Exchange became vulnerable in 2022?
Regarding the last question about how the threat actor might have discovered the issuer ID validation vulnerability in Exchange, we can posit a theory that they initially realized that the SDK (which is open source) did not include issuer validation by default, and correctly assumed that at least some of the SDK’s users — including Microsoft developers — would therefore fail to correctly implement this validation.
Responding to a ransomware attack requires a well-defined and organized approach to effectively mitigate the threat, minimize damage, and restore systems. Here’s a checklist for a Security Operations Center (SOC) during a ransomware attack:
Preparation Phase:
Incident Response Plan (IRP): Ensure your SOC has a well-documented and up-to-date incident response plan that includes specific steps for handling ransomware incidents.
Team Activation: Initiate the incident response team, including representatives from IT, security, legal, communications, and management.
Isolation: Isolate affected systems from the network to prevent further lateral movement and propagation of the ransomware.
Secure Communication Channels: Establish secure communication channels for internal and external communication, considering potential compromise of regular communication channels.
Identification and Analysis Phase:
Confirm Ransomware: Determine if it’s indeed a ransomware attack by analyzing the ransom note, encrypted files, and other indicators.
Collect Evidence: Gather information such as log files, network traffic captures, system snapshots, and any ransomware-related artifacts for analysis.
Ransomware Variant Identification: Identify the specific ransomware variant, if possible, to understand its behavior and potential decryption options.
Scope Assessment: Determine the extent of the infection and affected systems, including critical assets and data.
Containment Phase:
Isolation: Continue isolating affected systems to prevent the spread of the ransomware. Disconnect infected systems from the network.
Implement Firewall Rules: Update firewall rules to block any malicious network traffic associated with the ransomware.
Endpoint Security Measures: Apply security patches, updates, and configurations to the affected systems to prevent further exploitation.
Eradication Phase:
Malware Removal: Use updated antivirus and anti-malware tools to remove the ransomware from affected systems.
Root Cause Analysis: Determine how the ransomware entered the network and identify vulnerabilities that were exploited. Patch and secure these vulnerabilities.
Recovery Phase:
Data Restoration: Restore systems from clean backups, ensuring that backup data is not compromised.
Data Verification: Thoroughly verify the integrity of restored data to ensure its accuracy and completeness (a minimal hash-verification sketch follows this phase).
System Reintegration: Gradually reintegrate cleaned systems back into the network while continuously monitoring for any signs of re-infection.
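For the data-verification step above, one common approach is to compare restored files against a hash manifest generated at backup time. The sketch below assumes a simple JSON manifest of relative paths to SHA-256 hashes; adapt it to whatever your backup tooling actually produces:

```python
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> dict:
    """Run at backup time; store the result alongside the backup."""
    return {str(p.relative_to(root)): file_hash(p) for p in root.rglob("*") if p.is_file()}

def verify_restore(root: Path, manifest: dict) -> list:
    """Run after restoring; returns files that are missing or altered."""
    problems = []
    for rel, expected in manifest.items():
        target = root / rel
        if not target.is_file() or file_hash(target) != expected:
            problems.append(rel)
    return problems

if __name__ == "__main__":
    # Hypothetical paths; adjust to your backup layout.
    manifest = json.loads(Path("backup_manifest.json").read_text())
    bad = verify_restore(Path("/restored/data"), manifest)
    print("Restore verified cleanly" if not bad else f"Mismatched or missing files: {bad}")
```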
Communication and Reporting Phase:
Internal Communication: Keep key stakeholders informed about the status of the incident, actions taken, and progress towards resolution.
External Communication: If necessary, communicate with law enforcement, regulatory bodies, affected customers, and business partners, as required by law and company policy.
Public Relations: Prepare statements for public relations and communications teams to address media inquiries and manage the company’s public image.
Post-Incident Phase:
Debriefing: Conduct a thorough post-incident analysis to identify lessons learned and areas for improvement in the incident response process.
Documentation: Document all actions taken, evidence collected, and decisions made during the incident response for legal and regulatory purposes.
Continuous Improvement: Update the incident response plan based on the lessons learned to better prepare for future incidents.
Remember that ransomware attacks can vary significantly in terms of complexity and impact. It’s important to tailor this checklist to your organization’s specific needs and circumstances. Regular training, simulations, and staying up-to-date with the latest threat intelligence can significantly enhance your SOC’s ability to effectively respond to ransomware incidents.
If you are looking for SOC services, feel free to contact us via email at info@cysys.io.
In recent years, the threat of ransomware attacks has escalated, posing a significant risk to individuals and organizations worldwide. Ransomware is a type of malicious software that encrypts valuable data and demands a ransom payment in exchange for its release. This blog post explores the critical strategies and best practices for defending against ransomware attacks and safeguarding your digital assets.
Regular Data Backups: One of the most effective defenses against ransomware is maintaining regular and secure backups of your data. Ensure backups are stored offline or in an isolated network environment to prevent ransomware from infecting them. Regularly test your backup restoration process to guarantee data recovery in case of an attack.
Employee Training and Awareness: Human error remains a significant entry point for ransomware attacks. Educate your employees about phishing scams, suspicious email attachments, and unsafe browsing habits. Conduct regular training sessions to keep staff informed about the latest ransomware tactics.
Robust Endpoint Protection: Invest in advanced endpoint security solutions that include real-time threat detection, anti-malware software, and behavior-based analysis. Implement firewall and intrusion detection systems to prevent unauthorized access.
Patching and Software Updates: Regularly update operating systems, applications, and software with the latest security patches. Cybercriminals often exploit known vulnerabilities to deliver ransomware. Automated patch management tools can help streamline this process.
Network Segmentation: Divide your network into segments to limit the lateral movement of ransomware. This containment strategy prevents an isolated incident from spreading throughout your entire network.
Ransomware-Specific Tools: Consider using dedicated anti-ransomware tools that can detect and stop ransomware activity in real time. These tools often employ behavior analysis and machine learning to identify and block ransomware threats.
Incident Response Plan: Develop a comprehensive incident response plan that outlines the steps to take in case of a ransomware attack. Assign roles and responsibilities, establish communication protocols, and conduct regular drills to ensure a swift and coordinated response.
Zero Trust Architecture: Implement a zero-trust security model, where no user or device is trusted by default. This approach minimizes the attack surface and requires continuous authentication and authorization for access.
Encryption and Data Protection: Encrypt sensitive data both at rest and in transit. In the event of a breach, encrypted data is significantly harder for attackers to exploit (a minimal encryption sketch follows this list).
Collaboration and Threat Intelligence: Stay informed about the latest ransomware threats and trends by collaborating with industry peers and sharing threat intelligence. Organizations that work together can collectively improve their defenses.
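As a small illustration of the encryption point above, the sketch below uses the cryptography library's Fernet interface to encrypt a file at rest. Key management (ideally an HSM or managed KMS) is the hard part and is out of scope here; the inline key generation and file names are for demonstration only:

```python
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(src: Path, dst: Path, key: bytes) -> None:
    """Encrypt src with Fernet (AES-based authenticated encryption) and write ciphertext to dst."""
    token = Fernet(key).encrypt(src.read_bytes())
    dst.write_bytes(token)

def decrypt_file(src: Path, key: bytes) -> bytes:
    """Decrypt a Fernet-encrypted file and return the plaintext bytes."""
    return Fernet(key).decrypt(src.read_bytes())

if __name__ == "__main__":
    # In production, fetch the key from a KMS/HSM instead of generating it inline.
    key = Fernet.generate_key()
    # Hypothetical file names for illustration.
    encrypt_file(Path("customer_records.csv"), Path("customer_records.csv.enc"), key)
    plaintext = decrypt_file(Path("customer_records.csv.enc"), key)
    print(f"Recovered {len(plaintext)} bytes after round-trip")
```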
Please contact us via Support@cysys.io for more information on how we provide services that help SMBs prevent ransomware attacks from the latest threat vectors.
This post aims to provide a core set of ideas for threat hunting — particularly in an intel-driven fashion which CN SYSTEMS follows in general. The intended audiences are detection engineers, threat hunters, and those aspiring to be one of the two.
It will also examine the traditional nomenclature of TTPs (Tactics, Techniques, and Procedures) and where time is spent hunting compared between the three.
Lastly, it will end with some smaller anecdotes and tips.
Caveats
We cannot stress this enough — MITRE ATT&CK is not a checklist — they even said so themselves. What this means practically, for this post, is that 100% MITRE coverage does not mean you are “secure”. It means that you can contextualize hunts and detections in a kill chain (more on this later).
To properly implement things discussed in this post, you will need process data (with command line), the ability to automate things (Python is recommended), and a VirusTotal Intelligence API key.
We are not suggesting that the methods described here are the best way of hunting threats in your organization — or better than anything else — simply that they have worked for us.
This content focuses heavily on hunting process data. Other styles of hunting (Yara/RFC violations/long-tail analysis/etc.) are definitely valid; however, they are not in scope here.
MITRE ATT&CK Context
In this post we will focus on Procedures rather than Techniques, so we want to give some examples.
Take T1566.001 for example. The tactic is Initial Access, the technique is Phishing: Spearphishing Attachment, and there are a ton of procedures listed. If we told you to hunt for this technique, there are a lot of ways to do it because the procedures vary widely; the payloads in the procedures include, but are not limited to, Word documents, Excel sheets, and PDFs. All of these payloads behave slightly differently and will look different at the process level.
For this post, a procedure is any combination of programs, files, and/or arguments that, when combined, achieves some technique. Here are a few spearphishing procedures (a hunt sketch for the second one follows the list):
Word (program) contacting a remote server for a template (file & argument)
Word (program) launching PowerShell (program) with “-enc” (argument)
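As an example, a hunt for the second procedure over generic process-creation data might look like the sketch below. The field names (parent_name, process_name, command_line) and the CSV export are assumptions; adjust them to whatever your EDR or SIEM actually provides:

```python
import csv

OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SCRIPT_CHILDREN = {"powershell.exe", "pwsh.exe"}

def hunt_office_spawning_powershell(rows):
    """Yield process events where an Office app launches PowerShell with an encoded command."""
    for row in rows:
        parent = row.get("parent_name", "").lower()
        child = row.get("process_name", "").lower()
        cmdline = row.get("command_line", "").lower()
        if parent in OFFICE_PARENTS and child in SCRIPT_CHILDREN and "-enc" in cmdline:
            yield row

if __name__ == "__main__":
    # Hypothetical export of process-creation data from your EDR/SIEM.
    with open("process_events.csv", newline="") as f:
        for hit in hunt_office_spawning_powershell(csv.DictReader(f)):
            print(hit)
```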
Hunting “Known Bad” Procedures is Priority
Relying on the definition above, a known bad procedure is any combination of programs, files, and/or arguments that achieves some technique and has been documented to be used by a threat actor.
Take a documented APT10 bitsadmin download command as an example. We can break it down into programs, arguments, and files:
bitsadmin.exe (program)
/transfer (argument)
http (argument)
ProgramData (file [path])
Temp (file [path])
Now if you wanted to build a hunt for this specific procedure, it would look for any time bitsadmin.exe ran with all of the arguments and file [paths] seen in the command line details.
However, where this level of granularity provides value is in breaking out the hunts and looking for any combination of the above (a sketch that generates these combinations follows the list):
bitsadmin.exe with “/transfer”
“/transfer” with “http”
bitsadmin.exe with “ProgramData”
“http” with “Temp”
etc.
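A minimal sketch of generating and running these combination hunts is shown below. It pairs the artifacts with itertools.combinations and flags any command line containing both members of a pair; the field names are again assumptions about your process data, and some pairs will be noisier than others:

```python
import csv
from itertools import combinations

# Core artifacts extracted from the documented procedure.
ARTIFACTS = ["bitsadmin.exe", "/transfer", "http", "programdata", "temp"]

# Every pair of artifacts becomes its own small hunt.
HUNTS = list(combinations(ARTIFACTS, 2))

def run_hunts(rows):
    """Yield (artifact pair, event) for every command line matching both artifacts in a pair."""
    for row in rows:
        cmdline = row.get("command_line", "").lower()
        for pair in HUNTS:
            if all(artifact in cmdline for artifact in pair):
                yield pair, row

if __name__ == "__main__":
    # Hypothetical process data export; noisy pairs (e.g. anything with "temp") should be tuned or dropped.
    with open("process_events.csv", newline="") as f:
        for pair, hit in run_hunts(csv.DictReader(f)):
            print(f"hunt {pair} matched: {hit.get('command_line', '')}")
```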
By creating multiple smaller hunts, you still have a chance to catch the activity if APT10 changes their procedure, or if someone else uses a similar one.
Consider this: the goal of the procedure is likely Ingress Tool Transfer (downloading files) (see the bitsadmin.exe transfer docs). So if APT10 alters their procedure — like by renaming bitsadmin.exe to svchost.exe — then the hunts above that key on bitsadmin.exe won’t catch it, but the argument-only combinations (such as “/transfer” with “http”) still will. Additionally, if they use a different binary to download the file but still write to the same folder (and everything is supplied in the command line), then the combinations built around the folder path and “http” will catch it.
That is the power of breaking a procedure down to its core artifacts and hunting for combinations of said artifacts. Not only will you catch the activity you were looking for but you will also catch slight variations. This way, you can be more confident in your ability to detect what you are interested in.
Not every potential combination of artifacts is a useful hunt. Some may produce unmanageable haystacks while others may simply produce irrelevant results. The trick is finding those combinations that provide coverage for more than one version of the procedure without bringing in too much noise. An easy way to do this is to focus on arguments (e.g. “/transfer” and “http”).
Have Reliable Sources of Intelligence
You need at least one source of known bad procedures that is regularly updated, trustworthy, and accurate. An easy answer here is MITRE ATT&CK.
Our recommendation is to pick a MITRE Group you want to hunt, using the MITRE Groups page, and open all of the references at the bottom of their listing. Take APT1 for example: by reading through these references, you are likely to glean more technical data on their procedures and be able to build hunts around them. We also walk through this in our post about hunting Lazarus Group.
Having a source of malware hashes specific to the threat actor and access to a VirusTotal Intelligence API key are crucial for scaling. You can extract “Processes Created” from VirusTotal’s Behavior information for a hash.
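As a rough sketch of that extraction step, the code below queries the VirusTotal v3 behaviour summary for a hash and prints the processes it created. The API key and sample hash are placeholders, and the exact response fields may differ from what is shown here, so check the current VirusTotal API documentation:

```python
import requests

VT_API_KEY = "YOUR_VT_API_KEY"  # placeholder
BASE = "https://www.virustotal.com/api/v3/files/{}/behaviour_summary"

def processes_created(sha256: str) -> list:
    """Return the 'processes created' entries from VirusTotal's sandbox behaviour summary."""
    resp = requests.get(BASE.format(sha256), headers={"x-apikey": VT_API_KEY}, timeout=30)
    resp.raise_for_status()
    summary = resp.json().get("data", {})
    # Assumed field name; verify against the current API response schema.
    return summary.get("processes_created", [])

if __name__ == "__main__":
    sample_sha256 = "<known WannaCry sample hash>"  # placeholder: take this from your intel source
    for cmd in processes_created(sample_sha256):
        print(cmd)
```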
For WannaCry, for example, the behavior data shows vssadmin.exe deleting shadow copies. This would be considered a known bad procedure for WannaCry since it accomplishes the goal of MITRE Technique T1490 (Inhibit System Recovery). If we apply the same logic here as we did for bitsadmin.exe and break this command down to its core artifacts, then we can build a series of hunts for this procedure and its variations:
vssadmin.exe and “delete”
“delete” and “shadows”
vssadmin.exe and “shadows”
etc.
By having a reliable source of known bad procedures — either from hands-on-keyboard operations or malware — and the ability to automatically extract them (e.g. VirusTotal API) one can build an extensive library of hunts for demonstrably malicious activity and its variations.
Contextualizing With MITRE
If you map each procedure you develop a hunt for (and any relevant ones you already had) to its MITRE Technique and Tactic, then you can use the ATT&CK Navigator to visualize the change in coverage. Take the following example, where a threat actor likes to use PowerShell to run payloads, and Scheduled Tasks and BITS Jobs for persistence.
MITRE ATT&CK Navigator
If you let green be existing hunts and blue be new hunts, then you can see that prior to the most recent R&D cycle you had hunts to catch the threat actor at Execution, Persistence, and Privilege Escalation, and that the R&D cycle added hunts at Persistence and Defense Evasion.
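Generating such a Navigator layer can be automated. The sketch below builds a layer JSON that colors existing hunts green and new hunts blue; the technique IDs reflect the PowerShell, Scheduled Task, and BITS Jobs example above, and the exact fields required may vary with your Navigator and ATT&CK versions:

```python
import json

# Example mapping of hunts to ATT&CK technique IDs, based on the scenario above.
EXISTING_HUNTS = ["T1059.001", "T1053.005"]  # PowerShell, Scheduled Task
NEW_HUNTS = ["T1197"]                        # BITS Jobs

GREEN, BLUE = "#2e7d32", "#1565c0"

def build_layer(name: str) -> dict:
    """Build a minimal ATT&CK Navigator layer highlighting existing vs. new hunt coverage."""
    techniques = (
        [{"techniqueID": t, "color": GREEN, "comment": "existing hunt"} for t in EXISTING_HUNTS]
        + [{"techniqueID": t, "color": BLUE, "comment": "new hunt"} for t in NEW_HUNTS]
    )
    return {
        "name": name,
        "domain": "enterprise-attack",
        # Adjust these version strings to match your Navigator / ATT&CK installation.
        "versions": {"attack": "14", "navigator": "4.9.1", "layer": "4.5"},
        "description": "Hunt coverage before and after the latest R&D cycle",
        "techniques": techniques,
    }

if __name__ == "__main__":
    with open("hunt_coverage_layer.json", "w") as f:
        json.dump(build_layer("Hunt coverage"), f, indent=2)
```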
Also, we are not claiming this method will catch the threat actor you are interested in; however, we suggest that creating multiple smaller hunts off one procedure using the method described above will increase your chances of catching them compared to creating one very specific hunt.
Know What You Are Paying For
If you take the total time spent on a single hunt/detection, it can be broken down into two categories: R&D and actually hunting through the results. Generally speaking (again, for process data hunts), the more time spent in R&D, the less time is spent actually hunting, because hunts with more R&D behind them tend to be more specific. Looking at the hunts made above for bitsadmin.exe and vssadmin.exe, we spent our time making multiple hunts for known bad procedures, which are unlikely to produce results unless suspicious activity occurs. Contrast that with just looking for all executions of bitsadmin.exe and vssadmin.exe; those take comparatively little R&D but would take more time to review the results. This is roughly illustrated below as the “Temporal Costs of TTPs”.
[Chart: Temporal Costs of TTPs (made with RapidTables)]
What this chart represents is that the core components of TTPs (Tactics, Techniques, and Procedures) impose temporal costs at different stages of hunting and by extension serve slightly different functions.
Firstly, Procedures cost more in R&D but take less time to hunt. Secondly, Tactics are typically easier to R&D but take longer to hunt due to larger result sets. Lastly, Techniques typically sit in a sort of middle ground between the two.
You get the opportunity to choose where to pay that cost and what you choose will largely depend on what you care about more:
Do you want to hunt in the unknown?
Or, do you want to hunt for known bad?
One is not better than the other. That is the crux of this illustration. However, we do hope this at least gives you an additional (and valuable) consideration when writing hunts and detections.
Admittedly, we are sure most SOC analysts, hunters, and detection engineers intuitively know that the more specific you make your hunts (the more time you spend in R&D), the fewer results you will have. That’s just natural — the more specific your criteria, the fewer results fit those criteria. However, we have never seen it specifically called out, and we find this framing of hunting useful for managing priorities. Are we concerned about all PowerShell abuse, or just about how APT X tends to abuse PowerShell? The answer will vary depending on multiple things, but it will also tell us how our time is about to be spent, and we get to choose what is most effective for the situation at hand.
Known Bad vs Unknown
When we started out in blue team work, we heard over and over that threat hunting was looking at the unknown. While we think the sentiment is nice, we never found much utility in the statement. We would also hear “you have to know what is normal to find the abnormal”, which was not a particularly encouraging statement because it implied we had to know a lot about a lot to even get started. Don’t get us wrong, it is a valid approach; we just did not find it very helpful.
So instead, we took the route of studying the known bad, and we modified the adage to something like, “you must be able to recognize the abnormal when you see it”.
As we said earlier, hunting Tactics, Techniques, and Procedures serves different functions — this is where that comes in. Almost by definition, hunting the unknown (very broadly) means you are also hunting a tactic. For example, say you are looking at all commands run on a file server (tactic: Collection) — you are looking at the unknown. As you move over to a Technique like Data From Local System (T1005), you might focus on SCP and FTP commands — you are moving closer to known bad procedures. Finally, the last step would be looking for a variation of a known bad procedure you were interested in and sourced from a threat actor or malware sample. An example of this might be SCP commands that copy specific directories.
Sort Your Data
It sounds trivial but hands down the easiest way to make a long list of potentially suspicious commands more digestible to hunt is to sort them alphabetically.
At first, sorting alphabetically makes almost no sense. It seems as if sorting by user or device would be better. However, in hunting the unknown, sorting processes alphabetically tends to group similar processes together (whether by their process name, folder path, or arguments) allowing you to more easily identify outliers.
Sorting this way does not require you to actually know what the known good commands are doing, only to recognize common procedures and that those are likely benign; so, focus on the others.
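A trivially small sketch of this, again over a generic process-data export with an assumed command_line field:

```python
import csv

def sorted_command_lines(path: str) -> list:
    """Return unique command lines sorted alphabetically so similar procedures cluster together."""
    with open(path, newline="") as f:  # hypothetical process data export
        commands = {row.get("command_line", "") for row in csv.DictReader(f)}
    return sorted(commands, key=str.lower)

if __name__ == "__main__":
    for cmd in sorted_command_lines("process_events.csv"):
        print(cmd)
```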
Yes, some threat actors are experts at hiding in the noise. Some will absolutely spend the time to craft a legitimate-looking DLL and/or software package to blend in with really common procedures in the environment. However, every hunt you have that is a variation of their known procedures is another trip wire on their path. There are threat actors that are exceptionally hard to hunt in this manner (because they craft every procedure to blend in with the noise), but that appears to be the exception, not the rule; most tend to have at least a few procedures that can be distinguished from normal day-to-day operations. In those rare cases where the threat actor blends in well, hunting in the unknown and being more attentive to detail than normal is almost necessary.
You Probably Knew This Already
As we touched on earlier, we fully recognize that most people who have experience in a SOC probably already know the things that have been discussed here — whether they were consciously aware of this knowledge or not. However, we do think it is still important to shed some light on some of the more fundamental parts of hunting, challenge some ideas, and introduce new ones that break this discipline down into learnable chunks.
At the core of all this is one principle — threat hunting is achievable — and it is not just learning common TTPs and how to detect them. It is sometimes referred to as “the art of threat hunting”. A threat hunter can absolutely be compared to an artist, because they can both have their preferred style. However, threat hunting itself is a skill, just like painting.
If you are looking for threat hunting services, please feel free to contact us via email at support@cysys.io.