Archive for March, 2015

Chuck Leaver – Narrow Indicators Of Compromise Are Not Sufficient For Total Endpoint Monitoring

Presented By Chuck Leaver And Written By Dr Al Hartmann Of Ziften Inc.

The Breadth Of The Indicator – Broad Versus Narrow

A detailed report of a cyber attack will generally supply indicators of compromise. Frequently these are narrow in scope, referencing a specific attack group as seen in a particular attack on one organization over a limited period of time. Typically these narrow indicators are specific artifacts of an observed attack that may constitute evidence of compromise on their own. For that attack they offer high specificity, but usually at the expense of low sensitivity to similar attacks that use different artifacts.

Because narrow indicators offer such limited scope, they exist by the billions in ever-growing databases of malware signatures, suspicious network addresses, malicious registry keys, file and packet content snippets, filepaths, intrusion detection rules and so on. The continuous endpoint monitoring system supplied by Ziften aggregates a number of these third party databases and threat feeds into the Ziften Knowledge Cloud, to take advantage of known-artifact detection. These detection points can be applied both in real time and retrospectively. Retrospective application is important given the ephemeral nature of these artifacts, since attackers continually obscure the details of their attacks to frustrate this narrow IoC detection method. This is why a continuous monitoring solution should archive monitoring results for a long time (relative to industry-reported typical attacker dwell times), to provide a sufficient lookback horizon.
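To make the retrospective idea concrete, here is a minimal sketch in Python of matching newly published narrow IoCs (file hashes) against archived endpoint telemetry inside a lookback horizon. The data layout, field names and horizon length are illustrative assumptions, not Ziften's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical archived endpoint telemetry: one record per observed binary execution.
archived_events = [
    {"host": "wks-041", "sha256": "9f2b...", "seen": datetime(2014, 11, 3, 9, 12)},
    {"host": "srv-007", "sha256": "a11c...", "seen": datetime(2015, 1, 20, 22, 5)},
]

# Narrow IoCs (file hashes) newly published in an aggregated threat feed.
new_ioc_hashes = {"a11c..."}

# The lookback horizon should comfortably exceed industry-reported attacker dwell times.
LOOKBACK = timedelta(days=365)

def retrospective_matches(events, iocs, now):
    """Return archived events that match IoCs published after the activity occurred."""
    horizon = now - LOOKBACK
    return [e for e in events if e["seen"] >= horizon and e["sha256"] in iocs]

for hit in retrospective_matches(archived_events, new_ioc_hashes, now=datetime(2015, 3, 1)):
    print(f"Retrospective IoC hit on {hit['host']} at {hit['seen']:%Y-%m-%d %H:%M}")
```

The value here comes entirely from the archive: without long-term retention of endpoint observations, a late-arriving IoC has nothing to match against.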

Narrow IoCs have substantial detection value, but they are largely ineffective at detecting brand-new attacks by skilled hackers. New attack code can be pre-tested against common enterprise security products in lab environments to confirm that no detectable artifacts are reused. Security products that act merely as black/white classifiers suffer from this weakness, i.e. they issue a definite verdict of malicious or benign, and that approach is easily evaded. The targeted organization is likely to be thoroughly compromised for months or years before any detectable artifacts can be identified (after extensive investigation) for that particular attack campaign.

In contrast to the ease with which attack artifacts can be obscured by typical hacker toolkits, the techniques and tactics – the modus operandi – used by attackers have persisted over many years. Common techniques such as weaponized websites and documents, new service installation, vulnerability exploitation, module injection, modification of sensitive folders and registry areas, new scheduled tasks, memory and drive corruption, credential compromise, malicious scripting and many others are broadly shared. Proper system logging and monitoring can detect much of this attack activity when combined with security analytics that prioritize the highest-risk observations. This removes the attacker's opportunity to pre-test the evasiveness of malicious code, because risk is not quantified in black and white but in nuanced shades of gray. In particular, endpoint risk is variable and relative across any network and user environment and over time, and that environment (and its temporal dynamics) cannot be replicated in any lab. The basic attacker concealment strategy is foiled.
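As a rough illustration of this shades-of-gray idea (not Ziften's actual algorithm), the sketch below assigns hypothetical weights to generic techniques and scores each endpoint relative to its peers rather than issuing a binary verdict. The technique names, weights and hosts are all assumptions.

```python
from statistics import mean, pstdev

# Hypothetical weights for generic attack techniques; a real system would tune or learn these.
TECHNIQUE_WEIGHTS = {
    "new_service_install": 3.0,
    "module_injection": 4.0,
    "sensitive_registry_write": 2.5,
    "new_scheduled_task": 2.0,
    "credential_dump_pattern": 5.0,
}

def endpoint_score(observations):
    """Sum weighted technique observations for a single endpoint."""
    return sum(TECHNIQUE_WEIGHTS.get(o, 0.0) for o in observations)

def relative_risk(scores_by_host):
    """Express each endpoint's score relative to its peers (z-score), not absolutely."""
    scores = list(scores_by_host.values())
    mu, sigma = mean(scores), pstdev(scores) or 1.0
    return {host: (s - mu) / sigma for host, s in scores_by_host.items()}

population = {
    "wks-041": endpoint_score(["new_service_install", "module_injection"]),
    "wks-042": endpoint_score([]),
    "wks-043": endpoint_score(["new_scheduled_task"]),
}
print(relative_risk(population))  # wks-041 stands out from its peers
```

Because the score depends on the rest of the population and the observation window, an attacker cannot reproduce the comparison in a lab and tune code to stay below it.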

In future posts we will examine Ziften endpoint risk analysis in more detail, as well as the important relationship between endpoint security and endpoint management. "You can't protect what you don't manage, you can't manage what you don't measure, you can't measure what you don't track." Organizations get breached because they have less oversight and control of their endpoint environment than the attackers have. Keep an eye out for future posts…

Carbanak Case Study Part Three: Indicators Of Compromise With Continuous Endpoint Monitoring – Chuck Leaver

Presented By Charles Leaver And Written By Dr Al Hartmann

Part 3 in a 3 part series

Below are excerpts of the Indicators of Compromise (IoC) from the technical reports on the Anunak/Carbanak APT attacks, with comments on their detection by the Ziften continuous endpoint monitoring system. The Ziften solution focuses on generic indicators of compromise that have remained consistent across decades of hacker attacks and cyber security experience. These generic IoCs can be identified for any operating system, including Linux, OS X and Windows. Specific indicators of compromise also exist that point to C2 infrastructure or particular attack code instances, but these are short-lived and rarely reused in fresh attacks. There are billions of such artifacts in the cyber security world, with thousands added every day. The generic IoCs are embedded in the Ziften security analytics for the supported operating systems, and the specific IoCs are applied via the Ziften Knowledge Cloud from subscriptions to a number of industry threat feeds and watchlists that aggregate them. Both have value and help triangulate attack activity.

1. Exposed Vulnerabilities

Excerpt: All observed cases used spear phishing e-mails with Microsoft Word 97–2003 (.doc) files attached or CPL files. The doc files exploit both Microsoft Office (CVE-2012-0158 and CVE-2013-3906) and Microsoft Word (CVE-2014-1761).

Comment: Although not strictly an IoC, a critical exposed vulnerability is a major avenue of hacker exploitation and a strong warning sign that raises the risk score (and the SIEM priority) for the endpoint, particularly if other indicators are also present. These vulnerabilities point to lax patch management and vulnerability lifecycle management, which weakens the overall cyber defense posture.
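A minimal sketch of how unpatched, actively exploited CVEs might raise an endpoint's risk score. The increment and the input format are illustrative assumptions; only the CVE identifiers come from the technical reports.

```python
# CVEs exploited in the Carbanak campaign, per the technical reports.
EXPLOITED_CVES = {"CVE-2012-0158", "CVE-2013-3906", "CVE-2014-1761"}

def patch_exposure_score(missing_patches, base_score):
    """Raise an endpoint's risk score for each unpatched, actively exploited CVE."""
    exposed = EXPLOITED_CVES & set(missing_patches)
    return base_score + 10 * len(exposed), sorted(exposed)

score, exposed = patch_exposure_score({"CVE-2012-0158", "CVE-2010-3333"}, base_score=25)
print(score, exposed)  # 35 ['CVE-2012-0158']
```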

2. Suspect Geographies

Excerpt: Command and control (C2) servers located in China have been identified in this campaign.

Comment: The geolocation of endpoint network touches, and scoring by geography, both contribute to the risk score that raises the SIEM priority. There can be valid reasons for contact with Chinese servers, and some organizations may have installations in China, but this should be confirmed with spatial and temporal anomaly checking. IP address and domain details should be included with the resulting SIEM alarm so that SOC triage can be conducted rapidly.
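A minimal sketch of geography-weighted scoring of outbound endpoint connections. The country weights, the GeoIP resolution step and the alert threshold are assumptions for illustration only.

```python
# Hypothetical per-country risk weights; a deployment would tune these to the organization.
GEO_RISK = {"CN": 8, "UA": 6, "RU": 6, "FR": 1, "US": 0}
ALERT_THRESHOLD = 5

def score_network_touches(connections, expected_countries):
    """Score outbound connections by destination geography, discounting expected regions.

    `connections` is a list of (ip, country) pairs, with country resolved by a GeoIP lookup.
    """
    alerts = []
    for ip, country in connections:
        if country in expected_countries:
            continue
        weight = GEO_RISK.get(country, 2)
        if weight >= ALERT_THRESHOLD:
            alerts.append({"ip": ip, "country": country, "geo_risk": weight})
    return alerts

print(score_network_touches([("203.0.113.9", "CN"), ("198.51.100.7", "US")],
                            expected_countries={"US"}))
```

Carrying the IP and country forward into the alert record is what lets SOC triage proceed quickly, as noted above.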

3. New Binaries

Excerpt: Once the remote code execution vulnerability is successfully exploited, it installs Carbanak on the victim's system.

Comment: Any new binary is suspicious, but not all of them should be alerted on. Image metadata should be examined for a plausible pattern, for example a new app, or a new version of an existing app from an existing vendor on a likely filepath for that vendor, and so on. Hackers will try to spoof whitelisted apps, so signing data can be compared, along with file size, filepath and similar attributes, to filter out the obvious cases.
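A minimal sketch of that filtering step, assuming a hypothetical per-vendor profile built from previously vetted images; the profile fields and thresholds are illustrative.

```python
# Hypothetical profile of what this vendor's binaries normally look like on the fleet.
VENDOR_PROFILE = {
    "signer": "Example Corp",
    "path_prefix": r"c:\program files\example",
    "size_range": (400_000, 600_000),
}

def vet_new_binary(image):
    """Return reasons a newly seen binary does not fit the established vendor pattern."""
    reasons = []
    if image.get("signer") != VENDOR_PROFILE["signer"]:
        reasons.append("unexpected or missing signer")
    if not image.get("path", "").lower().startswith(VENDOR_PROFILE["path_prefix"]):
        reasons.append("unexpected install path")
    low, high = VENDOR_PROFILE["size_range"]
    if not low <= image.get("size", 0) <= high:
        reasons.append("file size outside vendor norm")
    return reasons  # an empty list means nothing obviously suspicious

print(vet_new_binary({"signer": "Unknown",
                      "path": r"C:\Windows\System32\com\svchost.exe",
                      "size": 120_000}))
```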

4. Unusual Or Sensitive Filepaths

Excerpt: Carbanak copies itself into "%system32%\com" with the name "svchost.exe", with the file attributes system, hidden and read-only.

Comment: Any write into the System32 filepath is suspicious because it is a sensitive system directory, so it is subject to immediate anomaly analysis. A classic anomaly is svchost.exe, a critical system process image, appearing in the unusual location of the com subdirectory.
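A minimal sketch of that check, assuming a hypothetical table of canonical locations for well-known system images.

```python
# Canonical directories for well-known Windows system images (hypothetical, not exhaustive).
CANONICAL_DIRS = {
    "svchost.exe": {r"c:\windows\system32", r"c:\windows\syswow64"},
    "lsass.exe": {r"c:\windows\system32"},
}

def masquerade_check(image_name, directory):
    """Return True when a well-known system image runs from a non-canonical directory."""
    expected = CANONICAL_DIRS.get(image_name.lower())
    if expected is None:
        return False  # not a tracked system image
    return directory.lower().rstrip("\\") not in expected

# Carbanak drops svchost.exe into %system32%\com - a classic masquerade.
print(masquerade_check("svchost.exe", r"C:\Windows\System32\com"))  # True
```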

5. New Autostarts Or Services

Excerpt: To ensure that Carbanak has autorun privileges, the malware creates a new service.

Comment: A new autostart or service is common with malware and is always examined by the analytics. Anything of low prevalence is suspicious, and if checking the image hash against industry watchlists shows it is unknown to most of the antivirus engines, suspicion rises further.
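A minimal sketch of scoring a newly registered service image by fleet prevalence and watchlist coverage. The field names, thresholds and weights are assumptions.

```python
def assess_new_service(image_hash, fleet_hash_counts, av_detections, av_engines_total):
    """Score a new service image by fleet rarity and by how unknown it is to AV engines."""
    prevalence = fleet_hash_counts.get(image_hash, 0)
    suspicion = 0
    if prevalence <= 2:                       # seen on almost no other endpoints
        suspicion += 3
    if av_detections == 0:                    # unknown to every engine: possibly pre-tested code
        suspicion += 4
    elif av_detections / av_engines_total < 0.1:
        suspicion += 2                        # known to only a handful of engines
    return suspicion

print(assess_new_service("9f2b...", fleet_hash_counts={"9f2b...": 1},
                         av_detections=0, av_engines_total=57))  # 7
```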

6. Low Prevalence File In High Prevalence Folder

Excerpt: Carbanak creates a file with a random name and a .bin extension in %COMMON_APPDATA%\Mozilla where it stores commands to be executed.

Comment: This is a classic example of "one of these things is not like the other" that is simple for the security analytics to check in a continuous monitoring environment. And this IoC is completely generic; it has nothing to do with which filename or folder is created. Although the technical security report lists it as a specific IoC, it is trivially generalized beyond Carbanak to future attacks.
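A minimal sketch of that prevalence comparison: find files seen on very few endpoints inside folders seen on very many. The input layout and thresholds are hypothetical.

```python
from collections import Counter

def odd_ones_out(folder_listings, folder_min_hosts=100, file_max_hosts=3):
    """Find low-prevalence files inside high-prevalence folders.

    `folder_listings` maps (host, folder) to the set of filenames observed in that folder.
    """
    folder_hosts = Counter(folder for (_host, folder) in folder_listings)
    file_hosts = Counter()
    for (_host, folder), files in folder_listings.items():
        for name in files:
            file_hosts[(folder, name)] += 1

    return [
        {"folder": folder, "file": name, "hosts_with_file": count,
         "hosts_with_folder": folder_hosts[folder]}
        for (folder, name), count in file_hosts.items()
        if folder_hosts[folder] >= folder_min_hosts and count <= file_max_hosts
    ]
```

Run across the whole fleet, a randomly named .bin file inside %COMMON_APPDATA%\Mozilla surfaces exactly this way, whatever name the malware happens to choose.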

7. Suspect Signer

Excerpt: In order to render the malware less suspicious, the latest Carbanak samples are digitally signed.

Comment: Any suspect signer is treated as suspicious. In one case the signer supplied only an anonymous gmail address, which does not inspire confidence, and the risk score for that image is elevated. In other cases no email address is provided at all. Signers can easily be listed and a Pareto analysis performed to separate the more trusted from the less trusted signers. A less trusted signer appearing in a more sensitive directory is highly suspicious.
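A minimal sketch of the signer Pareto analysis and the sensitive-directory cross-check; the trust threshold, path list and sample fleet are illustrative.

```python
from collections import Counter

SENSITIVE_PREFIXES = (r"c:\windows\system32", r"c:\windows\syswow64")

def signer_pareto(images):
    """Rank signers by how many observed images they have signed."""
    return Counter(img.get("signer") or "<unsigned>" for img in images).most_common()

def rare_signer_in_sensitive_path(images, rare_threshold=2):
    """Flag images from rarely seen signers that live under sensitive system directories."""
    counts = dict(signer_pareto(images))
    return [
        img for img in images
        if counts[img.get("signer") or "<unsigned>"] <= rare_threshold
        and img["path"].lower().startswith(SENSITIVE_PREFIXES)
    ]

fleet = [
    {"signer": "Microsoft Windows", "path": r"C:\Windows\System32\svchost.exe"},
    {"signer": "Microsoft Windows", "path": r"C:\Windows\System32\lsass.exe"},
    {"signer": "Microsoft Windows", "path": r"C:\Windows\System32\services.exe"},
    {"signer": "someone@gmail.com", "path": r"C:\Windows\System32\com\svchost.exe"},
]
print(rare_signer_in_sensitive_path(fleet))  # only the gmail-signed image is flagged
```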

8. Remote Administration Tools

Excerpt: There appears to be a preference for the Ammyy Admin remote administration tool for remote control. It is believed that the attackers used this remote administration tool because it is frequently whitelisted in the victims' environments as a result of being used regularly by administrators.

Comment: Remote administration tools (RATs) always raise suspicion, even if they are whitelisted by the organization. Anomaly checking should determine whether each new remote admin tool instance is consistent temporally and spatially. RATs are subject to abuse; hackers will prefer to use an organization's own RATs precisely to evade detection, so they should not be given a pass simply because they are whitelisted.
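A minimal sketch of the temporal and spatial consistency check for remote admin tools; the tool list, hours window and baseline format are assumptions.

```python
from datetime import datetime

KNOWN_RATS = {"ammyy.exe", "teamviewer.exe"}   # illustrative, not exhaustive
ADMIN_HOURS = range(8, 18)                      # hypothetical legitimate admin window

def rat_anomalies(executions, baseline_hosts):
    """Flag remote-admin-tool runs outside usual hours or on hosts with no history of them.

    `baseline_hosts` maps a tool name to the set of hosts where it is normally seen.
    """
    findings = []
    for e in executions:
        tool = e["image"].lower()
        if tool not in KNOWN_RATS:
            continue
        off_hours = e["time"].hour not in ADMIN_HOURS
        new_host = e["host"] not in baseline_hosts.get(tool, set())
        if off_hours or new_host:
            findings.append({**e, "off_hours": off_hours, "new_host": new_host})
    return findings

runs = [{"host": "pos-112", "image": "ammyy.exe", "time": datetime(2015, 2, 1, 2, 30)}]
print(rat_anomalies(runs, baseline_hosts={"ammyy.exe": {"it-admin-01"}}))
```

Note that the tool being whitelisted plays no part in the logic: whitelisting answers "may it run", not "is this particular use normal".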

9. Remote Login Patterns

Excerpt: Logs for these tools suggest that they were accessed from two different IPs, most likely used by the hackers, and situated in Ukraine and France.

Comment: Remote logins are always suspect, because all hackers are presumed to be remote. They are also common in insider attacks, since the insider does not want the activity attributed to his own system. Remote addresses and time pattern anomalies should be checked; this should expose low prevalence use (relative to peer systems) as well as any suspect geographies.
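A minimal sketch of the peer-relative check on remote login sources; the input layout and rarity threshold are assumptions.

```python
from collections import defaultdict

def remote_login_outliers(logins, max_peer_fraction=0.05):
    """Flag remote logins from source addresses seen on almost no peer systems.

    `logins` is a list of records like {"host": ..., "src_ip": ..., "time": ...}.
    """
    hosts = {l["host"] for l in logins}
    hosts_per_source = defaultdict(set)
    for l in logins:
        hosts_per_source[l["src_ip"]].add(l["host"])
    rare_sources = {
        ip for ip, seen_on in hosts_per_source.items()
        if len(seen_on) / len(hosts) <= max_peer_fraction
    }
    return [l for l in logins if l["src_ip"] in rare_sources]
```

Combined with geolocation of the source address (as in the earlier geography sketch) and off-hours timing, two rarely seen source IPs in Ukraine and France would stand out quickly.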

10. Atypical IT Tools

Excerpt: We have also found traces of many different tools used by the attackers inside the victim's network to gain control of additional systems, such as Metasploit, PsExec or Mimikatz.

Comment: As sensitive applications, IT tools should always be checked for anomalies, since many hackers subvert them for malicious purposes. Metasploit might legitimately be used by a penetration tester or vulnerability researcher, but such instances should be rare. This is a prime example where a rare-observation report vetted by security staff would lead to corrective action. It also highlights how blanket whitelisting does not help in recognizing suspicious activity.
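A minimal sketch of that rare-observation report, assuming a hypothetical list of sensitive tools and a set of vetted users.

```python
SENSITIVE_TOOLS = {"psexec.exe", "mimikatz.exe", "msfconsole"}  # illustrative only

def sensitive_tool_report(process_events, vetted_users):
    """List sensitive-tool executions by anyone outside the vetted security/IT staff."""
    return [
        e for e in process_events
        if e["image"].lower() in SENSITIVE_TOOLS and e["user"] not in vetted_users
    ]

events = [
    {"host": "fin-ops-17", "image": "PsExec.exe", "user": "jdoe"},
    {"host": "sec-lab-02", "image": "msfconsole", "user": "pentest1"},
]
print(sensitive_tool_report(events, vetted_users={"pentest1"}))  # only the fin-ops run is listed
```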

Chuck Leaver – Carbanak Case Study Part Two: Continuous Endpoint Monitoring Is Very Effective

Presented By Charles Leaver And Written By Dr Al Hartmann

Part 2 in a 3 part series

Continuous Endpoint Monitoring Is Very Effective

Catching and blocking malicious software before it can compromise an endpoint is great. But this approach is largely ineffective against attacks that have been pre-tested to evade it. The real issue is that these evasive attacks are conducted by experienced human hackers, while traditional endpoint defense is an automated process relying mainly on standard antivirus technology. Human intelligence is more creative and adaptable than machine intelligence and will defeat automated defenses. This echoes the lesson of the Turing test: automated defenses are attempting to match wits with a skilled human adversary. At present, artificial intelligence and machine learning are not advanced enough to fully automate cyber defense, so the human hacker wins while the breached are left counting their losses. We do not live in a science fiction world where machines can out-think people, so do not assume that a security software suite will automatically take care of all your problems and prevent every attack and data loss.

The only real way to stop a determined human hacker is with an undaunted human cyber defender. To engage your IT security operations center (SOC) personnel in that role, they need complete visibility into network and endpoint operations. That visibility will not be achieved with traditional endpoint antivirus solutions, which are designed to remain silent except when capturing and quarantining malware. This traditional approach renders the endpoints opaque to security staff, and hackers exploit that opacity to conceal their attacks. The opacity extends backwards and forwards in time – your security staff have no idea what was running across your endpoint population in the past, what is running at this moment, or what to expect in the future. If thorough security staff discover clues that require a forensic look back to uncover hacker activity, your antivirus suite cannot help: it did not act at the time, so no events were recorded.

In contrast, continuous endpoint monitoring is always working – providing real time visibility into endpoint operations, providing forensic lookback to act on newly emerging evidence of attack and detect indicators earlier, and providing a baseline of normal operating patterns so it knows what to expect and can alert on anomalies in the future. Continuous endpoint monitoring delivers not just visibility but informed visibility, applying behavioral analytics to detect operations that appear abnormal. Anomalies are continuously evaluated and aggregated by the analytics and reported to SOC personnel through the organization's security information and event management (SIEM) system, flagging the most worrying suspicious anomalies for security staff attention and action. Continuous endpoint monitoring amplifies and scales human intelligence; it does not replace it. It is a bit like the old Sesame Street game "One of these things is not like the other."

A child can play this game. It is easy because most items (high prevalence) resemble each other, while one or a small number (low prevalence) are different and stand out. The distinctive actions taken by cyber criminals have been quite consistent across decades of hacking. The Carbanak technical reports that listed the indicators of compromise are good examples of this, as discussed in part 3 of this series. When continuous endpoint monitoring security analytics surface these patterns, it is simple to recognize something suspicious or unusual. Security staff can perform fast triage on these unusual patterns and rapidly reach a yes/no/maybe verdict that distinguishes unusual but known-good activity from malicious activity, or from activity that requires additional monitoring and deeper forensic investigation to confirm.

There is no way for a hacker to pre-test an attack against this defense, because continuous endpoint monitoring has a non-deterministic threat analytics component (which flags suspect activity) as well as a non-deterministic human component (which performs alert triage). Depending on current activity, the endpoint population mix and the experience of the security staff, developing attack activity may or may not be discovered. That is the nature of cyber warfare, and there are no guarantees. But if your cyber defenders are equipped with continuous endpoint monitoring analytics and visibility, they hold an unfair advantage.

Chuck Leaver – Part One Of The Carbanak Case Study For Indicators Of Compromise With Continuous Endpoint Monitoring

Presented By Chuck Leaver And Written By Dr Al Hartmann

Part 1 in a 3 part series

Carbanak APT Background Details

A billion dollar bank heist, targeting more than a hundred banks across the world by a group of unknown cyber criminals, has been in the news. The attacks on the banks began in early 2014 and have been expanding across the globe. Most of the victims suffered undetected infiltrations for a number of months, across numerous endpoints, before experiencing financial loss. Most of the victims had implemented security measures, including network and endpoint security software, but these provided little warning or defense against the attacks.

A number of security companies have produced technical reports about the attacks, codenamed either Carbanak or Anunak, and these reports list the indicators of compromise that were observed. The companies include:

Fox-IT of Holland
Group-IB from Russia
Kaspersky Lab of Russia

This post will act as a case study of these cyber attacks and investigate:

1. Why were standard endpoint and network security unable to detect and defend against the attacks?
2. Why would continuous endpoint monitoring (as provided by the Ziften solution) have given early warning of the endpoint attacks and triggered a response to prevent data loss?

Standard Endpoint Security And Network Security Is Ineffective

Built on a legacy security design that relies excessively on blocking and prevention, conventional endpoint and network security does not provide a balance of blocking, prevention, detection and response. It is not difficult for a cyber criminal to pre-test attacks against a small number of standard endpoint and network security products to make sure the attack will not be detected. A number of the hackers in fact researched the security products in place at the victim organizations and became skilled at breaking through unnoticed. The criminals knew that most of these security products react only on a positive detection and otherwise do nothing. This means that normal endpoint operation remains largely opaque to IT security personnel, so malicious activity (already pre-tested by the hackers to avoid detection) stays masked. After an initial breach, the attack can extend to reach users with higher privileges and more sensitive endpoints. This is easily achieved through credential theft, where no malware is needed and conventional IT tools (whitelisted by the victim organization) can be driven by scripts the criminals create. No detectable malware is present on the endpoints, so no alarms are raised. Traditional endpoint security software is too reliant on looking for malware.

Standard network security can be manipulated in a similar way. Hackers test their network activities first to avoid triggering widely distributed IDS/IPS rules, and they carefully observe normal operation (on endpoints already compromised) so they can hide their network activity within normal transaction periods and regular traffic patterns. A new command and control infrastructure is created that is not yet registered on network address blacklists, at either the IP or domain level. There is not much here to give the criminals away. However, more astute network behavioral assessment, especially when linked to endpoint context (discussed later in this series), can be much more effective.

It is not time to abandon hope. Would continuous endpoint monitoring (as offered by Ziften) have provided an early warning of the endpoint compromise, to start the process of stopping the attacks and preventing data loss? Find out more in part 2.