Archive for April, 2017

Chuck Leaver – Why Edit Distance Is Important Part Two

Written By Jesse Sampson And Presented By Chuck Leaver, CEO of Ziften

 

In the first post on edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of character edits needed to make two text strings match). Now let’s look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain features to pinpoint suspicious activity.
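As a quick refresher, edit distance is simple to compute directly in SQL. Here is a one-line illustration, assuming the Vertica editDistance function mentioned later in this post (the function name varies by database):

    -- One substitution turns 'zlften.com' into 'ziften.com', so the distance is 1.
    SELECT editDistance('zlften.com', 'ziften.com');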

Background

What are bad actors doing with malicious domains? Sometimes it is as simple as using a close misspelling of a common domain to fool careless users into viewing ads or picking up adware. Legitimate sites are increasingly catching on to this technique, sometimes called typo-squatting.

Other malicious domains are the product of domain generation algorithms, which can be used for all kinds of nefarious purposes, such as evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; here is how. First, we filter out common domain names, since these are typically safe. Moreover, a list of common domains provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domain names and ignore sub-domains (e.g., ziften.com, not www.ziften.com).
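Here is a minimal sketch of this cleaning step. All names are hypothetical: observed_hosts(hostname) stands in for raw hostnames observed in the wild, and top_domains(domain, rank) for a Quantcast-style popularity list. The pattern matching assumes Vertica's REGEXP_SUBSTR:

    -- Reduce each hostname to its last two labels so www.ziften.com and
    -- ziften.com collapse to the same candidate, keep the TLD for later
    -- joins, and drop anything already on the common-domains list.
    -- (This two-label heuristic misfires on multi-part TLDs like co.uk.)
    CREATE TABLE candidates AS
    SELECT DISTINCT
           REGEXP_SUBSTR(hostname, '[^.]+\.[^.]+$') AS domain,
           REGEXP_SUBSTR(hostname, '[^.]+$')        AS tld
    FROM   observed_hosts
    WHERE  REGEXP_SUBSTR(hostname, '[^.]+\.[^.]+$')
           NOT IN (SELECT domain FROM top_domains);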

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, etc., but now it can be almost anything). The basic task is to find the closest neighbor in terms of edit distance. Domains that are one step removed from their nearest neighbor are prime typo-squatting suspects. Domains far from their nearest neighbor (the normalized edit distance introduced in the first post is useful here) are anomalies in edit distance space.
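Assuming the neighbor search has been materialized as a nearest_neighbor table, one row per candidate (a reconstruction of that query is sketched at the end of this post), the two hunting metrics reduce to two simple filters. The 0.5 cutoff below is purely illustrative:

    -- Typo suspects: exactly one edit away from a popular domain.
    SELECT domain, neighbor
    FROM   nearest_neighbor
    WHERE  dist = 1;

    -- Anomalies: unusually far from every popular domain in their TLD.
    SELECT domain, neighbor, norm_dist
    FROM   nearest_neighbor
    WHERE  norm_dist > 0.5
    ORDER  BY norm_dist DESC;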

What Were the Results?

Let’s look at how these results appear in practice. Use caution when visiting these domains, as they may contain malicious content!

Here are a few potential typos. Typo-squatters target well-known domains because there is a greater chance somebody will visit them. Many of these are flagged as suspect by our threat feed partners, but there are some false positives too, with cute names like “wikipedal”.

[Table: candidate typo-squatted domains alongside their closest popular neighbors]

Here are some unusual-looking domain names that are far from their nearest neighbors.

[Table: anomalous domain names far from their nearest neighbors]

So now we have developed two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of the closest neighbor, distance from the nearest neighbor, and a flag for edit distance 1 from the neighbor, indicating a risk of typo tricks. Features that pair well with these include other lexical features like word and n-gram distributions, entropy, and string length – and network features like the number of failed DNS requests.
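All three features fall straight out of the hypothetical nearest_neighbor table used above; a sketch of a feature view for a downstream model might look like this:

    -- One row per candidate domain, ready to join into a model's feature set.
    CREATE VIEW domain_features AS
    SELECT domain,
           neighbor_rank,                       -- rank of the closest neighbor
           norm_dist AS neighbor_distance,      -- normalized distance to it
           CASE WHEN dist = 1 THEN 1 ELSE 0 END AS probable_typo
    FROM   nearest_neighbor;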

Simplified Code to Play With

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should run, with minor tweaks, on most modern databases. Note that the Vertica editDistance function may be named differently in other implementations (e.g., levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

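The original code screenshot is not reproduced here, so what follows is a minimal reconstruction of the approach described above rather than Ziften's actual query. It assumes the hypothetical candidates and top_domains tables from the cleaning sketch earlier, and normalizes by the length of the longer string (one plausible reading of the normalization from part one):

    -- For each candidate, rank all common domains in the same TLD by edit
    -- distance and keep only the closest, with a length-normalized distance
    -- for anomaly hunting.
    CREATE TABLE nearest_neighbor AS
    SELECT domain, neighbor, neighbor_rank, dist, norm_dist
    FROM (
        SELECT c.domain,
               t.domain AS neighbor,
               t.rank   AS neighbor_rank,
               editDistance(c.domain, t.domain) AS dist,
               editDistance(c.domain, t.domain)::FLOAT
                   / GREATEST(LENGTH(c.domain), LENGTH(t.domain)) AS norm_dist,
               ROW_NUMBER() OVER (PARTITION BY c.domain
                                  ORDER BY editDistance(c.domain, t.domain)) AS rn
        FROM   candidates c
        JOIN   top_domains t
          ON   REGEXP_SUBSTR(t.domain, '[^.]+$') = c.tld  -- same TLD only
         AND   t.domain <> c.domain
    ) ranked
    WHERE  rn = 1;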

 

Chuck Leaver – If You Can’t Manage It You Can’t Secure It, And Vice Versa

Written by Chuck Leaver, Ziften CEO

 

If your enterprise computing environment is not effectively managed, there is no way it can be truly secure. And you cannot effectively manage those complex enterprise systems unless you are confident they are secure.

Some may call this a chicken-and-egg situation, where you do not know where to start. Should you start with security? Or should you start with systems management? That’s the wrong way to think about it. Think of it instead like a Reese’s Peanut Butter Cup: it’s not chocolate first, and it’s not peanut butter first. Rather, both are blended together – and enjoyed as a single delicious treat.

Many companies – most companies, I would argue – are structured with an IT management team reporting to a CIO, and a security management team reporting to a CISO. The CIO’s team and the CISO’s team don’t know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have different priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, a priority, or an alert for one team flies completely under the other team’s radar.

That’s not good, because both the IT and security teams are forced to make assumptions. The IT team assumes that everything is secure unless someone tells them otherwise. For instance, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Likewise, the security team assumes that servers, desktops, and mobile devices are working properly, that operating systems and applications are fully up to date, that patches have been applied, and so on.

Because the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and priorities, and aren’t using the same tools, those assumptions may not be valid.

And again, you cannot have a secure environment unless that environment is effectively managed – and you cannot manage that environment unless it’s secure. To put it another way: an environment that is not secure makes everything the IT organization does suspect and unreliable, and means you can’t know whether the information you are seeing is accurate or manipulated. It might all be fake news.

Bridging the IT/Security Gap

How do you bridge that gap? It sounds simple, but it can be hard: make sure there is an umbrella covering both the IT and security teams. Somewhere, both IT and security report to the same person or organization. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let’s say it’s the CFO.

If the company does not have a secure environment and there’s a breach, the value of the brand and the business can drop to zero. Similarly, if users, devices, infrastructure, applications, and data aren’t well managed, the business cannot work effectively, and the value drops. As we’ve discussed, if it’s not well managed, it can’t be secured, and if it’s not secure, it can’t be well managed.

The fiduciary responsibility of senior executives (like the CFO) is to protect the value of business assets, and that means ensuring IT and security talk to each other, understand each other’s goals, and, where possible, see the same reports and data – filtered and presented so they are meaningful to each team’s specific areas of responsibility.

That’s the thinking we adopted in creating our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We have to ensure that our company’s IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. Otherwise, we cannot operate at peak efficiency, or with full fiduciary responsibility.

 

Chuck Leaver – You Need Continuous Endpoint Visibility Even When Devices Are Offline

Written By Roark Pollock And Presented By Chuck Leaver, Ziften CEO

 

A recent Gallup survey found that 43% of employed Americans worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the United States for almost a decade, continues to see more employees working outside traditional offices, and more of them doing so for more days of the week. And, of course, the number of connected devices the average employee uses has grown as well, which helps drive the convenience of, and preference for, working away from the office.

This freedom surely makes for happier, and one hopes more productive, employees, but the challenges these trends pose for both systems and security operations teams should not be overlooked. IT systems management, IT asset discovery, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work regardless of where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential problems and threats.

The mainstreaming of these trends makes it much harder for IT and security teams to restrict what used to be considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today security and systems management teams must be able to adequately monitor user, device, application, and network activity, spot anomalies and inappropriate actions, and enforce the appropriate action or fix, regardless of whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that many workers now routinely access cloud-based assets and applications, and keep backup network-attached storage (NAS) or USB-connected drives at home, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity, which no longer necessarily terminates inside the corporate network. Offline activity is the starkest example of the need for continuous endpoint monitoring: network controls and network monitoring are obviously of negligible use when a device is operating offline. Installing a suitable endpoint agent is critical to ensure the capture of all important system and security data.

As an example of the kind of offline activity that can be detected, a customer was recently able to monitor, flag, and report unusual behavior on a corporate laptop: a high-level executive transferred large amounts of endpoint data to an unapproved USB stick while the device was offline. Because the endpoint agent gathered this behavioral data during the offline period, the customer was able to see this unusual action and follow up appropriately. Continuing to monitor the device, applications, and user behavior even while the endpoint was disconnected gave the customer visibility they never had before.

Does your business have continuous monitoring and visibility when employee endpoints are on an island? If so, how do you achieve it?

Chuck Leaver – Machine Learning Will Bring Unintended Consequences

Written By Roark Pollock And Presented By Ziften CEO Chuck Leaver

 

If you are a student of history, you will have seen many examples of serious unintended consequences when new technology is introduced. It often surprises people that new technologies may be put to nefarious purposes as well as the positive purposes for which they were brought to market, but it happens all the time.

Consider, for example, train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common simply because the widespread legitimate use of SSL has made the technique more effective.

Because new technology is routinely appropriated by bad actors, we have no reason to believe this will not be true of the new generation of machine learning tools that have reached the market.

How will these tools be misused? There are several ways attackers might use machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products, in a quest to modify their code so that it is less likely to be flagged as malicious. The effectiveness of defensive security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in degrading the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with bogus traffic with the intent of “poisoning” the machine learning model being built from that traffic. The attacker’s goal would be to fool the defender’s machine learning tool into misclassifying traffic, or to generate such a high rate of false positives that the defenders dial back the fidelity of the alerts.

Machine learning will likely also be used as an attack tool. For instance, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort it takes to tailor a social engineering attack is particularly troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a powerful economic incentive for attackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to increase sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard part of defense-in-depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques against machine learning based detection products while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further exacerbating the need for automated incident response capabilities.