Businesses have to defend their environments from attacks, and security professionals are asked to accomplish impossible feats in the modern era of cyber defense: they have to protect users and critical information from unauthorized access. This is asymmetric warfare where the bad guys have to get it right only once, but the good guys have to get it right every time. Criminals can monetize stolen data fairly easily, and the criminal success rate has steadily improved over the past decade. As we’ve seen with political and cyber military attacks, money is not the only incentive.
To combat these problems, companies have armed themselves with a plethora of new security tools. As a result, those responsible for an organization’s security posture can be inundated with thousands of alerts — prioritizing and acting on these is a daunting task. A skilled security professional can do a great job when focusing on a specific investigation, but when that process requires stitching together the relevant pieces of information, humans need help extracting insights from an ocean of alerts and raw data coalesced across multiple security systems.
No company is immune to cyber criminal activity. In 2013, Target was hacked despite receiving as many as 10,000 security alerts per day. While Target is a Fortune 100 retailer, even medium-sized companies have to sift through hundreds of thousands of alerts each year. Most alerts are investigated, categorized as false positives, and ultimately ignored; worse, most are idiosyncratic to a single product or application and carry little context about the overall business impact. To prevent financial and reputational loss, security teams are driven to find the most critical needles in an ever-growing haystack of security information.
Techno-elite companies like Facebook, Amazon, Netflix, Google, Apple and Microsoft (a.k.a. FANGAM) have successfully applied machine learning across their businesses, including the security systems that protect their users, applications and infrastructure. There are many definitions of machine learning (ML), but a common one describes it as a type of artificial intelligence (AI) that gives computers the ability to learn without being explicitly programmed.
Machine learning focuses on the development of computer programs that can teach themselves and develop innovative solutions when exposed to large quantities of data. Today, we are seeing machine learning based software tools exceed human intelligence in specific tasks within narrow disciplines.
Machine learning projects often start out supervised: engineers teach the algorithm with labeled training sets. An example of supervised ML could be a training set of 100 critical security alerts where 20 are labeled as malicious activity and 80 are labeled as non-malicious. Based on that training set, the algorithm would attempt to flag malicious activity in new alerts that have not yet been investigated. One of the most intriguing classes of machine learning, however, is unsupervised learning (often implemented with "deep learning" neural networks), where the software receives no labels and must take a self-learning approach to develop its own answers.
One example of unsupervised learning would be to provide 10,000 critical alerts, a fixed number of possible groupings, and no labels. The algorithm would determine its own grouping of the data. The more data it processes, the more accurate it could become at separating malicious from non-malicious activity.
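The contrast between the two approaches can be sketched with a toy example. Everything below is invented for illustration (the alert features, counts and labels are not from any real product): a supervised 1-nearest-neighbour rule classifies a new alert from labeled examples, while an unsupervised 2-means grouping separates the same kind of data with no labels at all.

```python
# Toy contrast of supervised vs. unsupervised learning on alert data.
# Each alert is a feature pair: (events per hour, MB transferred). Invented values.

labeled = [  # supervised: every training alert carries a human-assigned label
    ((5, 1), "benign"), ((3, 2), "benign"), ((4, 1), "benign"),
    ((90, 400), "malicious"), ((70, 350), "malicious"),
]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(alert):
    """1-nearest-neighbour: give a new alert the label of its closest example."""
    return min(labeled, key=lambda ex: dist(ex[0], alert))[1]

print(classify((80, 380)))  # nearest labeled examples are malicious -> malicious

def kmeans2(points, iters=10):
    """Unsupervised 2-means: split points into two groups with no labels."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            groups[0 if dist(p, centers[0]) <= dist(p, centers[1]) else 1].append(p)
        centers = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return groups

unlabeled = [(5, 1), (3, 2), (4, 1), (90, 400), (70, 350), (80, 380)]
low, high = kmeans2(unlabeled)
print(len(low), len(high))  # the algorithm finds the two behaviours on its own
```

The supervised rule can only be as good as its labels; the unsupervised grouping needs no labels but leaves it to an analyst to decide which cluster is the malicious one.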
Threat Hunting Starts with Your Data
A business getting hit by a cyber attack is a bit like a person getting sick. Everyone gets sick eventually, and when it happens, you want a quick and accurate diagnosis. You want access to the best medical care possible so that the sickness does not linger and lead to more serious problems. For a speedy recovery, you want a doctor who is thorough and knowledgeable about the latest treatments, no matter how experimental. With some life-threatening diseases, an experimental treatment may work better than the typical standard of care.
Machine learning for security is more like an experimental treatment because these algorithms aren't yet deployed as standard practice in the industry. However, security teams need to care for their information systems much as we care for our health, to limit and, in the best case, prevent the damage a criminal can do. Once a breach succeeds, discovery and incident response must follow, along with the time-consuming and expensive work of cleanup and forensic analysis to understand exactly what was compromised.
Threat discovery should always start with the data and with discerning which pieces of information will lead you down the path to cyber criminal activity. Network and endpoint security tools like firewalls and antivirus programs generate scads of alerts and logs describing when access to a protected system was blocked, allowed or flagged as a potential threat. Each alert describes activity that deviates from normal or expected behavior. If an alert is directly tied to a critical breach or an exfiltration of sensitive data, the security team springs into action like 911 emergency responders.
Very rarely will one alert illustrate a complete story around a major security attack. Generally, you need to assess dozens of alerts from several different systems across weeks or even months to triangulate a sophisticated attack. To add to the complexity of the process, security professionals need to review data from multiple systems that are stored in separate repositories.
Security professionals have to conduct what someone once described as “swivel chair analytics” and jump from console to dashboard to report to the command line before being able to determine that a cyber crime was committed. Reducing the need for “swivel chair analytics” is just one potential benefit of machine learning.
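As a hypothetical sketch of what that stitching can look like in code (the hosts, timestamps and messages below are all made up), even a simple merge of alerts from separate repositories into one per-host timeline replaces several consoles with a single ordered story:

```python
from collections import defaultdict

# Invented alerts from three separate repositories: (host, timestamp, message).
firewall = [("10.0.0.5", "2024-03-01T02:10", "blocked outbound connection")]
endpoint = [("10.0.0.5", "2024-03-01T02:05", "unsigned binary executed")]
proxy = [("10.0.0.7", "2024-03-01T09:00", "allowed CDN fetch")]

def timelines(*sources):
    """Merge alert sources into one time-ordered event list per host."""
    by_host = defaultdict(list)
    for source in sources:
        for host, ts, msg in source:
            by_host[host].append((ts, msg))
    # ISO-8601 timestamps sort correctly as plain strings.
    return {host: sorted(events) for host, events in by_host.items()}

story = timelines(firewall, endpoint, proxy)
# For host 10.0.0.5, the endpoint alert now appears five minutes before
# the firewall block, in one place, instead of across two consoles.
```

Real correlation engines do far more (entity resolution, time windows, scoring), but the core idea is the same: pivot the data from per-tool silos to per-entity stories.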
Keep Your Friends Close and Your Enemies Closer
While Sony Pictures had several defensive measures around its crown jewels of unreleased films and scripts, many other vectors were vulnerable to attack. The cyber criminals working with and for the North Korean government went undetected before wiping out Sony Pictures' IT infrastructure and releasing sensitive internal company emails that ended up ostracizing the top executives from their own industry.
This wasn’t a smash and grab, and these sophisticated criminals weren’t after money or the most valuable assets of the business — the screenplays and unreleased movies. North Korea’s primary objective was to embarrass and intimidate an enterprise. Mission accomplished. As a result, many information technology and business leaders reassessed their strategic security plans. This class of cyber crime warrants a new approach in detection and response that we’re starting to see with machine learning.
Information security can often be broken down into three broad categories: defense, detection and response. Companies can invest and deploy all the leading infosec tools available to create many layers of defense, but the kicker is that no matter how much money is invested in blocking attacks, the probability of never getting compromised is slim. This modern reality has forced chief information security officers (CISOs) to shift their investment balance toward improving their detection and incident response capabilities.
Today, companies need to defend themselves against advanced persistent threats (APTs) like what we saw with the Sony Pictures attack, which are often associated with a nation state actor that’s well funded by a military or government entity. An APT organization will often gain unauthorized access to their target through unexpected ways and remain undetected for long periods of time. It’s like a lurking alligator waiting to steal data rather than cause immediate damage to the network or organization.
APTs are carefully planned and rehearsed in advance to avoid detection. They're able to stay in a system for months, if not years, waiting in the shadows until they become a normal part of the environment. They slowly increase activity, and then one day the intent of the enemy is revealed. Yet even the stealthiest movements leave traces in the data behind them. Machine learning has the ability to shine a light on criminal footprints hidden from human sight.
Having The Right Staff Isn’t Enough
Hiring enough security professionals has become an industry-wide challenge for businesses of all sizes. In 2016, several reports put the number of unfilled security jobs at about 209,000 in the U.S. and about one million globally. There is a real talent gap within security that continues to widen. For the lucky few companies with ample staffing in their security ranks, finding cyber threats with previous-generation tools is still like finding needles in an enormous haystack.
To compound the challenge, cyber threats that bypass the traditional layers of defense are not black or white signals, but rather low-grade grey signals that are difficult to make sense of. What machine learning can do is find the disparate needles in the haystack and thread them together. There can be dozens of different needles along a single attack thread created over a five-month period that tells you a bigger threat has taken hold within your environment. Machine learning is well suited to flag anomalous behaviors that span across users, partners, networks and infrastructure systems. This level of insight is worthy of a security professional’s time, knowledge, and skills.
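One simple family of techniques behind this kind of anomaly flagging is robust outlier scoring. As an illustrative sketch (the account names and counts are invented, and real products use far richer behavioral features), a modified z-score based on the median absolute deviation (MAD) can surface the one account that departs from the baseline, without that single extreme value skewing the baseline itself:

```python
from statistics import median

# Invented daily login counts aggregated across several log sources.
logins_per_day = {"alice": 9, "bob": 11, "carol": 10, "dave": 8, "svc-backup": 95}

def anomalies(counts, threshold=3.5):
    """Flag entities whose modified z-score exceeds the threshold.

    The modified z-score uses the median and the median absolute deviation,
    so one extreme outlier cannot inflate the baseline it is judged against.
    """
    med = median(counts.values())
    mad = median(abs(v - med) for v in counts.values())
    return [u for u, v in counts.items() if 0.6745 * abs(v - med) / mad > threshold]

print(anomalies(logins_per_day))  # -> ['svc-backup']
```

A plain mean-and-standard-deviation test would struggle here: the outlier itself stretches the standard deviation enough to hide its own deviation, which is why robust statistics are the usual starting point for this grey-signal problem.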
These machine learning techniques have already reduced the cost of security operations and threat-hunting investigations by between $500,000 and $1 million per year for mid-size Global 2000 enterprises. Once machine learning algorithms find the important needles in the haystack, the next evolution will be AI assistants that take corrective action within a narrow set of tasks, helping to bridge the talent gap in security.
Raise Your Security IQ — Fight Smarter, Not Harder
Security teams are always on alert because a cyber criminal can take advantage of one minor mistake to gain an edge. Machine learning can be a powerful countermeasure provided there’s plenty of useful data to feed the algorithms. Machines never get tired, and these types of algorithms become more accurate as they process more data to refine their capabilities.
Self-learning machines that become smarter than humans in specific tasks represent the promise of reversing the decade-long negative trends in cyber defense. Business leaders and security teams need to start leveraging machine learning to stay one step ahead of adversaries that are constantly innovating on how to commit crimes.
This article first appeared on geek.ly
The post How to fight crime with Machine Learning appeared first on Doug Dooley.