
What Security Threats of the Past Can Tell Us About the Future of Cybersecurity

Remember the Love Bug virus? Security veterans will remember that, nearly 20 years ago, the computer worm also known as ILOVEYOU spread like wildfire across email systems and networks because we believed it was a legitimate email from someone we knew. Security threats were in their infancy at that point, and most users weren’t yet tuned in to know the difference between a real email and an attack from threat actors.

The Love Bug worked for two reasons: First, it relied on email to spread, and you can’t shut down email. Second, it used social engineering tactics to generate user buy-in. Once the virus was launched on your computer, it sent the same email message to everyone in your address book. The virus also overloaded computer networks and manipulated files. It was a cybersecurity wake-up call; one of the first examples of how easy it was to use the burgeoning internet for malicious purposes.

We’ve come a long way since the Love Bug when it comes to improving overall security efforts and addressing cyberthreats. Attackers have also come a long way over the past two decades as their tactics become more sophisticated and harder to detect. However, as Josh Zelonis, senior analyst with Forrester, told the audience at Check Point’s recent CPX 360 conference, many companies are still challenged by Love Bug-like attacks.

Here we are, moving quickly toward fifth-generation cyberattacks — where we will see a rise in security threats involving the internet of things (IoT) and more cryptojacking — yet we continue to fall for basic social engineering attacks. Perhaps we can begin to develop defensive strategies for the future of cybersecurity by better understanding the threat landscape of the past.

Consistency Across Generations of Attacks

The way we share information has been a primary driver of how we approach cybersecurity. When data was exchanged via floppy disks in a controlled environment, organizations didn’t need to prioritize information security. When we began to share data online, a simple firewall was enough to keep bad guys out. The Love Bug helped give rise to signature-based virus detection, but with how fast malware moves these days, antivirus software is often no longer effective because it can’t find attacks before they cause damage to your system(s).
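
To make the idea concrete, signature-based detection of the kind the Love Bug era popularized amounts to scanning content for known byte patterns. The sketch below is a minimal, hypothetical illustration (the signature names and the ILOVEYOU source fragment are assumptions, not real antivirus definitions); it also shows why the approach lags fast-moving malware: anything not already in the signature database passes clean.

```python
# Minimal sketch of signature-based detection: scan data for known
# malicious byte patterns. Signatures here are hypothetical examples.
SIGNATURES = {
    "love-bug-like": b"rem  barok -loveletter",  # assumed fragment of ILOVEYOU's VBS source
    "eicar-test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
}

def scan(data: bytes) -> list:
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# The standard EICAR test string triggers a match; novel malware would not.
sample = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
print(scan(sample))          # ['eicar-test']
print(scan(b"clean data"))   # []
```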

As attacks grow more sophisticated, we’re also seeing the same tried-and-true attack methods. Social engineering remains a preferred method for spreading malware; phishing attacks were up 250 percent between January and December 2018, according to Microsoft.

According to Frank Downs, director of cybersecurity practices at ISACA, we’re seeing consistency in both the types of attacks used and attackers’ end goals.

“These trends identify that while certain cybersecurity considerations change, proven attackers, victims and attack processes will never go out of style,” wrote Downs in an ISACA Now blog post.

Anticipating Future Security Threats Before They Happen

Not only do cybercriminals like to use time-tested methods of attacks, but if we pay attention, we can see signs of next-generation security threats months or even years before they happen. For example, Zelonis pointed out that the Morris worm, which was released in 1988 and is considered one of the first major cyberattacks, had third-generation components 12 years before the industry reached Gen III maturity. Similarly, the Shamoon virus had Gen V abilities five years early.

Then there was Stuxnet, which was designed to disrupt, deny and destroy. The malicious worm was discovered in 2010 in overseas nuclear systems, but was believed to have been in development for years before that. Fast forward to IoT devices and the consumer market: What we know about Stuxnet — including how it works and what its intents are — should be a primer for the types of attacks to expect on the IoT.

We already have much of the technology we need to address the next generation of attacks in place. Now, we need to focus less on adding new technologies to meet new security challenges and instead develop new strategies to stop threats. We’ve out-innovated our capabilities, and now is the time to rethink the way we approach our defenses.

If we look close enough, many new security threats are similar to things we’ve seen in other forms or attack styles we’ve previously defended against. Fully understanding the threats of the past can help us better anticipate what is to come for the future of cybersecurity — a tactic that may finally put our defenses ahead of threat actors.

The post What Security Threats of the Past Can Tell Us About the Future of Cybersecurity appeared first on Security Intelligence.

Author: Sue Poremba


An Apple a Day Won’t Improve Your Security Hygiene, But a Cyber Doctor Might

You might’ve begun to notice a natural convergence of cybersecurity and privacy. It makes sense that these two issues go hand-in-hand, especially since 2018 was littered with breaches that resulted in massive amounts of personally identifiable information (PII) making its way into the wild. These incidents alone demonstrate why an ongoing assessment of security hygiene is so important.

You may also see another convergence: techno-fusion. To put it simply, you can expect to see technology further integrating itself into our lives, whether it is how we conduct business, deliver health care or augment our reality.

Forget Big Data, Welcome to Huge Data

Underlying these convergences is the amount of data we produce, which poses an assessment challenge. According to IBM estimates, we produce 2.5 quintillion bytes of data every day. If you’re having problems conceptualizing that number — and you’re not alone — try rewriting it like this: 2.5 million terabytes of data every day.

Did that help? Perhaps not, especially since we are already in the Zettabyte era and the difficulty of conceptualizing how much data we produce is, in part, why we face such a huge data management problem. People are just not used to dealing with these numbers.
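
The conversions above are easy to sanity-check with a few lines of arithmetic (decimal SI units assumed throughout):

```python
# Quick check of the scale conversions above, using decimal SI units.
bytes_per_day = 2.5 * 10**18   # 2.5 quintillion bytes per day (IBM estimate)

TB = 10**12   # terabyte
ZB = 10**21   # zettabyte

print(bytes_per_day / TB)         # 2.5 million terabytes per day
print(365 * bytes_per_day / ZB)   # ~0.91 zettabytes per year
```

At that rate the world produces nearly a zettabyte per year from this estimate alone, which is why the zettabyte era is already here.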

With the deployment of 5G on the way — which will spark an explosion of internet of things (IoT) devices everywhere — today’s Big Data era may end up as a molehill in terms of data production and consumption. This is why how you manage your data going forward could be the difference between surviving and succumbing to a breach.

Furthermore, just as important as how you will manage your data is who will manage and help you manage it.

Expect More Auditors

It’s not uncommon for larger organizations to use internal auditors to see what impact IT has on their business performance and financial reporting. With more organizations adopting some sort of cybersecurity framework (e.g., the Payment Card Industry Data Security Standard or NIST’s Framework for Improving Critical Infrastructure Cybersecurity), you can expect to hear more compliance and audit talk in the near future.

There is utility in having these internal controls. It’s a good way to maintain and monitor your organization’s security hygiene. It’s also one way to get internal departments to talk to each other. Just as IT professionals are not necessarily auditors, auditors are not necessarily IT professionals. But when they’re talking, they can learn from each other, which is always a good thing.

Yet internal-only assessments and controls come with their own set of challenges. To begin, the nature of the work is generally reactive. You can’t audit something you haven’t done yet. Sure, your audit could find that you need to do something, but the process itself may be very laborious, and by the time you figure out what you need to do, you may very well have an avalanche of new problems.

There are also territorial battles. Who is responsible for what? Who reports to whom? And my personal favorite: Who has authority? It’s a mess when you have all the responsibility and none of the authority.

Another, perhaps bigger problem is that internal controls may have blind spots. That’s why there is value in having a regular, external vulnerability assessment.

When It Comes to Your Security Hygiene, Don’t Self-Diagnose

Those in the legal and medical fields have undoubtedly been cautioned not to act as their own counsel or doctor. Perhaps we should consider similar advice for security professionals too. It’s not bad advice, considering a recent Ponemon Institute report found that organizations are “suffering from investments in disjointed, non-integrated security products that increase cost and complexity.”

Think about it like this: You, personally, have ultimate responsibility to take care of your own health. Your cybersecurity concerns are no different. Even at the personal level, if you take care of the basics, you’re doing yourself a huge favor. So do what you can to keep yourself in the best possible health.

Part of healthy maintenance normally includes a checkup with a doctor, even when you feel everything is perfectly fine. Assuming you’re happy with your doctor and have a trusting relationship, after an assessment and perhaps some tests, your doctor will explain to you, in a way that you are certain to understand, what is going on. If something needs a closer look or something requires immediate attention, you can take care of it. That’s the advantage of going to the doctor, even when you think you’re all right. They have the assessment tools and expertise you generally do not.

‘I Don’t Need a Doctor, I Feel Fine’

Undoubtedly, this is a phrase you have heard before, or have even invoked on your own. But cybersecurity concerns continue to grow, and internal resources remain overwhelmed by high alert volumes, financial constraints and understaffing. Therefore, outside assistance may be not only necessary but welcome; that feeling of security fatigue has been around for some time now.

There is an added wildcard factor too: I’m confident many of us in the field have heard IT professionals say, “We’ve got this” with a straight face. My general rule of thumb is this: If attackers can get into the U.S. Department of Defense, they can get to you, so the “I feel fine” comment could very well include a dose of denial.

When considering external assistance — really just a vulnerability assessment — it’s worth thinking through the nuance of this question: Is your IT department there to provide IT services, or is it there to secure IT systems? I suggest the answer is not transparently obvious, and much of it will depend on your business mission.

Your IT team may be great at innovating and deploying services, but that does not necessarily mean its strengths also include cybersecurity audits/assessments, penetration testing, remediation or even operating intelligence-led analytics platforms. Likewise, your security team may be great at securing your networks, but that does not necessarily mean it understands your business limitations and continuity needs. And surely, the last thing you want to do is get trapped in some large capital investment that just turns into shelfware.

Strengthen Your Defenses by Seeing a Cyber Doctor

Decision-makers — particularly at the C-suite and board level, in tandem with the chief information security officer (CISO) and general counsels — should consider the benefits of a regular external assessment by trusted professionals that not only understand the cybersecurity landscape in real time, but also the business needs of the organization.

It’s simple: Get a checkup from a cyber doctor who will explain what’s up in simple language, fix it with help if necessary and then do what you can on your own. Or, get additional external help if needed. That’s it. That semiannual or even quarterly assessment could very well be that little bit of outside help that inoculates you from the nastiest of cyber bugs.

The post An Apple a Day Won’t Improve Your Security Hygiene, But a Cyber Doctor Might appeared first on Security Intelligence.

Author: George Platsis


Comprehensive Vulnerability Management in Connected Security Solutions

Security vulnerabilities are everywhere — in the software we use, in mobile apps, in hardware and in internet of things (IoT) devices. Almost anything can be hacked, and we can see that in the staggering numbers of vulnerabilities disclosed every year. In fact, there were 10,644 vulnerabilities disclosed in the first half of 2018 alone, according to Risk Based Security. This year will likely top that number, and there is no doubt in my mind the same will be written 12 months from now.

In a threat landscape so replete with opportunities for attackers to make a move, vulnerability management is a central activity that can help organizations reduce their attack surface and mitigate risk. Vulnerability management solutions have been available for many years now, yet the process remains a challenge for many organizations today.

Effective and efficient vulnerability management requires the involvement of various stakeholders throughout the organization. They typically come from multiple teams, such as security, asset owners and IT operations, to name a few. It is not enough to scan for vulnerabilities and then send a report over the wall with a large number of issues that have to be addressed; this is a surefire way to waste precious resources and frustrate teams in the process. Worst of all, it can potentially leave some of the riskiest vulnerabilities unaddressed.

According to forecasts released by Gartner¹ in 2018, around 30 percent of organizations will adopt a risk-based approach to vulnerability management by 2022, which could help them suffer 80 percent fewer breaches. Sounds like a promising forecast, but how can organizations adopt an effective risk-based approach that could yield such improvement in their security posture?

Let’s explore how connected security solutions can help security teams contextualize and prioritize vulnerabilities.

Risk-Based Vulnerability Management Starts With Prioritization

Vulnerability prioritization is a widely discussed topic in the information security domain. Approaches range from the Common Vulnerability Scoring System (CVSS) to methods based on asset value and exploit weaponization; an asset’s value, criticality and sensitivity are all fundamental inputs to vulnerability remediation prioritization.

Forgoing a vulnerability patch on a critical server, a production environment or the database that holds company secrets can result in high-impact damage to the business. On the other hand, an approach based on potential exploit weaponization stipulates that vulnerabilities are only as dangerous as the threats that could exploit them.

How do you prioritize the right patch? Which approach will result in keeping up with the business’ goals? There’s more to it than choosing one or the other. Let’s look more specifically into why and how patch management, security information and event management (SIEM) and network topology modeling can help prioritize addressing vulnerabilities.
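
One way to combine these signals, rather than choose between them, is a weighted risk score. The sketch below is purely illustrative: the field names, the criticality scale and the exploit weighting are assumptions, not a standard formula.

```python
# Hypothetical risk-based prioritization: blend CVSS base score,
# asset criticality and whether a working exploit is known.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float             # 0.0-10.0 base score
    asset_criticality: int  # 1 (low value) to 5 (crown jewels)
    exploit_available: bool

def risk_score(f: Finding) -> float:
    # Weighting is illustrative, not a standard.
    exploit_factor = 1.5 if f.exploit_available else 1.0
    return f.cvss * f.asset_criticality * exploit_factor

findings = [
    Finding("CVE-A", cvss=9.8, asset_criticality=1, exploit_available=False),
    Finding("CVE-B", cvss=6.5, asset_criticality=5, exploit_available=True),
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve, risk_score(f))
```

Note how a medium-severity flaw on a critical, exploitable asset (CVE-B, score 48.75) outranks a critical-severity flaw on a low-value asset (CVE-A, score 9.8) — exactly the kind of contextual reordering a raw CVSS sort cannot produce.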

Focus Your Efforts Through Patch Management

Imagine a traditional vulnerability assessment program that requires monthly scans. Every month, a scanning solution identifies potential vulnerabilities and teams carry out remediation activities. During the next scan, those vulnerabilities are confirmed as remediated while new ones are identified, and the cycle continues.

But how many of those flaws will have been realistically patched before the next vulnerability scan? How much time will security staff spend looking at vulnerabilities to ensure they have been effectively patched? It would be wise for a vulnerability management process to require a specific scan to validate that a vulnerability has indeed been remediated.

Considering the resources available for investigation in a typical organization, knowing that a patch management solution has reliably applied or scheduled a fix can help security teams focus on the vulnerabilities that have not yet been remediated.
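
In practice, this boils down to cross-referencing scanner findings with patch records and surfacing only what falls through the gap. A minimal sketch, with entirely hypothetical data shapes and CVE identifiers:

```python
# Sketch: cross-reference scanner findings with a patch management
# system so analysts focus only on what is not already handled.
scan_findings = {"CVE-2021-0001", "CVE-2021-0002", "CVE-2021-0003"}

patch_status = {
    "CVE-2021-0001": "applied",
    "CVE-2021-0002": "scheduled",
    # CVE-2021-0003 has no patch record at all
}

needs_attention = {
    cve for cve in scan_findings
    if patch_status.get(cve) not in ("applied", "scheduled")
}
print(needs_attention)  # {'CVE-2021-0003'}
```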

Look at Network Traffic Routes Using Network Topology Modeling

Now let’s shift our focus to network security. Fundamentally, the topology of a network can help define the opportunity for an attacker to exploit a particular vulnerability. Defenders should ask themselves where devices are placed on the network and whether that placement is conducive to optimizing the security they can offer. What rules have been configured on them, and what data drove their creation?

By gathering details on existing network security and the configuration of network devices, threat modeling solutions can help build a network traffic topology. This topology can provide answers to questions such as:

  • Can users access critical/sensitive assets from the edges of the network?
  • What subnetworks have a path to the organization’s crown jewels?
  • Are there vulnerabilities on a particular port that can be exploited from the edge of my network?

Going through this process can help you use network topology to inform vulnerability prioritization. A high-risk vulnerability on a low-value asset in an area of your network that cannot be reached from the internet is likely less important than a medium-risk vulnerability on a high-value asset that is accessible from the internet. This is why network topology threat modeling can be a helpful tool for prioritizing which vulnerabilities present higher risk and which do not necessarily require immediate action.
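
Answering reachability questions like these is essentially a graph traversal over allowed traffic paths. The sketch below models a hypothetical topology (the zone names and edges are invented for illustration) and checks which assets an attacker at the internet edge could reach:

```python
# Sketch: model network segments as a directed graph of allowed traffic
# paths and check which assets are reachable from the internet edge.
from collections import deque

edges = {
    "internet": ["dmz"],
    "dmz": ["web-server"],
    "web-server": ["app-subnet"],
    "app-subnet": ["db-server"],
    "mgmt-subnet": ["db-server"],   # internal-only entry point
}

def reachable_from(start: str) -> set:
    """Breadth-first traversal over the allowed-traffic graph."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

exposed = reachable_from("internet")
print("db-server" in exposed)      # True: a path exists via the web tier
print("mgmt-subnet" in exposed)    # False: cannot be reached from the edge
```

A vulnerability on `mgmt-subnet` would accordingly rank below an equivalent one on `db-server`, even though the database sits several hops from the edge.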

While vulnerabilities don’t change in their definition, network configurations do. A network modeling solution should monitor security policies and adjust risk as the context changes.

Let’s say, for example, that a firewall rule has been added to allow traffic from the edge of the network to a low-value asset affected by a high-risk vulnerability. Defining the risk here may seem straightforward, but what if there were additional details to consider? The low-value asset has a network path to high-value assets, for instance. Now the risk associated with the vulnerability has changed, and this should be reflected in how the vulnerabilities are prioritized.

Inform Your Security Team Via Your SIEM

SIEM data can help inform security professionals about the context of the services associated with certain vulnerabilities.

Consider the example of CVE-2014-3566, known as the enabler for the Padding Oracle On Downgraded Legacy Encryption (POODLE) attack. According to IBM X-Force Exchange, “Multiple products could allow a remote attacker to obtain sensitive information, caused by a design error when using the SSLv3 protocol. A remote user with the ability to conduct a man-in-the-middle attack could exploit this vulnerability via the POODLE attack to decrypt SSL sessions and calculate the plaintext of secure connections.”

While an asset might be running a version of OpenSSL relevant to CVE-2014-3566, the only way for an attacker to “obtain sensitive information” is for that information to exist in the first place. Network flows may tell us that no SSL traffic was ever recorded to or from this service, or they may paint the picture of an HTTPS service used throughout the organization and from outside the organization’s network. Here we have two different scenarios associated with two very different risks that a vulnerability assessment solution alone cannot differentiate.

Using a threat feed, a SIEM solution can help determine not only whether there is traffic from the internet going to a vulnerable service on an asset, but also if that flow is indeed coming from an identified malicious source. This can raise an offense in the SIEM system and should also feed down to a vulnerability management solution to prioritize that particular vulnerability instance.
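
The correlation described above can be sketched as a join between flow records, a threat feed and known vulnerability instances. All data below is hypothetical (the IPs are from documentation/TEST-NET ranges):

```python
# Sketch: flag vulnerability instances whose service actually receives
# traffic from known-malicious sources listed in a threat feed.
threat_feed = {"203.0.113.50", "198.51.100.7"}   # known-bad source IPs

network_flows = [  # (source_ip, dest_asset, dest_port)
    ("192.0.2.10", "web-01", 443),
    ("203.0.113.50", "web-01", 443),   # hit from a feed entry
    ("192.0.2.33", "db-01", 5432),
]

vuln_instances = {  # (asset, port) -> vulnerability on that service
    ("web-01", 443): "CVE-2014-3566",
    ("db-01", 5432): "CVE-HYPOTHETICAL",
}

prioritized = {
    vuln_instances[(dest, port)]
    for src, dest, port in network_flows
    if src in threat_feed and (dest, port) in vuln_instances
}
print(prioritized)  # {'CVE-2014-3566'}
```

Only the vulnerability whose service is actually being probed from a malicious source gets escalated; the rest stay in the normal queue.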

In addition, let’s say an intrusion detection system (IDS) identified an indicator of compromise (IoC) that clearly points to the exploitation of a vulnerability. Not only will this raise an offense in a SIEM solution for a security team to investigate, it will also be prioritized by a SIEM tool that is integrated with a vulnerability management solution. That particular vulnerability would clearly become a high-priority concern.

Connect Security Solutions to Keep Up With Evolving Modern Threats

While vulnerability management solutions have helped organizations mitigate risk for a couple of decades now, cybersecurity threats are more prevalent than ever before. Systems have become increasingly complex, attacks are more sophisticated as a result and the volume of vulnerabilities is beyond the remediation capabilities of many organizations.

To stay ahead of attackers, organizations should consider vulnerability management solutions that integrate with SIEM tools, network and threat modeling capabilities, and patch management systems. Making the best of vulnerability management today means breaking down the silos of security and IT operations solutions and connecting them together.

¹ Implement a Risk-Based Approach to Vulnerability Management, August 21, 2018, Prateek Bhajanka and Craig Lawson

The post Comprehensive Vulnerability Management in Connected Security Solutions appeared first on Security Intelligence.

Author: Thibault Barillon


Follow the Leaders: 7 Tried-and-True Tips to Get the Most Out of Your Security Analytics

The practice of analyzing security data for detection and response — otherwise known as security analytics (SA) — comes in many forms and flavors. Consumed data varies from organization to organization, analytic processes span a plethora of algorithms and outputs can serve many use cases within a security team.

In early 2019, IBM Security commissioned a survey to better understand how companies currently use security analytics, identify key drivers and uncover some of the net benefits security decision-makers have experienced. The findings were drawn from more than 250 interviews with information security decision-makers around the globe.

7 Lessons From Top Performers in Security Analytics

Encouragingly, the study revealed rising levels of maturity when it comes to security analytics. Roughly 15 percent of all interviewees scored as high performers, meaning their investigation processes are well-defined and they continuously measure the effectiveness of the output. These respondents are especially strong in terms of volume of investigations (five to 10 times more investigations than the average) and false positives (approximately 30 percent below average). Meanwhile, 97 percent of these leaders successfully built a 24/7 security operations center (SOC) with a total staffing headcount between 25 and 50.

What lessons can organizations with lower levels of SA maturity take away from this shining example? Below are seven key lessons security teams can learn from the top performers identified in the survey:

  1. Top SA performers have a knack for integrating security data. While many mid-performing organizations struggle with this integration and consider the task an obstacle to effective security analytics, leaders identified in the survey have streamlined the process, freeing them to focus on use case and content development.
  2. Nine in 10 high performers have an accurate inventory of users and assets — in other words, they understand the enterprise’s boundaries and potential attack surfaces and continuously update their inventory. This is likely a result of effective, automated discovery using a combination of collected security data and active scanning. By comparison, less than 30 percent of low-performing security teams practice this approach.
  3. A robust detection arsenal contains an equal mix of rule-based matching (i.e., indicators of compromise), statistical modeling (i.e., baselining) and machine learning. In stark contrast, intermediate performers rely more on existing threat intelligence as a primary detection method.
  4. Top performers use content provided by their security analytics vendors. In fact, 80 percent of respondents in this category indicated that the vendor-provided content is sufficient, whether sourced out of the box or via services engagements.
  5. Compared to middling performers, top performers dedicate between two and three times more resources to tuning detection tools and algorithms. To be exact, 41 percent of high performers spend 40 hours or more per week on detection tuning.
  6. High-performing security teams automate the output of the analytics and prioritize alerts based on asset and threat criticality. They also have automated investigation playbooks linked to specific alerts.
  7. Finally, organizations with a high level of SA maturity continuously measure their output and understand the importance of time. Approximately 70 percent of top performers keep track of monthly metrics such as time to respond and time spent on investigation. Low-performing organizations, on the other hand, measure the volume of alerts, and their use of time-based metrics is 60 to 70 percent lower than that of high performers.
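
The "statistical modeling (baselining)" approach from point 3 can be illustrated in a few lines: learn a metric's historical distribution, then flag observations that deviate strongly from it. The metric and numbers below are invented for illustration.

```python
# Sketch of baselining: flag a metric that deviates strongly from its
# historical baseline using a z-score. Data is illustrative.
from statistics import mean, stdev

baseline_logins_per_hour = [12, 15, 11, 14, 13, 12, 16, 14]

def is_anomalous(observed: float, history: list, threshold: float = 3.0) -> bool:
    """True if the observation is more than `threshold` std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(13, baseline_logins_per_hour))   # False: within baseline
print(is_anomalous(90, baseline_logins_per_hour))   # True: strong deviation
```

Unlike a static rule, the threshold here adapts automatically as the baseline shifts, which is one reason top performers blend it with rule matching and machine learning rather than relying on any single method.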

Build a Faster, More Proactive and More Transparent SOC

So what do the high performers identified in the survey have to show for their security analytics success? For one thing, they all enjoy superb visibility into the performance of their SOC. While many companies are improving, particularly in the areas of cloud and endpoint visibility, 41 percent of leaders in security analytics claim to have full SOC visibility, compared to 13 percent of intermediate and low performers.

In addition, while lower-performing organizations leverage security analytics to investigate and respond — i.e., react — to threats, high performers use SA to stay ahead of threats proactively. Finally, the leaders identified in the study generate their own threat intelligence and are experts in analyzing security data.

The key takeaway here is that security is a race against time — specifically, to outpace cyber adversaries. Leading security teams know this, which is why they continuously challenge themselves by integrating new data, extracting new insights, implementing smart automation, and, most importantly, measuring the time to detect, investigate and respond.

The post Follow the Leaders: 7 Tried-and-True Tips to Get the Most Out of Your Security Analytics appeared first on Security Intelligence.

Author: Bart Lenaerts


Are Applications of AI in Cybersecurity Delivering What They Promised?

Many enterprises are using artificial intelligence (AI) technologies as part of their overall security strategy, but results are mixed on the post-deployment usefulness of AI in cybersecurity settings.

This trend is supported by a new white paper from Osterman Research titled “The State of AI in Cybersecurity: The Benefits, Limitations and Evolving Questions.” According to the study, which included responses from 400 organizations with more than 1,000 employees, 73 percent of organizations have implemented security products that incorporate at least some level of AI.

However, 46 percent agree that rules creation and implementation are burdensome, and 25 percent said they do not plan to implement additional AI-enabled security solutions in the future. These findings may indicate that AI is still in the early stages of practical use and its true potential is still to come.

How Effective Is AI in Cybersecurity?

“Any ITDM should approach AI for security very cautiously,” said Steve Tcherchian, chief information security officer (CISO) and director of product at XYPRO Technology. “There are a multitude of security vendors who tout AI capabilities. These make for great presentations, marketing materials and conversations filled with buzz words, but when the rubber meets the road, the advancement in technology just isn’t there in 2019 yet.”

The marketing Tcherchian refers to has certainly drummed up considerable attention, but AI may not yet be delivering enough when it comes to measurable results for security. Respondents to the Osterman Research study noted that the AI technologies they have in place do not help mitigate many of the threats faced by enterprise security teams, including zero-day and advanced threats.

Still Work to Do, but Promise for the Future

While applications of artificial intelligence must still mature for businesses to realize their full benefits, many in the industry still feel the technology offers promise for a variety of applications, such as improving the speed of processing alerts.

“AI has a great potential because security is a moving target, and fixed rule set models will always be evaded as hackers are modifying their attacks,” said Marty Puranik, CEO of Atlantic.Net. “If you have a device that can learn and adapt to new forms of attacks, it will be able to at least keep up with newer types of threats.”

Research from the Ponemon Institute predicted several benefits of AI use, including cost-savings, lower likelihood of data breaches and productivity enhancements. The research found that businesses spent on average around $3 million fighting exploits without AI in place. Those who have AI technology deployed spent an average of $814,873 on the same threats, a savings of more than $2 million.

Help for Overextended Security Teams

AI is also being considered as a potential point of relief for the cybersecurity skills shortage. Many organizations are pinched to find the help they need in security, with Cybersecurity Ventures predicting the skills shortage will increase to 3.5 million unfilled cybersecurity positions by 2021.

AI can help security teams increase efficiency by quickly making sense of all the noise from alerts. This could prove to be invaluable because at least 64 percent of alerts per day are not investigated, according to Enterprise Management Associates (EMA). AI, in tandem with meaningful analytics, can help determine which alerts analysts should investigate and discern valuable information about what is worth prioritizing, freeing security staff to focus on other, more critical tasks.
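
The triage idea described above can be sketched simply: score each alert by severity weighted by asset criticality, and surface only as many as the team can actually investigate. The fields, weights and queue budget below are all hypothetical.

```python
# Sketch: triage a noisy alert queue so analysts see the highest-value
# alerts first instead of working in arrival order.
alerts = [
    {"id": 1, "severity": 3, "asset_critical": False},
    {"id": 2, "severity": 9, "asset_critical": True},
    {"id": 3, "severity": 7, "asset_critical": False},
    {"id": 4, "severity": 5, "asset_critical": True},
]

def triage(alerts, budget=2):
    """Return only the top-`budget` alerts by weighted score."""
    score = lambda a: a["severity"] * (2 if a["asset_critical"] else 1)
    return sorted(alerts, key=score, reverse=True)[:budget]

print([a["id"] for a in triage(alerts)])  # [2, 4]
```

With a budget of two, the alerts on critical assets jump the queue; the remainder stay available for later review rather than being silently dropped.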

“It promises great improvements in cybersecurity-related operations, as AI releases security engineers from the necessity to perform repetitive manual processes and provides them with an opportunity and time to improve their skills, learn how to use new tools, technologies,” said Uladzislau Murashka, a certified ethical hacker (CEH) at ScienceSoft.

Note that while AI offers the potential for quicker, more efficient handling of alerts, human intervention will continue to be critical. Applications of artificial intelligence will not replace humans on the security team anytime soon.

Paving an Intelligent Path Forward

It’s important to consider another group that is investing in AI technology and using it for financial gains: cybercriminals. Along with enterprise security managers, those who make a living by exploiting sensitive data also understand the potential AI has for the future. It will be interesting to see how these capabilities play out in the future cat-and-mouse game of cybersecurity.

AI in cybersecurity is still in the early stages of its evolution, and its potential has yet to be fully realized. As security teams continue to invest in and develop AI technologies, these capabilities will someday be an integral part of cyberdefense.

The post Are Applications of AI in Cybersecurity Delivering What They Promised? appeared first on Security Intelligence.

Author: Joan Goodchild

Artificial Intelligence (AI), Machine Learning, Security Intelligence & Analytics, Threat Intelligence, Threat Monitoring,

Now That You Have a Machine Learning Model, It’s Time to Evaluate Your Security Classifier

This is the third installment in a three-part series about machine learning. Be sure to read part one and part two for more context on how to choose the right artificial intelligence solution for your security problems.

As we move into this third part, we hope we have helped our readers better identify an artificial intelligence (AI) solution and select the right algorithm to address their organization’s security needs. Now, it’s time to evaluate the effectiveness of the machine learning (ML) model being used. But with so many metrics and systems available to measure security success, where does one begin?

Classification or Regression: Which Gets Better Insights From Your Data?

By this time, you may have selected an algorithm to use with your machine learning solution. In general, it will fall into one of two categories: classification or regression. As a reminder, from a security standpoint these two types of algorithms tend to solve different problems. For example, a classifier might be used as an anomaly detector, which is often the basis of the new generation of intrusion detection and prevention systems. Meanwhile, a regression algorithm might be better at tasks such as detecting denial-of-service (DoS) attacks because these problems tend to involve numbers rather than nominal labels.

At first look, the difference between classification and regression might seem complicated, but it really isn't. It comes down to what type of value our target variable, also called our dependent variable, contains. The main difference between the two is that the output variable in regression is numerical, while the output for classification is categorical/discrete.

For our purposes in this blog, we’ll focus on metrics that are used to evaluate algorithms applied to supervised ML. For reference, supervised machine learning is the form of learning where we have complete labels and a ground truth. For example, we know that the data can be divided into class1 and class2, and each of our training, validation, and testing samples is labeled as belonging to class1 or class2.

Classification Algorithms – or Classifiers

To have ML work with data, we can select a security classifier, which is an algorithm with a non-numeric class value. We want this algorithm to look at data and classify it into predefined data “classes.” These are usually two or more categorical, dependent variables.

For example, we might try to classify something as an attack or not an attack. We would create two labels, one for each of those classes. A classifier then takes the training set and tries to learn a “decision boundary” between the two classes. There could be more than two classes, and in some cases only one class. For example, the Modified National Institute of Standards and Technology (MNIST) database demo tries to classify an image as one of the ten possible digits from hand-written samples. This demo is often used to show the abilities of deep learning, as the deep net can output probabilities for each digit rather than one single decision. Typically, the digit with the highest probability is chosen as the answer.
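The "highest probability wins" step the MNIST demo describes is a simple argmax over the classifier's output. A minimal sketch in Python (the probability values here are invented for illustration):

```python
# Hypothetical class probabilities emitted by a digit classifier
# for one hand-written sample (indices 0-9 are the digit labels).
probs = [0.01, 0.02, 0.05, 0.02, 0.01, 0.03, 0.70, 0.10, 0.04, 0.02]

# The predicted digit is the index of the highest probability.
predicted_digit = max(range(len(probs)), key=lambda i: probs[i])
print(predicted_digit)  # 6
```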

A Regression Algorithm – or Regressor

A Regression algorithm, or regressor, is used when the target variable is a number. Think of a function in math: there are numbers that go into the function and there is a number that comes out of it. The task in Regression is to find what this function is. Consider the following example:

y = 3x + 9

We will now find 'y' for various values of 'x'. Therefore:

x = 1 -> y = 12

x = 2 -> y = 15

x = 3 -> y = 18

The regressor's job is to figure out what the function is by relying on the values of x and y. If we give the algorithm enough x and y values, it will hopefully find the function 3x + 9.
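This idea can be sketched with closed-form simple linear regression: given noiseless samples of y = 3x + 9 (the sample points below are our own choice), the fitted slope and intercept recover the original function:

```python
# Fit a line y = a*x + b with closed-form simple linear regression.
xs = [1, 2, 3, 4, 5]
ys = [3 * x + 9 for x in xs]  # 12, 15, 18, 21, 24

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope: covariance of x and y divided by variance of x.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
# Intercept follows from the means.
b = mean_y - a * mean_x

print(a, b)  # 3.0 9.0
```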

We might want to do this in cases where we need to calculate the probability of an event being malicious. Here, we do not want a classification, as the results are not fine-grained enough. Instead, we want a confidence or probability score. So, for example, the algorithm might provide the answer that “there is a 47 percent probability that this sample is malicious.”

In the next section, we will look at the various metrics for classification and regression that can help us determine the efficacy of our chosen ML model and, with it, our security posture.

Metrics for Classification

Before we dive into common classification metrics, let’s define some key terms:

  • Ground truth is a set of known labels or descriptions of which class or target variable represents the correct solution. In a binary classification problem, for instance, each example in the ground truth is labeled with the correct classification. This mirrors the training set, where we have known labels for each example.
  • Predicted labels represent the classifications that the algorithm believes are correct. That is, the output of the algorithm.
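Given lists of ground-truth and predicted labels, the raw confusion counts that underpin all the metrics below can be tallied directly. This toy example uses made-up binary labels:

```python
# Tally confusion-matrix counts by comparing ground-truth labels
# against predicted labels (binary case: "pos" / "neg").
truth = ["pos", "pos", "neg", "neg", "pos"]
pred  = ["pos", "neg", "neg", "pos", "pos"]

tp = sum(t == "pos" and p == "pos" for t, p in zip(truth, pred))
fp = sum(t == "neg" and p == "pos" for t, p in zip(truth, pred))
tn = sum(t == "neg" and p == "neg" for t, p in zip(truth, pred))
fn = sum(t == "pos" and p == "neg" for t, p in zip(truth, pred))
print(tp, fp, tn, fn)  # 2 1 1 1
```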

Now let’s take a closer look at some of the most useful metrics against which we can choose to measure the success of our machine learning deployment.

True Positive Rate

This is the ratio of correctly predicted positive examples to the total number of examples in the ground truth. If there are 100 examples in the ground truth and the model correctly predicts 65 of them as positive, then the true positive rate (TPR) as defined here is 65 percent, sometimes written as 0.65. (Be aware that the conventional definition of TPR divides by the number of actual positives instead, TP/(TP+FN), which makes it identical to recall.)

False Positive Rate

The false positive rate (FPR) is the proportion of examples that the algorithm labels as positive but that are actually negative in the ground truth, again measured here against the total number of examples. If we have 100 examples and 15 of them are incorrectly predicted as positive, then the false positive rate would be 15 percent, sometimes written as 0.15.

True Negative Rate

The true negative rate (TNR) is the number of correctly predicted negative examples divided by the total number of examples in the ground truth. Let us say that, in the scenario of 100 examples, another 15 were correctly predicted as negative. The true negative rate (TNR) is therefore 15 percent, also written as 0.15. Notice that there were 15 false positives and 15 true negatives, making for a total of 30 negative examples.

False Negative Rate

The false negative rate (FNR) is the ratio of examples incorrectly predicted as belonging to the negative class over the total number of examples in the ground truth. Continuing with the aforementioned case, let's say that out of 100 examples in the ground truth, the algorithm correctly predicted 65 as positive. We also know that 15 were false positives and 15 were true negatives. This leaves 5 examples unaccounted for, so our false negative rate is 5 percent, or 0.05. Together, the true positives and false negatives account for every actual positive, so these two rates as defined here sum to 70 percent (0.7), since 70 examples actually belong to the positive class.
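The four rates from the running example can be computed as follows. Note that this sketch follows the article's convention of dividing each count by the total number of examples; the conventional definitions divide by the actual class sizes instead:

```python
# Counts from the running example: 100 ground-truth examples.
tp, fp, tn, fn = 65, 15, 15, 5
total = tp + fp + tn + fn

# Rates as defined in this article (denominator = all examples).
tpr = tp / total  # 0.65
fpr = fp / total  # 0.15
tnr = tn / total  # 0.15
fnr = fn / total  # 0.05

# The conventional definitions divide by the actual class sizes,
# e.g. TPR = tp / (tp + fn), which is about 0.93 here and matches
# the "recall" metric described later.
conventional_tpr = tp / (tp + fn)
```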


Accuracy

Accuracy measures the proportion of correct predictions, both positive and negative, to the total number of examples in the ground truth. This metric can often be misleading if, for instance, there is a large imbalance between positive and negative examples in the ground truth. A model that predicts only the positive class correctly can still achieve high accuracy in that case, even though the metric tells you little about how well it handles negative examples.

Accuracy = (TP+TN)/(TP+TN+FP+FN)


Precision

Before we explore the precision metric, it’s important to define a few more terms:

  • TP is the raw number of true positives (in the above example, the TP is 65).
  • FP is the raw number of false positives (15 in the above example).
  • TN is the raw number of true negatives (15 in the above example).
  • FN is the raw number of false negatives (5 in the above example).

Precision, sometimes known as the positive predictive value, is the proportion of true positives predicted by the algorithm over the sum of all examples predicted as positive. That is, precision=TP/(TP+FP).

In our example, there were 65 positives in the ground truth that the algorithm correctly labeled as positive. However, it also labeled 15 examples as positive when they were actually negative.

These false positives go into the denominator of the precision calculation. So, we get 65/(65+15), which yields a precision of 0.81.

What does this mean? In brief, high precision means that the algorithm returned far more true positives than false positives. In other words, it is a qualitative measure. The higher the precision, the better job the algorithm did of predicting true positives while rejecting false positives.


Recall

Recall, also known as sensitivity, is the ratio of true positives to true positives plus false negatives: TP/(TP+FN).

In our example, there were 65 true positives and 5 false negatives, giving us a recall of 65/(65+5) = 0.93. Recall is a quantitative measure; in a classification task, it is a measure of how well the algorithm “memorized” the training data.

Note that there is often a trade-off between precision and recall. In other words, it’s possible to optimize one metric at the expense of the other. In a security context, we may often want to optimize recall over precision because there are circumstances where we must predict all the possible positives with a high degree of certainty.

For example, in the world of automotive security, where kinetic harm may occur, it is often heard that false positives are annoying, but false negatives can get you killed. That is a dramatic example, but it can apply to other situations as well. In intrusion prevention, for instance, a false positive on a ransomware sample is a minor nuisance, while a false negative could cause catastrophic data loss.

However, there are cases that call for optimizing precision. If you are constructing a virus encyclopedia, for example, higher precision might be preferred when analyzing one sample since the missing information will presumably be acquired from another sample.


F-Measure

An F-measure (or F1 score) is defined as the harmonic mean of precision and recall. There is a generic F-measure, which includes a variable beta that causes the harmonic mean of precision and recall to be weighted.

Typically, the evaluation of an algorithm is done using the F1 score, meaning that beta is 1 and therefore the harmonic mean of precision and recall is unweighted. The term F-measure is used as a synonym for F1 score unless beta is specified.

The F1 score is a value between 0 and 1 where the ideal score is 1, and is calculated as 2 * Precision * Recall/(Precision+Recall), or the harmonic mean. This metric typically lies between precision and recall. If both are 1, then the F-measure equals 1 as well. The F1 score has no intuitive meaning per se; it is simply a way to represent both precision and recall in one metric.
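Using the counts from the running example, precision, recall and the F1 score work out as follows:

```python
# Counts from the running example.
tp, fp, fn = 65, 15, 5

precision = tp / (tp + fp)  # 65/80  = 0.8125
recall = tp / (tp + fn)     # 65/70 ~= 0.9286

# Harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.81 0.93 0.87
```

As expected, the F1 score (0.87) lies between precision (0.81) and recall (0.93).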

Matthews Correlation Coefficient

The Matthews Correlation Coefficient (MCC), sometimes written as Phi, is a representation of all four values: TP, FP, TN and FN. Unlike precision and recall, the MCC takes true negatives into account, which means it handles imbalanced classes better than other metrics. It is defined as:

MCC = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))

If the value is 1, then the classifier and ground truth are in perfect agreement. If the value is 0, then the result of the classifier is no better than random chance. If the result is -1, the classifier and the ground truth are in perfect disagreement. If this coefficient seems low (below 0.5), then you should consider using a different algorithm or fine-tuning your current one.
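Applying the MCC formula to the running example's counts, as a quick sketch:

```python
import math

# Counts from the running example.
tp, fp, tn, fn = 65, 15, 15, 5

numerator = tp * tn - fp * fn
denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
mcc = numerator / denominator

print(round(mcc, 2))  # 0.49
```

Notice that even though precision and recall looked healthy, the coefficient lands just under 0.5, illustrating how MCC penalizes the weak true-negative performance in this example.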

Youden’s Index

Also known as Youden’s J statistic, Youden’s index is the binary case of the general form of the statistic known as ‘informedness’, which applies to multiclass problems. It is calculated as (sensitivity + specificity - 1) and can be seen as the probability of an informed decision versus a random guess. In other words, it takes all four predictors into account.

Remember from our examples that recall = TP/(TP+FN) and that specificity, or TNR, is the complement of the FPR. Therefore, the Youden index incorporates all measures of predictors. If the value of Youden’s index is 0, then the probability of the decision actually being informed is no better than random chance. If it is 1, then both false positives and false negatives are 0.
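Youden's index for the same running-example counts, using the conventional sensitivity and specificity definitions:

```python
# Counts from the running example.
tp, fp, tn, fn = 65, 15, 15, 5

sensitivity = tp / (tp + fn)  # 65/70 ~= 0.93
specificity = tn / (tn + fp)  # 15/30  = 0.5

# Youden's J statistic.
youden_j = sensitivity + specificity - 1
print(round(youden_j, 2))  # 0.43
```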

Area Under the Receiver Operator Characteristic Curve

This metric, usually abbreviated as AUC (or AUROC), measures the area under the receiver operating characteristic curve, which plots the true positive rate on the Y-axis against the false positive rate on the X-axis. It is useful because it reduces a model's performance to a single number, which lets you compare models of different types and make comparisons across experiments. An AUC value of 0.5 means the result of the test is essentially a coin flip; you want the AUC to be as close to 1 as possible.
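One common way to compute AUC is to sweep a decision threshold across the classifier's scores and integrate the resulting curve with the trapezoid rule. A self-contained sketch (the labels and scores are invented, and this simple sweep assumes no tied scores):

```python
def roc_auc(labels, scores):
    """AUC via threshold sweep; labels are 0/1, higher score = more positive."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tpr_points, fpr_points = [0.0], [0.0]
    tp = fp = 0
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        tpr_points.append(tp / pos)
        fpr_points.append(fp / neg)
    # Trapezoid rule over the (FPR, TPR) curve.
    return sum((fpr_points[i] - fpr_points[i - 1]) *
               (tpr_points[i] + tpr_points[i - 1]) / 2
               for i in range(1, len(tpr_points)))

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(roc_auc(labels, scores))  # ~0.889, well above the 0.5 coin flip
```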

Area Under the Precision Recall Curve

Area under the precision recall curve (AUPRC) is a measurement that, like MCC, accounts for imbalanced class distributions. If there are far more negative examples than positive examples, you might want to use AUPRC as your metric and visual plot. The curve is precision plotted against recall; the closer the area is to 1, the better. Note that since this metric/plot works best when there are more negative examples than positive ones, you might have to invert your labels for testing.

Average Log Loss

Average log loss represents the penalty for wrong predictions: it measures the gap between the probability distribution the model predicts and the actual distribution in the ground truth.

In deep learning, this is sometimes known as the cross-entropy loss, which is used when the result of a classifier such as a deep learning model is a probability rather than a binary label. Cross-entropy loss is therefore the divergence of the predicted probability from the actual probability in the ground truth. This is useful in multiclass problems but is also applicable to the simplified case of binary classification.
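A minimal sketch of average log loss for binary labels, showing how a confident wrong prediction is penalized far more heavily than a confident correct one (the probabilities are invented):

```python
import math

def log_loss(y_true, y_prob):
    """Average binary cross-entropy over true labels (0/1) and
    predicted probabilities of the positive class."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_prob)) / len(y_true)

# A confident correct prediction costs little...
print(log_loss([1], [0.9]))  # ~0.105
# ...while a confident wrong one costs a lot.
print(log_loss([1], [0.1]))  # ~2.303
```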

By using these metrics to evaluate your ML model, and by tailoring them to your specific needs, you can fine-tune what you get out of your data: more reliable results, more detected threats and better-optimized controls.

Metrics for Regression

For regression, the goal is to determine the amount of error produced by the ML algorithm. The model is considered good if the error between the predicted and observed values is small.

Let’s take a closer look at some of the metrics used for evaluating regression models.

Mean Absolute Error

Mean absolute error (MAE) measures how close the predicted results are to the actual results. You can think of it as the average of the differences between the predicted values and the ground truth values. As we proceed through each test example, we subtract the actual value reported in the ground truth from the value predicted by the regression algorithm and take the absolute value. We then calculate the arithmetic mean of these absolute differences.

While the interpretation of this metric is well-defined, because it is an arithmetic mean it can be skewed by unusually large or small differences. Note that this value is scale-dependent, meaning the error is on the same scale as the data. Because of this, you cannot compare two MAE values across datasets.

Root Mean Squared Error

Root mean squared error (RMSE) condenses the overall prediction error into a single value. It is often the metric that optimization algorithms seek to minimize in regression problems: when an optimization algorithm tunes so-called hyperparameters, it tries to make RMSE as small as possible.

Consider, however, that like MAE, RMSE is sensitive to outliers and is scale-dependent. Therefore, you have to examine your residuals for outliers, values that sit significantly above or below the rest. Also, like MAE, it is improper to compare RMSE across datasets unless the scaling translations have been accounted for, because data scaling, whether by normalization or standardization, depends on the data values.

For example, standardization rescales each value by subtracting the mean and dividing by the standard deviation, producing values centered on zero with unit variance (z-scores). If, on the other hand, the data is normalized, the scaling is done by taking the current value, subtracting the minimum value and dividing by the quantity (maximum value - minimum value), which maps everything into the range 0 to 1. These are completely different scales, and as a result, one cannot compare the RMSE between these two data sets.
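The two scalings can be compared side by side on a small made-up dataset:

```python
data = [2.0, 4.0, 6.0, 8.0]

# Standardization: zero mean, unit variance (z-scores).
mean = sum(data) / len(data)
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
standardized = [(x - mean) / std for x in data]

# Min-max normalization: rescale into [0, 1].
lo, hi = min(data), max(data)
normalized = [(x - lo) / (hi - lo) for x in data]

print(standardized)  # centered on 0, spread measured in standard deviations
print(normalized)    # [0.0, 0.333..., 0.666..., 1.0]
```

An RMSE computed on the standardized data and one computed on the normalized data live on entirely different scales, which is why they cannot be compared directly.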

Relative Absolute Error

Relative absolute error (RAE) is the total absolute error of the predictions divided by the total absolute error of a naive model that always predicts the mean of the ground truth values. Note that this value can be compared across scales because it has been normalized.

Relative Squared Error

Relative squared error (RSE) is the total squared error of the predicted values divided by the total squared deviation of the observed values from their mean. This also normalizes the error measurement so that it can be compared across datasets.
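The four regression metrics discussed above can be computed together on a small invented example. Note how the relative metrics (RAE, RSE) are unitless while MAE and RMSE stay on the data's scale:

```python
import math

# Invented observed values and model predictions.
actual = [10.0, 12.0, 14.0, 16.0]
predicted = [11.0, 11.0, 15.0, 17.0]

n = len(actual)
errors = [p - a for p, a in zip(predicted, actual)]

mae = sum(abs(e) for e in errors) / n
rmse = math.sqrt(sum(e ** 2 for e in errors) / n)

# Relative metrics compare against a naive model that always predicts
# the mean of the observed values, making them scale-independent.
mean_actual = sum(actual) / n
rae = sum(abs(e) for e in errors) / sum(abs(a - mean_actual) for a in actual)
rse = sum(e ** 2 for e in errors) / sum((a - mean_actual) ** 2 for a in actual)

print(mae, rmse, rae, rse)  # 1.0 1.0 0.5 0.2
```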

Machine Learning Can Revolutionize Your Organization’s Security

Machine learning is integral to the enhancement of cybersecurity today and it will only become more critical as the security community embraces cognitive platforms.

In this three-part series, we covered various algorithms and their security context, from cutting-edge technologies such as generative adversarial networks to more traditional algorithms that are still very powerful.

We also explored how to select the appropriate security classifier or regressor for your task, and, finally, how to evaluate the effectiveness of a classifier to help our readers better gauge the impact of optimization. With a better idea about these basics, you’re ready to examine and implement your own algorithms and to move toward revolutionizing your security program with machine learning.

The post Now That You Have a Machine Learning Model, It’s Time to Evaluate Your Security Classifier appeared first on Security Intelligence.

Author: Brad Harris

Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), CISO, Cloud Security, Cognitive Security, Internet of Things (IoT), Machine Learning, Penetration Testing, Security Intelligence & Analytics, Security Leaders, Security Leadership, Security Operations Center (SOC), Security Solutions,

Break Through Cybersecurity Complexity With New Rules, Not More Tools

Let’s be frank: Chief information security officers (CISOs) and security professionals all know cybersecurity complexity is a major challenge in today’s threat landscape. Other folks in the security industry know this too — although some don’t want to admit it. The problem is that amid increasing danger and a growing skills shortage, security teams are overwhelmed by alerts and the growing number of complex tools they have to manage. We need to change that, but how? By completely rethinking our assumptions.

The basic assumption of security up until now is that new threats require new tools. After 12 years at IBM Security, leading marketing teams and making continuous contact with our clients — and, most recently, as VP of product marketing — I’ve seen a lot of promising new technology. But in our rapidly diversifying industry, there are more specialized products to face every kind of threat in an expanding universe of attack vectors. Complexity is a hidden cost of all these marvelous products.

It’s not just security products that contribute to the cybersecurity complexity conundrum; digitization, mobility, cloud and the internet of things (IoT) all contribute to the complexity of IT environments, making security an uphill battle for underresourced security teams. According to Forrester’s “Global Business Technographics Security Survey 2018,” 31 percent of business and IT decision-makers ranked the complexity of the IT environment among the biggest security challenges they face, tied with the changing nature of threats as the most-cited challenge.

I’ll give you one more mind-boggling statistic to demonstrate why complexity is the enemy of security: According to IBM estimates, enterprises use as many as 80 different security products from 40 vendors. Imagine trying to build a clear picture with pieces from 80 separate puzzles. That’s what CISOs and security operations teams are being asked to do.

7 Rules to Help CISOs Reduce Cybersecurity Complexity

The sum of the parts is not greater than the whole. So, we need to escape the best-of-breed trap to handle the problem of complexity. Cybersecurity doesn’t need more tools; it needs new rules.

Complexity requires us as security professionals and industry partners to turn the old ways of thinking inside out and bring in fresh perspectives.

Below are seven rules to help us think in new ways about the complex, evolving challenges that CISOs, security teams and their organizations face today.

1. Open Equals Closed

You can’t prevent security threats by piling on more tools that don’t talk to each other and create more noise for overwhelmed analysts. Security products need to work in concert, and that requires integration and collaboration. An open, connected, cloud-based security platform that brings security products together closes the gaps that point products leave in your defenses.

2. See More When You See Less

Security operations centers (SOCs) see thousands of security events every day — a 2018 survey of 179 IT professionals found that 55 percent of respondents handle more than 10,000 alerts per day, and 27 percent handle more than 1 million events per day. SOC analysts can’t handle that volume.

According to the same survey, one-third of IT professionals simply ignore certain categories of alerts or turn them off altogether. A smarter approach to the overwhelming volume of alerts leverages analytics and artificial intelligence (AI) so SOC analysts can focus on the most crucial threats first, rather than chase every security event they see.

3. An Hour Takes a Minute

When you find a security incident that requires deeper investigation, time is of the essence. Analysts can’t afford to get bogged down in searching for information in a sea of threats.

Human intelligence augmented by AI — what IBM calls cognitive security — allows SOC analysts to respond to threats up to 60 times faster. An advanced AI can understand, reason and learn from structured and unstructured data, such as news articles, blogs and research papers, in seconds. By automating mundane tasks, analysts are freed to make critical decisions for faster response and mitigation.

4. A Skills Shortage Is an Abundance

It’s no secret that greater demand for cybersecurity professionals and an inadequate pipeline of traditionally trained candidates has led to a growing skills gap. Meanwhile, cybercriminals have grown increasingly collaborative, but those who work to defend against them remain largely siloed. Collaboration platforms for security teams and shared threat intelligence between vendors are force multipliers for your team.

5. Getting Hacked Is an Advantage

If you’re not seeking out and patching vulnerabilities in your network and applications, you’re assuming that what you don’t know can’t hurt you. Ethical hacking and penetration testing turn hacking into an advantage, helping you find your vulnerabilities before adversaries do.

6. Compliance Is Liberating

More and more consumers say they will refuse to buy products from companies that they don’t trust to protect their data, no matter how great the products are. By creating a culture of proactive data compliance, you can exchange the checkbox mentality for continuous compliance, turning security into a competitive advantage.

7. Rigidity Is Breakthrough

The success of your business depends not only on customer loyalty, but also employee productivity. Balance security with productivity by practicing strong security hygiene. Run rigid but silent security processes in the background to stay out of the way of productivity.

What’s the bottom line here? Times are changing, and the current trend toward complexity will slow the business down, cost too much and fail to reduce cyber risk. It’s time to break through cybersecurity complexity and write new rules for a new era.

The post Break Through Cybersecurity Complexity With New Rules, Not More Tools appeared first on Security Intelligence.

Author: Wangui McKelvey

Advanced Persistent Threat (APT), Advanced Threat Protection, Advanced Threats, Data Protection, Data Security, Security Information and Event Management (SIEM), Security Intelligence & Analytics, threat hunting, Threat Management, Threat Protection,

Embrace the Intelligence Cycle to Secure Your Business

Regardless of where we work or what industry we’re in, we all have the same goal: to protect our most valuable assets. The only difference is in what we are trying to protect. Whether it’s data, money or even people, the harsh reality is that it’s difficult to keep them safe because, to put it simply, bad people do bad things.

Sometimes these malicious actors are clever, setting up slow-burning attacks to steal enterprise data over several months or even years. Sometimes they’re opportunistic, showing up in the right place at the wrong time (for us). If a door is open, these attackers will just waltz on in. If a purse is left unattended on a table, they’ll quickly swipe it. Why? Because they can.

The Intelligence Cycle

So how do we fight back? There is no easy answer, but the best course of action in any situation is to follow the intelligence cycle. Honed by intelligence experts across industries over many years, this method can be invaluable to those investigating anything from malware to murders. The process is always the same.

Stage 1: Planning and Direction

The first step is to define the specific job you are working on, find out exactly what the problem is and clarify what you are trying to do. Then, work out what information you already have to deduce what you don’t have.

Let’s say, for example, you’ve discovered a spate of phishing attacks — that’s your problem. This will help scope subsequent questions, such as:

  • What are the attackers trying to get?
  • Who is behind the attacks?
  • Where are attacks occurring?
  • How many attempts were successful?

Once you have an idea of what you don’t know, you can start asking the questions that will help reveal that information. Use the planning and direction phase to define your requirements. This codifies what you are trying to do and helps clarify how you plan on doing it.

Stage 2: Collection

During this stage, collect the information that will help answer your questions. If you cannot find the answers, gather data that will help lead to those answers.

Where this comes from will depend on you and your organization. If you are protecting data from advanced threats, for instance, you might gather information internally from your security information and event management (SIEM) tool. If you’re investigating more traditional organized crime, by contrast, you might knock on doors and whisper to informants in dark alleys to collect your information.

You can try to control the activity of collection by creating plans to track the process of information gathering. These collection plans act as guides to help information gatherers focus on answering the appropriate questions in a timely manner. Thorough planning is crucial in both keeping track of what has been gathered and highlighting what has not.

Stage 3: Processing and Exploitation

Collected information comes in many forms: handwritten witness statements, system logs, video footage, data from social networks, the dark web, and so on. Your task is to make all the collected information usable. To do this, put it into a consistent format. Extract pertinent information (e.g., IP addresses, telephone numbers, asset references, registration plate details, etc.), place some structure around those items of interest and make it consistent. It often helps to load it into a schematized database.
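As a toy illustration of this kind of processing, hypothetical raw log lines can be scanned for IPv4 addresses and loaded into one consistent structure (the log lines and field names below are invented):

```python
import re

# Hypothetical raw log lines collected from different sources.
raw = [
    "Failed login from 203.0.113.7 at 02:14",
    "ALERT src=198.51.100.23 dst=192.0.2.5 action=deny",
]

# Pull every IPv4-looking token into a consistent record format.
ip_pattern = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
records = [{"source_line": line, "ips": ip_pattern.findall(line)}
           for line in raw]

print(records[0]["ips"])  # ['203.0.113.7']
print(records[1]["ips"])  # ['198.51.100.23', '192.0.2.5']
```

Once every source is reduced to the same shape, the extracted items can be cross-referenced against everything already collected.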

If you do this, your collected information will be in a standard shape and ready for you to actually start examining it. The value is created by putting this structure around the information. It gives you the ability to make discoveries, extract the important bits and understand your findings in the context of all the other information. If you can, show how attacks are connected, link them to bad actors and collate them against your systems. It helps to work with the bits that are actually relevant to the specific thing you’re working on. And don’t forget to reference this new data you collected against all the old stuff you already knew; context is king in this scenario.

This stage helps you make the best decisions you can against all the available information. Standardization is great; it is hard to work with information when it’s in hundreds of different formats, but it’s really easy when it’s in one.

Of course, the real world isn’t always easy. Sometimes it is simply impossible to normalize all of your collected information into a single workable pot. Maybe you collected too much, or the data arrived in too many varied formats. In these cases, your only hope is to invest in advanced analytical tools and analysts that will allow you to fuse this cacophony of information into some sensible whole.

Stage 4: Analysis and Production

The analysis and production stage begins when you have processed your information into a workable state and are ready to conduct some practical analysis; in other words, you are ready to start producing intelligence.

Think about the original task you planned to work on. Look at all the lovely — hopefully standardized — information you’ve collected, along with all the information you already had. Query it. Ask questions of it. Hypothesize. Can you find the answer to your original question? What intelligence can you draw from all this information? What stories can it tell? If you can’t find any answers — if you can’t hypothesize any actions or see any narratives — can you see what is missing? Can you see what other information you would need to collect that would help answer those questions? This is the stage where you may be able to draw new conclusions out of your raw information. This is how you produce actionable intelligence.

Actionable intelligence is an important concept. There’s no point in doing all this work if you can’t find something to do at the end of it. The whole aim is to find an action that can be performed in a timely manner that will help you move the needle on your particular task.

Finding intelligence that can be acted upon is key. Did you identify that phishing attack’s modus operandi (MO)? Did you work out how that insider trading occurred? It’s not always easy, but it is what your stakeholders need. This stage is where you work out what you must do to protect whatever it is you are safeguarding.

Stage 5: Dissemination

The last stage of the intelligence cycle is to go back to the stakeholders and tell them what you found. Give them your recommendations, write a report, give a presentation, draw a picture — however you choose to do it, convey your findings to the decision-makers who set the task to begin with. Back up your assertions with your analysis, and let the stakeholders know what they need to do in the context of the intelligence you have created.

Timeliness is very important. Everything ages, including intelligence. There’s no point in providing assessments for things that have already happened. You will get no rewards for disseminating a report on what might happen at the London Marathon a week after the last contestant finished. Unlike fine wine, intelligence does not improve with age.

To illustrate how many professionals analyze and subsequently disseminate intelligence, below is an example of an IBM i2 dissemination chart:

The Intelligence Cycle

The analysis has already happened and, in this case, the chart is telling your boss to go talk to that Gene Hendricks chap — he looks like one real bad egg.

Then what? If you found an answer to your original question, great. If not, then start again. Keep going around the intelligence cycle until you do. Plan, collect, process, analyze, disseminate and repeat.

Gain an Edge Over Advanced Threats

We are all trying to protect our valued assets, and using investigation methodologies such as the intelligence cycle could help stop at least some malicious actors from infiltrating your networks. The intelligence cycle can underpin the structure of your work both with repetitive processes, such as defending against malware and other advanced threats, and targeted investigations, such as searching for the burglars who stole the crown jewels. Embrace it.

Whatever it is you are doing — and whatever it is you are trying to protect — remember that adopting this technique could give your organization the edge it needs to fight back against threat actors who jealously covet the things you defend.

To learn more, read the interactive white paper, “Detect, Disrupt and Defeat Advanced Physical and Cyber Threats.”


The post Embrace the Intelligence Cycle to Secure Your Business appeared first on Security Intelligence.

Author: Matthew Farenden

Access Management, Identity and Access Management (IAM), Incident Response (IR), Security Information and Event Management (SIEM), Security Intelligence & Analytics, Security Operations Center (SOC), Security Solutions, Threat Detection,

Bring Order to Chaos By Building SIEM Use Cases, Standards, Baselining and Naming Conventions

Security operations centers (SOCs) are struggling to create automated detection and response capabilities. While custom security information and event management (SIEM) use cases can allow businesses to improve automation, creating use cases requires clear business logic. Many security organizations lack efficient, accurate methods to distinguish between authorized and unauthorized activity patterns across components of the enterprise network.

Even the most intelligent SIEM can fail to deliver value when it’s not optimized for use cases, or if rules are created according to incorrect parameters. Creating a framework that can accurately detect suspicious activity requires baselines, naming conventions and effective policies.

Defining Parameters for SIEM Use Cases Is a Barrier to SOC Success

Over the past few years, I’ve consulted with many enterprise SOCs to improve threat detection and incident response capabilities. Regardless of SOC maturity, most organizations struggle to accurately define the difference between authorized and suspicious patterns of activity, including users, admins, access patterns and scripts. Countless SOC leaders are stumped when they’re asked to define authorized patterns of activity for mission-critical systems.

SIEM rules can be used to automate detection and response capabilities for common threats such as distributed denial-of-service (DDoS), authentication failures and malware. However, these rules must be built on clear business logic for accurate detection and response capabilities. Baseline business logic is necessary to accurately define risky behavior in SIEM use cases.

Building a Baseline for Cyber Hygiene

Cyber hygiene is defined as the consistent execution of activities necessary to protect the integrity and security of enterprise networks, including users, data assets and endpoints. A hygiene framework should offer clear parameters for threat response and acceptable use based on policies for user governance, network access and admin activities. Without an understanding of what defines typical, secure operations, it’s impossible to create an effective strategy for security maintenance.

A comprehensive framework for cybersecurity hygiene can simplify security operations and create guidelines for SIEM use cases, and capturing an effective baseline for systems can strengthen security frameworks and create order in chaos. To empower better hygiene and threat detection capabilities based on business logic, established standards such as naming conventions can create clear parameters.

VLAN Network Categories

For the purpose of simplified illustration, imagine that your virtual local area networks (VLANs) are categorized among five criticality groups — named A, B, C, D and E — with the mission-critical VLAN falling into the A category (_A).

A policy may be created to dictate that A-category VLAN systems can communicate directly with any other category without compromising data security. However, communication with the A-category VLAN from B, C, D or E networks is not allowed. Authentication to a jump host can accommodate authorized exceptions to this standard, such as when E-category users need access to an A-category server.

Creating a naming convention and policy for VLAN network categories can help you develop simple SIEM use cases to prevent unauthorized access to A resources and automatically detect suspicious access attempts.
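As a sketch of how that policy could become machine-checkable business logic, a SIEM rule only needs a category parser and one authorization function. The VLAN names and suffix convention below are hypothetical illustrations, not a real product configuration:

```python
import re

# Hypothetical convention: VLAN names end in an underscore plus their
# criticality category, e.g. "finance_core_A" or "guest_wifi_E".
CATEGORY = re.compile(r"_([A-E])$")

def category(vlan_name: str) -> str:
    m = CATEGORY.search(vlan_name)
    if not m:
        raise ValueError(f"VLAN {vlan_name!r} violates the naming convention")
    return m.group(1)

def is_authorized(src_vlan: str, dst_vlan: str, via_jump_host: bool = False) -> bool:
    """A-category VLANs may talk to anything; traffic INTO category A
    from B through E is allowed only via an authenticated jump host."""
    if category(dst_vlan) != "A":
        return True
    return category(src_vlan) == "A" or via_jump_host

print(is_authorized("finance_core_A", "guest_wifi_E"))  # True: A may reach out
print(is_authorized("guest_wifi_E", "finance_core_A"))  # False: alert-worthy
```

Any flow for which `is_authorized` returns False, and that did not pass through the jump host, is exactly the suspicious access attempt the SIEM rule should surface.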

Directory Services and Shared Resources

You can also use naming convention frameworks to create a policy for managing groups of user accounts according to access level in directory services, such as Lightweight Directory Access Protocol (LDAP) or Active Directory (AD). A standardized naming convention for directory services provides a clear framework for acceptable user access to shared folders and resources. AD users categorized within the D category may not have access to A-category folders or _A.

Creating effective SIEM rules based on these use cases is a bit more complex than VLAN business logic since it involves two distinct technologies and potentially complex policies for resource access. However, creating standards that connect user access to resources establishes clear parameters for strict, contextual monitoring. Directory users with A-category access may require stricter change monitoring due to the potential for abuse of admin capabilities. You can create SIEM use cases to detect other configuration mistakes, such as a C-category user who is suddenly escalated to A-category.
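One way to sketch that escalation check in Python, assuming a hypothetical convention in which AD group names carry their access category as a suffix (the group and user names are invented for illustration):

```python
# Hypothetical convention: AD group names carry their access category as
# a suffix, e.g. "fileshare-users_C". Diffing two directory snapshots
# surfaces sudden escalations such as C -> A for a SIEM rule to flag.
def user_category(groups):
    # The highest category (closest to "A") wins; "A" < "C" lexically.
    return min(g.rsplit("_", 1)[1] for g in groups)

yesterday = {"jdoe": ["fileshare-users_C", "printers_D"]}
today = {"jdoe": ["fileshare-users_C", "domain-admins_A"]}

alerts = []
for user, groups in today.items():
    before = user_category(yesterday.get(user, ["new-user_E"]))
    after = user_category(groups)
    if after < before:  # moved toward "A": an escalation
        alerts.append(f"{user} escalated from category {before} to {after}")

print(alerts)  # ['jdoe escalated from category C to A']
```

A real deployment would read group membership from LDAP/AD change logs rather than static snapshots, but the comparison logic is the same.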

Username Creation

Many businesses are already applying some logic to standardize username creation for employees. A policy may dictate that users create a seven-character alias that involves three last-name characters, two first-name characters and two digits. Someone named Janet Doe could have the username DoeJa01, for example. Even relatively simple username conventions can support SIEM use cases for detecting suspicious behavior. When eight or more characters are entered into a username field, an event could be triggered to lock the account until a new password is created.
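A rough Python sketch of that seven-character convention follows; the helper names and the two-digit sequence number are illustrative assumptions, not part of the article:

```python
import re

# Convention from the text: three last-name characters, two first-name
# characters, two digits, e.g. Janet Doe -> "DoeJa01".
SIMPLE = re.compile(r"^[A-Za-z]{3}[A-Za-z]{2}\d{2}$")

def expected_alias(first: str, last: str, seq: int) -> str:
    return f"{last[:3].capitalize()}{first[:2].capitalize()}{seq:02d}"

def is_suspicious(entered: str) -> bool:
    """A SIEM rule can flag any username-field entry that breaks the
    seven-character pattern, e.g. eight or more characters."""
    return not SIMPLE.fullmatch(entered)

print(expected_alias("Janet", "Doe", 1))  # DoeJa01
print(is_suspicious("DoeJa01"))           # False
print(is_suspicious("DoeJa001"))          # True: triggers the lockout event
```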

The potential SIEM use cases increase with more complex approaches to username creation, such as 12-character usernames that combine last- and first-name characters with the employee’s unique HR-issued identification. A user named Jonathan Doerty, for instance, could receive an automatically generated username of doertjo_4682. Complex usernames can create friction for legitimate end users, but some minor friction can be justified if it provides greater safeguards for privileged users and critical systems.

An external threat actor may be able to extrapolate simple usernames from social engineering activities, but they’re unlikely to guess an employee’s internal identification number. SIEM rules can quickly detect suspicious access attempts based on username field entries that lack the required username components. Requiring unique identification numbers from HR systems can also significantly lower the risk of admins creating fake user credentials to conceal malicious activity.
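The 12-character variant can be sketched the same way. The HR-record lookup below is a stand-in for a real HR system, and the helper names are hypothetical:

```python
import re

# Hypothetical 12-character convention from the article: five last-name
# characters, two first-name characters, an underscore and the user's
# four-digit HR-issued ID, e.g. Jonathan Doerty, ID 4682 -> doertjo_4682.
COMPLEX = re.compile(r"^[a-z]{5}[a-z]{2}_\d{4}$")

def build_username(first: str, last: str, hr_id: int) -> str:
    return f"{last[:5].lower()}{first[:2].lower()}_{hr_id:04d}"

# Stand-in for the HR system of record; a fake credential created by a
# rogue admin would have no matching entry here.
HR_RECORDS = {"doertjo_4682"}

def is_legitimate(username: str) -> bool:
    if not COMPLEX.fullmatch(username):
        return False  # malformed: flag immediately
    return username in HR_RECORDS  # no HR record: likely a fake account

print(build_username("Jonathan", "Doerty", 4682))  # doertjo_4682
print(is_legitimate("doertjo_9999"))               # False: no HR record, alert
```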

Unauthorized Code and Script Locations

Advanced persistent threats can evade detection by creating backdoor access to deploy carefully disguised malicious code. Standard naming conventions provide a cost-effective way to create logic that detects malware risks. A simple model for script names could leverage several data components, such as department name, script name and script author, resulting in authorized names like HR_WellnessLogins_DoexxJo. Creating SIEM parameters for acceptable script names can automate the detection of malware.

Creating baseline standards for script locations such as /var/opt/scripts and C:\Program Files can improve investigation capabilities when code is detected that doesn’t comply with the naming convention or storage parameters. Even the most sophisticated threat actors are unlikely to perform reconnaissance on enterprise naming convention baselines before creating a backdoor and hiding a script. SIEM rules can trigger a response from the moment a suspiciously named script begins to run or a code file is moved into an unauthorized storage location.
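Both checks, name compliance and storage location, fit in a few lines. The regex and approved-directory set below are hypothetical encodings of the convention described above:

```python
import re
from pathlib import PurePosixPath
from typing import Optional

# Hypothetical convention from the article: Department_ScriptName_Author,
# e.g. HR_WellnessLogins_DoexxJo, stored only in approved directories.
NAME_OK = re.compile(r"^[A-Z][A-Za-z]*_[A-Za-z]+_[A-Za-z]+$")
APPROVED_DIRS = {PurePosixPath("/var/opt/scripts")}

def script_alert(path: str) -> Optional[str]:
    """Return an alert reason, or None if the script is compliant."""
    p = PurePosixPath(path)
    if p.parent not in APPROVED_DIRS:
        return f"unapproved location: {p.parent}"
    if not NAME_OK.fullmatch(p.stem):
        return f"non-compliant name: {p.stem}"
    return None

print(script_alert("/var/opt/scripts/HR_WellnessLogins_DoexxJo.sh"))  # None
print(script_alert("/tmp/.hidden_backdoor.sh"))  # unapproved location: /tmp
```

Wired into a SIEM, the alert reason becomes the trigger condition the moment a non-compliant script runs or lands outside an approved directory.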

Scaling Security Response With Standards

Meaningful threats to enterprise data security often fly under the radar of even the most sophisticated threat detection solutions when there’s no baseline to define acceptable activity. SOC analysts have more technological capabilities than ever, but many are struggling to optimize detection and response with effective SIEM use cases.

Clear, scalable systems to define policies for acceptable activity create order in chaos. The smartest approach to creating effective SIEM use cases relies on standards, a strong naming convention and sound policy. It’s impossible to accurately understand risks without a clear framework for authorized activities. Standards, baselines and naming conventions can remove barriers to effective threat detection and response.

The post Bring Order to Chaos By Building SIEM Use Cases, Standards, Baselining and Naming Conventions appeared first on Security Intelligence.

Author: Ludek Subrt

Advanced Persistent Threat (APT), Analytics, Artificial intelligence, Big Data, Data Management, insider threats, Internet of Things (IoT), Machine Learning, Security Analytics, Security Intelligence & Analytics, Security Training, Threat Detection, Threat Intelligence, User Behavior Analytics (UBA),

Stay Ahead of the Growing Security Analytics Market With These Best Practices

As breach rates climb and threat actors continue to evolve their techniques, many IT security teams are turning to new tools in the fight against corporate cybercrime. The proliferation of internet of things (IoT) devices, network services and other technologies in the enterprise has expanded the attack surface every year and will continue to do so. This evolving landscape is prompting organizations to seek out new ways of defending critical assets and gathering threat intelligence.

The Security Analytics Market Is Poised for Massive Growth

Enter security analytics, which mixes threat intelligence with big data capabilities to help detect, analyze and mitigate targeted attacks and persistent threats from outside actors as well as those already inside corporate walls.

“It’s no longer enough to protect against outside attacks with perimeter-based cybersecurity solutions,” said Hani Mustafa, CEO and co-founder of Jazz Networks. “Cybersecurity tools that blend user behavior analytics (UBA), machine learning and data visibility will help security professionals contextualize data and demystify human behavior, allowing them to predict, prevent and protect against insider threats.”

Security analytics can also provide information about attempted breaches from outside sources. Analytics tools work together with existing network defenses and strategies and offer a deeper view into suspicious activity, which could be missed or overlooked for long periods due to the massive amount of superfluous data collected each day.

Indeed, more security teams are seeing the value of analytics as the market appears poised for massive growth. According to Global Market Insights, the security analytics market was valued at more than $2 billion in 2015, and it is estimated to grow by more than 26 percent over the coming years — exceeding $8 billion by 2023. ABI Research put that figure even higher, estimating that the need for these tools will drive the security analytics market toward a revenue of $12 billion by 2024.

Why Are Security Managers Turning to Analytics?

For most security managers, investment in analytics tools represents a way to fill the need for more real-time, actionable information that plays a role in a layered, robust security strategy. Filtering out important information from the massive amounts of data that enterprises deal with daily is a primary goal for many leaders. Businesses are using these tools for many use cases, including analyzing user behavior, examining network traffic, detecting insider threats, uncovering lost data, and reviewing user roles and permissions.

“There has been a shift in cybersecurity analytics tooling over the past several years,” said Ray McKenzie, founder and managing director of Red Beach Advisors. “Companies initially were fine with weekly or biweekly security log analytics and threat identification. This has morphed to real-time analytics and tooling to support vulnerability awareness.”

Another reason for analytics is to gain better insight into the areas that are most at risk within an IT environment. But in efforts to cull important information from a wide variety of potential threats, these tools also present challenges to the teams using them.

“The technology can also cause alert fatigue,” said Simon Whitburn, global senior vice president, cybersecurity services at Nominet. “Effective analytics tools should have the ability to reduce false positives while analyzing data in real-time to pinpoint and eradicate malicious activity quickly. At the end of the day, the key is having access to actionable threat intelligence.”

Personalization Is Paramount

Obtaining actionable threat intelligence means configuring these tools with your unique business needs in mind.

“There is no ‘plug and play’ solution in the security analytics space,” said Liviu Arsene, senior cybersecurity analyst at Bitdefender. “Instead, the best way forward for organizations is to identify and deploy the analytics tools that best fit the organization’s needs.”

When evaluating security analytics tools, consider the company’s size and the complexity of the challenges the business hopes to address. Some organizations may need features such as flexible deployment models, broad and deep analysis, forensics, and monitoring, reporting and visualization. Others may have simpler needs with minimal overhead and a smaller focus on forensics and advanced persistent threats (APTs).

“While there is no single analytics tool that works for all organizations, it’s important for organizations to fully understand the features they need for their infrastructure,” said Arsene.

Best Practices for Researching and Deploying Analytics Solutions

Once you have established your organization’s needs and goals for investing in security analytics, there are other important considerations to keep in mind.

Emphasize Employee Training

Chief information security officers (CISOs) and security managers must ensure that their staffs are prepared to use the tools at the outset of deployment. Training employees on how to make sense of information among the noise of alerts is critical.

“Staff need to be trained to understand the results being generated, what is important, what is not and how to respond,” said Steve Tcherchian, CISO at XYPRO Technology Corporation.

Look for Tools That Can Change With the Threat Landscape

Security experts know that criminals are always one step ahead of technology and tools and that the threat landscape is always evolving. It’s essential to invest in tools that can handle relevant data needs now, but also down the line in several years. In other words, the solutions must evolve alongside the techniques and methodologies of threat actors.

“If the security tools an organization uses remain stagnant in their programming and update schedule, more vulnerabilities will be exposed through other approaches,” said Victor Congionti of Proven Data.

Understand That Analytics Is Only a Supplement to Your Team

Analytics tools are by no means a replacement for your security staff. Having analysts who can understand and interpret data is necessary to get the most out of these solutions.

Be Mindful of the Limitations of Security Analytics

Armed with security analytics tools, organizations can benefit from big data capabilities to analyze data and enhance detection with proactive alerts about potential malicious activity. However, analytics tools have their limitations, and enterprises that invest must evaluate and deploy these tools with their unique business needs in mind. The data obtained from analytics requires context, and trained staff need to understand how to make sense of important alerts among the noise.

The post Stay Ahead of the Growing Security Analytics Market With These Best Practices appeared first on Security Intelligence.

Author: Joan Goodchild