
Incident Response (IR), Security Intelligence & Analytics, Security Operations Center (SOC), Security Professionals, Security Solutions, Threat Intelligence

Level Up Security Operations With Threat Intelligence Cheat Codes

Few fields have experienced growth over the last two decades like cybersecurity and video gaming. Through the years, both industries have seen the rise and fall of incumbent players and the near-constant shift in consumer preferences. While learning how to embrace their own platform shifts, both fields have had to fundamentally reinvent themselves to adapt and survive.

Arcade-Style Silos Make Way for Plug-and-Play Solutions

For many people, their first memorable experience with video games was at an arcade. Arcade operators made heavy one-off investments for each new game that came out. For example, “Mortal Kombat 2” and its sequels did not build onto or integrate with the existing “Mortal Kombat” games. In many ways, this issue has also plagued cybersecurity, with the average organization deploying 80-plus point products from over 40 vendors.

The advent of the console flipped the gaming industry on its head. Rather than having to buy a new machine for each game, there was a single interface that ran multiple games — a classic example being the Super Nintendo Entertainment System (SNES) — where additional functionality was just a cartridge away. Rather than shelling out for singular monolithic solutions, consumers preferred modular platforms that enabled them to add new games in a snap.

The consumer shift toward unified platforms is true today in security as chief information security officers (CISOs) look more for integrated solutions with the ability to add new features as their organization matures. But even as silos are broken down and security data becomes more unified, how can organizations derive actionable insights from the data to understand their adversary, reduce their investigation time and increase visibility into their environment?

What’s Video Game Design Got to Do With Threat Intelligence?

Threat intelligence is the practice of connecting specific threat identifiers across many cybersecurity tools and infusing that information into proactive investigation, incident response and remediation workflows. When designing a threat intelligence strategy that enables analysts to detect threats rapidly and security operations center (SOC) leadership to make informed decisions, it’s important to consider your organization’s unique needs based on factors such as industry, geography and the nature of your most critical assets.

Similarly, depending on the type of game and its objectives, video game designers choose to focus on varying aspects when developing a game, but three are always constant:

1. The Characters and Players

The good-versus-evil dichotomy is often invoked when talking about video game character development; it’s also reflected in the constant game of cat-and-mouse between organizations and threat actors. Whether it’s Mario versus Bowser or analyst versus cyber adversary, it is important to understand the motivation behind attackers to better anticipate their next steps.

Whether that’s kidnapping the princess or exfiltrating sensitive information, security leaders can make informed risk management, organizational and staffing decisions by understanding how the enemy operates. By knowing, for example, that a specific threat actor is targeting their industry, analysts can quickly identify whether they are at risk of an exploit or take proactive steps to patch and protect potentially affected systems.

To invoke Sun Tzu, knowing your enemy is as important as knowing yourself, so having a complete view of which attackers are targeting industry peers or geographic neighbors can give you a window into the adversary’s mindset and help your organization prepare stronger defenses by understanding vulnerabilities before they become attacks.

2. Narrative and Gameplay

One element that separates some of the best games from the rest is a strong narrative element within a collaborative, multiplayer world. Designers carefully curate decision points for the user, having them make choices that potentially alter how the game unfolds. Threat intelligence guides users in their decision-making process to help inform all levels of the SOC. Tactical threat intelligence can be integrated into the workflow to help reduce false positives, enabling the frontline analyst to quickly decide what is real and what is noise. And for tier-two and -three analysts, who proactively hunt threats and facilitate incident response, having information on a particular actor’s tactics, techniques and procedures (TTPs) can help them better make day-to-day decisions on task prioritization, threat mitigation and resource allocation.

In recent years, single-player modes have increasingly been phased out in favor of multiplayer online games. In these games, there is a strong need for communication and collaboration, since most are team-based and the success of the individual depends on the success of the team. Even though analysts may sometimes feel that they’re fighting the battle alone, cybersecurity is a team sport. Threat intelligence is collaborative by nature, with many feeds driven by a combination of individuals sharing information with others in their industry and validated information from threat researchers.

Threat intelligence can be the unifier for members of the security operations center to collaborate when dealing with investigations and incident response. When teams have identified a validated threat and need to investigate or initiate a response workflow, threat intelligence solutions can integrate with incident response and case management tools to enrich playbooks with specific information about the threat. When it’s all hands on deck, teams can quickly collaborate and add additional indicators as they build the investigation and search threat intelligence for more relevant information.

3. Repeat Playability

The best games are not only fun to play once, but over and over again for years — what gamers refer to as repeat playability. Organizations typically deploy multiple threat intelligence feeds of varying quality for broad and overlapping coverage. While having more data at your teams’ fingertips is generally a good thing, increased visibility often comes at a cost. Gone are the days when security teams could get by with multiple static dumps of comma-separated values (CSVs) of indicators of compromise (IoCs). Even with just four threat intelligence sources providing 300 indicators a day each, teams receive nearly 440,000 indicators a year.
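The arithmetic behind that volume is straightforward, using the figures from the sentence above:

```python
# Back-of-the-envelope indicator volume from daily threat intelligence feeds.
feeds = 4                      # threat intelligence sources
per_feed_per_day = 300         # indicators each feed delivers daily

per_day = feeds * per_feed_per_day   # 1,200 indicators every single day
per_year = per_day * 365             # 438,000 indicators a year

print(f"{per_day:,} indicators/day, {per_year:,} indicators/year")
```

At that rate, no analyst team can manually review every indicator, which is why automated prioritization matters.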

Analysts are overwhelmed, spending hours sifting through data searching for what feels like a needle in a needle stack to find bits of actionable information. The repetitive nature and sheer volume of their workload, coupled with the cybersecurity skills gap, often leads to analyst burnout. When potential threats are automatically prioritized based on severity, it reduces investigation time and allows analysts to focus on only the most critical threats to their organization.

Up, Up, Down, Down, Left, Right, Left, Right

With actionable and relevant threat intelligence, security teams have the ability to see the previously unseen and significantly accelerate the way they work. Just like the Konami Code did for “Contra,” threat intelligence can provide organizations with security operations cheat codes to gain the competitive advantage they need to combat cybercriminals.

Register for the May 2 webinar to learn how to unlock threat intelligence easter eggs

The post Level Up Security Operations With Threat Intelligence Cheat Codes appeared first on Security Intelligence.

Author: Jeremy Goldstein

Access Management, IBM Security, Identity & Access, Identity and Access Management (IAM), Kuppingercole, Security Intelligence & Analytics, Security Products, Security Solutions

KuppingerCole Report: Leadership Compass of Access Management and Federation

Part of fixing any IT issue is finding the right solution for the problem and ensuring the issue will not happen again. One of the major struggles for the IT industry is finding the right vendors to enlist as protectors.

KuppingerCole’s Leadership Compass report on access management and federation aims to close the gap between the right solution and the right vendor.

Emerging business requirements, such as onboarding business partners, providing customer access to services and adopting new cloud services, require IT to react and find solutions that support these new communication and collaboration needs. Access management and federation vendors are stepping in to address these needs and enable business agility.

With many vendors in this market segment, the KuppingerCole Leadership Compass provides a view and analysis of the leading vendors and their strengths and weaknesses. The report acts as a guide for the consumer to compare product features and individual product requirements.

Read the KuppingerCole Leadership Compass report

Breaking Down the Leadership Ratings

When evaluating the different vendors and products, KuppingerCole looked into the aspects of overall functionality, size of the company, number of customers, number of developers, partner ecosystems, licensing models and platform support. Specific features, such as federation inbound, federation outbound, backend integration, adaptive authentication, registration, user stores, security models, deployment models, customization and multitenancy, were considered as well.

KuppingerCole created various leadership ratings, including “Product Leadership,” “Innovation Leadership,” and “Market Leadership,” to combine for the “Overall Leadership” rating. With this view, KuppingerCole gives an overall impression of each vendor’s offering in the particular market segment.

Product Leadership is based on analysis of product and services features and capabilities. This view focuses on the functional strength and completeness of each product.

Innovation Leadership focuses on a customer-oriented approach that ensures the product or service has compatibility with earlier versions, as well as supports new features that deliver emerging customer requirements.

Market Leadership is based on market criteria, such as number of customers, the partner ecosystem, the global reach and the nature of responses to factors affecting the market outlook. This view focuses on global reach, sales and service support, and successful execution of marketing strategy.

KuppingerCole Leadership Compass: Access Management and Federation

How IBM Ranks

IBM Security Access Manager (ISAM) is ranked as a leader in the Product, Innovation and Market Leadership categories. This rating reflects IBM ISAM’s position: one of the largest customer bases of all vendors in the market segment, a strong partner ecosystem, mature access management and strong adaptive authentication. ISAM is among the leading products in the access management and federation market and meets organizations’ growing lists of IT security requirements with broad feature support.

Read the Full Report

Check out the complete report to discover:

  • An overview of the access management and federation market;
  • The right vendor and right solution for your business; and
  • Why IBM ISAM is a leader in Product, Innovation and Market Leadership.

Read the KuppingerCole Leadership Compass report

The post KuppingerCole Report: Leadership Compass of Access Management and Federation appeared first on Security Intelligence.

Author: Kelly Lappin

Artificial Intelligence (AI), CISO, Collaboration, RSA Conference, Security Conferences, Security Leaders, Security Leadership, Security Operations Center (SOC), Security Products, Security Professionals, Security Solutions, Skills Gap

Rewrite the Rules to Reduce Complexity in Your Security Architecture

Complexity as it relates to security architecture is attracting a lot of attention. At RSA Conference (RSAC) earlier this year, I saw complexity discussed at multiple vendor booths and in several presentations. But what does it really mean? And is it really that bad?

To get to the root of why complexity is such a challenge, I think you have to take a step back and look at what it is that makes security architecture so complex. One look at the RSAC 2019 exhibit hall provided a clue.

Walking the exhibit floor, I was struck over and over by the sheer number of vendors exhibiting this year. Every inch of space was used to show new products, services, approaches, integrations — you name it. It was noisy and overwhelming for me, and I can only imagine what it must have been like for security directors who were walking around trying to make sense of what was new.

I think the crowded RSAC expo floor is an accurate representation of one of the biggest conundrums in cybersecurity: It is an industry in constant flux. Every day, there are new attacks, updated methods and changing compromise patterns in addition to changing regulatory standards and new business initiatives that need to be evaluated for risk. And since every business has its unique needs and requirements, it’s really no surprise that there are multiple ways to approach a problem, and thus a plethora of products and services available.

Without a doubt, variety is essential for empowering customers to opt for solutions that work best for their unique situations. However, this singular approach to problem solving has created an incredibly complex environment for security organizations to manage, and that has consequences.

“At any given time, the analysts in our security operations center are looking at 10–20 windows open per product,” said Devin Somppi, lead of security operations at BriteSky. “While each of my analysts is an expert in their role, sharing information across these fields is a challenge.”

Somppi referred to his team as the “human glue” binding all of their different security applications. What he means is that many of the individual security solutions produce data that must be analyzed and acted upon. On an individual level, this works great. However, when investigating a multilayered security incident, the data must be shared among the analysts, and that takes time.

“Take, for example, a very common incident: a targeted phishing attack,” said Somppi. “First surfaced through a SIEM, an analyst reviews the situation and kicks off an investigation. This involves multiple parts: checking with your threat intelligence team to run the file against the latest information, getting information from your email security appliance for headers to see if it’s been spoofed, notifying the user of the compromise. This process does work — we make it work — but it can be slow and arduous when that information is spread across multiple teams.”

That kind of delay can be disastrous for end users.

It’s Time to Think Differently About Security

In their RSA Conference session, Somppi and IBM Security Chief Technology Officer Sridhar Muppidi discussed how the biggest hurdle for the security industry — vendors — will be rethinking its approach to security.

“We really have to start looking at security as a team sport,” said Muppidi. An avid cyclist, Muppidi used the example of a peloton from his college cycling days.

“I’m not much of a sprinter, but I’m great at hills,” he said. “There are others in our group where sprinting was their strength. And once we started communicating and leveraging our individual strengths, we not only improved in our race, but as a whole we became much more efficient. The same can be true for security.”

Thinking of security as a team sport shouldn’t be too hard; after all, our adversaries do this very well. Most attackers buy, sell and trade secrets. They share data, swap methodologies and collaborate on processes, all in the name of compromising their targets. So why shouldn’t we defenders adopt the same approach?

The easy answer is that we should. As security vendors, when we communicate better — when we share information and leverage each other’s strengths — we enable organizations to actively defend their networks. More importantly, we empower them to grow their businesses.

The harder question is, how do we do it? In their joint session at RSAC 2019, Muppidi and Somppi laid out three ways the cybersecurity industry can rethink its approach and be more collaborative in its defense.

1. Break Down Silos Among Vendors

In the current environment, each security vendor has its own way of capturing information and it is very hard to integrate that data. While this works to address security issues at an individual level, this siloed approach to using and viewing security data is limiting the potential of not only our clients, but also what we as security vendors can do.

“In order for organizations to really see what cybersecurity can do for their business, we have to break down the silos we’ve built as vendors,” Muppidi said. “This means unifying not only technical capabilities like our APIs or our use of microservices, but also the overall experience. That requires addressing things like different views on data privacy or getting over our ‘competitive’ mindset.”

This is not easy to do, but it ultimately provides a better cybersecurity experience for organizations that are already struggling.

2. Rethink the Role of Security Analysts by Embracing Artificial Intelligence

Artificial intelligence (AI) will play a pivotal role in how we approach security in the coming years. AI will become the connective tissue between products, decreasing the need for the “human glue” Somppi described as the current approach to information sharing between technologies.

“We will always need analysts,” said Somppi. “But they’ll be augmented by AI, and we’ll need to rethink the way they work. Analysts need to be the experts, but AI needs to be the glue.”

Ultimately, using AI to reduce the time it takes to connect data insights will make security stronger and our analysts less stressed.

3. Redefine Success as It Relates to Securing the Business

Every organization has a different measure of success when it comes to security. For some, success means speeding up the time it takes to detect a threat. Others are more concerned about how long it takes to remedy the situation, or maybe it’s all about applying lessons learned to make sure it doesn’t happen again. Without a doubt, these are all important, but we need to think differently.

“What if success means getting your SOC analysts home in time for dinner with their families?” Muppidi asked. When considering the predicted security skills gap, reducing stress among your security analysts is a critical measure of success.

“Finding resources tends to be a challenge for our industry,” said Somppi. “I can find technology for anything and everything, but to have someone who can utilize that technology is incredibly difficult. I don’t want to burn them out.”

In addition to keeping them engaged and interested in their area of defense, it’s also critical to reduce the rate of analyst burnout. By reducing workload and stress, you can empower your SOC analysts to focus on fewer, but higher-value projects that are more strategic to the organization and are focused on growth.

Less Is More When It Comes to Your Security Architecture

The main takeaway from Somppi and Muppidi’s RSAC session is that it’s time for cybersecurity professionals to collaborate more and compete less. By breaking down silos among security teams and vendors, augmenting human intelligence with AI and machine learning, and empowering analysts to do more impactful work under less pressure, chief information security officers (CISOs) and business leaders can improve security output while also reducing the number of security products needed to protect the enterprise. Put simply, it’s time to make less matter more.

The post Rewrite the Rules to Reduce Complexity in Your Security Architecture appeared first on Security Intelligence.

Author: Jennifer Glenn

Incident Management, Incident Response, Incident Response (IR), Incident Response Plan, Security Information and Event Management (SIEM), Security Operations and Response, Security Operations Center (SOC), Security Products, Security Professionals, Security Solutions, Threat Intelligence

SOAR: The Second Arm of Security Operations

While security information and event management (SIEM) is rightly considered an indispensable tool for detecting and managing threats, detection only does so much good if you aren’t equipped to respond to what you find. Successful threat management demands rapid incident response, yet security operations teams tend to overemphasize detection at the expense of response.

How can organizations both empower their responders to remediate threats quickly and strengthen their security posture to prevent data breaches in the first place? The answer is security orchestration, automation and response (SOAR).

SOAR Solutions Add Context to SIEM Data

SIEM solutions are now deployed in virtually every large enterprise, and for very good reason. In the U.K., in fact, the RM3808 regulation precludes any organization from bidding for public sector network services work unless it has a SIEM solution in place. This makes sense: Companies should be monitoring their events and data flows if they expect to detect threats to their information or that of their customers.

SOAR tooling enables security operations teams to automate the tedious and repetitive elements of their workflow that don’t require human oversight and instead focus on more mentally challenging tasks that call for discernment and judgment. The best SOAR solutions enrich and contextualize threats to help analysts quickly triage cases according to the severity of the risk, sensitivity and/or criticality of the business functions under threat.
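That triage logic can be sketched as a simple risk-scoring sort. The field names and weights below are illustrative assumptions, not any particular SOAR product’s schema:

```python
# Rank incoming alerts by a composite risk score so analysts see the most
# critical cases first. Severity, asset criticality and the threat-intel
# match flag are hypothetical enrichment fields.
alerts = [
    {"id": "A-101", "severity": 3, "asset_criticality": 2, "intel_match": False},
    {"id": "A-102", "severity": 5, "asset_criticality": 5, "intel_match": True},
    {"id": "A-103", "severity": 4, "asset_criticality": 1, "intel_match": False},
]

def risk_score(alert):
    """Weight severity by how critical the affected asset is; double the
    score when enrichment found a matching threat intelligence indicator."""
    score = alert["severity"] * alert["asset_criticality"]
    if alert["intel_match"]:
        score *= 2
    return score

triage_queue = sorted(alerts, key=risk_score, reverse=True)
print([a["id"] for a in triage_queue])  # A-102 lands on top of the queue
```

However the weights are chosen, the point is the same: the analyst opens a queue that is already ordered by business risk rather than by arrival time.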

Many of the remedial tasks that fall under the analyst’s supervision, such as isolating endpoints, can be orchestrated with a SOAR platform via application programming interfaces (APIs). Faster remediation leads to earlier resolution of incidents in the attack chain, which greatly reduces the risk of a data breach.
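As a rough illustration of such orchestration, a playbook step might assemble a REST call to an EDR tool’s isolation endpoint. The URL, fields and case identifier below are hypothetical, not a real product’s API:

```python
import json

def build_isolation_request(hostname, case_id):
    """Build the (hypothetical) REST call a SOAR playbook could make to an
    EDR tool's API to isolate a compromised endpoint from the network."""
    return {
        "method": "POST",
        "url": f"https://edr.example.com/api/v1/hosts/{hostname}/isolate",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reason": f"SOAR playbook, case {case_id}"}),
    }

# A playbook step isolating a workstation tied to an open investigation.
req = build_isolation_request("ws-042", "IR-2019-0173")
print(req["method"], req["url"])
```

In practice the playbook engine would send this request (with authentication) and record the API response in the case file, so the isolation action is both automatic and auditable.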

A Force Multiplier for Understaffed Security Operations Teams

Even if you had an unlimited security budget at your disposal, you would still struggle to hire the caliber and quantity of talent you need to stay on top of the constant barrage of threats to your organization. According to Cybersecurity Ventures, the cyber skills shortfall is expected to hit 3.5 million unfilled positions by 2021. This is one of the reasons why white hats are lagging behind the increasingly sophisticated threat landscape in the cyber arms race.

SOAR solutions can help organizations address the talent gap by lightening analysts’ manual workload and sharpening their ability to prioritize the most pressing threats and remediate them quickly.

Enrichment and Contextualization: Where SIEM Ends and SOAR Begins

There is a degree of overlap in how vendors describe the enrichment and contextualization functionalities of their SIEM and SOAR solutions. It’s common for both products to claim that they enrich, contextualize and help triage threats. But where does SIEM end and SOAR begin?

SIEM is all about detection. The amount of automation and orchestration required for swift incident response cannot be carried out at the detection layer. If a SIEM tool processes between 10,000 and 500,000 events per second — as it does in most cases — the computing resources required are simply not available to enrich this volume of data. So why can’t the enrichment take place once the SIEM tool has generated an offense or incident?

For the average enterprise, 80 percent or fewer of incidents originate from the SIEM. It’s important to channel incidents generated by data loss prevention (DLP) tools, managed service alerts, phishing reports and investigations into one place so your security operations center (SOC) analysts or computer security incident response team (CSIRT) can contextualize and act upon them. SIEM tools are not optimized to support this alongside the mammoth task of analyzing enormous reams of events and data flows according to predefined correlations and indicators of compromise (IoCs). Nor are endpoint detection and response (EDR) and threat intelligence platforms typically integrated with the SIEM, so it assists with only part of the investigation process.

Lastly, case management is arguably the most crucial feature set within incident response. Cybersecurity playbooks have become enormously complex, and the level of effort and cost needed to build them into the detection layer is often prohibitive.

Why Detection Alone Is Not Enough

It goes without saying that well-calibrated detection tools give the incident response function the data it needs to remediate threats. But having well-defined incident response plans can also help sharpen and refine the rules and use cases you use to calibrate your SIEM solution. The benefits are bidirectional: What correlations and indicators are you looking for? Why are you looking for them? Once you find them, what is the incident response plan?

One of our clients recently enacted a protocol whereby detection use cases are only written if they have an associated incident response plan. If you want to write SIEM rules for the sole purpose of visibility and metrics, that’s all well and good. However, being deliberate and honest about this will keep your operations more streamlined.

If your function is willing to spend thousands or even millions on SIEM solutions but is not prepared to deal efficiently with the alerts they output, what is the value of that investment? Why wait until your SIEM tool is churning out alerts before realizing that your team is overwhelmed?

Clients of ours that have run parallel SIEM/SOAR proofs of concept (POCs) have saved significant amounts of time and effort compared to those that have undergone an arduous SIEM POC only to have to follow up with another SOAR POC. In one case, a client even decided to switch off its SIEM solution until it had implemented a SOAR tool to help it deal with the torrent of alerts. Given that SIEM and SOAR are two sides of the security operations coin, why run these POCs consecutively when they can be executed concurrently?

The post SOAR: The Second Arm of Security Operations appeared first on Security Intelligence.

Author: Cian Walker

Cyberthreats, RSA Conference, Security Conferences, Security Information and Event Management (SIEM), Security Solutions, Threat Detection, threat hunting, Threat Intelligence, Threat Prevention, Threat Protection

Hunting for the True Meaning of Threat Hunting at RSAC 2019

After my first-ever RSA Conference experience, I returned to Boston with a lot of takeaways — not to mention a week’s worth of new socks, thanks to generous vendors that had a more functional swag approach than most. I spent the majority of my time at RSAC 2019 at the Master Threat Hunting kiosk within the broader IBM Security booth, where I told anyone who wanted to listen about how we use methodologies and tools from the military and intelligence communities to fight cyberthreats in the private sector. When I wasn’t at the booth, I was scouring the show floor on a hunt of my own — a hunt for the true meaning of threat hunting.

Don’t Believe the Hype: 3 Common Misconceptions About Threat Hunting

At first glance, the results of my hunt seemed promising; I saw the term “threat hunting” plastered all over many of the vendors’ booths. Wanting to learn more, I spoke with the booth personnel about their threat hunting solutions, gathered a stack of marketing one-pagers and continued on my separate hunt for free socks and stress balls.

After digesting the information from booth staff and digging into the marketing materials from the myriad vendors, I was saddened to learn that threat hunting is becoming a full-blown buzzword.

Let’s be honest: “Threat hunting” certainly has a cool ring to it that draws people in and makes them want to learn more. However, it’s important not to lose sight of the fact that threat hunting is an actual approach to cyber investigations that has been around since long before marketers started using it as a hook.

Below are three of the most notable misconceptions about threat hunting I witnessed as I prowled around the show floor at RSAC 2019.

1. Threat Hunting Should Be Fully Automated

In general, automation is great; I love automating parts of my life to save time and to make things easier. However, there are some things that can’t be fully automated — or shouldn’t be, at least not yet. Threat hunting is one of those things.

While automation can be used within various threat hunting tools, it is still a very manual, human-led process to proactively (and reactively) hunt for unknown threats in your network that may have avoided your rules-based detection solutions. Threat hunting methodologies were derived from the counterterrorism community and repurposed for cybersecurity. There’s a reason why we don’t fully automate counterterrorism analysis, and the same applies to cyber.

2. Threat Hunting and EDR Are One and the Same

This was the most common misconception I encountered while searching for threat hunting solutions at RSAC. It went something like this: I would go into a booth and ask to learn more about the vendor’s threat hunting solution, only to find that what was actually being marketed was an endpoint detection and response (EDR) solution.

EDR is a crucial piece of threat hunting, but these products are not the only tools threat hunters use. If threat hunting were as easy as using an EDR solution to find threats, we would have a much higher success rate. The truth is that EDR solutions need to be coupled with other tools, such as threat intelligence, open-source intelligence (OSINT) and network data, and brought together in a common platform to visualize anomalies and trends in the data.
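As a toy illustration of that cross-tool correlation, consider intersecting file hashes seen by EDR agents with indicators from a threat intelligence feed. All hostnames and hash values below are made up:

```python
# File hashes observed by EDR agents on two endpoints (illustrative values).
edr_observables = {
    "ws-01": {"aaaa1111", "d41d8cd98f00b204e9800998ecf8427e"},
    "ws-02": {"bbbb2222"},
}

# Indicators of compromise pulled from a threat intelligence feed.
intel_iocs = {"d41d8cd98f00b204e9800998ecf8427e", "cccc3333"}

# Flag any endpoint whose observables intersect the IoC set, the kind of
# cross-source join a common hunting platform makes trivial.
hits = {
    host: observed & intel_iocs
    for host, observed in edr_observables.items()
    if observed & intel_iocs
}
print(hits)  # only ws-01 has a matching indicator
```

Real hunts layer on OSINT and network telemetry as well, but the core pattern is the same: bring the data sets together, then look for where they touch.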

3. Threat Hunting Is Overly Complicated

All of the marketing and buzz around threat hunting has overcomplicated what it actually is. It’s not one tool, it’s not automated, it’s not an overly complicated process. It takes multiple tools and a ton of data, it is very much dependent on well-trained analysts who know what they’re looking for, and it is an investigative process just like counterterrorism and law enforcement investigations. Since cyber threat hunting mirrors these investigative techniques, threat hunters should look toward trusted tools from the national security and law enforcement sectors.

What Is the True Meaning of Cyber Threat Hunting?

Don’t get me wrong — I am thrilled that threat hunting is gaining steam and vendors are coming up with innovative solutions to contribute to the definition of threat hunting. As a former analyst, I define threat hunting as an in-depth, human-led, investigative process to discover threats to an organization. My definition may vary from most when it comes to how this is conducted, since most definitions emphasize that threat hunting is a totally proactive approach. While I absolutely agree with the importance of proactivity, there aren’t many organizations that can take a solely proactive approach to threat hunting due to constraints related to budget, training and time.

While not ideal, there is a way to hunt reactively, which is often more realistic for small and midsize organizations. For example, you could conduct a more in-depth cyber investigation to get the context around a cyber incident or alert. Some would argue that’s just incident response, not threat hunting — but it turns into threat hunting when an analyst takes an all-source intelligence approach to enrich their investigation with external sources, such as threat intelligence and social media, and other internal sources of data. This approach can show the who, what, where, when and how around the incident and inform leadership on how to take the best action. The context can be used to retrain the rules-based systems and build investigative baselines for future analysis.

The Definition of Threat Hunting Is Evolving

Cyber threat hunting tools come in all shapes and sizes, but the most advanced tools allow you to reactively and proactively investigate threats by bringing all your internal and external data into one platform. By fusing internal security information and event management (SIEM) data, internal records, access logs and more with external data feeds, cyber threat hunters can identify trends and anomalies in the data and turn it into actionable intelligence to address threats in the network and proactively thwart ones that haven’t hit yet.
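To make the fusion of internal and external data concrete, here is a minimal sketch of reactive enrichment: checking an internal alert against an external indicator feed. The feed contents, alert fields and function names are hypothetical illustrations, not any specific product’s schema.

```python
# Indicators that would normally be loaded from an external threat intel feed.
BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def enrich_alert(alert: dict) -> dict:
    """Attach external threat intelligence context to a SIEM alert."""
    enriched = dict(alert)
    enriched["known_bad_source"] = alert.get("src_ip") in BAD_IPS
    return enriched

alert = {"id": "A-1042", "src_ip": "203.0.113.7", "event": "failed_login"}
print(enrich_alert(alert)["known_bad_source"])  # True
```

A real hunt would fold in many more sources (passive DNS, social media, internal access logs), but the pattern is the same: the analyst’s investigation drives which context gets joined to which event.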

Behind the buzz and momentum from RSAC 2019, threat hunting will continue to gain traction, more advanced solutions will be developed, and organizations will be able to hunt down threats more efficiently and effectively. I’m excited to see how the definition evolves in the near future — as long as the cyber threat hunting roots stay strong.

Read the “SANS 2018 Threat Hunting Results” report

The post Hunting for the True Meaning of Threat Hunting at RSAC 2019 appeared first on Security Intelligence.

Author: Jake Munroe

Endpoint, Endpoint Management, Endpoint Security, patch, Patch Management, Security Solutions, Vulnerabilities, Vulnerability Analysis, Vulnerability Management,

How Patch Posture Reporting Improves Security Landscapes

Vulnerability identification and remediation are critical to maintaining a secure environment. Today, most organizations are using one or multiple vulnerability scanning tools to identify vulnerabilities on endpoints such as business critical servers, laptops and desktops. They also have processes in place to apply security patches (provided by platform or application software vendors) to remediate vulnerabilities quickly. However, many security teams remain concerned that their IT infrastructures may still be vulnerable to attacks from newly emerging malware or exploitation vectors (e.g., WannaCry, Petya/NotPetya and Apache Struts), simply because some machines contain vulnerabilities that have not been identified or patched and could be manipulated by these threats.

So, what is missing in the current vulnerability identification and patching processes and tools? We carefully analyzed vulnerability and patch management processes and interacted with multiple customer teams involved in these areas to learn more. We discovered that insufficient patch posture reporting, a lack of data, and operational inefficiency in these processes and tools actually cause great challenges with the various teams. We also found that these challenges are why many organizations still cannot establish a high degree of confidence in protecting their IT infrastructure and valuable business assets using their current vulnerability identification and remediation capabilities.

Here are some of our key findings:

Aggregate Patch/Vulnerability Posture Data Is Lacking

Typically, security teams perform vulnerability scanning periodically and report current security posture, mainly from a vulnerability discovery perspective. IT operations/infrastructure teams are then responsible for applying security patches to all business-critical machines to address the identified vulnerabilities. Unfortunately, there is usually no data available to give a comprehensive and cumulative view of all the patching actions that have been performed over time and how the vulnerabilities have been remediated by these applied patches. Customers have told IBM that without visibility to a complete and timely patch posture of the entire IT infrastructure with a focus on vulnerability remediation, it is difficult to assess the overall risk level of an organization and the effectiveness of the patching activities.

Vulnerability Remediation Is Not Prioritized

IBM frequently hears that IT operations teams do not have access to the vulnerability data produced by security teams. And even if they do have access, there is no data readily available to link the discovered vulnerabilities with the required security patches to be applied. As a result of this data gap and lack of integration between the tools used by the two teams, when the IT operations teams apply security patches, they have no idea of exactly what vulnerabilities are being remediated and thus how the patching is going to impact the overall security posture.

In today’s IT environments, there is a large number of machines and there are always many patches to apply across the entire software stack on each machine (e.g., virtual machines, operating system, middleware, applications, etc.). It is becoming increasingly important for IT security and operations teams to work together to prioritize remediation efforts to address the biggest vulnerabilities first (the vulnerabilities that could cause the greatest damages), thereby optimizing their remediation impact. This will also help raise the organization’s overall security posture in a timely manner and help reduce costs.

Demonstrating Compliance Is Difficult

Many regulations and corporate security policies require high-severity security patches to be applied within a relatively short time period. The Payment Card Industry Data Security Standard (PCI DSS), for example, requires installing applicable, critical security patches within one month of release. However, it is usually a manual, time-consuming process for compliance teams to collect all the needed information, such as when a security patch was released, when it was applied to each applicable machine and whether all the machines have had the patch applied. Compliance teams have told IBM that they need this data to be collected and reported in a more automated way to more effectively demonstrate compliance during regulatory or corporate policy audits.

Things to Look for in Patch Posture Reporting

To address these challenges, look for solutions that offer patch posture reporting as either a standalone tool or a capability that is incorporated within an existing solution focused on vulnerability management, patch management and/or compliance. The primary goal of patch posture reporting should be to empower the IT security and operations teams to better meet their vulnerability identification, patch management and compliance responsibilities by providing the ability for:

  • Security or IT operations managers to get the current status and historical trend of all patches applicable to all machines, so they can get a complete risk posture or assess patch effort efficiency at any time;
  • IT operations specialists to sort/filter all applicable patches based on severity and/or current remediation status to prioritize patching actions, so they can maximize their impact to security posture improvement; and
  • Compliance specialists to track and report when security patches are released and applied to each machine, so they can more effectively demonstrate compliance with regulatory/organizational policies.

Let’s look more closely at some of the specific functions that should be provided by a patch posture reporting tool.

Comprehensive Patch Posture Assessment

Patch posture reporting should help IT operations managers assess patching effort efficiency and security operations center (SOC) managers assess how related vulnerabilities have been remediated by patching. The posture data needs to be made available in near real time to provide a current and comprehensive picture. Specific types of data to be reported should include:

  • For each patch, all the machines that have been remediated, the remediation percentage (among all the applicable machines) and how the data has changed over time (a historical trend); and
  • For each machine, all the patches that have been applied, the remediation percentage (among all the applicable patches) and how the data has changed over time (a historical trend).
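The per-patch remediation percentage described above is a simple aggregation over posture records. The following sketch assumes a hypothetical record shape of (patch ID, machine ID, applied flag); real posture tools would pull this from scan and patch-action data.

```python
from collections import defaultdict

# Hypothetical posture records: (patch_id, machine_id, applied?)
records = [
    ("KB5001", "srv-01", True),
    ("KB5001", "srv-02", False),
    ("KB5001", "srv-03", True),
    ("KB5002", "srv-01", True),
]

def remediation_by_patch(rows):
    """Percentage of applicable machines remediated, per patch."""
    applicable = defaultdict(int)
    applied = defaultdict(int)
    for patch, _machine, done in rows:
        applicable[patch] += 1
        if done:
            applied[patch] += 1
    return {p: round(100.0 * applied[p] / applicable[p], 1) for p in applicable}

print(remediation_by_patch(records))  # {'KB5001': 66.7, 'KB5002': 100.0}
```

Grouping the same records by machine instead of by patch yields the per-machine view; snapshotting the result over time yields the historical trend.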

To provide benefits to various teams with different responsibilities, the posture data should also be presented in multiple views including:

  • Per-machine views showing all the patches applicable to each machine;
  • Per-machine group views (e.g., all the machines running Windows) showing all the patches applicable to a group of similar machines;
  • Per-patch views showing all the machines to which a particular patch applies; and
  • Aggregated posture views showing all the patches across all the machines.

With this type of comprehensive patch posture data, it would be fast and easy for security or IT operations managers to answer frequently asked questions, such as:

  • What machines do not have a particular patch applied yet (e.g., for a critical security patch that can fix vulnerabilities exploited by malware as bad as WannaCry)?
  • What is the current remediation status of all the patches applicable to a business-critical server?
  • What are the top 10 machines that have the greatest number of patches yet to be applied?

Remediation Task Prioritization

In addition to a complete patch posture, look for tools that provide data filtering and sorting functions to help IT operations specialists efficiently determine their next remediation priority. Data that can be filtered or sorted should include:

  • The severity of a patch (a patch with a higher severity usually needs to be applied first);
  • When the patch was released (usually an older patch needs to be applied sooner, especially for compliance purposes);
  • Whether a patch has been superseded (applying a newer, superseding patch may reduce the total patching effort);
  • The number of machines the patch can be applied to (applying the more broadly applicable patch can have a larger impact to the overall posture); and
  • The current remediation status (a lower remediation percentage may mean a higher security exposure).
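The sorting criteria above can be combined into a single composite key. This is an illustrative sketch; the field names and weighting order are assumptions, and a real tool would let operators tune the ordering.

```python
from datetime import date

# Hypothetical patch metadata; field names are illustrative only.
patches = [
    {"id": "KB5001", "severity": 9.8, "released": date(2019, 1, 8),
     "superseded": False, "applicable": 420, "remediated_pct": 35},
    {"id": "KB5002", "severity": 7.5, "released": date(2019, 3, 12),
     "superseded": True, "applicable": 80, "remediated_pct": 90},
]

def priority_key(p):
    # Superseded patches sort last; then higher severity, older release,
    # broader applicability and lower remediation percentage sort first.
    return (p["superseded"], -p["severity"], p["released"],
            -p["applicable"], p["remediated_pct"])

for p in sorted(patches, key=priority_key):
    print(p["id"])  # KB5001 first: not superseded, more severe, older, broader
```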

Patch management usually needs to take into account additional factors, such as when the machines are available for offline maintenance (patching), and that requires coordination with machine owners. Nonetheless, a complete patch posture and efficient filtering/sorting functions can definitely enable the IT security and operations teams to make much more informed decisions on patching prioritization so they can help remediate vulnerabilities more effectively.

Patch Compliance Demonstration

As mentioned earlier, complying with a regulation or a corporate policy for security patching usually requires tracking of very tedious data, because compliance specialists usually need to be able to answer the following questions during an audit:

  • When was a patch released by the vendor?
  • When was it applied to each applicable machine?
  • Did the elapsed time (between release and application) exceed the regulation/policy requirement?
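The elapsed-time check in the last question is straightforward once release and application dates are tracked. A minimal sketch, using the PCI DSS one-month window as the example requirement (treated here as 30 days for simplicity):

```python
from datetime import date

PCI_WINDOW_DAYS = 30  # PCI DSS: critical patches within one month of release

def is_compliant(released: date, applied: date) -> bool:
    """True if the patch was applied within the required window."""
    return (applied - released).days <= PCI_WINDOW_DAYS

print(is_compliant(date(2019, 4, 1), date(2019, 4, 20)))  # True
print(is_compliant(date(2019, 4, 1), date(2019, 5, 15)))  # False
```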

Consider using tools that track and store this type of data automatically (which may mean integration with patch management tools and patch action logs) and that can generate related reports that compliance specialists can leverage to pass audits. It’s also important to note that historical trend data for patching can demonstrate the progress being made by the IT security and operations team over time. In some cases, even if 100 percent compliance has not been achieved, this historical trending data may be useful in demonstrating the organization’s desire to achieve compliance (and thereby avoid potential fines or repercussions).

Patch Posture Reporting Is Key to Identifying Risks, Reducing Costs and Demonstrating Compliance

In summary, many organizations still struggle with getting full visibility of aggregated patch/vulnerability posture, prioritizing vulnerability remediation and effectively demonstrating compliance. This is primarily due to insufficient data and missing capabilities in existing vulnerability discovery or patch management solutions. Patch posture reporting addresses these gaps by delivering the functions described in this blog to the IT operations/infrastructure and security teams who need them. This, in turn, enables these teams and organizations to much more effectively identify and mitigate security risks, reduce operational costs and demonstrate policy/regulation compliance. IBM BigFix Compliance has recently been enhanced to include patch posture reporting to provide these benefits to organizations. For more information, please refer to this solution brief.

The post How Patch Posture Reporting Improves Security Landscapes appeared first on Security Intelligence.

Author: I-Lung Kao

Cloud, Cloud Security, Cloud Services, Data Protection, Data Security, Encryption, Encryption Keys, Security Solutions,

Lessons from the Encryption Front Line: Core Components in the Cloud

This is the second installment in a multipart series about data encryption. Be sure to read part one for the full story.

Now that we understand the common threats facing organizations and how to select the right solution for data-at-rest encryption (DaRE), what’s the next step in your data encryption journey?

Encrypting data is the relatively easy part of the solution, but securely managing keys is a major challenge. According to the National Institute of Standards and Technology (NIST), “Keys are analogous to the combination of a safe. If an adversary knows the combination, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms.”

DaRE needs more than software to encrypt data, because the keys still need to be managed. Let’s dive deeper into the key management challenge, the core components needed to manage keys effectively and the open standards security teams should use in their cloud environments.

The Encryption Key Management Challenge

In DaRE solutions, symmetric encryption is used for speed, and the same key is used to encrypt and decrypt the data. The security of the system relies on the encryption key being kept secret. Most organizations now encrypt laptop disks, but to start the decryption process, a password must be entered manually, which is impractical for cloud environments with thousands of servers.

If the data is being decrypted after a system has started, the encryption software can use a secret key stored locally on the server, which will be in an obscured format that can be decoded. The risk here is that a privileged insider or threat actor could potentially decode the key and decrypt the data. Therefore, security teams need a way to protect their encryption keys.

Unscrambling the Encryption Solution Components

A typical cloud encryption solution has three core components: an encryption client, a key management server (KMS) and a hardware security module (HSM).

The encryption client performs the actual encryption using a data encryption key (DEK). Since the DEK itself must be stored in encrypted form, it is wrapped using a key encryption key (KEK).

The KEK is obtained from a KMS, which contains many hundreds or thousands of keys in a database. Once again, the KEKs need to be encrypted using a master encryption key (MEK) because there is a risk that the KMS could be compromised. The MEK is stored in the HSM, which enables the security team to store a key in hardware that physically prevents tampering or loss of the MEK.
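The DEK/KEK layering described above is often called envelope encryption. The sketch below illustrates the wrapping and unwrapping flow only; the XOR "cipher" is a stdlib-only stand-in for a real symmetric algorithm such as AES-GCM, and key retrieval from a KMS and HSM is simplified away.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real symmetric cipher (e.g., AES-GCM); XOR with a
    # random equal-length key keeps this sketch dependency-free.
    return bytes(a ^ b for a, b in zip(data, key))

record = b"sensitive record"
dek = secrets.token_bytes(len(record))  # data encryption key
kek = secrets.token_bytes(len(dek))     # key encryption key, from the KMS

ciphertext = xor(record, dek)           # encrypt the data with the DEK
wrapped_dek = xor(dek, kek)             # wrap the DEK; only this is stored

# Decryption path: unwrap the DEK with the KEK, then decrypt the data.
plaintext = xor(ciphertext, xor(wrapped_dek, kek))
print(plaintext)  # b'sensitive record'
```

The point of the layering is that compromising stored data yields only ciphertext and a wrapped DEK; the KEK never leaves the KMS, and the MEK protecting the KMS never leaves the HSM.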

Creating an Open Encryption Solution

In the past, encryption solutions have been built around proprietary protocols, making integration difficult. That’s why OASIS defined a set of standards to improve interoperability between encryption and key management solutions from different vendors.

Over the past few years, vendors have increasingly adopted standard protocols for communication between the KMS and HSM, such as OASIS PKCS#11, as well as communication between the encryption client and the KMS, such as the OASIS KMIP protocol. Look for solutions that use these standards when putting together your encryption strategy.

Encryption Solutions Are Maturing

With a standard set of components that support open standards, encryption technology is gradually maturing to make implementation and encryption key management easier. In cloud environments, these components are often available in a lower-cost implementation known as bring-your-own-key (BYOK), which integrates with supported DaRE solutions. These solutions are now reaching high levels of assurance with HSMs offering FIPS 140-2 Level 4 in the cloud.

Depending on your needs, you can develop encryption solutions based on open standards from components you build and run yourself or source them as managed services from cloud providers.

The post Lessons from the Encryption Front Line: Core Components in the Cloud appeared first on Security Intelligence.

Author: Mark Buckwell

Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), CISO, Cloud Security, Cognitive Security, Internet of Things (IoT), Machine Learning, Penetration Testing, Security Intelligence & Analytics, Security Leaders, Security Leadership, Security Operations Center (SOC), Security Solutions,

Break Through Cybersecurity Complexity With New Rules, Not More Tools

Let’s be frank: Chief information security officers (CISOs) and security professionals all know cybersecurity complexity is a major challenge in today’s threat landscape. Other folks in the security industry know this too — although some don’t want to admit it. The problem is that amid increasing danger and a growing skills shortage, security teams are overwhelmed by alerts and the growing number of complex tools they have to manage. We need to change that, but how? By completely rethinking our assumptions.

The basic assumption of security up until now is that new threats require new tools. After 12 years at IBM Security, leading marketing teams and making continuous contact with our clients — and, most recently, as VP of product marketing — I’ve seen a lot of promising new technology. But in our rapidly diversifying industry, there are more specialized products to face every kind of threat in an expanding universe of attack vectors. Complexity is a hidden cost of all these marvelous products.

It’s not just security products that contribute to the cybersecurity complexity conundrum; digitization, mobility, cloud and the internet of things (IoT) all contribute to the complexity of IT environments, making security an uphill battle for underresourced security teams. According to Forrester’s “Global Business Technographics Security Survey 2018,” 31 percent of business and IT decision-makers ranked the complexity of the IT environment among the biggest security challenges they face, tied with the changing nature of threats as the most-cited challenge.

I’ll give you one more mind-boggling statistic to demonstrate why complexity is the enemy of security: According to IBM estimates, enterprises use as many as 80 different security products from 40 vendors. Imagine trying to build a clear picture with pieces from 80 separate puzzles. That’s what CISOs and security operations teams are being asked to do.

7 Rules to Help CISOs Reduce Cybersecurity Complexity

The sum of the parts is not greater than the whole. So, we need to escape the best-of-breed trap to handle the problem of complexity. Cybersecurity doesn’t need more tools; it needs new rules.

Complexity requires us as security professionals and industry partners to turn the old ways of thinking inside out and bring in fresh perspectives.

Below are seven rules to help us think in new ways about the complex, evolving challenges that CISOs, security teams and their organizations face today.

1. Open Equals Closed

You can’t prevent security threats by piling on more tools that don’t talk to each other and create more noise for overwhelmed analysts. Security products need to work in concert, and that requires integration and collaboration. An open, connected, cloud-based security platform that brings security products together closes the gaps that point products leave in your defenses.

2. See More When You See Less

Security operations centers (SOCs) see thousands of security events every day — a 2018 survey of 179 IT professionals found that 55 percent of respondents handle more than 10,000 alerts per day, and 27 percent handle more than 1 million events per day. SOC analysts can’t handle that volume.

According to the same survey, one-third of IT professionals simply ignore certain categories of alerts or turn them off altogether. A smarter approach to the overwhelming volume of alerts leverages analytics and artificial intelligence (AI) so SOC analysts can focus on the most crucial threats first, rather than chase every security event they see.

3. An Hour Takes a Minute

When you find a security incident that requires deeper investigation, time is of the essence. Analysts can’t afford to get bogged down in searching for information in a sea of threats.

Human intelligence augmented by AI — what IBM calls cognitive security — allows SOC analysts to respond to threats up to 60 times faster. An advanced AI can understand, reason and learn from structured and unstructured data, such as news articles, blogs and research papers, in seconds. By automating mundane tasks, analysts are freed to make critical decisions for faster response and mitigation.

4. A Skills Shortage Is an Abundance

It’s no secret that greater demand for cybersecurity professionals and an inadequate pipeline of traditionally trained candidates has led to a growing skills gap. Meanwhile, cybercriminals have grown increasingly collaborative, but those who work to defend against them remain largely siloed. Collaboration platforms for security teams and shared threat intelligence between vendors are force multipliers for your team.

5. Getting Hacked Is an Advantage

If you’re not seeking out and patching vulnerabilities in your network and applications, you’re making an assumption that what you don’t know can’t hurt you. Ethical hacking and penetration testing turn hacking into an advantage, helping you find your vulnerabilities before adversaries do.

6. Compliance Is Liberating

More and more consumers say they will refuse to buy products from companies that they don’t trust to protect their data, no matter how great the products are. By creating a culture of proactive data compliance, you can exchange the checkbox mentality for continuous compliance, turning security into a competitive advantage.

7. Rigidity Is Breakthrough

The success of your business depends not only on customer loyalty, but also employee productivity. Balance security with productivity by practicing strong security hygiene. Run rigid but silent security processes in the background to stay out of the way of productivity.

What’s the bottom line here? Times are changing, and the current trend toward complexity will slow the business down, cost too much and fail to reduce cyber risk. It’s time to break through cybersecurity complexity and write new rules for a new era.

The post Break Through Cybersecurity Complexity With New Rules, Not More Tools appeared first on Security Intelligence.

Author: Wangui McKelvey

Access Management, Identity and Access Management (IAM), Incident Response (IR), Security Information and Event Management (SIEM), Security Intelligence & Analytics, Security Operations Center (SOC), Security Solutions, Threat Detection,

Bring Order to Chaos By Building SIEM Use Cases, Standards, Baselining and Naming Conventions

Security operations centers (SOCs) are struggling to create automated detection and response capabilities. While custom security information and event management (SIEM) use cases can allow businesses to improve automation, creating use cases requires clear business logic. Many security organizations lack efficient, accurate methods to distinguish between authorized and unauthorized activity patterns across components of the enterprise network.

Even the most intelligent SIEM can fail to deliver value when it’s not optimized for use cases, or if rules are created according to incorrect parameters. Creating a framework that can accurately detect suspicious activity requires baselines, naming conventions and effective policies.

Defining Parameters for SIEM Use Cases Is a Barrier to SOC Success

Over the past few years, I’ve consulted with many enterprise SOCs to improve threat detection and incident response capabilities. Regardless of SOC maturity, most organizations struggle to accurately define the difference between authorized and suspicious patterns of activity, including users, admins, access patterns and scripts. Countless SOC leaders are stumped when they’re asked to define authorized patterns of activity for mission-critical systems.

SIEM rules can be used to automate detection and response capabilities for common threats such as distributed denial-of-service (DDoS), authentication failures and malware. However, these rules must be built on clear business logic for accurate detection and response capabilities. Baseline business logic is necessary to accurately define risky behavior in SIEM use cases.

Building a Baseline for Cyber Hygiene

Cyber hygiene is defined as the consistent execution of activities necessary to protect the integrity and security of enterprise networks, including users, data assets and endpoints. A hygiene framework should offer clear parameters for threat response and acceptable use based on policies for user governance, network access and admin activities. Without an understanding of what defines typical, secure operations, it’s impossible to create an effective strategy for security maintenance.

A comprehensive framework for cybersecurity hygiene can simplify security operations and create guidelines for SIEM use cases. Capturing an effective baseline for systems strengthens security frameworks and creates order out of chaos. To empower better hygiene and threat detection capabilities based on business logic, established standards such as a naming convention can create clear parameters.

VLAN Network Categories

For the purpose of simplified illustration, imagine that your virtual local area networks (VLANs) are categorized among five criticality groups — named A, B, C, D and E — with the mission-critical VLAN falling into the A category (_A).

A policy may be created to dictate that A-category VLAN systems can communicate directly with any other category without compromising data security. However, communication with the A-category VLAN from B, C, D or E networks is not allowed. Authentication to a jump host can accommodate authorized exceptions to this standard, such as when E-category users need access to an A-category server.

Creating a naming convention and policy for VLAN network categories can help you develop simple SIEM use cases to prevent unauthorized access to A resources and automatically detect suspicious access attempts.
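The VLAN policy above reduces to a one-line rule once the category is encoded in the VLAN name. This sketch assumes a hypothetical `_A` through `_E` suffix convention, as in the illustration:

```python
def category(vlan: str) -> str:
    """Extract the criticality category from a VLAN name like 'finance_A'."""
    return vlan.rsplit("_", 1)[-1]

def allowed(src_vlan: str, dst_vlan: str) -> bool:
    # A-category systems may initiate communication with any category;
    # no other category may reach an A-category VLAN directly.
    return category(src_vlan) == "A" or category(dst_vlan) != "A"

print(allowed("finance_A", "guest_E"))  # True
print(allowed("guest_E", "finance_A"))  # False: raise a SIEM alert, or
                                        # require the jump-host exception
```

A SIEM rule built on this logic fires on any flow where `allowed` is false and the source is not the sanctioned jump host.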

Directory Services and Shared Resources

You can also use naming convention frameworks to create a policy for managing groups of user accounts according to access level in directory services, such as Lightweight Directory Access Protocol (LDAP) or Active Directory (AD). A standardized naming convention for directory services provides a clear framework for acceptable user access to shared folders and resources. AD users categorized within the D category, for example, would not have access to A-category folders or other resources tagged _A.

Creating effective SIEM rules based on these use cases is a bit more complex than VLAN business logic since it involves two distinct technologies and potentially complex policies for resource access. However, creating standards that connect user access to resources establishes clear parameters for strict, contextual monitoring. Directory users with A-category access may require stricter change monitoring due to the potential for abuse of admin capabilities. You can create SIEM use cases to detect other configuration mistakes, such as a C-category user who is suddenly escalated to A-category.

Username Creation

Many businesses are already applying some logic to standardize username creation for employees. A policy may dictate that users create a seven-character alias that involves three last-name characters, two first-name characters and two digits. Someone named Janet Doe could have the username DoeJa01, for example. Even relatively simple username conventions can support SIEM use cases for detecting suspicious behavior. When eight or more characters are entered into a username field, an event could be triggered to lock the account until a new password is created.
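The seven-character convention and the over-length lockout rule described above can be expressed as a simple validator. The pattern and the return labels here are illustrative assumptions:

```python
import re

# Three last-name letters + two first-name letters + two digits, e.g. DoeJa01.
USERNAME_RE = re.compile(r"^[A-Za-z]{5}\d{2}$")

def check_login_field(value: str) -> str:
    """Classify a username-field entry per the convention-based SIEM rule."""
    if len(value) >= 8:
        return "lock_account"   # over-length entry triggers the lockout rule
    if not USERNAME_RE.fullmatch(value):
        return "suspicious"     # wrong shape: flag for review
    return "ok"

print(check_login_field("DoeJa01"))   # ok
print(check_login_field("DoeJa01X"))  # lock_account
```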

The potential SIEM use cases increase with more complex approaches to username creation, such as 12-character usernames that combine last- and first-name characters with the employee’s unique HR-issued identification. A user named Jonathan Doerty, for instance, could receive an automatically generated username of doertjo_4682. Complex usernames can create friction for legitimate end users, but some minor friction can be justified if it provides greater safeguards for privileged users and critical systems.

An external threat actor may be able to extrapolate simple usernames from social engineering activities, but they’re unlikely to guess an employee’s internal identification number. SIEM rules can quickly detect suspicious access attempts based on username field entries that lack the required username components. Requiring unique identification numbers from HR systems can also significantly lower the risk of admins creating fake user credentials to conceal malicious activity.

Unauthorized Code and Script Locations

Advanced persistent threats can evade detection by creating backdoor access to deploy carefully disguised malicious code. Standard naming conventions provide a cost-effective way to create logic that detects malware risks. A simple model for script names could leverage several data components, such as department name, script name and script author, resulting in authorized names like HR_WellnessLogins_DoexxJo. Creating SIEM parameters for acceptable script names can automate the detection of malware.

Creating baseline standards for script locations such as /var/opt/scripts and C:\Program Files can improve investigation capabilities when code is detected that doesn’t comply with the naming convention or storage parameters. Even the most sophisticated threat actors are unlikely to perform reconnaissance on enterprise naming convention baselines before creating a backdoor and hiding a script. SIEM rules can trigger a response from the moment a suspiciously named script begins to run or a code file is moved into an unauthorized storage location.
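Combining the naming convention and location baseline gives a compact detection check. The directory list and the Department_Script_Author pattern below are assumed examples matching the illustration above:

```python
import re

ALLOWED_DIRS = ("/var/opt/scripts", r"C:\Program Files")
NAME_RE = re.compile(r"^[A-Za-z]+_[A-Za-z]+_[A-Za-z]+$")  # Dept_Script_Author

def script_suspicious(path: str, name: str) -> bool:
    """Flag scripts that violate either the location or naming baseline."""
    in_allowed_dir = path.startswith(ALLOWED_DIRS)
    return not (in_allowed_dir and NAME_RE.fullmatch(name))

print(script_suspicious("/var/opt/scripts", "HR_WellnessLogins_DoexxJo"))  # False
print(script_suspicious("/tmp", "update"))                                 # True
```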

Scaling Security Response With Standards

Meaningful threats to enterprise data security often fly under the radar of even the most sophisticated threat detection solutions when there’s no baseline to define acceptable activity. SOC analysts have more technological capabilities than ever, but many are struggling to optimize detection and response with effective SIEM use cases.

Clear, scalable systems to define policies for acceptable activity create order in chaos. The smartest approach to creating effective SIEM use cases relies on standards, a strong naming convention and sound policy. It’s impossible to accurately understand risks without a clear framework for authorized activities. Standards, baselines and naming conventions can remove barriers to effective threat detection and response.

The post Bring Order to Chaos By Building SIEM Use Cases, Standards, Baselining and Naming Conventions appeared first on Security Intelligence.

Author: Ludek Subrt

Behavioral Analytics, Machine Learning, Network Security, Security Information and Event Management (SIEM), Security Intelligence, Security Intelligence & Analytics, Security Solutions, Security Tools,

SIEM Event Normalization Makes Raw Data Relevant to Both Humans and Machines

A security information and event management (SIEM) system is an indispensable tool for any security operations center (SOC). It collects events from devices in your network infrastructure such as servers, cloud devices, firewalls and Wi-Fi access points to give operations professionals fine-grained visibility into activity on the network and help them spot anomalies that may signal a cyberattack.

In its raw form, this log data is almost impossible for a human to process, so advanced SIEM solutions conduct a process called event normalization to deliver a homogeneous view. Event normalization consists of breaking each field of a raw event into variables and combining them into views that are relevant to security administrators. This is a crucial step in the process of finding meaning in often isolated and heterogeneous events.

Visualize Your Network Activity

There are thousands of vendors and models of devices and software that an organization may want to monitor. It’s impossible for a SIEM to natively parse raw events from all of them, let alone keep up with versions and new releases. Using parsing rules and tools such as a DSM editor, security administrators can translate raw data into a single, normalized stream, making it possible for the SIEM to present data from nearly any device or log source in a meaningful form. Event normalization enables administrators to detect anomalies even when data is streaming in from multiple locations.

For example, a brute-force attack consists of a series of authentication attempts against a system, either from a single IP or multiple addresses. Sorting through authentication logs one by one is a tedious task, but a SIEM solution can solve the problem using correlation rules. This enables administrators to see anomalies such as login attempts from suspicious locations, network scans and simultaneous authentication attempts by the same user from different locations. A SIEM can also monitor network traffic for unusual activity, such as large file downloads.
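A correlation rule of this kind amounts to a threshold over a sliding time window. The five-failures-in-60-seconds threshold below is an arbitrary illustration, not a product default:

```python
from collections import defaultdict, deque

THRESHOLD = 5        # failed attempts before alerting (illustrative)
WINDOW_SECONDS = 60  # sliding window length (illustrative)

class BruteForceDetector:
    """Flag a source IP that exceeds THRESHOLD failed logins in WINDOW_SECONDS."""

    def __init__(self):
        self.failures = defaultdict(deque)  # src_ip -> timestamps of failures

    def record_failure(self, src_ip: str, timestamp: float) -> bool:
        window = self.failures[src_ip]
        window.append(timestamp)
        # Expire failures that fell out of the sliding window
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) >= THRESHOLD

detector = BruteForceDetector()
for t in range(5):
    alert = detector.record_failure("198.51.100.7", t)
print(alert)  # True: five failures within one minute
```

Because the detector keys on normalized fields (source IP, timestamp), it works the same whether the underlying logs came from a Linux server, a firewall or a cloud service.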

Behold the Power of Event Normalization

To give you a sense of the power of normalization, here’s an example of a raw log from a firewall:

<;;5>logver=54 dtime=1536072238 devid=FG74E83E17000037 devname=firewall-fort vd=External date=2018-09-04 time=14:43:58 slot=4 logid=0000000013 type=traffic subtype=forward level=notice srcip= srcport=44000 srcintf="DMZ" dstip= dstport=443 dstintf="External" poluuid=55555555-5b5b-5a5a-5c5c-5a5b5c5d5f55 sessionid=555555555 proto=6 action=close policyid=55 policytype=policy dstcountry="United States" srccountry="United States" trandisp=snat transip=Pub-IP-Address transport=44000 service="tcp_1-65535" duration=11 sentbyte=1699 rcvdbyte=6002 sentpkt=16 rcvdpkt=13 appcat="unscanned"

Buried in this nearly unreadable stream is important information, including:

  • Hostname;
  • Date and time;
  • Source IP of the traffic;
  • Destination IP;
  • Source port;
  • Destination port;
  • Action taken by the firewall;
  • Source country;
  • Destination country;
  • Application discovered; and
  • Translated IP addresses.

Using parsing rules, we can extract these important details automatically into a report or chart that helps us visualize activity from many sources. The process of creating events consists of finding patterns in raw data, mapping it to known expressions, and assigning unique categories and identifiers. If the SIEM encounters an unknown log source or data type, we can use the editor to define an event and assign variables such as name, severity and facility.
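The key=value structure of the log above lends itself to straightforward tokenization. A minimal sketch of the extraction step follows; a real SIEM parser or DSM editor does considerably more, such as category mapping and severity assignment:

```python
import re

def normalize_event(raw: str) -> dict:
    """Extract key=value pairs from a raw firewall log into a flat dict."""
    # Values may be bare tokens or double-quoted strings (quoted tried first)
    pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', raw)
    return {key: value.strip('"') for key, value in pairs}

raw_log = ('devname=firewall-fort date=2018-09-04 time=14:43:58 '
           'srcport=44000 srcintf="DMZ" dstport=443 action=close '
           'dstcountry="United States" service="tcp_1-65535"')

event = normalize_event(raw_log)
print(event["srcintf"], event["action"])  # DMZ close
```

Once every log source is reduced to the same dictionary of named fields, reports, charts and correlation rules can be written once and applied across all of them.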

Get the Most Out of Your SIEM Deployment

Good normalization practices are essential to maximizing the value of your SIEM. Tools such as DSM editors make it fast and easy for security administrators to define, test, organize and reuse events, thereby ensuring maximum visibility into everything that takes place on the enterprise’s computing fabric. Normalization turns streams of machine data into something humans can use.

The post SIEM Event Normalization Makes Raw Data Relevant to Both Humans and Machines appeared first on Security Intelligence.

Author: Moises Monge