Cyberthreats, RSA Conference, Security Conferences, Security Information and Event Management (SIEM), Security Solutions, Threat Detection, threat hunting, Threat Intelligence, Threat Prevention, Threat Protection

Hunting for the True Meaning of Threat Hunting at RSAC 2019

After my first-ever RSA Conference experience, I returned to Boston with a lot of takeaways — not to mention a week’s worth of new socks, thanks to generous vendors that had a more functional swag approach than most. I spent the majority of my time at RSAC 2019 at the Master Threat Hunting kiosk within the broader IBM Security booth, where I told anyone who wanted to listen about how we use methodologies and tools from the military and intelligence communities to fight cyberthreats in the private sector. When I wasn’t at the booth, I was scouring the show floor on a hunt of my own — a hunt for the true meaning of threat hunting.

Don’t Believe the Hype: 3 Common Misconceptions About Threat Hunting

At first glance, the results of my hunt seemed promising; I saw the term “threat hunting” plastered all over many of the vendors’ booths. Wanting to learn more, I spoke with the booth personnel about their threat hunting solutions, gathered a stack of marketing one-pagers and continued on my separate hunt for free socks and stress balls.

After digesting the information from booth staff and digging into the marketing materials from the myriad vendors, I was saddened to learn that threat hunting is becoming a full-blown buzzword.

Let’s be honest: “Threat hunting” certainly has a cool ring to it that draws people in and makes them want to learn more. However, it’s important not to lose sight of the fact that threat hunting is an actual approach to cyber investigations that has been around since long before marketers started using it as a hook.

Below are three of the most notable misconceptions about threat hunting I witnessed as I prowled around the show floor at RSAC 2019.

1. Threat Hunting Should Be Fully Automated

In general, automation is great; I love automating parts of my life to save time and to make things easier. However, there are some things that can’t be fully automated — or shouldn’t be, at least not yet. Threat hunting is one of those things.

While automation can be used within various threat hunting tools, it is still a very manual, human-led process to proactively (and reactively) hunt for unknown threats in your network that may have avoided your rules-based detection solutions. Threat hunting methodologies were derived from the counterterrorism community and repurposed for cybersecurity. There’s a reason why we don’t fully automate counterterrorism analysis, and the same applies to cyber.

2. Threat Hunting and EDR Are One and the Same

This was the most common misconception I encountered while searching for threat hunting solutions at RSAC. It went something like this: I would go into a booth, ask to learn more about the vendor’s threat hunting solution and find that what was actually being marketed was an endpoint detection and response (EDR) solution.

EDR is a crucial piece of threat hunting, but these products are not the only tools threat hunters use. If threat hunting were as easy as using an EDR solution to find threats, we would have a much higher success rate. The truth is that EDR solutions need to be coupled with other tools, such as threat intelligence, open-source intelligence (OSINT) and network data, and brought together in a common platform to visualize anomalies and trends in the data.
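
As a rough illustration of that fusion, here is a minimal Python sketch (with invented hosts, field names and indicator values) that joins EDR-style telemetry with an external threat intelligence feed and counts matches per host. It is a toy example of the correlation a common platform would perform for a hunter, not any particular vendor’s product.

```python
# Toy correlation example: EDR telemetry enriched with an external intel feed.
from collections import Counter

# Indicators pulled from an external threat intel / OSINT feed (illustrative values)
intel_iocs = {"185.220.101.4", "evil-updates.example.com"}

# Endpoint telemetry as an EDR tool might export it (illustrative records)
edr_events = [
    {"host": "ws-012", "process": "powershell.exe", "dest": "185.220.101.4"},
    {"host": "ws-044", "process": "chrome.exe", "dest": "cdn.example.net"},
    {"host": "ws-012", "process": "svchost.exe", "dest": "evil-updates.example.com"},
]

# 1. Enrich: flag endpoint events whose destination matches known indicators
hits = [event for event in edr_events if event["dest"] in intel_iocs]

# 2. Trend: count matches per host so hunters can see where activity clusters
per_host = Counter(event["host"] for event in hits)

for event in hits:
    print(f"IOC match on {event['host']}: {event['process']} -> {event['dest']}")
print("Hosts to prioritize:", per_host.most_common())
```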

3. Threat Hunting Is Overly Complicated

All of the marketing and buzz around threat hunting has overcomplicated what it actually is. It’s not one tool, it’s not automated and it’s not an overly complicated process. It takes multiple tools and a ton of data, it is very much dependent on well-trained analysts who know what they’re looking for, and it is an investigative process just like counterterrorism and law enforcement investigations. Since cyber threat hunting mirrors these investigative techniques, threat hunters should look toward trusted tools from the national security and law enforcement sectors.

What Is the True Meaning of Cyber Threat Hunting?

Don’t get me wrong — I am thrilled that threat hunting is gaining steam and vendors are coming up with innovative solutions to contribute to the definition of threat hunting. As a former analyst, I define threat hunting as an in-depth, human-led, investigative process to discover threats to an organization. My definition may vary from most when it comes to how this is conducted, since most definitions emphasize that threat hunting is a totally proactive approach. While I absolutely agree with the importance of proactivity, there aren’t many organizations that can take a solely proactive approach to threat hunting due to constraints related to budget, training and time.

While not ideal, there is a way to hunt reactively, which is often more realistic for small and midsize organizations. For example, you could conduct a more in-depth cyber investigation to get the context around a cyber incident or alert. Some would argue that’s just incident response, not threat hunting — but it turns into threat hunting when an analyst takes an all-source intelligence approach to enrich their investigation with external sources, such as threat intelligence and social media, and other internal sources of data. This approach can show the who, what, where, when and how around the incident and inform leadership on how to take the best action. The context can be used to retrain the rules-based systems and build investigative baselines for future analysis.

The Definition of Threat Hunting Is Evolving

Cyber threat hunting tools come in all shapes and sizes, but the most advanced tools allow you to reactively and proactively investigate threats by bringing all your internal and external data into one platform. By fusing internal security information and event management (SIEM) data, internal records, access logs and more with external data feeds, cyber threat hunters can identify trends and anomalies in the data and turn it into actionable intelligence to address threats in the network and proactively thwart ones that haven’t hit yet.

With the buzz and momentum from RSAC 2019 behind it, threat hunting will continue to gain traction, more advanced solutions will be developed, and organizations will be able to hunt down threats more efficiently and effectively. I’m excited to see how the definition evolves in the near future — as long as the cyber threat hunting roots stay strong.

Read the “SANS 2018 Threat Hunting Results” report

The post Hunting for the True Meaning of Threat Hunting at RSAC 2019 appeared first on Security Intelligence.

Author: Jake Munroe

cryptocurrency, cryptocurrency miner, IBM Security, IBM X-Force Incident Response and Intelligence Services (IRIS), IBM X-Force Research, Incident Response (IR), Ransomware, Skills Gap, threat hunting, Threat Intelligence, X-Force

Cryptojacking Rises 450 Percent as Cybercriminals Pivot From Ransomware to Stealthier Attacks

Cybercriminals made a lot of noise in 2017 with ransomware attacks like WannaCry and NotPetya, using an in-your-face approach to cyberattacks that netted them millions of dollars from victims. But new research from IBM X-Force, the threat intelligence, research and incident response arm of IBM Security, revealed that 2018 saw a rapid decline in ransomware attacks as cybercrime gangs shifted tactics to remain under the radar.

Ransomware attacks declined by 45 percent between Q1 2018 and Q4 2018, according to the research. That doesn’t mean cybercrime is on the decline, however. Instead, cybercriminals employed cryptojacking, the stealthy theft of computing power to generate cryptocurrency, at a much higher rate. Cryptojacking surged by 450 percent over the course of 2018, according to the newly released “IBM X-Force Threat Intelligence Index 2019.”

Wendi Whitmore, global lead of the IBM X-Force Incident Response and Intelligence Services (IRIS) team, said in an interview that ransomware was highly successful for several years, but the payoff was starting to decline.

“It appears, for a variety of reasons, cybercriminals are getting less money from ransomware attacks and potentially getting a better return on their investment and their time from cryptojacking,” Whitmore said.

Chart: IBM X-Force observed a 45 percent decline in ransomware attacks and a 450 percent increase in cryptojacking over the course of 2018.

Cryptojacking and Other Stealth Attacks

The term cryptojacking refers to the illicit use of computing resources to generate cryptocurrency such as bitcoin, which peaked in value at nearly $20,000 in late 2017, and Monero, which has generated millions of dollars for cybercriminals in recent years.

Cryptojacking involves infecting a victim’s computer with malware or using browser-based injection attacks. The malware uses the processing power of the hijacked computer to mine (generate) cryptocurrency. The spike in central processing unit (CPU) usage may cause systems to slow, and enterprises may be affected by the presence of the malware on their network servers and employee devices.
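
One coarse way to surface that symptom is to watch for processes that hold unusually high CPU. The sketch below uses the third-party psutil library and an arbitrary 80 percent threshold, both assumptions for illustration; sustained high CPU is only a hint, and a hunter would combine it with other signals (network destinations, process lineage) before calling anything cryptojacking.

```python
# Rough sketch: flag processes with high CPU as a first, noisy signal of a miner.
import time

import psutil  # third-party: pip install psutil

CPU_THRESHOLD = 80.0  # percent; arbitrary value for illustration, tune per environment

# Prime the per-process CPU counters, wait, then read measured utilization
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(interval=None)
    except psutil.Error:
        pass

time.sleep(5)

for proc in psutil.process_iter(["pid", "name"]):
    try:
        usage = proc.cpu_percent(interval=None)
    except psutil.Error:
        continue
    if usage >= CPU_THRESHOLD:
        # High CPU alone is not proof of cryptojacking; it marks a process for review
        print(f"High CPU: pid={proc.info['pid']} name={proc.info['name']} cpu={usage:.0f}%")
```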

While less destructive than ransomware, the presence of cryptomining malware in enterprise environments is concerning because it indicates a vulnerability that may be exploited in other attacks.

“The victim doesn’t usually know their computer has been taken over for that purpose,” Whitmore said.

Yet an even stealthier form of attack doesn’t use malware at all. More than half of cyberattacks (57 percent) seen by X-Force IRIS in 2018 did not leverage malware, and many involved the use of nonmalicious tools, including PowerShell, PsExec and other legitimate administrative solutions, allowing attackers to “live off the land” and potentially remain in IT environments longer. These attacks could allow cybercriminals to harvest credentials, run queries, search databases, access user directories and connect to systems of interest.

Attacks that don’t use malware are much more challenging for defense teams to detect, Whitmore said, because they are leveraging tools built into the environment and can’t be identified through signatures or typical malware detection techniques. Instead, defense teams need to detect malicious commands, communications and other actions that might look like legitimate business processes.
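
As a hedged example of what detecting malicious commands can look like, the following sketch scans a few invented process-creation log lines for command-line patterns often associated with living off the land, such as encoded PowerShell and remote PsExec use. The field layout and patterns are illustrative only, not a complete rule set.

```python
# Illustrative scan of process-creation log lines for "living off the land" patterns.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s", re.IGNORECASE),  # encoded commands
    re.compile(r"powershell(\.exe)?\s+.*downloadstring", re.IGNORECASE),        # in-memory download
    re.compile(r"psexec(\.exe)?\s+.*\\\\", re.IGNORECASE),                      # remote execution
]

# Invented log records in a simple key=value layout (an assumption for the example)
process_log = [
    'host=ws-07 user=jdoe cmd="powershell.exe -NoP -W Hidden -Enc SQBFAFgA..."',
    'host=ws-07 user=jdoe cmd="notepad.exe report.txt"',
    'host=srv-02 user=admin cmd="psexec.exe \\\\srv-09 cmd.exe"',
]

for line in process_log:
    if any(pattern.search(line) for pattern in SUSPICIOUS_PATTERNS):
        print("Review:", line)
```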

“Attackers are identifying that it’s a lot easier to stay in an organization longer-term if they don’t install anything funny that might get detected by a wide variety of technologies, or by really smart defenders who are constantly looking in the environment to identify something that’s new or different,” Whitmore said.

Attackers are infiltrating IT environments with stealthy techniques that target misconfigurations and other system vulnerabilities, Whitmore said, and using tried-and-true methods that are still very difficult to prevent at a wide scale, such as phishing. Publicly disclosed security incidents involving misconfiguration increased by 20 percent between 2017 and 2018, according to X-Force research. Meanwhile, IBM X-Force Red, an autonomous team of veteran hackers within IBM Security who conduct various types of hardware and software vulnerability testing, finds an average of 1,440 unique vulnerabilities per organization.

Still, humans represent one of the largest security weaknesses, with 29 percent of attacks analyzed by IBM X-Force involving compromises via phishing emails. Nearly half (45 percent) of those phishing attempts were business email compromise (BEC) scams, also known as CEO fraud or whaling attacks.

These highly targeted attacks are aimed at individuals responsible for making payments from business accounts, claiming to come from someone inside the organization such as the CEO or chief financial officer (CFO). The FBI reported that between October 2013 and May 2018, BEC fraud had cost organizations $12.5 billion.


Transportation in the Crosshairs

Among the more surprising findings in this year’s X-Force Threat Intelligence Index report is the level of attacks on the transportation industry, which was the second-most attacked industry in 2018, behind only financial services. In 2017, transportation was the 10th most targeted industry, but in 2018 it was targeted in 13 percent of attacks, behind financial services, which was targeted in 19 percent of attacks.

“That was a pretty surprising finding for us,” Whitmore said. “To see the transportation industry emerge as the second-most impacted industry really means that we’re seeing a lot more activity overall in that industry.”

A few factors changed the game this year, Whitmore noted, including the industry’s growing reliance on data, website applications and mobile apps, and the increasing amount of information consumers are sharing. Transportation companies hold valuable customer data such as payment card information, personally identifiable information (PII) and loyalty rewards accounts. Cybercriminals are interested in targeting that information to monetize it.

Additionally, Whitmore said, there’s “a widespread attack surface in the transportation industry, leveraging things like third-party providers with legacy systems and a lot of communications systems that are out of their direct management.”

Proactive Defenses and Agile Response

There are signs that organizations are increasing their security hygiene by applying best practices such as access controls, patching vulnerabilities in software and hardware, and training employees to spot phishing attempts, Whitmore said.

Yet cybersecurity is a daily fight, and the security skills gap means security teams have to be agile and collaborative while augmenting their capabilities with supporting security technologies and services.

The IBM X-Force Threat Intelligence report offers recommendations for organizations to increase preparedness through preventive measures such as threat hunting — proactively searching networks and endpoints for advanced threats that evade prevention and detection tools.

Additionally, risk management models need to consider likely threat actors, infection methods and potential impact to critical business processes. Organizations need to be aware of risks arising from third parties, such as cloud service providers, suppliers and acquisitions.

Finally, the IBM X-Force Threat Intelligence Index emphasizes remediation and incident response. Even organizations with a mature security posture may not know how to respond to a security incident. Effective incident response is not only a technical matter; leadership and crisis communications are key to rapid response and quickly resuming business operations.

Read the complete X-Force Threat Intelligence Index Report

The post Cryptojacking Rises 450 Percent as Cybercriminals Pivot From Ransomware to Stealthier Attacks appeared first on Security Intelligence.

Author: John Zorabedian

Cyberthreats, Incident Response (IR), orchestration, Security Operations Center (SOC), Threat Detection, threat hunting, Threat Intelligence, Threat Management, Threat Monitoring, Threat Prevention, Threat Response, Threat Sharing

It’s Time to Modernize Traditional Threat Intelligence Models for Cyber Warfare

When a client asked me to help build a cyberthreat intelligence program recently, I jumped at the opportunity to try something new and challenging. To begin, I set about looking for some rudimentary templates with a good outline for building a threat intelligence process, a few solid platforms that are user-friendly, the basic models for cyber intelligence collection and a good website for describing various threats an enterprise might face. This is what I found:

  1. There are a handful of rudimentary templates for building a good cyberthreat intelligence program available for free online. All of these templates leave out key pieces of information that any novice to the cyberthreat intelligence field would be required to know. Most likely, this is done to entice organizations into spending copious amounts of money on a specialist.
  2. The number of companies that specialize in the collection of cyberthreat intelligence is growing at a ludicrous rate, and they all offer something that is different, unique to certain industries, proprietary, automated via artificial intelligence (AI) and machine learning, based on pattern recognition, or equipped with behavioral analytics.
  3. The basis for all threat intelligence is heavily rooted in one of three basic models: Lockheed Martin’s Cyber Kill Chain, MITRE’s ATT&CK knowledge base and The Diamond Model of Intrusion Analysis.
  4. A small number of vendors working on cyberthreat intelligence programs or processes published a complete list of cyberthreats, primary indicators, primary actors, primary targets, typical attack vectors and potential mitigation techniques. Of that small number, very few were honest when there was no useful mitigation or defensive strategy against a particular tactic.
  5. All of the cyberthreat intelligence models in use today have gaps that organizations will need to overcome.
  6. A search within an article content engine for helpful articles with the keyword “threat intelligence” produced more than 3,000 results, and a Google search produced almost a quarter of a million. This is completely ridiculous. Considering how many organizations struggle to find experienced cyberthreat intelligence specialists to join their teams — and that cyberthreats grow by the day while mitigation strategies do not — it is not possible that there are tens of thousands of professionals or experts in this field.

It’s no wonder why organizations of all sizes in a variety of industries are struggling to build a useful cyberthreat intelligence process. For companies that are just beginning their cyberthreat intelligence journey, it can be especially difficult to sort through all these moving parts. So where do they begin, and what can the cybersecurity industry do to adapt traditional threat intelligence models to the cyber battlefield?

How to Think About Thinking

A robust threat intelligence process serves as the basis for any cyberthreat intelligence program. Here is some practical advice to help organizations plan, build and execute their program:

  1. Stop and think about the type(s) of cyberthreat intelligence data the organization needs to collect. For example, if a company manufactures athletic apparel for men and women, it is unnecessary to collect signals, geospatial data or human intelligence.
  2. How much budget is available to collect the necessary cyberthreat intelligence? For example, does the organization have the budget to hire threat hunters and build a cyberthreat intelligence program uniquely its own? What about purchasing threat intelligence as a service? Perhaps the organization should hire threat hunters and purchase a threat intelligence platform for them to use? Each of these options has a very different cost model for short- and long-term costs.
  3. Determine where cyberthreat intelligence data should be stored once it is obtained. Does the organization plan to build a database or data lake? Does it intend to store collected threat intelligence data in the cloud? If that is indeed the intention, pause here and reread step one. Cloud providers have very different ideas about who owns data, and who is ultimately responsible for securing that data. In addition, cloud providers have a wide range of security controls — from the very robust to a complete lack thereof.
  4. How does the organization plan to use collected cyberthreat intelligence data? It can be used for strategic purposes, tactical purposes or both within an organization.
  5. Does the organization intend to share any threat intelligence data with others? If yes, then you can take the old cybersecurity industry adage “trust but verify” and throw it out. The new industry adage should be “verify and then trust.” Never assume that an ally will always be an ally.
  6. Does the organization have enough staff to spread the workload evenly, and does the organization plan to include other teams in the threat intelligence process? Organizations may find it very helpful to include other teams, either as strategic partners, such as vulnerability management, application security, infrastructure and networking, and risk management teams, or as tactical partners, such as red, blue and purple teams.
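
One practical way to keep the answers to these questions from evaporating is to capture them in a structured, version-controlled artifact the team can review regularly. The sketch below shows one hypothetical shape for such a collection plan; every value is a placeholder, not a recommendation.

```python
# Hypothetical collection plan capturing the planning answers in one reviewable artifact.
collection_plan = {
    "intelligence_requirements": [
        "Which phishing campaigns are targeting our brand?",
        "Which exposed services are being scanned from known-bad infrastructure?",
    ],
    "collection_sources": {
        "internal": ["SIEM alerts", "EDR telemetry", "proxy logs"],
        "external": ["commercial threat feed", "OSINT", "industry sharing group"],
    },
    "budget_model": "platform subscription plus two analysts",   # placeholder
    "storage": {"location": "on-prem data lake", "retention_days": 365},
    "use": ["tactical detection tuning", "strategic risk reporting"],
    "sharing": {"partners": ["industry ISAC"], "policy": "verify, then trust"},
    "stakeholder_teams": ["vulnerability management", "red team", "risk management"],
}

# Print the priority intelligence requirements the plan is built around
for requirement in collection_plan["intelligence_requirements"]:
    print("Requirement:", requirement)
```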

How Can We Adapt Threat Intelligence Models to the Cyber Battlefield?

As mentioned above, the threat intelligence models in use today were not designed for cyber warfare. They are typically linear models, loosely based on Carl von Clausewitz’s military strategy and tailored for warfare on a physical battlefield. It’s time for the cyberthreat intelligence community to define a new model, perhaps one that is three-dimensional, nonlinear, rooted in elementary number theory and that applies vector calculus.

Much like game theory, The Diamond Model of Intrusion Analysis is sufficient if there are two players (the victim and the adversary), but it tends to fall apart if the adversary is motivated by anything other than sociopolitical or socioeconomic payoff, if there are three or more players (e.g., where collusion, cooperation and defection of classic game theory come into play), or if the adversary is artificially intelligent. In addition, The Diamond Model of Intrusion Analysis attempts to show a stochastic model diagram but none of the complex equations behind the model — probably because that was someone’s 300-page Ph.D. thesis in applied mathematics. This is not much help to the average reader or a newcomer to the threat intelligence field.

Nearly all models published thus far are focused on either external actors or insider threats, as though a threat actor must be one or the other. None of the widely accepted models account for, or include, physical security.

While there are many good articles about reducing alert fatigue in the security operations center (SOC), orchestrating security defenses, optimizing the SOC with behavioral analysis and so on, these articles assume that the reader knows what any of these things mean and what to do about any of it. A veteran in the cyberthreat intelligence field would have doubts that behavioral analysis and pattern recognition are magic bullets for automated threat hunting, for example, since there will always be threat actors that don’t fit the pattern and whose behavior is unpredictable. Those are two of the many reasons why the fields of forensic psychology and criminal profiling were created.

Furthermore, when it comes to the collection of threat intelligence, very few articles provide insight on what exactly constitutes “useful data,” how long to store it and which types of data analysis would provide the best insight.

It would be a good idea to get the major players in the cyberthreat intelligence sector together to develop at least one new model — but preferably more than one. It’s time for industry leaders to develop new ways of classifying threats and threat actors, share what has and has not worked for them, and build more boundary connections than the typical socioeconomic or sociopolitical ones. The sector could also benefit from looking ahead at what might happen if threat actors choose to augment their crimes with algorithms and AI.

The post It’s Time to Modernize Traditional Threat Intelligence Models for Cyber Warfare appeared first on Security Intelligence.

Author: Kelly Ryver

Advanced Persistent Threat (APT), Advanced Threat Protection, Advanced Threats, Data Protection, Data Security, Security Information and Event Management (SIEM), Security Intelligence & Analytics, threat hunting, Threat Management, Threat Protection

Embrace the Intelligence Cycle to Secure Your Business

Regardless of where we work or what industry we’re in, we all have the same goal: to protect our most valuable assets. The only difference is in what we are trying to protect. Whether it’s data, money or even people, the harsh reality is that it’s difficult to keep them safe because, to put it simply, bad people do bad things.

Sometimes these malicious actors are clever, setting up slow-burning attacks to steal enterprise data over several months or even years. Sometimes they’re opportunistic, showing up in the right place at the wrong time (for us). If a door is open, these attackers will just waltz on in. If a purse is left unattended on a table, they’ll quickly swipe it. Why? Because they can.

The Intelligence Cycle

So how do we fight back? There is no easy answer, but the best course of action in any situation is to follow the intelligence cycle. Honed by intelligence experts across industries over many years, this method can be invaluable to those investigating anything from malware to murders. The process is always the same.

Stage 1: Planning and Direction

The first step is to define the specific job you are working on, find out exactly what the problem is and clarify what you are trying to do. Then, work out what information you already have to deduce what you don’t have.

Let’s say, for example, you’ve discovered a spate of phishing attacks — that’s your problem. This will help scope subsequent questions, such as:

  • What are the attackers trying to get?
  • Who is behind the attacks?
  • Where are attacks occurring?
  • How many attempts were successful?

Once you have an idea of what you don’t know, you can start asking the questions that will help reveal that information. Use the planning and direction phase to define your requirements. This codifies what you are trying to do and helps clarify how you plan on doing it.

Stage 2: Collection

During this stage, collect the information that will help answer your questions. If you cannot find the answers, gather data that will help lead to those answers.

Where this comes from will depend on you and your organization. If you are protecting data from advanced threats, for instance, you might gather information internally from your security information and event management (SIEM) tool. If you’re investigating more traditional organized crime, by contrast, you might knock on doors and whisper to informants in dark alleys to collect your information.

You can try to control the activity of collection by creating plans to track the process of information gathering. These collection plans act as guides to help information gatherers focus on answering the appropriate questions in a timely manner. Thorough planning is crucial in both keeping track of what has been gathered and highlighting what has not.

Stage 3: Processing and Exploitation

Collected information comes in many forms: handwritten witness statements, system logs, video footage, data from social networks, the dark web, and so on. Your task is to make all the collected information usable. To do this, put it into a consistent format. Extract pertinent information (e.g., IP addresses, telephone numbers, asset references, registration plate details), place some structure around those items of interest and make it consistent. It often helps to load it into a schematized database.
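
To make the processing step concrete, here is a minimal sketch that pulls one kind of pertinent item (IPv4 addresses) out of collected text in several formats and normalizes each hit into a consistent record ready to load into a database. The sources and contents are invented for the example.

```python
# Minimal processing-and-exploitation sketch: extract IPv4 addresses from mixed
# sources and normalize them into one consistent record shape.
import re
from datetime import datetime, timezone

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

# Collected items in different formats (invented examples)
collected = [
    ("witness_statement.txt", "The login came from 203.0.113.9 around midnight."),
    ("fw_log.csv", "2019-03-02,DENY,198.51.100.23,443"),
    ("chat_export.json", '{"msg": "try 203.0.113.9 again"}'),
]

records = []
for source, text in collected:
    for ip in IPV4.findall(text):
        records.append({
            "indicator": ip,
            "type": "ipv4",
            "source": source,
            "processed_at": datetime.now(timezone.utc).isoformat(),
        })

# Deduplicate on the indicator value while keeping every source for context
by_indicator = {}
for record in records:
    by_indicator.setdefault(record["indicator"], []).append(record["source"])
print(by_indicator)
```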

If you do this, your collected information will be in a standard shape and ready for you to actually start examining it. The value is created by putting this structure around the information. It gives you the ability to make discoveries, extract the important bits and understand your findings in the context of all the other information. If you can, show how attacks are connected, link them to bad actors and collate them against your systems. It helps to work with the bits that are actually relevant to the specific thing you’re working on. And don’t forget to reference this new data you collected against all the old stuff you already knew; context is king in this scenario.

This stage helps you make the best decisions you can against all the available information. Standardization is great; it is hard to work with information when it’s in hundreds of different formats, but it’s really easy when it’s in one.

Of course, the real world isn’t always easy. Sometimes it is simply impossible to normalize all of your collected information into a single workable pot. Maybe you collected too much, or the data arrived in too many varied formats. In these cases, your only hope is to invest in advanced analytical tools and analysts that will allow you to fuse this cacophony of information into some sensible whole.

Stage 4: Analysis and Production

The analysis and production stage begins when you have processed your information into a workable state and are ready to conduct some practical analysis — in other words, you are ready to start producing intelligence.

Think about the original task you planned to work on. Look at all the lovely — hopefully standardized — information you’ve collected, along with all the information you already had. Query it. Ask questions of it. Hypothesize. Can you find the answer to your original question? What intelligence can you draw from all this information? What stories can it tell? If you can’t find any answers — if you can’t hypothesize any actions or see any narratives — can you see what is missing? Can you see what other information you would need to collect that would help answer those questions? This is the stage where you may be able to draw new conclusions out of your raw information. This is how you produce actionable intelligence.

Actionable intelligence is an important concept. There’s no point in doing all this work if you can’t find something to do at the end of it. The whole aim is to find an action that can be performed in a timely manner that will help you move the needle on your particular task.

Finding intelligence that can be acted upon is key. Did you identify that phishing attack’s modus operandi (MO)? Did you work out how that insider trading occurred? It’s not always easy, but it is what your stakeholders need. This stage is where you work out what you must do to protect whatever it is you are safeguarding.

Stage 5: Dissemination

The last stage of the intelligence cycle is to go back to the stakeholders and tell them what you found. Give them your recommendations, write a report, give a presentation, draw a picture — however you choose to do it, convey your findings to the decision-makers who set the task to begin with. Back up your assertions with your analysis, and let the stakeholders know what they need to do in the context of the intelligence you have created.

Timeliness is very important. Everything ages, including intelligence. There’s no point in providing assessments for things that have already happened. You will get no rewards for disseminating a report on what might happen at the London Marathon a week after the last contestant finished. Unlike fine wine, intelligence does not improve with age.

To illustrate how many professionals analyze and subsequently disseminate intelligence, below is an example of an IBM i2 dissemination chart:

Chart: an example IBM i2 dissemination chart.

The analysis has already happened and, in this case, the chart is telling your boss to go talk to that Gene Hendricks chap — he looks like one real bad egg.

Then what? If you found an answer to your original question, great. If not, then start again. Keep going around the intelligence cycle until you do. Plan, collect, process, analyze, disseminate and repeat.

Gain an Edge Over Advanced Threats

We are all trying to protect our valued assets, and using investigation methodologies such as the intelligence cycle could help stop at least some malicious actors from infiltrating your networks. The intelligence cycle can underpin the structure of your work both with repetitive processes, such as defending against malware and other advanced threats, and targeted investigations, such as searching for the burglars who stole the crown jewels. Embrace it.

Whatever it is you are doing — and whatever it is you are trying to protect — remember that adopting this technique could give your organization the edge it needs to fight back against threat actors who jealously covet the things you defend.

To learn more, read the interactive white paper, “Detect, Disrupt and Defeat Advanced Physical and Cyber Threats.”

Read the white paper

The post Embrace the Intelligence Cycle to Secure Your Business appeared first on Security Intelligence.

Author: Matthew Farenden

Artificial Intelligence (AI), Automation, Cognitive Security, Data Protection, Incident Response (IR), Security Information and Event Management (SIEM), Security Intelligence & Analytics, Security Operations Center (SOC), threat hunting, Threat Intelligence

Maturing Your Security Operations Center With the Art and Science of Threat Hunting

Your organization has fallen prey to an advanced persistent threat (APT) after being targeted by a state-sponsored crime collective. Six million records were lost over 18 months of undetected presence on your network. While your security operations center (SOC) is fully staffed with analysts, your threat hunting capabilities failed to detect the subtle signs of threat actors moving laterally through your network and slowly exfiltrating customer data.

How did this happen?

It all started with a highly targeted spear phishing attack on your director of communications. He failed to notice the carefully disguised symbols in an email he thought was sent by the IT department and logged in to a spoofed domain. This credential theft resulted in the spread of zero-day malware and the slow escalation of account privileges. While some of the criminals’ behavior triggered alerts in the SOC, your analysts categorized the incidents as benign positives. Your organization is now facing a multimillion-dollar cleanup and a serious loss of customer trust.


Why You Should Hunt Advanced Threats Before They Strike

Situations like the one described above are all too common in our industry. The majority of successful exploits attributed to human error fit a small series of predictable patterns that exploit known vulnerabilities in an organization’s network. As a result, many data breaches can be prevented with effective cyber hygiene tactics.

Advanced threats are a smaller proportion of incidents, but they are typically undetected and cause the most damage. In addition, the rise in state-sponsored crime and criminal activity on the dark web has created an ecosystem that fosters open exchange between the world’s most sophisticated and skilled criminals.

The cost of a serious breach is also trending upward. According to Ponemon, the average cost of a megabreach that results in the loss of more than 1 million customer records is $40 million. And more than 60 percent of data breaches have links to either state actors or advanced, organized crime groups, according to Verizon. APTs that evade detection can result in dwell times that range from three to 24 months, further increasing the total cleanup cost for a data breach.

How can security teams fight these kinds of threats? The majority of enterprise SOCs are now at least three years old, according to a recent study from Exabeam, and are increasing in maturity. While human analysts and manual research methodologies can act as a firewall against many risks, there’s a need to scale SOC intelligence and threat hunting capabilities to safeguard against APTs.

What Is Threat Hunting?

Threat hunting can be defined as “the act of aggressively intercepting, tracking and eliminating cyber adversaries as early as possible in the Cyber Kill Chain.” The practice uses techniques from art, science and military intelligence, with internal and external data sources informing the science of statistical and cognitive analysis. Human intelligence analyzes the results and informs the art of a response. Last year, 91 percent of security leaders reported improved response speed and accuracy as a result of threat detection and investigation, according to the SANS Institute.

Threat hunting is not defined by solutions, although tools and techniques can significantly improve efficiency and outcomes. Instead, it’s defined by a widely accepted framework from Sqrrl. These are the four stages of Sqrrl’s Threat Hunting Loop:

  1. Create a hypothesis.
  2. Investigate via tools and techniques.
  3. Discover new patterns and adversary tactics, techniques and procedures (TTPs).
  4. Inform and enrich automated analytics for the next hunt.

The goal for any security team should be to complete this loop as efficiently as possible. The quicker you can do so, the quicker you can automate new processes that will help find the next threat.
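
To make the loop tangible, here is a small sketch of a single pass: a hypothesis about service-account logins, an investigation over a few invented authentication records, the pattern that emerges, and the check promoted into an automated list for the next hunt. The data, field names and hypothesis are assumptions for illustration, not part of the Sqrrl framework itself.

```python
# One illustrative pass through the hunting loop over invented auth records.
auth_logs = [
    {"user": "svc_backup", "src": "10.0.8.14", "hour": 3, "result": "success"},
    {"user": "jdoe", "src": "10.0.2.31", "hour": 14, "result": "success"},
    {"user": "svc_backup", "src": "10.0.9.77", "hour": 2, "result": "success"},
]

# 1. Hypothesis: service accounts should not be logging in during off-hours
def off_hours_service_logins(logs):
    return [e for e in logs
            if e["user"].startswith("svc_") and (e["hour"] < 6 or e["hour"] > 20)]

# 2. Investigate via the tooling at hand (here, a simple query over the records)
findings = off_hours_service_logins(auth_logs)

# 3. Discover the pattern: the same account appearing from multiple source hosts
sources = {e["src"] for e in findings}

# 4. Inform and enrich: promote the confirmed check into the automated analytics
automated_checks = [off_hours_service_logins]

print("Findings:", findings)
print("Distinct sources for the flagged account:", sources)
print("Automated checks for the next hunt:", [check.__name__ for check in automated_checks])
```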

The 4 Characteristics of a Comprehensive Threat Hunting Capability

A mature threat hunting capability is closely associated with SOC maturity. The least mature SOCs have human analysts who act as a firewall. As SOCs approach maturity and adopt security information and event management (SIEM) tools, their capacity to reactively investigate indicators of compromise (IoCs) increases. The most mature SOCs take a proactive approach to investigating IoCs, with researchers, analysts, solutions and a clearly defined methodology to orchestrate both investigation and response. A comprehensive capacity for hunting threats is defined by four key characteristics:

  1. Data handling: The ability to handle a deluge of data across siloed networks, including insight into internal risks, advanced activities from external threat actors and real-time threat intelligence from third-party sources.
  2. Data analysis: The ability to correlate a high volume and velocity of disparate data sources into information and, ultimately, intelligence.
  3. Informed action: Resources to increase threat hunters’ skills and easily feed threat intelligence through training, policy and cognitive capabilities.
  4. Orchestrated action: Defined processes and methodologies to hunt threats in a repeatable and orchestrated way that informs proactive security capabilities throughout the organization.

Organizations that fail to increase SOC maturity and adopt the solutions and processes for hunting threats face a number of risks. Relying on manual research methodologies can lead to costly data breaches and permanent brand damage when APTs evade detection. A lack of solutions and methods for an orchestrated IoC investigation process means less efficient and accurate operations. The absence of SOC orchestration encourages heavily manual processes.

The Diverse Business Benefits of Hunting Threats

Using cognitive intelligence tools to enhance SOC capabilities, Sogeti Luxembourg successfully reduced the average time of root cause determination and threat investigation from three hours to three minutes. In the process, the firm sped up its threat investigation process by 50 percent and saw a tenfold increase in actionable threat indicators.

Hunting threats can offer a number of benefits to both the business and the security operations center. The outcomes include greater protection of reputation, a more intelligent SOC and orchestrated security response.

Reputation Protection

Falling prey to an APT can cause lasting damage to a brand. One core benefit of implementing a more sophisticated threat hunting capability is the potential to guard against the most costly data breaches, which typically result in millions of lost records or permanent data deletion.

SOC Maturity

SOC analyst stress and falsely assigned benign positives are at record highs. APTs can easily go unnoticed due to the sheer volume of noise, which creates a culture of alert fatigue.

Achieving mature threat detection capabilities can change how analysts work and allow organizations to implement a cognitive SOC. Security analytics platforms enhance human intelligence and reduce manual research, and correlation tools provide real-time insight from a variety of structured and unstructured third-party data sources.

Orchestrated Security Response

With the technological capabilities to outthink adversaries, organizations can inform a proactive, unified approach to incident response (IR) that begins in the SOC. Internal and external data sources augment the intelligence of human analysts, allowing real-time, informed decision-making. Investigations and response can inform action as soon as anomalous behaviors or patterns are detected. The result is a defined process that allows your organization to mitigate threats earlier in the Cyber Kill Chain.

Intelligent Response to Sophisticated Threats

The majority of threats your organization faces each day will fit predictable patterns. While APTs make up a statistically small percentage of incidents investigated by an SOC, sophisticated threat actors use unique tactics and techniques to evade detection in a noisy SOC. These threats are the most likely to evade detection and result in highly expensive cybercrime.

State-sponsored criminal activity and attacks launched by sophisticated crime collectives are increasing. To guard against these increasingly complex threat vectors, organizations need to proactively prepare their defenses. Implementing cognitive tools in the SOC can enable organizations to adopt proactive threat hunting capabilities that leverage both art and science. By combining repeatable processes for threat investigation with intelligent solutions and skilled analysts, organizations can respond to threats earlier in the kill chain and protect their most critical assets.

Read the e-book: Master threat hunting

The post Maturing Your Security Operations Center With the Art and Science of Threat Hunting appeared first on Security Intelligence.

This post appeared first on Security Intelligence
Author: Rob Patey

Attribution, Cyberattacks, Cybercriminals, Cyberthreats, Threat Detection, threat hunting, Threat Management, Threat Monitoring,

The Cyber Attribution Dilemma: 3 Barriers to Cyber Deterrence

Given that the most serious threats in cyberspace are other state actors and their proxies, traditional thinking is focused on deterrence. Yet there are significant challenges for cyber deterrence.

The concept of deterrence was originally developed during the rise of nuclear technology. It relies on second-strike capability and on complete certainty about who the opponent is: each side must know that it can survive a first strike, identify its attacker and strike back. This is known as mutually assured destruction (MAD).

Deterrence strategies have worked well throughout history to deter nuclear proliferation because only nation-states have access to the resources and technologies to get in the game. Of those actors, a basic self-interest in survival underpins the effectiveness of MAD.

There are many methods available for monitoring the mining and use of nuclear materials and technologies, and we have a fairly accurate inventory. In the cyber theater, however, the cyber attribution dilemma essentially nullifies the traditional model of deterrence as previously applied to military strategies in conventional warfare. As mentioned, MAD depends on knowing who your opponent is and understanding their capabilities for a second strike. In the cyber theater, both of these requirements are virtually impossible to fulfill.

What Are the Top Challenges to Cyber Deterrence?

Because of the inherent architecture of the internet and threat actors’ ability to obfuscate the source of an attack, it is nearly impossible to attribute attacks with a high degree of certainty. This results in a cyber attribution dilemma whereby the need to impose the costs necessary for cyber deterrence is juxtaposed with the potential costs of misattribution.

1. Misattribution

Many are concerned about the dangers of misattribution in cyber warfare and the potential escalations it could cause. The current deterrence paradigm of mutually assured disruption — the equivalent of MAD in the cyber arena — has a high risk of escalating into a tit-for-tat exchange as a result of a false accusation.

2. False Flags

Adversaries have historically used false flag operations to make an operation appear as though it was perpetrated by someone else. Because of the cyber attribution dilemma, false flags are much easier to execute in cyberspace, where the challenge of attribution already exists. False flags in cyberspace exploit this existing uncertainty and further compound doubt by casting suspicion on other actors.

3. Plausible Deniability

The attribution dilemma also gives threat actors the benefit of plausible deniability, further reducing the risks and costs associated with cyber actions. If you can’t be certain who is responsible, once again, you can’t impose costs without risking imposing the costs on the wrong actor.

In the Absence of Attribution, Resilience Is Critical

The stakes are high in cyberspace and growing daily. Deterrence rests on enterprises’ ability to impose costs or deny gains. Without the ability to impose costs while avoiding misattribution and escalation, denying gains and surviving cyberattacks through resilience becomes all the more critical.

Advanced attacks executed by sophisticated actors who know how to stay under the radar often cause the most damage. Adopting threat hunting in your security operations center (SOC) can help reduce dwell time as well as the cost and impact of attacks.

Read the SANS threat hunting survey

The post The Cyber Attribution Dilemma: 3 Barriers to Cyber Deterrence appeared first on Security Intelligence.

Author: Jan Dyment

Cybersecurity Jobs, Cybersecurity Training, Data Classification, Data Management, Security Operations Center (SOC), Skills Gap, Threat Detection, threat hunting, Threat Intelligence, Threat Monitoring, Threat Prevention, Threat Protection

More Than Just a Fad: Lessons Learned About Threat Hunting in 2018

The year has very nearly come and gone, and some fads that we saw throughout 2018 are going with it. Fidget spinners are collecting dust in cubicles, the mannequin challenge is something only seen in department stores, and the Nae Nae is becoming extinct on dance floors across the country.

It’s no different in the cybersecurity community; trending tools and buzzwords come and go as quickly as viral internet memes. However, one capability that is here to stay is threat hunting, a proactive approach to discovering and mitigating threats. The term and practice of threat hunting have actually been around for quite some time, but the approach is becoming more of a household concept throughout security operations centers (SOCs), governments and private sector companies around the world. This is largely due to studies around the benefits of the practice and the real-world use cases that are rapidly emerging.

In the past year, we learned about the pros and cons of this approach, what it is, what it isn’t and everything in between. Let’s break down some of the lessons we learned about threat hunting in 2018.

Invest in Training and Methodology Before Technology

When a new security capability gains momentum in the industry, most companies’ first investment is in the tools to get them started. The same is true when it comes to investments in threat hunting, where an emphasis on methodology and tradecraft is paramount.

A key finding of the SANS 2018 threat hunting survey is that the No. 1 investment area for threat hunting is still technology, even though respondents cited a lack of trained staff in numerous areas as an important reason why they did not perform threat hunting or did not perform it as effectively as they should. The tools are only as good as the trained professional. This is as true with threat hunters as it is with construction workers, and it should not be forgotten.

Training and hiring the right people is especially important since threat hunting requires individuals with a knowledge of intelligence analysis and an understanding of the technical aspects of the SOC. Currently, threat hunting falls within a skills gap, which means finding a trained threat hunter to use the tools that a company has invested in is like finding a unicorn.

Going into 2019, organizations that practice threat hunting should take a holistic look at their programs and, where results are lacking, assess whether the issue is the fancy tools or the lack of trained cyberthreat hunters. Similarly, organizations that are new to the threat hunting game should evaluate the threat hunters they have or plan to hire before pulling the trigger on the latest tools.

Threat Hunting Is Only as Effective as Your Intelligence Framework

To launch an effective threat hunting program, you also need access to the right data. In terms of efficiency and accuracy, this should consist of internal data from the company mixed with external deep web, dark web, open source and third-party threat intelligence that provides context about threats manifesting through global cybercrime networks.

The SANS survey showed that a solid blend of internal, self-generated intelligence augmented with a combination of external data sources can reduce overall adversary dwell times across organizations’ networks. But it is more than just the access to the data itself; an organization could have access to all the data feeds in the world, but if it lacks the ability to provide context and formulate actionable hypotheses, then the data is next to useless.

In the counterterrorism community, we always said that intelligence drives operations. Yes, we needed access to the right data, but more importantly, we needed the ability to fuse all sources of data and develop actionable advice for operators. It’s the same with threat hunting: Data is key, but there needs to be a way to ingest, fuse and analyze data to formulate hypotheses about threats.

Threat Hunting Is Here to Stay in 2019

Going into 2019, the cybersecurity community will continue to learn about the world of threat hunting and how organizations can implement an effective threat hunting program. Just like the fads that will inevitably come and go in 2019, there will be new cybersecurity tools, methodologies and lessons in the new year. Due to the tangible benefits that organizations are seeing after implementing threat hunting programs, it’s apparent that the practice is not just another security fad.

As organizations train analysts on methodology before technology — and explore how to close the threat hunter skills gap, get access to the right data and generate actionable hypotheses to uncover threats — we will continue to learn how effective a threat hunting program can be when properly implemented.

Read the SANS 2018 threat hunting survey

The post More Than Just a Fad: Lessons Learned About Threat Hunting in 2018 appeared first on Security Intelligence.

Author: Jake Munroe

IBM X-Force Incident Response and Intelligence Services, Incident Response, Incident Response (IR), Security Services, threat hunting, Threat Intelligence, Voices of Security, X-Force

Visit the Subway System of Cybercrime With Security Consultant Francisco Galian

It took a group of Spain’s best hackers to awaken Francisco Galian’s passion for cybersecurity.

Francisco was in his last year of university in his native Barcelona, and as he was looking for a topic for his final thesis project, an unforeseen opportunity presented itself: A security startup based on campus was developing a new threat intelligence platform. Though Francisco — then studying telecommunications engineering — didn’t intend to enter the security field at the time, he thought it could be a good learning opportunity.

“To me, it was incredible seeing what the hackers were doing, learning from them,” he says. “I just totally loved it. I was learning a lot and hearing all these battle stories.”

From In-House Intelligence to Security Consultant

Those “battle stories” must have been inspiring, because Francisco dove headfirst into security. He worked in cyberthreat intelligence before moving in-house, combining his telecommunications degree and newfound love of security by working with the likes of Cellnex and O2 Telefonica as the security lead.

Those days, he says, were “massively different” from his current work as a security consultant at IBM X-Force Incident Response and Intelligence Services (IRIS) EMEA. Working for just one company requires an intimate understanding of its infrastructure, and it adds the complications of navigating the internal politics that can make life tough for security teams. It can also lead internal teams to become complacent, Francisco believes.

“If you’re a company, you should be receiving attacks every single day just because you have public assets,” he says. “That doesn’t mean that these are very naughty attacks and everything is wrong, no. You just have to see them because you are exposed to the internet.”

Nowadays, Francisco worries when he hears that a customer hasn’t had an attack in a while. He remembers his own days in-house and knows it’s just when you think you’re safest that attacks hit you hardest. Too often he’s spoken with customers who think they’re fine, only to have the threat hunters tell them they’ve been fully compromised for months.

The Secret Subway System of Cybercrime

He explains it with an analogy. Let’s say you work in a bank in a city with an underground transport network. Now, you walk along the streets and you walk into your office, and you don’t think about the network operating underneath you; it’s invisible to those above ground. But underneath the streets, the bad guys are moving all the money out of your bank accounts.

“The thing is, you were blind — you were not looking for it, both in processes and infrastructure,” Francisco says. “That’s the big reality. People working just in one company, sometimes they struggle to understand that.”

Francisco now spends his days on-call to be parachuted in when times are tough for IBM clients. He jokes that Friday at 5 p.m. is the busiest time, as the weekend looms and internal teams haven’t been able to crack the problem.

Francisco uses his vast knowledge of cybersecurity to help with incident response, to find the issues and to help rectify and protect. He talks about one banking client that found its website defaced by threat actors; he needed to investigate the incident to determine whether it was a compromise in their infrastructure or the DNS provider’s. Remarkably, he had that one solved in three hours.

Photo: IBM X-Force IRIS security consultant Francisco Galian.

Cryptojacking Is This Year’s Big Threat

The major threat trend this year has been in cryptojacking, wherein a system is compromised not to lock it with ransomware, but to use its computing resources to mine cryptocurrencies. The largest incident Francisco has worked on saw thousands of machines compromised within one company. That attacker was clever: They set a low threshold for the zombies, which meant the CPU wasn’t maxed out, making it harder to detect.

“The thing is, if for whatever reason they get pissed off, they can just shut down a huge part of your network,” he laments. And he’s seen that — threat actors who get annoyed and start to play around, or worse.

“Our day-to-day is just once a year for most companies,” Francisco says of the team focused on incident response and digital forensics. Customers come to the team when they have a severe incident they can’t handle internally. Every week it could be a new incident, a new threat, a new investigation — and when there are no new cases, the team is preparing customers via simulations and scenarios to help them be ready when the time comes.

“My aim is always to push for the efficiency, to find clever ways of doing stuff, automating tasks,” Francisco says. “That’s what I learned from my sensei from my early days. He was crazy about that — he automated everything even when he was pen testing, attacking, defending, and I’ve embraced that fully.”

‘The Answer Is Not Always in the Coffee’

And yet Francisco is not tech-obsessive. When he’s finished saving networks, you’ll find him outside playing sports — far from the computer’s glare. It’s a need to “disconnect,” he says; to have an escape. He jokes that he learned he had to have his “own life” after his first few years working in security.

And he finds staying fresh makes a big difference when you’re in the midst of responding to a big incident. “I’ve learned this from bad experiences,” he says. “You just have to find your own ways of disconnecting, and to me, sport is one of the best. If you can go and be outside, it’s going to be always better.”

That fresh mind is key when he’s in the midst of a situation and trying to work out his next move, battling the threat actors that inspired his career so many years ago. Laughs the Spaniard, “The answer is not always in the coffee!”


The post Visit the Subway System of Cybercrime With Security Consultant Francisco Galian appeared first on Security Intelligence.

Author: Security Intelligence Staff

Artificial Intelligence (AI), Data Protection, Data Security, Incident Forensics, Incident Response (IR), Malware, Ransomware, Security Intelligence & Analytics, Threat Detection, threat hunting, Threat Protection,

Following the Clues With DcyFS: A File System for Forensics

This article concludes our three-part series on Decoy File System (DcyFS) with a concrete example of how a cyber deception platform can also be a powerful tool for extracting forensic summaries. That data can expedite postmortem investigations, reveal attributing features of malware and help characterize the impact of attackers’ actions. Be sure to read part 1 and part 2 for the full story.

File System Overlays as Blank Canvases

When using Decoy File System (DcyFS), each subject’s view contains a stackable file system with an overlay layer. This layer helps protect files on the base file system, providing data integrity and confidentiality. The overlay also acts as a blank canvas, recording all created, modified and deleted files during suspicious user activity or the execution of an untrusted process.

These records are essential to piecing together what happens during a cyberattack as the overlay provides evidence of key indicators of compromise (IoCs) that investigators can use. To demonstrate the forensic capabilities of our approach, we created a module that analyzes overlays for IoCs and tested it with five different types of malware. The IoCs were sourced from the ATT&CK for Enterprise threat model.
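As a purely illustrative sketch (the article does not publish DcyFS’s real analysis module), an overlay sweep of that kind might look like the following, assuming a hypothetical mount point and the common overlayfs convention of marking deletions with whiteout device nodes:

```python
import os
import stat

# Hypothetical overlay mount point; DcyFS's real layout is not described here.
OVERLAY_ROOT = "/var/dcyfs/overlay"

def sweep_overlay(root=OVERLAY_ROOT):
    """Walk the overlay and collect every file a subject created, modified or deleted.

    Overlay-style file systems commonly mark deletions with 'whiteout' character
    devices (device number 0/0), so those entries are reported as deletions here.
    """
    created_or_modified, deleted = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            rel = os.path.relpath(path, root)
            if stat.S_ISCHR(st.st_mode) and st.st_rdev == 0:
                deleted.append(rel)              # whiteout entry = file deleted by the subject
            else:
                created_or_modified.append(rel)  # new or copied-up (modified) file
    return created_or_modified, deleted
```

The resulting path lists are the raw material the IoC checks described below would work from.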

DcyFS and the Forensics of Malware

Let’s take a closer look at the five malware types we identified with DcyFS’s analysis module and the IoCs collected through the file system overlays. We’ll also discuss how the file system actively helped protect critical systems from malware in our tests.

Persistence

Most malware is designed to persist on an infected endpoint and relaunch after a system reboot. The exact persistence mechanism depends on whether the malware gains administrator privileges on the endpoint. If it does not, the malware will typically modify user profile files that run on startup.

Malware running with escalated privileges can modify systemwide configurations in order to persist, typically by dropping initialization scripts into the system run-level directories. In certain cases, malware will create recurring tasks that ensure it runs on a schedule and persists across reboots.

Each time a piece of malware modifies a system file, the change is recorded on DcyFS’s overlay, enabling the forensic analyzer to easily identify malicious activity. Furthermore, since DcyFS gives the malware a per-process view, none of its file changes propagate to the global file system view. This also means the malware is not restarted on a reboot.
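For illustration, a forensic analyzer could flag persistence-related paths recorded on the overlay with a check along these lines; the path patterns are hypothetical examples, not DcyFS’s actual rule set:

```python
import fnmatch

# Illustrative Linux persistence locations (relative paths, no leading '/');
# not an exhaustive or official list.
PERSISTENCE_PATTERNS = [
    "home/*/.bashrc", "home/*/.profile", "root/.bashrc",     # user profile startup files
    "etc/rc*.d/*", "etc/init.d/*", "etc/systemd/system/*",   # system run-level/init scripts
    "etc/cron*/*", "var/spool/cron/*",                       # recurring scheduled tasks
]

def persistence_hits(overlay_paths):
    """Return overlay paths that match known persistence locations."""
    return [p for p in overlay_paths
            if any(fnmatch.fnmatch(p, pattern) for pattern in PERSISTENCE_PATTERNS)]

# Example: feed it the created/modified list produced by an overlay sweep.
# persistence_hits(["etc/cron.daily/update", "home/alice/notes.txt"])
# -> ["etc/cron.daily/update"]
```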

Dynamic Link Library (DLL) Injection

Some malware families, such as Umbreon and Jynx2, are not executables, but rather libraries designed to be preloaded by system processes. These libraries replace important system application programming interface (API) calls to change the functionality of a running application. In this way, an Apache web server can be turned into a backdoor, or a Bash shell can be hijacked to mine bitcoin in the background.

In Umbreon’s case, the malware replaces C API calls such as “accept,” “access” and “open” to hide its presence on the file system from antivirus software and the system user. Umbreon also creates a user and hides that account using the injected API calls. DcyFS identifies these file system changes, as well as the injected malicious library itself. Furthermore, since the library is only loaded in its own view, it cannot be injected into any other process running on the system.
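A rough sketch of how such injection artifacts could be surfaced from the overlay, assuming the common Linux preload mechanisms (the /etc/ld.so.preload file and dropped shared objects); these heuristics are illustrative, not DcyFS’s published logic:

```python
import os

def preload_injection_indicators(overlay_root):
    """Flag signs of library preloading recorded on the overlay (illustrative heuristics only)."""
    indicators = []
    preload = os.path.join(overlay_root, "etc/ld.so.preload")
    if os.path.isfile(preload):
        # Any library listed here is force-loaded into every dynamically linked process.
        with open(preload) as f:
            indicators += [("preloaded library", line.strip()) for line in f if line.strip()]
    for dirpath, _dirnames, filenames in os.walk(overlay_root):
        for name in filenames:
            if name.endswith(".so") and "/lib" not in dirpath:
                # Shared objects dropped outside normal library directories are suspicious.
                indicators.append(("dropped shared object", os.path.join(dirpath, name)))
    return indicators
```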

Binary Downloaders (Modifiers)

Cybercrime is a mercurial commodity business, where large criminal syndicates rent access to extensive botnets to other attackers. These bots are designed to send malicious spam or download various pieces of malware, such as banking Trojans, bitcoin miners and keyloggers, to collect stolen data that can be monetized by the syndicate.

With administrative access to an infected endpoint, bots will try to download malware into many system directories, creating redundancy in the hope that defenders will miss a copy during cleanup. As a result, newly downloaded binaries on a file system are a key IoC.

Aside from downloading new binaries, malware can also alter existing system binaries to make them secretly engage in nefarious activities. While running on DcyFS, these binary modifiers can only write to the overlay they see; they are unable to modify the applications in the global view of the base file system. Consequently, the tampered binaries are never actually executed, but they appear prominently on the overlay, where a forensics team can extract and analyze them.
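One simple way a forensics team might triage tampered binaries is to hash each overlay file against its counterpart in the base view. A minimal sketch, with hypothetical overlay and base paths:

```python
import hashlib
import os

def sha256(path):
    """Hash a file in chunks so large binaries don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def modified_binaries(overlay_root, base_root):
    """List files present in both views whose content differs: candidate tampered binaries."""
    tampered = []
    for dirpath, _dirnames, filenames in os.walk(overlay_root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), overlay_root)
            base_path = os.path.join(base_root, rel)
            overlay_path = os.path.join(overlay_root, rel)
            if os.path.isfile(base_path) and sha256(base_path) != sha256(overlay_path):
                tampered.append(rel)
    return tampered
```

Anything this comparison flags already exists on the base file system, so a content mismatch points to modification rather than a newly downloaded file.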

Backdoors

Typically, skilled attackers will try to cover their tracks to evade detection. One way of doing this is by saving malware into hidden files, such as any file starting with a period, or modifying programs such as “ls” or “dir” so that malware files are ignored when the contents of a directory are displayed to a user.

Another technique for hiding one’s presence is to remove entries from a user’s history profile or delete scheduled tasks that run antivirus scans. Finally, killing or deleting antivirus software is another way to ensure that malicious activities are not uncovered. With DcyFS, each of these track-covering steps is highlighted on the file system’s overlay.
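As a hedged illustration, heuristics of this sort could surface track-covering activity recorded on the overlay; the specific rules below are illustrative examples, not DcyFS’s documented logic:

```python
import os

def track_covering_indicators(overlay_root):
    """Illustrative heuristics for anti-forensics activity recorded on the overlay."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(overlay_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.startswith("."):
                findings.append(("hidden file written", path))      # dot-files used to stash malware
            if name.endswith("_history") and os.path.getsize(path) == 0:
                findings.append(("shell history truncated", path))  # cleared command history
    return findings
```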

Ransomware and Beyond

Ransomware has become a prominent part of the attack ecosystem, wreaking havoc on individuals and companies alike. The Erebus ransomware, for example, cost South Korean companies millions of dollars in ransom payments to rescue their own and their customers’ data.

Recent ransomware attacks have capitalized on strong asymmetric encryption as the main technique to hold victims’ data for ransom. However, other malware, such as KillDisk and Shamoon, simply destroys important files and cripples system infrastructure without the option to undo the destruction.

Ransomware on an endpoint typically runs through directories looking for preconfigured file extensions to encrypt. When that process begins, our forensic analysis looks for indications of encryption in the overlay file system, such as changes in file MIME type, to find evidence of a ransomware attack. It can also characterize attacks by measuring their information footprint in the file system. The DcyFS forensics analyzer generates three indicators that estimate the impact of the file system changes introduced by programs (a short sketch of these calculations follows the list):

  • Binary differences — Average percentage of modified bytes across copied files.
  • Information gain — Average information gain across copied files measured as the difference between the entropies of base and overlay files.
  • Write entropy — Average write entropy across overlay files.
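
Here is a minimal sketch of how these per-file metrics could be computed before averaging across copied files; the byte-level Shannon entropy and the padding choice for unequal lengths are assumptions based on the descriptions above, not DcyFS’s published implementation:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: low for uniform content, near 8 for random or encrypted data."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((count / total) * math.log2(count / total) for count in Counter(data).values())

def binary_difference(base: bytes, overlay: bytes) -> float:
    """Percentage of byte positions that differ between the base and overlay copies of a file."""
    length = max(len(base), len(overlay))
    if length == 0:
        return 0.0
    changed = sum(a != b for a, b in zip(base.ljust(length, b"\0"), overlay.ljust(length, b"\0")))
    return 100.0 * changed / length

def information_gain(base: bytes, overlay: bytes) -> float:
    """Entropy of the overlay copy minus entropy of the base copy; large gains suggest encryption."""
    return shannon_entropy(overlay) - shannon_entropy(base)

def write_entropy(overlay: bytes) -> float:
    """Entropy of the data actually written to the overlay copy."""
    return shannon_entropy(overlay)
```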

DcyFS also actively protects files from ransomware using the overlay. This allows the ransomware to “believe” it has succeeded, but enables the user to subvert the attack without any damage to critical infrastructure.

Humanize Your Security Problems With DcyFS

DcyFS is a security Swiss army knife. On one hand, the file system is a passive sensor, monitoring access to one of the most important commodities companies have: their data. It is also a forensic tool, allowing security practitioners to collect key evidence when an attack occurs. On the other hand, DcyFS is an active security control that can hide and help protect data while baiting attackers into revealing themselves.

Our research team believes that tools like DcyFS will be a big part of the next generation of cyberdefense. Agile and versatile tools of this kind not only identify attacks as they occur, but actively engage and react to the attacker. They turn security from a technical problem, as it is often cast, into a human problem, where adversaries and defenders engage like they do on any battlefield.

The post Following the Clues With DcyFS: A File System for Forensics appeared first on Security Intelligence.

Author: Teryl Taylor

Cyberthreat, Human Factor, Security Operations Center (SOC), Skills Gap, threat hunting, Threat Intelligence, Threat Management,

Know Your Enemy: The Art and Science of Cyberthreat Hunting

From Rome to Mexico City, as my IBM Security colleagues and I have traveled the world teaching cyberthreat hunting, we’ve found a multitude of differing opinions about who is and isn’t a target for cyberattacks.

One attendee at a recent workshop even stated: “My bank isn’t a target for a cyberattack because our country isn’t seen as a major globalized economy.”

The reality, however, is that your organization is always a target. Whether you’re a target of choice or a target of opportunity, it’s not a matter of if you’ll be attacked, but when. There’s even a possibility that attackers are already dwelling within your network and have been for some time.

Watch the on-demand webinar: Know Your Enemy — Proactive Cyber Threat Intelligence and Threat Hunting

Make the First Move With a Strong Cyberthreat Hunting Team

One of the best ways to get out ahead of malicious actors is with cyberthreat hunting, the act of proactively and aggressively eliminating adversaries as early as possible in the Cyber Kill Chain. The quicker you can locate and track your adversaries’ tactics, techniques and procedures (TTPs), the less impact attackers will have on your business.

Know Your Enemy

So what types of skills does a cyberthreat hunting team require?

Security operations center (SOC) analysts tend to define cyberthreat hunting reactively, as the investigation of incidents triggered by indicators of compromise (IoCs). Those IoCs are typically generated by internal security systems such as security information and event management (SIEM), incident response, intrusion detection systems (IDS) and intrusion prevention systems (IPS), and endpoint management tools.

Military and law enforcement intelligence analysts, however, define cyberthreat hunting as the process of proactively identifying, intercepting, tracking, investigating and eliminating IoCs before they impact national security, critical infrastructure and/or citizens.

The truth is they’re both right. There’s a tectonic shift occurring in the cybersecurity community with the convergence and blurring of lines between SOC and intelligence analysts. The challenge is that SOC analysts are not formally trained in intelligence life cycle analysis, and intelligence analysts are not formally trained in incident analysis and response.

The knowledge gap between these two skill sets is quite significant and has to be closed and integrated to build a fully functioning and productive cyberthreat hunting team. It’s also critical for SOCs to grasp the common denominator in both internal (reactive) and external (proactive) cyberthreats: the human element.

Put Methodology Before Technology to Close the Skills Gap

Security teams should take proactive steps to close the skills gap and mature their SOC. First, start with the basic definition of cyberthreat hunting provided above. Next, develop an understanding of the intelligence life cycle tradecraft and apply it to both security and intelligence operations. Finally, create a priority intelligence requirements (PIR) matrix that asks the logical questions of who, what, where, when, why and how regarding the analysis of global, industry-specific, geographic and cyberthreats applicable to your business.
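Purely as an illustration, a single PIR entry could be captured as simple structured data like the following; the fields and sample answers are hypothetical, not a prescribed template:

```python
# A single, hypothetical priority intelligence requirement (PIR) entry for a retail bank.
pir_entry = {
    "who":   "Financially motivated groups known to target retail banking",
    "what":  "Credential phishing and remote-access Trojans against online banking portals",
    "where": "Customer-facing web infrastructure and third-party payment processors",
    "when":  "Heightened activity around holidays and end-of-quarter reporting",
    "why":   "Monetization of stolen credentials and fraudulent transfers",
    "how":   "Phishing kits, commodity malware and abuse of exposed remote-access services",
}
```

A matrix of entries like this gives the hunting team an agreed starting point for which threats to pursue first and which data sources to collect.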

SOC Maturity Chart

There’s no magic button or technology that will solve all of your security challenges. By applying the integrated elements of people, processes, data and technology to the “know your enemy” intelligence methodology, you can gain real insight into how cybercriminals are seeking to target your organization. Putting methodology before technology will serve you well in defining your adversaries’ TTPs and the methods they are most likely to use against you.

In a world where the enemy potentially has access to infinite time, money and resources, it’s absolutely critical for the cybersecurity industry to close the knowledge and skills gaps, truly understand the art and science of cyberthreat hunting, and apply that understanding to proactively stop threats before they become a problem.

Watch the on-demand webinar: Know Your Enemy — Proactive Cyber Threat Intelligence and Threat Hunting

The post Know Your Enemy: The Art and Science of Cyberthreat Hunting appeared first on Security Intelligence.

Author: Sidney Pearl