Browsing category: Artificial intelligence

Artificial intelligence, Artificial Intelligence (AI), Professional Development, Security Intelligence & Analytics, Security Operations Center (SOC), Security Professionals, Security Training, Threat Intelligence, Threat Sharing,

Foster a Culture of Knowledge Sharing in Your Security Operations Center

Security operations centers (SOCs) are leaning more and more on technology, and artificial intelligence (AI) and orchestration are finding their way into the detection and response activities of the modern SOC. Although large chunks of these activities are being programmed, there are still some people-oriented aspects that will never be fully automated. One of them is the knowledge sharing that takes place between the people in your SOC.

Share Knowledge to Bridge the Skills Gap

Every day, security operations center analysts rely on their skills and knowledge of different systems, tools, technologies, threats and vulnerabilities to investigate alerts. Finding highly skilled SOC analysts is a challenge. An alternative is to turn to more junior profiles or external resources, which results in a more diverse mix of skills and knowledge. This is considered a strength, but only if this knowledge is transferred between team members.

If a skill is exclusive to any single employee, on the other hand, this is a weakness. The last thing you want in an industry with high turnover is to lose employees with undocumented knowledge. A good approach to enabling active knowledge sharing is critical to running an efficient SOC.

Explicit Versus Tacit Knowledge

Let’s first dig into what kind of information there is to share. In general, we can make a distinction between explicit and tacit knowledge. Explicit knowledge, such as procedures, work instructions and other documents, is easily transferable.

Tacit knowledge, on the other hand, is based on experience and intuition. When a senior SOC analyst has a feeling that a security event is related to malicious activity, this can be linked to tacit knowledge. The analyst might not immediately be able to explain the reason for this feeling since it is based on personal context. When a complex security alert pops up or a storm of tickets rages, this person will be quicker to identify the issue in an accurate way.

What Is the Proper Channel for Sharing?

Since there are different types of knowledge, there should also be different ways to transfer knowledge. You should store explicit knowledge in a knowledge management repository. Collaboration or wiki software can help you centralize this information. You can also share tips and tricks, workarounds, contact lists, tool manuals, standard operating procedures, checklists, shared bookmarks, escalation paths, use case documentation or any other information that is easily transferable in such a repository.

SOC analysts working on a 24/7 shift schedule might find it hard to connect to what is going on during the day. Creating a news section with decisions made during meetings, running issues, achievements and more could help them avoid missing any important information.

Documenting tacit knowledge is more challenging. The best way to transfer this kind of knowledge is face to face. Mentoring and job shadowing can be a good approach; for example, you might pair senior and junior SOC analysts to investigate alerts. Of course, it’s also a great idea to transform tacit knowledge into explicit knowledge by documenting historical investigations whenever possible.

Let AI Do the Heavy Lifting

To make knowledge management more efficient, it’s critical to get the right information to the right people at the right time. This is where AI solutions can assist with knowledge management in your SOC. Analysts are drowning in security news and announcements every day. When investigating alerts, AI can generate relevant insights automatically and provide SOC analysts with related online and offline information based on IP addresses, malware names and hashes, enabling analysts to make faster decisions.
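This kind of automated insight generation can be sketched in a few lines. The lookup table, alert fields and function names below are hypothetical sample data invented for illustration, not any real product's API:

```python
# Toy sketch of automated alert enrichment: given the indicators attached to
# an alert (IP addresses, file hashes, malware names), pull any related
# context from a local knowledge base so the analyst sees it immediately.
# THREAT_INTEL is hypothetical sample data.

THREAT_INTEL = {
    "203.0.113.7": {"type": "ip", "note": "Seen in phishing campaign (sample data)"},
    "44d88612fea8a8f36de82e1278abb02f": {"type": "md5", "note": "Known test-file hash (sample data)"},
}

def enrich_alert(alert: dict) -> dict:
    """Attach known context to each indicator found in the alert."""
    insights = {}
    for indicator in alert.get("indicators", []):
        if indicator in THREAT_INTEL:
            insights[indicator] = THREAT_INTEL[indicator]["note"]
    return {**alert, "insights": insights}

alert = {"id": 1, "indicators": ["203.0.113.7", "10.0.0.5"]}
print(enrich_alert(alert)["insights"])
# {'203.0.113.7': 'Seen in phishing campaign (sample data)'}
```

A production system would query live threat intelligence feeds rather than a static dictionary, but the principle is the same: surface related context automatically so the analyst doesn't have to hunt for it.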

More Tips to Enable Active Knowledge Sharing in Your Security Operations Center

It’s imperative for SOC teams to actively share knowledge and identify knowledge gaps. Aside from reduced frustration and higher job satisfaction, knowledge sharing will lead to faster response times, higher quality of analysis and lower costs. It will also improve the efficiency and accuracy of investigations and shorten their duration, and new team members will be able to ramp up more quickly. AI solutions can augment the knowledge sharing process and allow SOC analysts to focus time and energy on their core competencies.

Below are some additional tips to encourage knowledge sharing in your security operations center:

  • Make knowledge management and mentoring part of the job description.

  • Create an onboarding checklist for new SOC analysts to show the value of knowledge sharing from day one on the job. The ability to quickly onboard new analysts is an advantage to any SOC.

  • Facilitate job rotation between different tiers to identify knowledge shortcomings.

  • Organize regular peer training sessions in which an analyst delivers a short presentation to other colleagues on a topic of his or her choice.

  • When hiring specialized resources for a few days, allocate some time for them to deliver a training session.

  • Establish an open SOC area and limit remote work to maximize the amount of knowledge being shared.

The post Foster a Culture of Knowledge Sharing in Your Security Operations Center appeared first on Security Intelligence.

This post appeared first on Security Intelligence
Author: Simon Gemoets

Artificial intelligence, Artificial Intelligence (AI), Cognitive Security, IBM Watson, Security Information and Event Management (SIEM), Watson,

With AI for Cybersecurity, We Are Raising the Bar for Smart

It’s hard to imagine something more frustrating to a runner than moving the finish line after the race has started. After all, how can you set a proper pace if the distance keeps changing? How will you know you’ve succeeded if the definition of success is in flux?

In a sense, that’s what has happened over the years in the field of artificial intelligence (AI). What would you call something that could add, subtract, multiply and divide large, complex numbers in an instant? You’d probably call it smart, right? Or what if it could memorize massive quantities of seemingly random data and recall it on the spot, in sequence, and never make a mistake? You might even interpret that sort of brain power as a sign of genius. But what exactly does it mean to be intelligent, anyway?

Now that calculators are included as default features on our phones and smartwatches, we don’t consider them to be particularly intelligent. We also have databases with seemingly infinite capacity at every turn, so we no longer view these abilities as indicative of some sort of higher intelligence, but rather as features of an ordinary, modern computer. The bottom line is that the bar for what is generally considered smart has moved — albeit far from the first time.

What Does It Mean to Be Intelligent?

There was a time when we thought that chess was such a complex game that only people with superior brain power could be champions. Surely, the ability to plot strategies, respond to an opponent’s moves and see many moves ahead with hundreds or even thousands of outcomes was proof of incredible intellect, right?

That was pretty much the case until 1997, when IBM’s Deep Blue computer beat grandmaster and world champion Garry Kasparov in a six-game match. Was Deep Blue intelligent even though the system couldn’t even read a newspaper? Surely, intelligence involved more than just being a chess savant. The bar for smart had moved.

Consider the ability to consume and comprehend huge stores of unstructured content written in a form that humans can read but computers struggle with due to the vagaries of normal expression, such as idioms, puns and other quirks of language. Take, for example, “it’s raining cats and dogs” or saying that someone has “cold feet”: the former has nothing to do with animals, and the latter is not a condition that can be remedied with wool socks.

What if a system could read this sort of information nonstop across a wide range of categories, never forget anything it reads and recall the facts relevant to a given clue with subsecond response time? What if it was so good at this exercise that it could beat the best in the world with more correct responses in less time? That would surely be the sign of a genius, wouldn’t it?

It would have been until, in 2011, IBM’s Watson computer beat two grand champions at the game of Jeopardy! while the world watched on live TV. Even so, was Watson intelligent, or just really good at a given task as its predecessors had been? The bar for smart had moved yet again.

Passing the Turing Test: Are We Near the Finish Line?

The gold standard for AI — proof that a machine is able to match or exceed human intelligence in its various forms by mimicking the human ability to discover, infer and reason — was established in 1950 by Alan Turing, widely considered the father of theoretical computer science and AI. The Turing Test involved having a person communicate with another human and a machine. If that person was unable to distinguish through written messages whether they were conversing with the other person or the computer, the computer would be considered intelligent.

This elegant test incorporated many elements of what we consider intelligence: natural language processing, general knowledge across a wide variety of subjects, flexibility and creativity, and a certain social intelligence that we all possess, but may take for granted in personal communications until we encounter a system that lacks it. Surely, a computer that can simulate human behavior and knowledge to the extent that a neutral observer could not tell the difference would be the realization of the AI dream — finish line crossed.

That was the conventional wisdom until 2014, when a computer managed to fool 33 percent of evaluators into thinking they were talking to a 13-year-old Ukrainian boy. Surely, this achievement would have convinced most people that AI was finally here now that a machine had passed the iconic Turing Test, right? Nope — you guessed it — the bar for smart had moved.

How AI for Cybersecurity Is Raising the Bar

Now, we have systems doing what was previously unthinkable, but there is still a sense that we’ve yet to see the full potential of AI for cybersecurity. The good news is that we now have systems like Watson that can do anything from recommending treatment for some of the most intractable cancer cases to detecting when your IT systems are under attack, by whom and to what extent. Watson for Cybersecurity can do the latter today by applying knowledge it has gleaned from reading millions of documents in unstructured form and applying that learning to the precise details of a particular IT environment. Better still, it does all this with the sort of speed even the most experienced security experts could only dream of.

Does it solve all the problems of a modern security operations center (SOC)? Of course not. We still need human intelligence and insight to guide the process, make sense of the results and devise appropriate responses that account for ethical dilemmas, legal considerations, business priorities and more. However, the ability to reduce the time for investigations from a few hours to a few minutes can be a game changer. There’s still much more to be done with AI for cybersecurity, but one thing’s for sure: We have, once again, raised the bar for smart.

The post With AI for Cybersecurity, We Are Raising the Bar for Smart appeared first on Security Intelligence.

Author: Jeff Crume

Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), Cognitive Security, Data Management, Data Privacy, Governance, Internet of Things (IoT), Security Strategy, Security Technology,

How CISOs Can Facilitate the Advent of the Cognitive Enterprise

Just as organizations are getting more comfortable with leveraging the cloud, another wave of digital disruption is on the horizon: artificial intelligence (AI), and its ability to drive the cognitive enterprise.

In early 2019, the IBM Institute for Business Value (IBV) released a new report titled, “The Cognitive Enterprise: Reinventing your company with AI.” The report highlights key benefits and provides a roadmap to becoming a cognitively empowered enterprise, a term used to indicate an advanced digital enterprise that fully leverages data to drive operations and push its competitiveness to new heights.

Such a transformation is only possible with the extensive use of AI in business and technology platforms to continuously learn and adapt to market conditions and customer demand.

CISOs Are Key to Enabling the Cognitive Enterprise

The cognitive enterprise is an organization with an unprecedented level of convergence between technology, business processes and human capabilities, designed to achieve competitive advantage and differentiation.

To enable such a change, the organization will need to leverage more advanced technology platforms and must no longer be limited to dealing only with structured data. New, more powerful business platforms will enable a competitive advantage by combining data, unique workflows and expertise. Internal-facing platforms will drive more efficient operations while external-facing platforms will allow for increased cooperation and collaboration with business partners.

Yet these changes will also bring along new types of risks. In the case of the cognitive enterprise, many of the risks stem from the increased reliance on technology to power more advanced platforms — including AI and the internet of things (IoT) — and the need to work with a lot more data, whether it’s structured, unstructured, in large volume or shared with partners.

As the trusted adviser of the organization, the chief information security officer (CISO) has a strong role to play in enabling and securing the organization’s transformation toward:

  • Operational agility, powered in part by the use of new and advanced technologies, such as AI, 5G, blockchain, 3D printing and the IoT.

  • Data-driven decisions, supported by systems able to recognize and provide actionable insights based on both structured and unstructured data.

  • Fluid boundaries with multiple data flows going to a larger ecosystem of suppliers, customers and business partners. Data is expected to be shared and accessible to all relevant parties.

Figure: The relationship between data, processes, people, outside forces and internal drivers (automation, blockchain, AI). Source: IBM Institute for Business Value (IBV) analysis.

Selection and Implementation of Business Platforms

Among the major tasks facing organizations embarking on this transformation is the need to choose and deploy new mega-systems, equivalent to the monumental task of switching enterprise resource planning (ERP) systems — or, in some cases, actually making the switch.

The choice of a new platform will impact many areas across the enterprise, including HR and capital allocation processes, in addition to the obvious impact on how the business delivers value via its product or service. Yet, as the IBM IBV report points out, the benefits can be significant. Leading organizations have been able to deliver higher revenues — as high as eight times the average — by adopting new business and technology platforms and fully leveraging all their data, both structured and unstructured.

That said, having large amounts of data doesn’t automatically translate into an empowered organization. As the report cautions, organizations can no longer simply “pour all their data into a data lake and expect everyone to go fishing.” The right digital platform choice can empower the organization to deliver enhanced profits or squeeze additional efficiency, but only if the data is accurate and can be readily accessed.

Once again, the CISO has an important role to play in ensuring the organization has considered all the implications of implementing a new system, so governance will be key.

Data Governance — When Security and Privacy Converge

For the organization to achieve the level of trust needed to power cognitive operations, the CISO will need to drive conversations and choices about the security and privacy of sensitive data flowing across the organization. Beyond the basic tenets of confidentiality, integrity and availability, the CISO will need to be fully engaged on data governance, ensuring data is accurate and trustworthy. For data to be trusted, the CISO will need to review and guarantee the data’s provenance and lineage. Yet the report mentions that, for now, fewer than half of organizations have developed “a systemized approach to data curation,” so there is much progress to be made.

Organizations will need to balance larger amounts of data — several orders of magnitude larger — with greater access to this data by both humans and machines. They will also need to balance security with seamless customer and employee experiences. To handle this data governance challenge, CISOs must ensure the data flows with external partners are frictionless yet also provide security and privacy.

AI Can Enable Improved Cybersecurity

The benefits of AI aren’t limited to the business side of the organization. In 2016, IBM quickly recognized the benefits cognitive security could bring to organizations that leverage artificial intelligence in the cybersecurity domain. As attackers explore more advanced and more automated attacks, organizations simply cannot afford to rely on slow, manual processes to detect and respond to security incidents. Cognitive security will enable organizations to improve their ability to prevent and detect threats, as well as accelerate and automate responses.

Leveraging AI as part of a larger security automation and orchestration effort has clear benefits. The “2018 Cost of a Data Breach Study,” conducted by Ponemon Institute, found that security automation decreases the average total cost of a data breach by around $1.55 million. By leveraging AI, businesses can find threats up to 60 times faster than via manual investigations and reduce the amount of time spent analyzing each incident from one hour to less than one minute.

Successful Digital Transformation Starts at the Top

Whether your organization is ready to embark on the journey to becoming a cognitive enterprise or simply navigating through current digital disruption, the CISO is emerging as a central powerhouse of advice and strategy regarding data and technology, helping choose an approach that enables security and speed.

With the stakes so high — and rising — CISOs should get a head start on crafting their digital transformation roadmaps, and the IBM IBV report is a great place to begin.

The post How CISOs Can Facilitate the Advent of the Cognitive Enterprise appeared first on Security Intelligence.

Author: Christophe Veltsos

Artificial intelligence, Artificial Intelligence (AI), Authentication, Automation, Biometric Security, Blockchain, cryptocurrency, Machine Learning, Social Engineering, Threat Detection,

Don’t Believe Your Eyes: Deepfake Videos Are Coming to Fool Us All

In 2017, an anonymous Reddit user under the pseudonym “deepfakes” posted links to pornographic videos that appeared to feature famous mainstream celebrities. The videos were fake. And the user created them using off-the-shelf artificial intelligence (AI) tools.

Two months later, Reddit banned the deepfakes account and related subreddit. But the ensuing scandal revealed a range of university, corporate and government research projects under way to perfect both the creation and detection of deepfake videos.

Where Deepfakes Come From (and Where They’re Going)

Deepfakes are created using AI technology called generative adversarial networks (GANs), which can be used broadly to create fake data that can pass as real data. To oversimplify how GANs work, two machine learning (ML) algorithms are pitted against each other. One creates fake data and the other judges the quality of that fake data against a set of real data. They continue this contest at massive scale, continually getting better at making fake data and judging it. When both algorithms become extremely good at their respective tasks, the product is a set of high-quality fake data.
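The adversarial contest described above can be illustrated with a deliberately oversimplified toy: no neural networks, just a single-parameter "generator" and a "discriminator" that scores samples by their distance from the real data's mean. Everything here is invented for illustration and is nothing like a production GAN, but it shows the core dynamic of one model adjusting itself to fool the other:

```python
# Toy illustration (not a real neural GAN) of the adversarial loop:
# the "generator" produces values from a single parameter, and the
# "discriminator" judges a value by how close it is to the real-data mean.
# Each round, the generator nudges its parameter in whichever direction
# makes its output harder to distinguish from real data.
import random

random.seed(0)
real_data = [random.gauss(5.0, 0.1) for _ in range(200)]
real_mean = sum(real_data) / len(real_data)

def discriminator(x):
    """Score in (0, 1]: higher means 'looks more real' (closer to the real mean)."""
    return 1.0 / (1.0 + abs(x - real_mean))

theta = 0.0  # the generator's single parameter
step = 0.1
for _ in range(100):
    # The generator tries a small move in each direction and keeps the one
    # the discriminator scores higher -- i.e., the one that fools it better.
    if discriminator(theta + step) > discriminator(theta - step):
        theta += step
    else:
        theta -= step

print(round(theta, 1))  # ends near the real-data mean of ~5.0
```

In a real GAN, both sides are deep networks trained by gradient descent at massive scale, and the "distance from real" judgment is itself learned rather than hard-coded.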

In the case of deepfakes, the authentic data set consists of hundreds or thousands of still photographs of a person’s face. This gives the algorithms a wide selection of images, showing the face from different angles and with different facial expressions, to draw on and judge against as they experimentally build frames of the video during the learning phase.

Carnegie Mellon University scientists even figured out how to impose the style of one video onto another using a technique called Recycle-GAN. Instead of convincingly replacing someone’s face with another, the Recycle-GAN process enables the target to be used like a puppet, imitating every head movement, facial expression and mouth movement in the exact way as the source video. This process is also more automated than previous methods.

Most of these videos today are either pornography featuring celebrities, satire videos created for entertainment or research projects showing rapidly advancing techniques. But deepfakes are likely to become a major security concern in the future. Today’s security systems rely heavily on surveillance video and image-based biometric security. Since the majority of breaches occur because of social engineering-based phishing attacks, it’s certain that criminals will turn to deepfakes for this purpose.

Deepfake Videos Are Getting Really Good, Really Fast

The earliest publicly demonstrated deepfake videos tended to show talking heads, with the subjects seated. Now, full-body deepfakes developed in separate research projects at Heidelberg University and the University of California, Berkeley are able to transfer the movements of one person to another. One form of authentication involves gait analysis. These kinds of full-body deepfakes suggest that the gait of an authorized person could be transferred in video to an unauthorized person.

Here’s another example: Many cryptocurrency exchanges authenticate users by making them photograph themselves holding up their passport or some other form of identification as well as a piece of paper with something like the current date written on it. This can be easily foiled with Photoshop. Some exchanges, such as Binance, found many attempts by criminals to access accounts using doctored photos, so they and others moved to video instead of photos. Security analysts worry that it’s only a matter of time before deepfakes will become so good that neither photos nor videos like these will be reliable.

The biggest immediate threat for deepfakes and security, however, is in the realm of social engineering. Imagine a video call or message that appears to be your work supervisor or IT administrator, instructing you to divulge a password or send a sensitive file. That’s a scary future.

What’s Being Done About It?

Increasingly realistic deepfakes have enormous implications for fake news, propaganda, social disruption, reputational damage, evidence tampering, evidence fabrication, blackmail and election meddling. Another concern is that the perfection and mainstreaming of deepfakes will cause the public to doubt the authenticity of all videos.

Security specialists, of course, will need to have such doubts as a basic job requirement. Deepfakes are a major concern for digital security specifically, but also for society at large. So what can be done?

University Research

Some researchers say that analyzing the way a person in a video blinks, or how often they blink, is one way to detect a deepfake. In general, deepfakes show insufficient or even nonexistent blinking, and the blinking that does occur often appears unnatural. Breathing is another movement usually not present in deepfakes, along with hair (it often looks blurry or painted on).

Researchers from the State University of New York (SUNY) at Albany developed a deepfake detection method that uses AI technology to look for natural blinking, breathing and even a pulse. It’s only a matter of time, however, before deepfakes make these characteristics look truly “natural.”
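A crude version of the blink-rate idea can be expressed as a simple heuristic. The threshold and input format below are assumptions made for illustration; they are not the SUNY team's actual method, which uses trained models rather than a fixed cutoff:

```python
# Toy heuristic inspired by blink-based deepfake detection: humans blink
# roughly 15-20 times per minute, while early deepfakes showed far fewer
# blinks. The threshold below is an illustrative assumption, not a tuned value.

def looks_suspicious(blink_timestamps_s, video_length_s, min_blinks_per_min=6.0):
    """Flag a video whose observed blink rate is implausibly low."""
    blinks_per_min = len(blink_timestamps_s) / (video_length_s / 60.0)
    return blinks_per_min < min_blinks_per_min

# A 60-second clip with a single detected blink is flagged; 17 blinks is not.
print(looks_suspicious([12.5], 60.0))           # True
print(looks_suspicious(list(range(17)), 60.0))  # False
```

Detecting the blinks themselves is the hard part, of course, and as the paragraph above notes, generators will eventually learn to produce natural-looking blinking and defeat this class of check.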

Government Action

The U.S. government is also taking precautions: Congress could consider a bill in the coming months to criminalize both the creation and distribution of deepfakes. Such a law would likely be challenged in court as a violation of the First Amendment, and would be difficult to enforce without automated technology for identifying deepfakes.

The government is working on the technology problem, too. The National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA) and Intelligence Advanced Research Projects Agency (IARPA) are looking for technology to automate the identification of deepfakes. DARPA alone has reportedly spent $68 million on a media forensics capability to spot deepfakes, according to CBC.

Private Technology

Private companies are also getting in on the action. A new cryptographic authentication tool called Amber Authenticate can run in the background while a device records video. As reported by Wired, the tool generates hashes — “scrambled representations” — of the data at user-determined intervals, which are then recorded on a public blockchain. If the video is manipulated in any way, the hashes change, alerting the viewer to the probability that the video has been tampered with. A dedicated player feature shows a green frame for portions of video that are faithful to the original, and a red frame around video segments that have been altered. The system has been proposed for police body cams and surveillance video.
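The mechanism can be sketched roughly as follows. The segment size and the in-memory list standing in for a public blockchain are simplifications for illustration, not Amber's actual implementation:

```python
# Minimal sketch of interval-based video authentication: hash fixed-size
# segments of the recording as it is captured, publish the digests (here
# just a list standing in for a public blockchain), and later recompute
# them to locate tampered segments.
import hashlib

SEGMENT = 1024  # bytes per segment; a real system would hash time intervals

def segment_hashes(video_bytes):
    return [hashlib.sha256(video_bytes[i:i + SEGMENT]).hexdigest()
            for i in range(0, len(video_bytes), SEGMENT)]

original = bytes(3000)             # stand-in for recorded video data
ledger = segment_hashes(original)  # digests "anchored" at record time

tampered = bytearray(original)
tampered[1500] ^= 0xFF             # alter one byte in the second segment

# Verification: one verdict per segment, like the green/red player frames.
verdicts = [old == new
            for old, new in zip(ledger, segment_hashes(bytes(tampered)))]
print(verdicts)  # [True, False, True] -- only the second segment was altered
```

Anchoring the digests on a public blockchain matters because it makes the ledger itself tamper-evident: an attacker who alters the video cannot quietly rewrite the published hashes to match.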

A similar approach was taken by a company called Factom, whose blockchain technology is being tested for border video by the Department of Homeland Security (DHS), according to Wired.

Security Teams Should Prepare for Anything and Everything

The solution to deepfakes may lie in some combination of education, technology and legislation — but none of these will work without the technology part. Because when deepfakes get really good, as they inevitably will, only machines will be able to tell the real videos from the fake ones. This deepfake technology is coming, but nobody knows when. We should also assume that an arms race will arise with malicious deepfake actors inventing new methods to overcome the latest detection systems.

Security professionals need to consider the coming deepfake wars when analyzing future security systems. If they’re video or image based — everything from facial recognition to gait analysis — additional scrutiny is warranted.

In addition, you should add video to the long list of media you cannot trust. Just as training programs and digital policies make clear that email may not come from who it appears to come from, video will need to be met with similar skepticism, no matter how convincing the footage. Deepfake technology will also inevitably be deployed for blackmail, including attempts to extract sensitive information from companies and individuals.

The bottom line is that deepfake videos that are indistinguishable from authentic videos are coming, and we can scarcely imagine what they’ll be used for. We should start preparing for the worst.

The post Don’t Believe Your Eyes: Deepfake Videos Are Coming to Fool Us All appeared first on Security Intelligence.

Author: Mike Elgan

Advanced Threats, Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), Data Breaches, Risk Management, Security Costs, Security Intelligence & Analytics, Security Products, Security Strategy, Skills Gap, Threat Detection, Zero-Day Attacks,

Are Applications of AI in Cybersecurity Delivering What They Promised?

Many enterprises are using artificial intelligence (AI) technologies as part of their overall security strategy, but results are mixed on the post-deployment usefulness of AI in cybersecurity settings.

This trend is supported by a new white paper from Osterman Research titled “The State of AI in Cybersecurity: The Benefits, Limitations and Evolving Questions.” According to the study, which included responses from 400 organizations with more than 1,000 employees, 73 percent of organizations have implemented security products that incorporate at least some level of AI.

However, 46 percent agree that rules creation and implementation are burdensome, and 25 percent said they do not plan to implement additional AI-enabled security solutions in the future. These findings may indicate that AI is still in the early stages of practical use and its true potential is still to come.

How Effective Is AI in Cybersecurity?

“Any ITDM should approach AI for security very cautiously,” said Steve Tcherchian, chief information security officer (CISO) and director of product at XYPRO Technology. “There are a multitude of security vendors who tout AI capabilities. These make for great presentations, marketing materials and conversations filled with buzz words, but when the rubber meets the road, the advancement in technology just isn’t there in 2019 yet.”

The marketing Tcherchian refers to has certainly drummed up considerable attention, but AI may not yet be delivering enough when it comes to measurable results for security. Respondents to the Osterman Research study noted that the AI technologies they have in place do not help mitigate many of the threats faced by enterprise security teams, including zero-day and advanced threats.

Still Work to Do, but Promise for the Future

While applications of artificial intelligence must still mature for businesses to realize their full benefits, many in the industry still feel the technology offers promise for a variety of applications, such as improving the speed of processing alerts.

“AI has a great potential because security is a moving target, and fixed rule set models will always be evaded as hackers are modifying their attacks,” said Marty Puranik, CEO of Atlantic.Net. “If you have a device that can learn and adapt to new forms of attacks, it will be able to at least keep up with newer types of threats.”

Research from the Ponemon Institute predicted several benefits of AI use, including cost-savings, lower likelihood of data breaches and productivity enhancements. The research found that businesses spent on average around $3 million fighting exploits without AI in place. Those who have AI technology deployed spent an average of $814,873 on the same threats, a savings of more than $2 million.

Help for Overextended Security Teams

AI is also being considered as a potential point of relief for the cybersecurity skills shortage. Many organizations are pinched to find the help they need in security, with Cybersecurity Ventures predicting the skills shortage will increase to 3.5 million unfilled cybersecurity positions by 2021.

AI can help security teams increase efficiency by quickly making sense of all the noise from alerts. This could prove to be invaluable because at least 64 percent of alerts per day are not investigated, according to Enterprise Management Associates (EMA). AI, in tandem with meaningful analytics, can help determine which alerts analysts should investigate and discern valuable information about what is worth prioritizing, freeing security staff to focus on other, more critical tasks.
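A toy version of this kind of prioritization is sketched below. The fields and weights are invented for illustration; real products typically learn such scoring from data rather than hard-coding it:

```python
# Toy sketch of alert triage scoring: combine a few signals into a priority
# score so analysts investigate the riskiest alerts first. All fields and
# weights here are illustrative assumptions.

def priority(alert):
    score = {"low": 1, "medium": 3, "high": 5}[alert["severity"]]
    score += 3 if alert["asset_critical"] else 0
    score += 2 if alert["matches_threat_intel"] else 0
    return score

alerts = [
    {"id": "a1", "severity": "low", "asset_critical": False, "matches_threat_intel": False},
    {"id": "a2", "severity": "high", "asset_critical": True, "matches_threat_intel": True},
    {"id": "a3", "severity": "medium", "asset_critical": True, "matches_threat_intel": False},
]
queue = sorted(alerts, key=priority, reverse=True)
print([a["id"] for a in queue])  # ['a2', 'a3', 'a1']
```

Even a crude ranking like this means the noisiest low-value alerts fall to the bottom of the queue instead of consuming analyst time first.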

“It promises great improvements in cybersecurity-related operations, as AI releases security engineers from the necessity to perform repetitive manual processes and provides them with an opportunity and time to improve their skills, learn how to use new tools, technologies,” said Uladzislau Murashka, a certified ethical hacker (CEH) at ScienceSoft.

Note that while AI offers the potential for quicker, more efficient handling of alerts, human intervention will continue to be critical. Applications of artificial intelligence will not replace humans on the security team anytime soon.

Paving an Intelligent Path Forward

It’s important to consider another group that is investing in AI technology and using it for financial gains: cybercriminals. Along with enterprise security managers, those who make a living by exploiting sensitive data also understand the potential AI has for the future. It will be interesting to see how these capabilities play out in the future cat-and-mouse game of cybersecurity.

AI in cybersecurity is still in the early stages of its evolution, and its full potential has yet to be realized. As security teams continue to invest in and develop AI technologies, these capabilities will someday be an integral part of cyberdefense.

The post Are Applications of AI in Cybersecurity Delivering What They Promised? appeared first on Security Intelligence.

Author: Joan Goodchild

Artificial intelligence, Artificial Intelligence (AI), Authentication Systems, Biometric Security, Data Protection, facial recognition, Identity and Access Management (IAM), Machine Learning, passwords, Unified Endpoint Management (UEM),

AI May Soon Defeat Biometric Security, Even Facial Recognition Software

It’s time to face a stark reality: Threat actors will soon gain access to artificial intelligence (AI) tools that will enable them to defeat multiple forms of authentication — from passwords to biometric security systems and even facial recognition software — identify targets on networks and evade detection. And they’ll be able to do all of this on a massive scale.

Sounds far-fetched, right? After all, AI is difficult to use, expensive and can only be produced by deep-pocketed research and development labs. Unfortunately, this just isn’t true anymore; we’re now entering an era in which AI is a commodity. Threat actors will soon be able to simply go shopping on the dark web for the AI tools they need to automate new kinds of attacks at unprecedented scales. As I’ll detail below, researchers are already demonstrating how some of this will work.

When Fake Data Looks Real

Understanding the coming wave of AI-powered cyberattacks requires a shift in thinking and AI-based unified endpoint management (UEM) solutions that can help you think outside the box. Many in the cybersecurity industry assume that AI will be used to simulate human users, and that’s true in some cases. But a better way to understand the AI threat is to realize that security systems are based on data. Passwords are data. Biometrics are data. Photos and videos are data — and new AI is coming online that can generate fake data that passes as the real thing.

One of the most challenging AI technologies for security teams is a very new class of algorithms called generative adversarial networks (GANs). In a nutshell, GANs can imitate or simulate any distribution of data, including biometric data.

To oversimplify how GANs work, they involve pitting one neural network against a second neural network in a kind of game. One neural net, the generator, tries to simulate a specific kind of data and the other, the discriminator, judges the first one’s attempts against real data — then informs the generator about the quality of its simulated data. As this progresses, both neural networks learn. The generator gets better at simulating data, and the discriminator gets better at judging the quality of that data. The product of this “contest” is a large amount of fake data produced by the generator that can pass as the real thing.
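The adversarial dynamic described above can be sketched in a toy one-dimensional example. To be clear, this is illustrative Python only, not a real GAN — there are no neural networks or gradients here. The "discriminator" keeps a running estimate of what real data looks like, and the "generator" nudges its output toward whatever the discriminator scores as realistic:

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # center of the "real" data distribution (a 1-D stand-in for biometric data)

class Discriminator:
    """Judges samples: the closer to its learned estimate of real data, the more 'real'."""
    def __init__(self):
        self.estimate = 0.0
    def score(self, x):
        return -abs(x - self.estimate)  # higher score = judged more realistic
    def learn(self, real_sample, lr=0.05):
        # study a real sample and refine the estimate of what real data looks like
        self.estimate += lr * (real_sample - self.estimate)

class Generator:
    """Produces fake samples and adapts to whatever the discriminator rewards."""
    def __init__(self):
        self.mu = -5.0  # starts far from the real distribution
    def sample(self):
        return random.gauss(self.mu, 0.5)
    def learn(self, disc, step=0.1):
        # move toward whichever neighboring value the discriminator scores higher
        left, right = self.mu - step, self.mu + step
        self.mu = left if disc.score(left) > disc.score(right) else right

disc, gen = Discriminator(), Generator()
for _ in range(2000):
    disc.learn(random.gauss(REAL_MEAN, 0.5))  # discriminator trains on real data
    gen.learn(disc)                           # generator adapts to fool it

fake = gen.sample()  # the generator's fakes now cluster around the real distribution
```

After the loop, `gen.mu` has drifted from -5 to roughly `REAL_MEAN`: the generator learns to fake the distribution without ever seeing real data directly, only the discriminator's feedback. That is the same dynamic that, at scale and with deep networks, produces convincing fake biometric data.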

GANs are best known as the foundational technology behind those deep fake videos that convincingly show people doing or saying things they never did or said. Applied to hacking consumer security systems, GANs have been demonstrated — at least, in theory — to be keys that can unlock a range of biometric security controls.

Machines That Can Prove They’re Human

CAPTCHAs are a form of lightweight website security you’re likely familiar with. By making visitors “prove” they’re human, CAPTCHAs act as a filter to block automated systems from gaining access. One typical kind of CAPTCHA asks users to identify numbers, letters and characters that have been jumbled, distorted and obfuscated. The idea is that humans can pick out the right symbols, but machines can’t.

However, researchers at Northwest University and Peking University in China and Lancaster University in the U.K. claimed to have developed an algorithm based on a GAN that can break most text-based CAPTCHAs within 0.05 seconds. In other words, they’ve trained a machine that can prove it’s human. The researchers concluded that CAPTCHAs should no longer be relied upon for front-line website defense, because their technique needs only a small number of data points for training — around 500 test CAPTCHAs selected from 11 major CAPTCHA services — and both the machine learning and the cracking happen very quickly on a single standard desktop PC.

Faking Fingerprints

One of the oldest tricks in the book is the brute-force password attack. The most commonly used passwords have been well-known for some time, and many people use passwords that can be found in the dictionary. So if an attacker throws a list of common passwords, or the dictionary, at a large number of accounts, they’re going to gain access to some percentage of those targets.
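The classic wordlist attack is simple to sketch. The usernames, passwords and hashes below are invented for illustration; real attacks throw far larger lists at leaked (often unsalted) hash dumps:

```python
import hashlib

def sha256(password):
    return hashlib.sha256(password.encode()).hexdigest()

# hypothetical leaked database of unsalted password hashes
stolen_hashes = {
    "alice": sha256("123456"),
    "bob":   sha256("correct-horse-battery"),
    "carol": sha256("qwerty"),
}

# attacker's wordlist of the most common passwords
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

# hash each guess and look for collisions with the stolen hashes
cracked = {
    user: guess
    for user, h in stolen_hashes.items()
    for guess in wordlist
    if sha256(guess) == h
}
# two of the three accounts fall to the wordlist; the uncommon passphrase survives
```

This is also why per-user salts and deliberately slow hash functions such as bcrypt or Argon2 matter: they force the attacker to redo the work for every account and every guess instead of hashing the wordlist once.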

As you might expect, GANs can produce high-quality password guesses. Thanks to this technology, it’s now also possible to launch a brute-force fingerprint attack. Fingerprint identification — like the kind used by major banks to grant access to customer accounts — is no longer safe, at least in theory.

Researchers at New York University and Michigan State University recently conducted a study in which GANs were used to produce fake-but-functional fingerprints that also look convincing to any human. They said their method worked because of a flaw in the way many fingerprint ID systems work. Instead of matching the full fingerprint, most consumer fingerprint systems only try to match a part of the fingerprint.

The GAN approach enables the creation of thousands of fake fingerprints that have the highest likelihood of being matches for the partial fingerprints the authentication software is looking for. Once a large set of high-quality fake fingerprints is produced, it’s basically a brute-force attack using fingerprint patterns instead of passwords. The good news is that many consumer fingerprint sensors use heat or pressure to detect whether an actual human finger is providing the biometric data.
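A toy simulation (not the researchers' actual method) shows why partial matching helps the attacker: if the sensor accepts a match against any of the several partial templates stored per finger, a fake print gets many chances instead of one. The per-partial match probability below is an arbitrary illustrative figure:

```python
import random

random.seed(1)

def match_prob(trials, partials_per_finger, p_single=0.01):
    """Estimate the chance a random fake print unlocks a sensor that accepts
    a match against ANY of the partial templates stored for a finger."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < p_single for _ in range(partials_per_finger)):
            hits += 1
    return hits / trials

full_match = match_prob(20_000, partials_per_finger=1)      # match the whole print
partial_match = match_prob(20_000, partials_per_finger=12)  # any of 12 stored partials

# with k independent chances, the hit rate is 1 - (1 - p)^k, i.e. roughly k times higher
print(f"full: {full_match:.3f}, partial: {partial_match:.3f}")
```

In this toy model the partial-matching sensor is roughly an order of magnitude easier to fool, which is the intuition behind generating large sets of fake prints optimized to hit the most common partial patterns.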

Is Face ID Next?

One of the most outlandish schemes for fooling biometric security involves tricking facial recognition software with fake faces. This was a trivial task with 2D technologies, in part because the capturing of 2D facial data could be done with an ordinary camera, and at some distance without the knowledge of the target. But with the emergence of high-definition 3D technologies found in many smartphones, the task becomes much harder.

A journalist working at Forbes tested four popular Android phones, plus an iPhone, using 3D-printed heads made by a company called Backface in Birmingham, U.K. The studio used 50 cameras and sophisticated software to scan the “victim.” Once a complete 3D image was created, the life-size head was 3D-printed, colored and, finally, placed in front of the various phones.

The results: All four Android phones unlocked with the phony faces, but the iPhone didn’t.

This method is, of course, difficult to pull off in real life because it requires the target to be scanned using a special array of cameras. Or does it? Constructing a 3D head out of a series of 2D photos of a person — extracted from, say, Facebook or some other social network — is exactly the kind of fake data that GANs are great at producing. It won’t surprise me to hear in the next year or two that this same kind of unlocking is accomplished using GAN-processed 2D photos to produce 3D-printed faces that pass as real.

Stay Ahead of the Unknown

Researchers can only demonstrate the AI-based attacks they can imagine — there are probably hundreds or thousands of ways to use AI for cyberattacks that we haven’t yet considered. For example, McAfee Labs predicted that cybercriminals will increasingly use AI-based evasion techniques during cyberattacks.

What we do know is that as we enter into a new age of artificial intelligence being everywhere, we’re also going to see it deployed creatively for the purpose of cybercrime. It’s a futuristic arms race — and your only choice is to stay ahead with leading-edge security based on AI.

The post AI May Soon Defeat Biometric Security, Even Facial Recognition Software appeared first on Security Intelligence.

Author: Mike Elgan

Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), CISO, Cloud Security, Cognitive Security, Internet of Things (IoT), Machine Learning, Penetration Testing, Security Intelligence & Analytics, Security Leaders, Security Leadership, Security Operations Center (SOC), Security Solutions,

Break Through Cybersecurity Complexity With New Rules, Not More Tools

Let’s be frank: Chief information security officers (CISOs) and security professionals all know cybersecurity complexity is a major challenge in today’s threat landscape. Other folks in the security industry know this too — although some don’t want to admit it. The problem is that amid increasing danger and a growing skills shortage, security teams are overwhelmed by alerts and the growing number of complex tools they have to manage. We need to change that, but how? By completely rethinking our assumptions.

The basic assumption of security up until now is that new threats require new tools. After 12 years at IBM Security, leading marketing teams and making continuous contact with our clients — and, most recently, as VP of product marketing — I’ve seen a lot of promising new technology. But in our rapidly diversifying industry, there are more specialized products to face every kind of threat in an expanding universe of attack vectors. Complexity is a hidden cost of all these marvelous products.

It’s not just security products that contribute to the cybersecurity complexity conundrum; digitization, mobility, cloud and the internet of things (IoT) all contribute to the complexity of IT environments, making security an uphill battle for underresourced security teams. According to Forrester’s “Global Business Technographics Security Survey 2018,” 31 percent of business and IT decision-makers ranked the complexity of the IT environment among the biggest security challenges they face, tied with the changing nature of threats as the most-cited challenge.

I’ll give you one more mind-boggling statistic to demonstrate why complexity is the enemy of security: According to IBM estimates, enterprises use as many as 80 different security products from 40 vendors. Imagine trying to build a clear picture with pieces from 80 separate puzzles. That’s what CISOs and security operations teams are being asked to do.

7 Rules to Help CISOs Reduce Cybersecurity Complexity

The sum of the parts is not greater than the whole. So, we need to escape the best-of-breed trap to handle the problem of complexity. Cybersecurity doesn’t need more tools; it needs new rules.

Complexity requires us as security professionals and industry partners to turn the old ways of thinking inside out and bring in fresh perspectives.

Below are seven rules to help us think in new ways about the complex, evolving challenges that CISOs, security teams and their organizations face today.

1. Open Equals Closed

You can’t prevent security threats by piling on more tools that don’t talk to each other and create more noise for overwhelmed analysts. Security products need to work in concert, and that requires integration and collaboration. An open, connected, cloud-based security platform that brings security products together closes the gaps that point products leave in your defenses.

2. See More When You See Less

Security operations centers (SOCs) see thousands of security events every day — a 2018 survey of 179 IT professionals found that 55 percent of respondents handle more than 10,000 alerts per day, and 27 percent handle more than 1 million events per day. SOC analysts can’t handle that volume.

According to the same survey, one-third of IT professionals simply ignore certain categories of alerts or turn them off altogether. A smarter approach to the overwhelming volume of alerts leverages analytics and artificial intelligence (AI) so SOC analysts can focus on the most crucial threats first, rather than chase every security event they see.

3. An Hour Takes a Minute

When you find a security incident that requires deeper investigation, time is of the essence. Analysts can’t afford to get bogged down in searching for information in a sea of threats.

Human intelligence augmented by AI — what IBM calls cognitive security — allows SOC analysts to respond to threats up to 60 times faster. An advanced AI can understand, reason and learn from structured and unstructured data, such as news articles, blogs and research papers, in seconds. By automating mundane tasks, analysts are freed to make critical decisions for faster response and mitigation.

4. A Skills Shortage Is an Abundance

It’s no secret that greater demand for cybersecurity professionals and an inadequate pipeline of traditionally trained candidates has led to a growing skills gap. Meanwhile, cybercriminals have grown increasingly collaborative, but those who work to defend against them remain largely siloed. Collaboration platforms for security teams and shared threat intelligence between vendors are force multipliers for your team.

5. Getting Hacked Is an Advantage

If you’re not seeking out and patching vulnerabilities in your network and applications, you’re assuming that what you don’t know can’t hurt you. Ethical hacking and penetration testing turn hacking into an advantage, helping you find your vulnerabilities before adversaries do.

6. Compliance Is Liberating

More and more consumers say they will refuse to buy products from companies that they don’t trust to protect their data, no matter how great the products are. By creating a culture of proactive data compliance, you can exchange the checkbox mentality for continuous compliance, turning security into a competitive advantage.

7. Rigidity Is Breakthrough

The success of your business depends not only on customer loyalty, but also employee productivity. Balance security with productivity by practicing strong security hygiene. Run rigid but silent security processes in the background to stay out of the way of productivity.

What’s the bottom line here? Times are changing, and the current trend toward complexity will slow the business down, cost too much and fail to reduce cyber risk. It’s time to break through cybersecurity complexity and write new rules for a new era.

https://youtu.be/tgb-hpIrSbo

The post Break Through Cybersecurity Complexity With New Rules, Not More Tools appeared first on Security Intelligence.

Author: Wangui McKelvey

Advanced Persistent Threat (APT), Analytics, Artificial intelligence, Big Data, Data Management, insider threats, Internet of Things (IoT), Machine Learning, Security Analytics, Security Intelligence & Analytics, Security Training, Threat Detection, Threat Intelligence, User Behavior Analytics (UBA),

Stay Ahead of the Growing Security Analytics Market With These Best Practices

As breach rates climb and threat actors continue to evolve their techniques, many IT security teams are turning to new tools in the fight against corporate cybercrime. The proliferation of internet of things (IoT) devices, network services and other technologies in the enterprise has expanded the attack surface every year and will continue to do so. This evolving landscape is prompting organizations to seek out new ways of defending critical assets and gathering threat intelligence.

The Security Analytics Market Is Poised for Massive Growth

Enter security analytics, which mixes threat intelligence with big data capabilities to help detect, analyze and mitigate targeted attacks and persistent threats from outside actors as well as those already inside corporate walls.

“It’s no longer enough to protect against outside attacks with perimeter-based cybersecurity solutions,” said Hani Mustafa, CEO and co-founder of Jazz Networks. “Cybersecurity tools that blend user behavior analytics (UBA), machine learning and data visibility will help security professionals contextualize data and demystify human behavior, allowing them to predict, prevent and protect against insider threats.”

Security analytics can also provide information about attempted breaches from outside sources. Analytics tools work together with existing network defenses and strategies and offer a deeper view into suspicious activity, which could be missed or overlooked for long periods due to the massive amount of superfluous data collected each day.

Indeed, more security teams are seeing the value of analytics as the market appears poised for massive growth. According to Global Market Insights, the security analytics market was valued at more than $2 billion in 2015, and it is estimated to grow by more than 26 percent over the coming years — exceeding $8 billion by 2023. ABI Research put that figure even higher, estimating that the need for these tools will drive the security analytics market toward a revenue of $12 billion by 2024.

Why Are Security Managers Turning to Analytics?

For most security managers, investment in analytics tools represents a way to fill the need for more real-time, actionable information that plays a role in a layered, robust security strategy. Filtering out important information from the massive amounts of data that enterprises deal with daily is a primary goal for many leaders. Businesses are using these tools for many use cases, including analyzing user behavior, examining network traffic, detecting insider threats, uncovering lost data, and reviewing user roles and permissions.

“There has been a shift in cybersecurity analytics tooling over the past several years,” said Ray McKenzie, founder and managing director of Red Beach Advisors. “Companies initially were fine with weekly or biweekly security log analytics and threat identification. This has morphed to real-time analytics and tooling to support vulnerability awareness.”

Another reason for analytics is to gain better insight into the areas that are most at risk within an IT environment. But in efforts to cull important information from a wide variety of potential threats, these tools also present challenges to the teams using them.

“The technology can also cause alert fatigue,” said Simon Whitburn, global senior vice president, cybersecurity services at Nominet. “Effective analytics tools should have the ability to reduce false positives while analyzing data in real-time to pinpoint and eradicate malicious activity quickly. At the end of the day, the key is having access to actionable threat intelligence.”

Personalization Is Paramount

Obtaining actionable threat intelligence means configuring these tools with your unique business needs in mind.

“There is no ‘plug and play’ solution in the security analytics space,” said Liviu Arsene, senior cybersecurity analyst at Bitdefender. “Instead, the best way forward for organizations is to identify and deploy the analytics tools that best fits an organization’s needs.”

When evaluating security analytics tools, consider the company’s size and the complexity of the challenges the business hopes to address. Some organizations may need capabilities such as flexible deployment models, broad scope and depth of analysis, forensics, and monitoring, reporting and visualization. Others may have simpler needs with minimal overhead and a smaller focus on forensics and advanced persistent threats (APTs).

“While there is no single analytics tool that works for all organizations, it’s important for organizations to fully understand the features they need for their infrastructure,” said Arsene.

Best Practices for Researching and Deploying Analytics Solutions

Once you have established your organization’s needs and goals for investing in security analytics, there are other important considerations to keep in mind.

Emphasize Employee Training

Chief information security officers (CISOs) and security managers must ensure that their staffs are prepared to use the tools at the outset of deployment. Training employees on how to make sense of information among the noise of alerts is critical.

“Staff need to be trained to understand the results being generated, what is important, what is not and how to respond,” said Steve Tcherchian, CISO at XYPRO Technology Corporation.

Look for Tools That Can Change With the Threat Landscape

Security experts know that criminals are always one step ahead of technology and tools and that the threat landscape is always evolving. It’s essential to invest in tools that can handle relevant data needs now, but also down the line in several years. In other words, the solutions must evolve alongside the techniques and methodologies of threat actors.

“If the security tools an organization uses remain stagnant in their programming and update schedule, more vulnerabilities will be exposed through other approaches,” said Victor Congionti of Proven Data.

Understand That Analytics Is Only a Supplement to Your Team

Analytics tools are by no means a replacement for your security staff. Having analysts who can understand and interpret data is necessary to get the most out of these solutions.

Be Mindful of the Limitations of Security Analytics

Armed with security analytics tools, organizations can benefit from big data capabilities to analyze data and enhance detection with proactive alerts about potential malicious activity. However, analytics tools have their limitations, and enterprises that invest must evaluate and deploy these tools with their unique business needs in mind. The data obtained from analytics requires context, and trained staff need to understand how to make sense of important alerts among the noise.

The post Stay Ahead of the Growing Security Analytics Market With These Best Practices appeared first on Security Intelligence.

Author: Joan Goodchild

Artificial intelligence, Chief Information Security Officer (CISO), CISO, Incident Forensics, Incident Management, Incident Response, Incident Response (IR), orchestration, Security Intelligence & Analytics, Security Leaders, Security Operations and Response, Security Operations Center (SOC), Security Professionals, Skills Gap,

Maximize Your Security Operations Center Efficiency With Incident Response Orchestration

It’s 5:48 a.m. — only 48 minutes into your 12-hour shift in the security operations center (SOC), and you’ve already investigated three threats. You were prepared for a long shift, but since an analyst on the night crew just quit, now you’re covering her shift, too. How is anyone supposed to stay vigilant in the thick of a monotonous 24-hour slog in the SOC?

When you first started, you tried talking to your boss about how incident response orchestration software and other tools might help the team work more efficiently. Today, you’re just trying to survive. It’s hard not to feel completely numb when you’re buried in hundreds of alerts you can’t possibly review.

When the tools in the SOC don’t integrate seamlessly into a unified security immune system of solutions, analysts can’t make the most of their time. Given the widening cybersecurity skills gap, the rising cost of a data breach and the blinding speed at which alerts pile up in security information and event management (SIEM) logs, security leaders must empower their analysts to maximize their efficiency.

The first step is to give them the tools they need to accurately prioritize all those alerts — but what does intelligent incident response look like in practice, and how can orchestration and automation help transform a reactive response system into a proactive security powerhouse? Let’s zoom in on what’s holding SOCs back and how an integrated ecosystem of tools can help analysts overcome these challenges before, during and after an attack.


Reactive, Manual Processes in the Understaffed SOC

The average security analyst investigates 20–25 incidents each day. It takes the average analyst 13–18 minutes to compare indicators of compromise (IoCs) to logs, threat intelligence feeds and external intelligence, and manual research can yield false positive rates of 70 percent or higher.
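That manual IoC-to-log comparison is exactly the kind of repetitive matching that is easy to automate. Here is a minimal sketch, with invented indicators and log entries standing in for a real threat feed and SIEM export:

```python
# hypothetical threat-intel feed: known-bad indicators of compromise (IoCs)
iocs = {
    "ips": {"203.0.113.7", "198.51.100.23"},
    "hashes": {"44d88612fea8a8f36de82e1278abb02f"},
}

# hypothetical SIEM log entries the analyst would otherwise check one by one
logs = [
    {"src_ip": "10.0.0.4",    "file_hash": "44d88612fea8a8f36de82e1278abb02f"},
    {"src_ip": "203.0.113.7", "file_hash": None},
    {"src_ip": "10.0.0.9",    "file_hash": None},
]

def triage(logs, iocs):
    """Flag every log entry that contains a known indicator of compromise."""
    return [
        entry for entry in logs
        if entry["src_ip"] in iocs["ips"] or entry["file_hash"] in iocs["hashes"]
    ]

flagged = triage(logs, iocs)  # 2 of the 3 entries match a known indicator
```

A set lookup like this takes microseconds per entry; the 13–18 minutes an analyst spends per alert goes to the judgment calls that follow, which is where automation pays off.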

To make matters worse, as security analysts struggle against an increased volume of complex alerts, the SOC is facing a talent crisis: Sixty-six percent of cybersecurity professionals believe there are too few qualified analysts to handle alert volume in the SOC.

According to the Ponemon Institute’s “2018 Cost of a Data Breach Study,” the average cost of a breach globally is $3.86 million, a 6.4 percent increase from 2017. As threat actors become more effective at evading and targeting the enterprise, the majority of analysts can’t keep up. Twenty-seven percent of SOCs receive more than 1 million alerts each day, and the most common response to alert fatigue is to modify policies for fewer alerts.

Orchestration and automation can free overwhelmed analysts in the SOC and significantly improve cyber resiliency throughout the enterprise. In fact, research has shown that SOC orchestration can triple incident response volume and significantly reduce time to response.

“While data breach costs have been rising steadily, we see positive signs of cost savings through the use of newer technologies as well as proper planning for incident response, which can significantly reduce these costs,” said Dr. Larry Ponemon.

Automation reduces the average cost of a data breach by $1.55 million. To build a cyber resilient enterprise, security leaders need intelligent solutions for orchestration, automation, machine learning and artificial intelligence (AI).

What Are the Attributes of Intelligent Incident Response?

Enterprises can save an average of $1 million by containing a data breach in under 30 days, according to the Ponemon study. However, the average time to containment is 69 days. Security leaders should consider the risks of failing to adopt solutions for intelligent, proactive response, including costlier data breaches caused by reactive response and longer containment times.

The SOC is facing a higher volume of more sophisticated threats, and there is a massive shortage of cybersecurity talent to boot. The right approach to intelligent response, therefore, encompasses solutions for the following:

  1. Orchestration and automation — An integrated, streamlined ecosystem can enable organizations to create dynamic incident response (IR) plans and automate remediation.
  2. Human and artificial intelligence — Operationalize human intelligence, leverage advanced threat intelligence and collaborate with experts.
  3. Case management — Establish systems for continual IR plan improvement while developing a clear understanding of internal workloads and skills.
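The first of those attributes, an automated IR plan, boils down to a chain of steps that share and enrich one incident record. A minimal sketch, with every step and field name invented for illustration:

```python
# all step and field names below are invented for illustration
def enrich(incident):
    incident["intel"] = f"reputation lookup for {incident['indicator']}"
    return incident

def contain(incident):
    incident["actions"] = ["isolate host", "reset credentials"]
    return incident

def notify(incident):
    incident["notified"] = ["soc-lead", "legal"]
    return incident

PLAYBOOK = [enrich, contain, notify]  # the steps an IRP would chain automatically

def run_playbook(incident):
    for step in PLAYBOOK:
        incident = step(incident)  # each step adds to the shared incident record
    return incident

result = run_playbook({"indicator": "203.0.113.7", "severity": "high"})
```

A real orchestration platform adds conditional branching, human approval gates and integrations with dozens of tools, but the core design is this pipeline: codify the plan once, then execute it consistently for every incident.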

Let’s take a closer look at how intelligent incident response orchestration works in practice and how it can help security leaders free up their overworked analysts for more pressing tasks.

3 Use Cases for Intelligent Incident Response Orchestration

A comprehensive ecosystem of security solutions can enable the enterprise to prepare for sophisticated cyberthreats, respond proactively to risks and apply lessons learned to create future safeguards. Intelligent orchestration creates efficiency and accuracy before an attack, during an incident and after remediation.

1. Before an Attack

Half of respondents to a recent survey believe it’s somewhat or highly likely that their organization will have to respond to a major incident in the next year, while 9 percent have “no doubt.” The right time to address SOC challenges, such as the increased volume of highly targeted threats and too many single-purpose solutions, is before an attack occurs.

The first step to building a cyber resilient enterprise involves adopting an advanced incident response platform to create automated, intelligent workflows that encompass people, processes and technology. This platform can be enhanced with a SIEM solution to deliver comprehensive incident analytics and visibility into emerging threats.

Enlisting security operations consultants can help organizations supplement their internal talent. Collaborating with external IR experts, meanwhile, can help companies implement effective training and strategic preparation.

2. During an Attack

Minutes count when the enterprise is facing a sophisticated, targeted threat. The incident response platform (IRP) can act as a centralized solution for comprehensive response and remediation. When it is coupled with cognitive intelligence, the SOC can rapidly investigate threats without overwhelming its staff.

When a critical incident is detected, the SOC can call in on-demand IR experts to help manage and remediate the incident. The IRP generates a response playbook that updates dynamically as threat intelligence solutions analyze the incident and endpoint analytics solutions deliver details of the on-site infection, with automated reporting sent to the legal team.

Using threat intelligence, forensics and other solutions, IR analysts can research the tactics used by attackers to pinpoint the source of the incident. By following instructions from the playbook, SOC analysts can coordinate with IT on remediation actions, such as global password resets and segregation of privileged accounts.

3. After an Attack

There are few genuinely random cybersecurity attacks. Of the organizations that fell victim to a significant attack in the last 18 months, 56 percent were targeted again within the same period.

When an attack is fully remediated, security analysts can prepare efficient reporting on the incident using data from security intelligence solutions, forensic investigation tools and insights from the response researchers. This research can be presented directly to the executive leadership team to communicate the status of the incident, actions taken and lessons learned.

By collaborating with third-party response experts and security service consultants, the SOC team can work to refine formal incident response policies and enhance security controls. As SOC operations resume, analysts can improve readiness with customized response drill training.

Why Incident Response Orchestration Matters

By protecting the enterprise with solutions to automate and orchestrate incident response, security leaders can introduce the benefit of cyber resiliency to the organization. According to Forrester, “Technology products that provide automated, coordinated, and policy-based action of security processes across multiple technologies, [make] security operations faster, less error-prone, and more efficient.” Adding the right solutions for orchestration, cognitive intelligence, and case management can ease the burden on the SOC while reducing cybersecurity risks.

Six steps to proactive and resilient incident response

The post Maximize Your Security Operations Center Efficiency With Incident Response Orchestration appeared first on Security Intelligence.

Author: Dan Carlson

Artificial intelligence, Artificial Intelligence (AI), Automation, CISO, Cloud Adoption, Compliance, Cybersecurity, Data Breach, Data Privacy, General Data Protection Regulation (GDPR), Incident Response, Incident Response (IR), Internet of Things (IoT), IoT Security, Machine Learning, privacy regulations, Risk Management, Security Intelligence & Analytics, Security Professionals, Security Trends,

Top 2019 Cybersecurity Predictions From the Resilient Year-End Webinar

2018 was another significant year for the cybersecurity industry, with sweeping changes that will impact security professionals for years to come.

The General Data Protection Regulation (GDPR) finally went into effect, dramatically reshaping the way companies and consumers manage data privacy. Security teams stepped up their battle against technology complexity by increasingly migrating to the cloud and adopting security platforms. And several emerging security technologies — such as incident response automation and orchestration, artificial intelligence (AI), and machine learning — continued to evolve and saw increased adoption as a result.

As security teams continue pushing to get ahead of adversaries, these trends will almost certainly have long-term impacts. But what do they mean for 2019?

Bold Cybersecurity Predictions for 2019

Recently, I was fortunate to host a panel of cybersecurity experts for IBM Resilient’s sixth annual end-of-year and predictions webinar, including Bruce Schneier, chief technology officer (CTO) at IBM Resilient and special advisor to IBM Security; Jon Oltsik, senior principal analyst at Enterprise Strategy Group; Ted Julian, co-founder and vice president of product management at IBM Resilient; and Gant Redmon, program director of cybersecurity and privacy at IBM Resilient.

During the webinar, the team discussed and debated the trends that defined 2018 and offered cybersecurity predictions on what the industry can expect in 2019. In the spirit of keeping our experts honest, below are the four boldest predictions from the panel.

Bruce Schneier: There Will Be a Major IoT Cyberattack … or Not

Last year, Bruce predicted that a major internet of things (IoT) cyberattack would make the news, perhaps targeting automobiles or medical devices. Fortunately, that wasn’t the case in 2018. But could it happen in 2019?

Bruce’s prediction: maybe (yes, he’s hedging his bet). There are certainly many risks and vulnerabilities associated with the rise of IoT devices. Regardless of whether a major attack is imminent, IoT security needs to be a top priority for security teams in 2019. This prediction is in line with Bruce’s latest book, “Click Here to Kill Everybody.”

Ted Julian: Security Automation Will Create Unintended Negative Consequences

Incident response automation and orchestration is an increasingly popular way for security teams to streamline repetitive processes and make analysts more efficient, but automating poorly defined processes could create bigger issues.

Automated processes accidentally taking down systems is a familiar problem in the IT space. In 2019, we will see an example of security automation hurting an organization in unforeseen ways.

To avoid this, organizations need to consider how they employ technology when orchestrating incident response processes. They should focus on aligning people, processes and technology, and methodically employ automation to further empower their security staff.
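One common way to employ automation methodically is a human-in-the-loop gate: low-risk enrichment actions run immediately, while destructive actions are held until an analyst approves them. The sketch below is an assumed illustration; the action names and the `run_action` helper are hypothetical, not part of any real product.

```python
# Illustrative guardrail for security automation: destructive actions
# require explicit human approval; read-only actions run immediately.
DESTRUCTIVE = {"isolate_host", "disable_account", "block_subnet"}


def run_action(action: str, target: str, approved: bool = False) -> str:
    """Execute an automated response action with a human-in-the-loop gate."""
    if action in DESTRUCTIVE and not approved:
        return f"PENDING approval: {action} on {target}"
    return f"EXECUTED: {action} on {target}"


print(run_action("enrich_ioc", "10.0.0.5"))          # → EXECUTED: enrich_ioc on 10.0.0.5
print(run_action("isolate_host", "10.0.0.5"))        # → PENDING approval: isolate_host on 10.0.0.5
print(run_action("isolate_host", "10.0.0.5", True))  # → EXECUTED: isolate_host on 10.0.0.5
```

Keeping an explicit allowlist of destructive actions makes it harder for a poorly defined process to take a production system down automatically, which is exactly the unintended consequence the prediction warns about.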

Jon Oltsik: Continuous Risk Management Will Help Organizations Better Understand Risks

Today, risk assessments and vulnerability scans give organizations a point-in-time look at their security posture and threat landscape. But in 2019, that won’t be enough. Security leadership — as well as executives and board members — needs real-time information about the risks the organization faces and what needs to be done to improve. Establishing a system of continuous risk management will help security teams enable this reality.

Gant Redmon: New Laws Will Provide Safe Harbor to Compliant Organizations

A pending law in Ohio would provide a first in U.S. data privacy regulations: safe harbor from tort claims for organizations that are in compliance with their security regulations. In other words, if an organization suffers a data breach but is in compliance with its regulatory obligations, it will be protected from lawsuits related to that breach.

While the Ohio law is the first of its kind, we will no doubt start to hear of similar regulations emerging throughout 2019.

What are your cybersecurity predictions for 2019? Tweet to us at @IBMSecurity and let us know!

Watch the complete webinar

The post Top 2019 Cybersecurity Predictions From the Resilient Year-End Webinar appeared first on Security Intelligence.

Author: Maria Battaglia