Artificial intelligence, Artificial Intelligence (AI), Cognitive Security, IBM Watson, Security Information and Event Management (SIEM), Watson,

With AI for Cybersecurity, We Are Raising the Bar for Smart

It’s hard to imagine something more frustrating to a runner than moving the finish line after the race has started. After all, how can you set a proper pace if the distance keeps changing? How will you know you’ve succeeded if the definition of success is in flux?

In a sense, that’s what has happened over the years in the field of artificial intelligence (AI). What would you call something that could add, subtract, multiply and divide large, complex numbers in an instant? You’d probably call it smart, right? Or what if it could memorize massive quantities of seemingly random data and recall it on the spot, in sequence, and never make a mistake? You might even interpret that sort of brain power as a sign of genius. But what exactly does it mean to be intelligent, anyway?

Now that calculators are included as default features on our phones and smartwatches, we don’t consider them to be particularly intelligent. We also have databases with seemingly infinite capacity at every turn, so we no longer view these abilities as indicative of some sort of higher intelligence, but rather as features of an ordinary, modern computer. The bottom line is that the bar for what is generally considered smart has moved — and not for the first time.

What Does It Mean to Be Intelligent?

There was a time when we thought that chess was such a complex game that only people with superior brain power could be champions. Surely, the ability to plot strategies, respond to an opponent’s moves and see many moves ahead with hundreds or even thousands of outcomes was proof of incredible intellect, right?

That was pretty much the case until 1997, when IBM’s Deep Blue computer beat grandmaster and world champion Garry Kasparov in a six-game match. Was Deep Blue intelligent even though the system couldn’t even read a newspaper? Surely, intelligence involved more than just being a chess savant. The bar for smart had moved.

Consider the ability to consume and comprehend huge stores of unstructured content written in a form that humans can read but computers struggle with due to the vagaries of normal expression, such as idioms, puns and other quirks of language. Take “it’s raining cats and dogs,” or saying that someone has “cold feet”: the former has nothing to do with animals, and the latter is not a condition that can be remedied with wool socks.

What if a system could read this sort of information nonstop across a wide range of categories, never forget anything it reads and recall the facts relevant to a given clue with subsecond response time? What if it was so good at this exercise that it could beat the best in the world with more correct responses in less time? That would surely be the sign of a genius, wouldn’t it?

It would have been until, in 2011, IBM’s Watson computer beat two grand champions at the game of Jeopardy! while the world watched on live TV. Even so, was Watson intelligent, or just really good at a given task as its predecessors had been? The bar for smart had moved yet again.

Passing the Turing Test: Are We Near the Finish Line?

The gold standard for AI — proof that a machine is able to match or exceed human intelligence in its various forms by mimicking the human ability to discover, infer and reason — was established in 1950 by Alan Turing, widely considered the father of theoretical computer science and AI. The Turing Test involved having a person communicate with another human and a machine. If that person was unable to distinguish through written messages whether they were conversing with the other person or the computer, the computer would be considered intelligent.

This elegant test incorporated many elements of what we consider intelligence: natural language processing, general knowledge across a wide variety of subjects, flexibility and creativity, and a certain social intelligence that we all possess, but may take for granted in personal communications until we encounter a system that lacks it. Surely, a computer that can simulate human behavior and knowledge to the extent that a neutral observer could not tell the difference would be the realization of the AI dream — finish line crossed.

That was the conventional wisdom until 2014, when a computer managed to fool 33 percent of evaluators into thinking they were talking to a 13-year-old Ukrainian boy. Surely, this achievement would have convinced most people that AI was finally here now that a machine had passed the iconic Turing Test, right? Nope — you guessed it — the bar for smart had moved.

How AI for Cybersecurity Is Raising the Bar

Now, we have systems doing what was previously unthinkable, but there is still a sense that we’ve yet to see the full potential of AI for cybersecurity. The good news is that we now have systems like Watson that can do anything from recommending treatment for some of the most intractable cancer cases to detecting when your IT systems are under attack, by whom and to what extent. Watson for Cybersecurity can do the latter today by drawing on knowledge it has gleaned from reading millions of unstructured documents and applying that learning to the precise details of a particular IT environment. Better still, it does all this with the sort of speed even the most experienced security experts could only dream of.

Does it solve all the problems of a modern security operations center (SOC)? Of course not. We still need human intelligence and insight to guide the process, make sense of the results and devise appropriate responses that account for ethical dilemmas, legal considerations, business priorities and more. However, the ability to reduce the time for investigations from a few hours to a few minutes can be a game changer. There’s still much more to be done with AI for cybersecurity, but one thing’s for sure: We have, once again, raised the bar for smart.

The post With AI for Cybersecurity, We Are Raising the Bar for Smart appeared first on Security Intelligence.

Author: Jeff Crume

Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), Cognitive Security, Data Management, Data Privacy, Governance, Internet of Things (IoT), Security Strategy, Security Technology,

How CISOs Can Facilitate the Advent of the Cognitive Enterprise

Just as organizations are getting more comfortable with leveraging the cloud, another wave of digital disruption is on the horizon: artificial intelligence (AI), and its ability to drive the cognitive enterprise.

In early 2019, the IBM Institute for Business Value (IBV) released a new report titled, “The Cognitive Enterprise: Reinventing your company with AI.” The report highlights key benefits and provides a roadmap to becoming a cognitively empowered enterprise, a term used to indicate an advanced digital enterprise that fully leverages data to drive operations and push its competitiveness to new heights.

Such a transformation is only possible with the extensive use of AI in business and technology platforms to continuously learn and adapt to market conditions and customer demand.

CISOs Are Key to Enabling the Cognitive Enterprise

The cognitive enterprise is an organization with an unprecedented level of convergence between technology, business processes and human capabilities, designed to achieve competitive advantage and differentiation.

To enable such a change, the organization will need to leverage more advanced technology platforms and must no longer be limited to dealing only with structured data. New, more powerful business platforms will enable a competitive advantage by combining data, unique workflows and expertise. Internal-facing platforms will drive more efficient operations while external-facing platforms will allow for increased cooperation and collaboration with business partners.

Yet these changes will also bring along new types of risks. In the case of the cognitive enterprise, many of the risks stem from the increased reliance on technology to power more advanced platforms — including AI and the internet of things (IoT) — and the need to work with a lot more data, whether it’s structured, unstructured, in large volume or shared with partners.

As the trusted adviser of the organization, the chief information security officer (CISO) has a strong role to play in enabling and securing the organization’s transformation toward:

  • Operational agility, powered in part by the use of new and advanced technologies, such as AI, 5G, blockchain, 3D printing and the IoT.

  • Data-driven decisions, supported by systems able to recognize and provide actionable insights based on both structured and unstructured data.

  • Fluid boundaries with multiple data flows going to a larger ecosystem of suppliers, customers and business partners. Data is expected to be shared and accessible to all relevant parties.

Figure: The relationship between data, processes, people, outside forces and internal drivers (automation, blockchain, AI). Source: IBM Institute for Business Value (IBV) analysis.

Selection and Implementation of Business Platforms

Among the major tasks facing organizations embarking on this transformation is the need to choose and deploy new mega-systems, equivalent to the monumental task of switching enterprise resource planning (ERP) systems — or, in some cases, actually making the switch.

The choice of a new platform will impact many areas across the enterprise, including HR and capital allocation processes, in addition to the obvious impact on how the business delivers value via its product or service. Yet, as the IBM IBV report points out, the benefits can be significant. Leading organizations have been able to deliver higher revenues — as high as eight times the average — by adopting new business and technology platforms and fully leveraging all their data, both structured and unstructured.

That said, having large amounts of data doesn’t automatically translate into an empowered organization. As the report cautions, organizations can no longer simply “pour all their data into a data lake and expect everyone to go fishing.” The right digital platform choice can empower the organization to deliver enhanced profits or squeeze additional efficiency, but only if the data is accurate and can be readily accessed.

Once again, the CISO has an important role to play in ensuring the organization has considered all the implications of implementing a new system, so governance will be key.

Data Governance — When Security and Privacy Converge

For the organization to achieve the level of trust needed to power cognitive operations, the CISO will need to drive conversations and choices about the security and privacy of sensitive data flowing across the organization. Beyond the basic tenets of confidentiality, integrity and availability, the CISO will need to be fully engaged on data governance, ensuring data is accurate and trustworthy. For data to be trusted, the CISO will need to review and guarantee the data’s provenance and lineage. Yet the report mentions that, for now, fewer than half of organizations have developed “a systemized approach to data curation,” so there is much progress to be made.

Organizations will need to balance larger amounts of data — several orders of magnitude larger — with greater access to this data by both humans and machines. They will also need to balance security with seamless customer and employee experiences. To handle this data governance challenge, CISOs must ensure the data flows with external partners are frictionless yet also provide security and privacy.

AI Can Enable Improved Cybersecurity

The benefits of AI aren’t limited to the business side of the organization. In 2016, IBM quickly recognized the benefits cognitive security could bring to organizations that leverage artificial intelligence in the cybersecurity domain. As attackers explore more advanced and more automated attacks, organizations simply cannot afford to rely on slow, manual processes to detect and respond to security incidents. Cognitive security will enable organizations to improve their ability to prevent and detect threats, as well as accelerate and automate responses.

Leveraging AI as part of a larger security automation and orchestration effort has clear benefits. The “2018 Cost of a Data Breach Study,” conducted by Ponemon Institute, found that security automation decreases the average total cost of a data breach by around $1.55 million. By leveraging AI, businesses can find threats up to 60 times faster than via manual investigations and reduce the amount of time spent analyzing each incident from one hour to less than one minute.

Successful Digital Transformation Starts at the Top

Whether your organization is ready to embark on the journey to becoming a cognitive enterprise or simply navigating through current digital disruption, the CISO is emerging as a central powerhouse of advice and strategy regarding data and technology, helping choose an approach that enables security and speed.

With the stakes so high — and rising — CISOs should get a head start on crafting their digital transformation roadmaps, and the IBM IBV report is a great place to begin.

The post How CISOs Can Facilitate the Advent of the Cognitive Enterprise appeared first on Security Intelligence.

Author: Christophe Veltsos

Artificial intelligence, Artificial Intelligence (AI), Authentication, Automation, Biometric Security, Blockchain, cryptocurrency, Machine Learning, Social Engineering, Threat Detection,

Don’t Believe Your Eyes: Deepfake Videos Are Coming to Fool Us All

In 2017, an anonymous Reddit user under the pseudonym “deepfakes” posted links to pornographic videos that appeared to feature famous mainstream celebrities. The videos were fake. And the user created them using off-the-shelf artificial intelligence (AI) tools.

Two months later, Reddit banned the deepfakes account and related subreddit. But the ensuing scandal revealed a range of university, corporate and government research projects under way to perfect both the creation and detection of deepfake videos.

Where Deepfakes Come From (and Where They’re Going)

Deepfakes are created using AI technology called generative adversarial networks (GANs), which can be used broadly to create fake data that can pass as real data. To oversimplify how GANs work, two machine learning (ML) algorithms are pitted against each other. One creates fake data and the other judges the quality of that fake data against a set of real data. They continue this contest at massive scale, continually getting better at making fake data and judging it. When both algorithms become extremely good at their respective tasks, the product is a set of high-quality fake data.
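
To make the contest described above concrete, here is a heavily simplified sketch, assuming PyTorch is available; it is an illustration only, since real deepfake systems work on images and are vastly larger. A tiny generator learns to mimic samples from a one-dimensional “real” distribution while a discriminator learns to judge real samples from generated ones.

```python
# Minimal GAN sketch (illustration only; assumes PyTorch). The "real" data is a 1-D
# Gaussian; the generator learns to produce samples the discriminator cannot
# distinguish from it.
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 1.5 + 4.0   # the authentic data set

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # makes fake data
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # judges it
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the judge: real samples should score 1, generated samples 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the forger: make the judge score generated samples as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should have roughly the same mean as the real data.
print(real_data(1000).mean().item(), G(torch.randn(1000, 8)).mean().item())
```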

In the case of deepfakes, the authentic data set consists of hundreds or thousands of still photographs of a person’s face. This gives the algorithm a wide selection of images, showing the face from different angles and with different facial expressions, to choose from and judge against as it experimentally builds the video during the learning phase.

Carnegie Mellon University scientists even figured out how to impose the style of one video onto another using a technique called Recycle-GAN. Instead of convincingly replacing someone’s face with another, the Recycle-GAN process enables the target to be used like a puppet, imitating every head movement, facial expression and mouth movement in exactly the same way as the source video. This process is also more automated than previous methods.

Most of these videos today are either pornography featuring celebrities, satire videos created for entertainment or research projects showing rapidly advancing techniques. But deepfakes are likely to become a major security concern in the future. Today’s security systems rely heavily on surveillance video and image-based biometric security. Since the majority of breaches occur because of social engineering-based phishing attacks, it’s certain that criminals will turn to deepfakes for this purpose.

Deepfake Videos Are Getting Really Good, Really Fast

The earliest publicly demonstrated deepfake videos tended to show talking heads, with the subjects seated. Now, full-body deepfakes developed in separate research projects at Heidelberg University and the University of California, Berkeley are able to transfer the movements of one person to another. One form of authentication involves gait analysis. These kinds of full-body deepfakes suggest that the gait of an authorized person could be transferred in video to an unauthorized person.

Here’s another example: Many cryptocurrency exchanges authenticate users by making them photograph themselves holding up their passport or some other form of identification as well as a piece of paper with something like the current date written on it. This can be easily foiled with Photoshop. Some exchanges, such as Binance, found many attempts by criminals to access accounts using doctored photos, so they and others moved to video instead of photos. Security analysts worry that it’s only a matter of time before deepfakes will become so good that neither photos nor videos like these will be reliable.

The biggest immediate threat for deepfakes and security, however, is in the realm of social engineering. Imagine a video call or message that appears to be your work supervisor or IT administrator, instructing you to divulge a password or send a sensitive file. That’s a scary future.

What’s Being Done About It?

Increasingly realistic deepfakes have enormous implications for fake news, propaganda, social disruption, reputational damage, evidence tampering, evidence fabrication, blackmail and election meddling. Another concern is that the perfection and mainstreaming of deepfakes will cause the public to doubt the authenticity of all videos.

Security specialists, of course, will need to have such doubts as a basic job requirement. Deepfakes are a major concern for digital security specifically, but also for society at large. So what can be done?

University Research

Some researchers say that analyzing the way a person in a video blinks, or how often they blink, is one way to detect a deepfake. In general, deepfakes show insufficient or even nonexistent blinking, and the blinking that does occur often appears unnatural. Breathing is another movement usually absent from deepfakes, and hair is another giveaway (it often looks blurry or painted on).

Researchers from the State University of New York (SUNY) at Albany developed a deepfake detection method that uses AI technology to look for natural blinking, breathing and even a pulse. It’s only a matter of time, however, before deepfakes make these characteristics look truly “natural.”

Government Action

The U.S. government is also taking precautions: Congress could consider a bill in the coming months to criminalize both the creation and distribution of deepfakes. Such a law would likely be challenged in court as a violation of the First Amendment, and would be difficult to enforce without automated technology for identifying deepfakes.

The government is working on the technology problem, too. The National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA) and Intelligence Advanced Research Projects Agency (IARPA) are looking for technology to automate the identification of deepfakes. DARPA alone has reportedly spent $68 million on a media forensics capability to spot deepfakes, according to CBC.

Private Technology

Private companies are also getting in on the action. A new cryptographic authentication tool called Amber Authenticate can run in the background while a device records video. As reported by Wired, the tool generates hashes — “scrambled representations” — of the data at user-determined intervals, which are then recorded on a public blockchain. If the video is manipulated in any way, the hashes change, alerting the viewer to the probability that the video has been tampered with. A dedicated player feature shows a green frame for portions of video that are faithful to the original, and a red frame around video segments that have been altered. The system has been proposed for police body cams and surveillance video.
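
As a toy illustration of the general idea, and not a description of Amber Authenticate’s actual implementation, the sketch below hashes fixed-size segments of a video file and compares them against previously recorded hashes; the file name and segment size are hypothetical, and in a real deployment the recorded hashes would live on a tamper-evident store such as a public blockchain.

```python
# Toy sketch of segment-hashing a video file (hypothetical file name and segment size).
import hashlib

def segment_hashes(path, segment_bytes=1_000_000):
    """Return one SHA-256 hash per fixed-size segment of the file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(segment_bytes):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

# Hashes computed at record time would be written to a tamper-evident store;
# at playback time, any segment whose hash no longer matches is flagged as altered.
recorded = segment_hashes("bodycam.mp4")   # hypothetical recording
playback = segment_hashes("bodycam.mp4")   # re-hash the copy being played back
tampered = [i for i, (a, b) in enumerate(zip(recorded, playback)) if a != b]
print("tampered segments:", tampered)
```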

A similar approach was taken by a company called Factom, whose blockchain technology is being tested for border video by the Department of Homeland Security (DHS), according to Wired.

Security Teams Should Prepare for Anything and Everything

The solution to deepfakes may lie in some combination of education, technology and legislation — but none of these will work without the technology part. Because when deepfakes get really good, as they inevitably will, only machines will be able to tell the real videos from the fake ones. This deepfake technology is coming, but nobody knows when. We should also assume that an arms race will arise with malicious deepfake actors inventing new methods to overcome the latest detection systems.

Security professionals need to consider the coming deepfake wars when analyzing future security systems. If they’re video or image based — everything from facial recognition to gait analysis — additional scrutiny is warranted.

In addition, you should add video to the long list of media you cannot trust. Just as training programs and digital policies make clear that email may not come from who it appears to come from, video will need to be met with similar skepticism, no matter how convincing the footage. Deepfake technology will also inevitably be deployed for blackmail, used to extract sensitive information from companies and individuals.

The bottom line is that deepfake videos that are indistinguishable from authentic videos are coming, and we can scarcely imagine what they’ll be used for. We should start preparing for the worst.

The post Don’t Believe Your Eyes: Deepfake Videos Are Coming to Fool Us All appeared first on Security Intelligence.

Author: Mike Elgan

Artificial Intelligence (AI), CISO, Cloud, Cloud Security, Connected Devices, Cyberattacks, Data Privacy, Data Protection, Healthcare, healthcare security, himss, Incident Response (IR), Information Sharing, Quantum Computing, Risk Management, Security Conferences, Threat Response, Watson, X-Force,

Recapping IBM Think 2019 and HIMSS19: The Shared Landscape of Global Security

With IBM Think 2019 and HIMSS19 in the books, it’s worth making time for a quick debrief. Which topics resonated the most with attendees? Where did conference themes and discussions overlap? And what’s on the horizon for global cybersecurity this year and beyond?

Key Takeaways From Think 2019 and HIMSS19

According to IBM CEO, President and Chairman Ginni Rometty in her Think opening address, “chapter two” of digital transformation has arrived. For Rometty, this next chapter is scalable, driven by artificial intelligence (AI) and embedded across the enterprise. But without information architecture, she noted, “there is no AI.”

Trust underpins every aspect of effective digital transformation. This ties into IBM’s biggest push during the conference: Watson Anywhere. Built on the open-source orchestration engine Kubernetes, the microservices-based Watson Anywhere empowers organizations to run AI across the cloud environment of their choice, in effect democratizing AI technology to meet consumers along the path of their digital transformation journey — wherever they may be.

HIMSS19, meanwhile, had a clear focus on patient data, specifically the development of interoperability rules that prevent data blocking and empower effective information sharing. But there was also significant overlap with IBM’s initiatives; as Healthcare Dive reported, cloud and AI innovations were on full display at the Orlando event. Even more telling was the conference’s tag line, “Champions of Health Unite,” which speaks to the democratization and rapid uptake of healthcare technology, in turn allowing patients to manage their own healthcare experiences.

Hot Topics in San Francisco and Orlando

In San Francisco, IBM thought leaders, innovators and industry front-runners provided hundreds of great sessions for attendees, covering topics from AI acceleration to quantum computing and innovative security. Highlights included:

  • Accelerating the Journey to AI — While 80 percent of organizations recognize the strategic potential of AI, just 19 percent understand what’s required to convert potential into profitability. State of New Jersey Judiciary CIO Jack McCarthy was joined by IBM Cloud and Cognitive Software Senior Vice President Arvind Krishna and other experts to help attendees develop a prescriptive approach to AI development across any cloud.
  • Innovation Doesn’t Happen Without Security. And Security Needs Innovation — Global security challenges demand innovative technologies capable of doing more than responding to threats as they occur. But the innovation required to stay ahead of your competition isn’t possible without a solid security foundation. In this session, IBM Security General Manager Mary O’Brien, Westfield Insurance CISO Kevin Baker and former professional racecar driver Danica Patrick tackled the cyclical challenge of security, innovation and IT evolution.
  • The Journey to Cloud Community CrowdChat — In a more free-form session, the #Think2019 conference community CrowdChat tackled the challenge of cloud transition. According to Silicon Angle, chat participants highlighted both emerging needs for cloud-native tools capable of delivering “unprecedented flexibility” and commensurate security practices that drive both effective application development and DevOps processes.
  • Access the Future Today: Quantum Computing — While quantum computing has largely been confined to high-level enterprise use, this IBM session — led by Dr. Dario Gil, director of IBM Research — spoke to the development of road maps for mainstream adoption of quantum computing and how businesses could benefit from quantum solutions in the near term.

At HIMSS, meanwhile, hot conference topics included:

  • Patient-Centric Health Information Exchange — Disparate health information management systems are causing problems for physicians and patients alike. In this session, IBM Blockchain Solutions Architect Shahryar Sedghi and AT&T Director of Healthcare Solutions Thyge Knuhtsen helped define the requirements for patient-centric healthcare interoperability resources that leverage tools such as blockchain to “liberate” personal healthcare data.
  • Combating Cyberattacks with a Security Residency — Jennifer Kady, director of IBM Security solutions for the U.S. public sector, tackled the increasing risk of cybersecurity incidents with a new solution: security “residencies” that help train healthcare IT teams to effectively respond in the event of an attack.
  • Mitigating the Next Generation of Risk: Connected Medical Devices — The use of connected medical devices is on the rise, but just 51 percent of device manufacturers follow FDA guidance to mitigate risks. This session focused on the development of programmatic, end-to-end security approaches to secure both IT assets and medical devices.
  • Reactions from the Field: AI — Three industry leaders came together for a discussion of healthcare AI in the field. What’s working, what isn’t and what needs to change? From streamlining workflows to eliminating repetitive tasks, cloud-based AI has real potential for healthcare if companies can leverage clean, normalized “good data” to make accurate predictions and take critical action.

The Future of Global Security

Cybersecurity is now a serious global concern. For healthcare organizations, this is reflected in the $1.4 million it costs to recover from “average” cyberattacks, according to HealthITSecurity, and worrisome data from Proofpoint that shows health-focused email attacks are up 473 percent over the last two years. For IBM, AI-driven digital transformations aren’t possible without the solid foundation of innovative security and consumer trust.

Taken together, the topics and keynotes from both conferences suggest three emerging trends for cybersecurity in 2019:

  • Intelligence-driven response — Innovation drives success, and security is no exception. The rise of any-cloud AI makes innovative, intelligence-led incident response (IR) an attainable goal, and one that will quickly become necessary as threat actors leverage their own versions of AI to compromise global targets.
  • Personalized accountability — Patient healthcare data is an incredibly valuable resource. While the shift to “unblocked” data offers more granular control for patients and caregivers alike, it also speaks to the need for increased accountability; from connected devices to security readiness, enterprises must be prepared to defend data both at scale and in-situ.
  • Open data defense — Interoperability is critical for healthcare data, and data sharing is paramount for advanced AI systems. As data becomes more “open,” organizations must leverage advanced solutions such as quantum computing and IBM X-Force residencies to help defend this critical resource.

We’re only a few months into the year, but HIMSS19 and Think 2019 have already helped shape this year’s focus on enterprise transformation, innovation and global cybersecurity.

The post Recapping IBM Think 2019 and HIMSS19: The Shared Landscape of Global Security appeared first on Security Intelligence.

Author: Douglas Bonderud

Artificial Intelligence (AI), Business Email Compromise (BEC), Credentials Theft, Data Protection, email, Network, Network Security, Phishing, Phishing Attacks, Risk Management, Security Awareness, Security Training, Social Engineering, Social networks, spear-phishing, Threat Detection,

Workplace Expectations and Personal Exceptions: The Social Flaws of Email Security

Even though they’ve been around for quite some time, phishing attacks continue to climb. According to Proofpoint’s 2019 “State of the Phish Report,” 83 percent of businesses experienced a phishing attack and 64 percent of security professionals encountered spear phishing threats in 2018. New vectors are also emerging: As noted by Forbes, software-as-a-service (SaaS) credential theft, messaging app attacks and malicious link embedding within shared files are all on the horizon for 2019.

The data raises the question: What’s wrong with email security? For years, thought leadership articles and information security experts alike have been recommending commonsense best practices that should curtail email attack efforts. Don’t click on unknown links. Don’t open unsolicited attachments. Use automated detection tools. And yet phishers are hauling in bigger catches than ever before, expanding their operations to include new threats and grab more data.

I believe the problem is tied to phishing’s fundamental premise: Social barriers are far easier to break than their technological counterparts. By exploiting critical social flaws — specifically, workplace expectations and personal exceptions — attackers can gain the upper hand.

Email Still Reigns Supreme

Despite recent challenges from up-and-comers such as social messaging apps and unified collaboration tools, email still reigns supreme in the workplace. As noted by CMS Wire, “There appears to be a general consensus that while social networks are useful to achieve work-related goals, email remains the undisputed communications tool in the enterprise.”

Email is timely and transparent — users can quickly send and receive information while creating a digital paper trail. Unlike some messaging apps, users can include attachments and draft longer responses and, since email exists outside of most collaboration continuums, employees can temporarily take a break from their inbox.

But that’s not the whole story. For better or worse, corporate email itself is a kind of social network. As Nathan Schneider, a professor of media studies at the University of Colorado, told The New York Times, “Email is the most resilient social network on the internet.” While it lacks the bells and whistles of social media platforms and the intimacy of face-to-face communication, email has evolved its own set of social rules around usage, etiquette and response times. For example, users are expected to create clear subject lines, reply to all emails (even if received in error), limit the amount of humor and restrict the use of punctuation such as exclamation marks, as noted by Inc.

The rise of interactive business email compromise (BEC) attacks also speaks to the social nature of email. New BECs don’t start with malicious payloads, but instead leverage short social messages to draw employees into replying, building a convincing, albeit fake, interactive dialogue before dropping infected documents.

Simply put, email is the biggest, most used social network in the enterprise — and that’s not changing anytime soon.

The Psychology of Urgent Requests

The fundamentally social nature of email leads us to our first security issue: expectations.

Consider common phishing security advice that warns against emails marked “urgent” or “DO NOW.” Why the focus? Because humans are naturally conditioned to meet social norms and feel substantial pressure to conform. According to the Harvard Business Review, “Throughout our careers, we are taught to conform — to the status quo, to the opinions and behaviors of others, and to information that supports our views.” What’s more, as noted by Psychology Today, this conformity is accelerated in a small group setting — such as a corporate team or enterprise department — and further enhanced, according to Psych Central, by neurotransmitters such as dopamine that are produced when humans are part of a social group.

As a result, when it comes to well-written phishing emails that are purportedly coming from CEOs or HR managers, staff are preconditioned to reply ASAP with requested information — even if they’ve had previous security training. Social pressure almost invariably trumps learned email security.

It Won’t Happen to Me!

While socially driven email networks increase the likelihood of faux-insider messages getting through the security chain, what about outside attacks? Much time and attention has been devoted to educating employees about the telltale signs of external phishing attempts, such as emails purportedly from financial institutions, government agencies or new business contacts.

Here, another facet of human social interaction is at work: Our natural disposition to believe we’re better than everyone else. It’s called the superiority illusion and, as noted by Scientific American, causes most people to think they’re better than average at most things, such as the ability to spot and prevent phishing attacks.

Since it’s impossible for the majority of people to be above average, the result is that advanced spam and phishing campaigns that make it past initial defenses may get overlooked by overconfident employees who assume they would recognize any sign of these attacks. It’s the old “it won’t happen to me” argument: Users presume they’ve got all the knowledge they need to spot attacks and if they’re victimized, there’s no way anyone could have seen it coming.

Evolve Your Email Security Strategy

What does this mean for companies looking to prevent phishing attacks?

First, there’s no need to ditch current security training. But, as CSO Online pointed out, it’s also a good idea to educate users on how not to craft an email. Don’t be your own worst enemy by sending unexpected, hastily typed emails with “URGENT” in the subject line.

Fundamental shifts in email security, however, require a rethinking of current best practices. To handle social expectation issues, companies must adopt top-down cultural change that prioritizes safety over speed. This is easier said than done when CEOs need hard data for stakeholders or chief financial officers (CFOs) are handling financial fluctuations in real-time, but giving staff time to double-check message origins and intentions before replying goes a long way toward reducing the number of reeled-in employees.

For security professionals, this means developing the ability to present potential phishing losses as line-of-business issues. In practice, this requires leading with context: How are current security issues impacting strategic objectives such as cost savings, customer confidence and regional performance? This can help shore up the notion that time lost to double-checking email requests via phone calls, face-to-face meetings or other methods is preferable to the monetary loss associated with successful attack campaigns.

Dealing with exceptional behavior, meanwhile, starts with a layered email security approach that eliminates obvious phishing attempts before they hit inboxes. Another key component of this defensive strategy is artificial intelligence (AI). AI-based tools capable of analyzing enterprise communication patterns and spotting inconsistencies already exist. Making them applicable to “above-average” phishing finders means leveraging a kind of low-key notification process, in turn aligning with user beliefs about their own ability to recognize phishing attempts.

Address the Human Components of Phishing

Email remains the top enterprise communication method and the obvious choice for attackers looking to compromise business networks. While current email security solutions can help mitigate phishing impacts, companies must recognize the role of corporate email as a social network to address the critical human components of this risk: social expectation and the superiority exception.

The post Workplace Expectations and Personal Exceptions: The Social Flaws of Email Security appeared first on Security Intelligence.

Author: Douglas Bonderud

Advanced Threats, Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), Data Breaches, Risk Management, Security Costs, Security Intelligence & Analytics, Security Products, Security Strategy, Skills Gap, Threat Detection, Zero-Day Attacks,

Are Applications of AI in Cybersecurity Delivering What They Promised?

Many enterprises are using artificial intelligence (AI) technologies as part of their overall security strategy, but results are mixed on the post-deployment usefulness of AI in cybersecurity settings.

This trend is supported by a new white paper from Osterman Research titled “The State of AI in Cybersecurity: The Benefits, Limitations and Evolving Questions.” According to the study, which included responses from 400 organizations with more than 1,000 employees, 73 percent of organizations have implemented security products that incorporate at least some level of AI.

However, 46 percent agree that rules creation and implementation are burdensome, and 25 percent said they do not plan to implement additional AI-enabled security solutions in the future. These findings may indicate that AI is still in the early stages of practical use and its true potential is still to come.

How Effective Is AI in Cybersecurity?

“Any ITDM should approach AI for security very cautiously,” said Steve Tcherchian, chief information security officer (CISO) and director of product at XYPRO Technology. “There are a multitude of security vendors who tout AI capabilities. These make for great presentations, marketing materials and conversations filled with buzz words, but when the rubber meets the road, the advancement in technology just isn’t there in 2019 yet.”

The marketing Tcherchian refers to has certainly drummed up considerable attention, but AI may not yet be delivering enough when it comes to measurable results for security. Respondents to the Osterman Research study noted that the AI technologies they have in place do not help mitigate many of the threats faced by enterprise security teams, including zero-day and advanced threats.

Still Work to Do, but Promise for the Future

While applications of artificial intelligence must still mature for businesses to realize their full benefits, many in the industry still feel the technology offers promise for a variety of applications, such as improving the speed of processing alerts.

“AI has a great potential because security is a moving target, and fixed rule set models will always be evaded as hackers are modifying their attacks,” said Marty Puranik, CEO of Atlantic.Net. “If you have a device that can learn and adapt to new forms of attacks, it will be able to at least keep up with newer types of threats.”

Research from the Ponemon Institute predicted several benefits of AI use, including cost-savings, lower likelihood of data breaches and productivity enhancements. The research found that businesses spent on average around $3 million fighting exploits without AI in place. Those who have AI technology deployed spent an average of $814,873 on the same threats, a savings of more than $2 million.

Help for Overextended Security Teams

AI is also being considered as a potential point of relief for the cybersecurity skills shortage. Many organizations are pinched to find the help they need in security, with Cybersecurity Ventures predicting the skills shortage will increase to 3.5 million unfilled cybersecurity positions by 2021.

AI can help security teams increase efficiency by quickly making sense of all the noise from alerts. This could prove to be invaluable because at least 64 percent of alerts per day are not investigated, according to Enterprise Management Associates (EMA). AI, in tandem with meaningful analytics, can help determine which alerts analysts should investigate and discern valuable information about what is worth prioritizing, freeing security staff to focus on other, more critical tasks.

“It promises great improvements in cybersecurity-related operations, as AI releases security engineers from the necessity to perform repetitive manual processes and provides them with an opportunity and time to improve their skills, learn how to use new tools, technologies,” said Uladzislau Murashka, a certified ethical hacker (CEH) at ScienceSoft.

Note that while AI offers the potential for quicker, more efficient handling of alerts, human intervention will continue to be critical. Applications of artificial intelligence will not replace humans on the security team anytime soon.

Paving an Intelligent Path Forward

It’s important to consider another group that is investing in AI technology and using it for financial gains: cybercriminals. Along with enterprise security managers, those who make a living by exploiting sensitive data also understand the potential AI has for the future. It will be interesting to see how these capabilities play out in the future cat-and-mouse game of cybersecurity.

AI in cybersecurity is still in the early stages of its evolution, and its potential has yet to be fully realized. As security teams continue to invest in and develop AI technologies, these capabilities will someday be an integral part of cyberdefense.

The post Are Applications of AI in Cybersecurity Delivering What They Promised? appeared first on Security Intelligence.

Author: Joan Goodchild

Artificial Intelligence (AI), Machine Learning, Security Intelligence & Analytics, Threat Intelligence, Threat Monitoring,

Now That You Have a Machine Learning Model, It’s Time to Evaluate Your Security Classifier

This is the third installment in a three-part series about machine learning. Be sure to read part one and part two for more context on how to choose the right artificial intelligence solution for your security problems.

As we move into this third part, we hope we have helped our readers better identify an artificial intelligence (AI) solution and select the right algorithm to address their organization’s security needs. Now, it’s time to evaluate the effectiveness of the machine learning (ML) model being used. But with so many metrics and systems available to measure security success, where does one begin?

Classification or Regression? Which Can Get Better Insights From Your Data?

By this time, you may have selected an algorithm of choice to use with your machine learning solution. It will generally fall into one of two categories: classification or regression. As a reminder, the two differ in practice: from a security standpoint, these types of algorithms tend to solve different problems. For example, a classifier might be used as an anomaly detector, which is often the basis of the new generation of intrusion detection and prevention systems. Meanwhile, a regression algorithm might be better at things such as detecting denial-of-service (DoS) attacks because these problems tend to involve numbers rather than nominal labels.

At first look, the difference between classification and regression might seem complicated, but it really isn’t. It just comes down to what type of value our target variable, also called our dependent variable, contains. In that sense, the main difference between the two is that the output variable in regression is numerical, while the output for classification is categorical/discrete.

For our purposes in this blog, we’ll focus on metrics that are used to evaluate algorithms applied to supervised ML. For reference, supervised machine learning is the form of learning where we have complete labels and a ground truth. For example, we know that the data can be divided into class1 and class2, and each of our training, validation, and testing samples is labeled as belonging to class1 or class2.

Classification Algorithms – or Classifiers

To have ML work with data, we can select a security classifier, which is an algorithm whose class value is non-numeric. We want this algorithm to look at data and classify it into predefined data “classes,” usually two or more discrete categories of the dependent variable.

For example, we might try to classify something as an attack or not an attack. We would create two labels, one for each of those classes. A classifier then takes the training set and tries to learn a “decision boundary” between the two classes. There could be more than two classes, and in some cases only one class. For example, the Modified National Institute of Standards and Technology (MNIST) database demo tries to classify an image as one of the ten possible digits from hand-written samples. This demo is often used to show the abilities of deep learning, as the deep net can output probabilities for each digit rather than one single decision. Typically, the digit with the highest probability is chosen as the answer.
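
As a minimal sketch of a classifier in code (assuming scikit-learn and its bundled 8x8 digits data set, a small cousin of MNIST), the snippet below learns a decision boundary between the digit classes and outputs both hard labels and per-class probabilities:

```python
# Train a simple classifier on handwritten digits and inspect its predictions.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)   # 8x8 images of digits, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=5000)          # learns a decision boundary between classes
clf.fit(X_train, y_train)

print(clf.predict(X_test[:5]))                   # hard class labels
print(clf.predict_proba(X_test[:1]).round(3))    # per-class probabilities; highest wins
```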

A Regression Algorithm – or Regressor

A Regression algorithm, or regressor, is used when the target variable is a number. Think of a function in math: there are numbers that go into the function and there is a number that comes out of it. The task in Regression is to find what this function is. Consider the following example:

Y = 3x+9

We will now find ‘Y’ for various values of ‘X’. Therefore:

X = 1 -> y = 12

X = 2 -> y = 15

X = 3 -> y = 18

The regressor’s job is to figure out what the function is by relying on the values of X and Y. If we give the algorithm enough X and Y values, it will hopefully find the function 3x+9.
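
As a quick sketch of that idea (assuming scikit-learn), the snippet below feeds a regressor pairs of X and Y values generated from y = 3x + 9 and checks that it recovers the coefficient and the intercept:

```python
# Fit a regressor to samples of y = 3x + 9 and recover the underlying function.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(1, 101).reshape(-1, 1)   # values of X
y = 3 * X.ravel() + 9                  # the "hidden" function the regressor should find

reg = LinearRegression().fit(X, y)
print(reg.coef_[0], reg.intercept_)    # ~3.0 and ~9.0
print(reg.predict([[4]]))              # ~21.0, i.e. 3*4 + 9
```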

We might want to do this in cases where we need to calculate the probability of an event being malicious. Here, we do not want a classification, as the results are not fine-grained enough. Instead, we want a confidence or probability score. So, for example, the algorithm might provide the answer that “there is a 47 percent probability that this sample is malicious.”

In the next section, we will look at the various metrics for classification and for regression that can help us determine how effectively our chosen ML model supports our security posture.

Metrics for Classification

Before we dive into common classification metrics, let’s define some key terms:

  • Ground truth is a set of known labels or descriptions of which class or target variable represents the correct solution. In a binary classification problem, for instance, each example in the ground truth is labeled with the correct classification. This mirrors the training set, where we have known labels for each example.
  • Predicted labels represent the classifications that the algorithm believes are correct. That is, the output of the algorithm.

Now let’s take a closer look at some of the most useful metrics against which we can choose to measure the success of our machine learning deployment.

True Positive Rate

The true positive rate (TPR) is the ratio of correctly predicted positive examples to the total number of positive examples in the ground truth. Suppose the ground truth contains 100 examples, 70 of which belong to the positive class and 30 to the negative class. If the model correctly predicts 65 of the 70 positives, then the true positive rate is 65/70, or roughly 93 percent, sometimes written as 0.93.

False Positive Rate

The false positive rate (FPR) is the ratio of examples incorrectly predicted as positive by the algorithm (they are actually negative in the ground truth) to the total number of negative examples. In our running example, 15 of the 30 negative examples are incorrectly predicted as positive, so the false positive rate is 15/30, or 50 percent, sometimes written as 0.5.

True Negative Rate

The true negative rate (TNR) is the number of correctly predicted negative examples divided by the total number of negative examples in the ground truth. In our scenario, the remaining 15 of the 30 negative examples are correctly predicted as negative, so the true negative rate is also 15/30, or 50 percent (0.5). Notice that the 15 false positives and the 15 true negatives together account for all 30 negative examples.

False Negative Rate

The false negative rate (FNR) is the ratio of positive examples incorrectly predicted as negative to the total number of positive examples in the ground truth. Continuing with the case above, the algorithm correctly predicted 65 of the 70 positive examples, which leaves 5 positives predicted as negative, so our false negative rate is 5/70, or roughly 7 percent (0.07). The false negative rate is the complement of the true positive rate, so the two metrics always sum to 1 (here, 0.93 + 0.07).
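
A quick sketch of these four rates for the running example (TP = 65, FP = 15, TN = 15, FN = 5), written in plain Python so the arithmetic is explicit:

```python
# Compute the four rates from the confusion-matrix counts of the running example.
TP, FP, TN, FN = 65, 15, 15, 5

tpr = TP / (TP + FN)   # 65/70 ≈ 0.93, share of actual positives caught
fpr = FP / (FP + TN)   # 15/30 = 0.50, share of actual negatives flagged as positive
tnr = TN / (TN + FP)   # 15/30 = 0.50, complement of the FPR
fnr = FN / (FN + TP)   # 5/70 ≈ 0.07, complement of the TPR

print(f"TPR={tpr:.2f} FPR={fpr:.2f} TNR={tnr:.2f} FNR={fnr:.2f}")
```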

Accuracy

Accuracy measures the proportion of correct predictions, both positive and negative, to the total number of examples in the ground truth; in our running example, that is (65 + 15)/100 = 0.8. This metric can often be misleading if, for instance, there is a large proportion of positive examples in the ground truth compared to the number of negative examples. In that case, a model that predicts only the positive class correctly can still score quite high on accuracy while telling you little about how well it handles the negative examples in the ground truth.

Accuracy = (TP+TN)/(TP+TN+FP+FN)

Precision

Before we explore the precision metric, it’s important to define a few more terms:

  • TP is the raw number of true positives (in the above example, the TP is 65).
  • FP is the raw number of false positives (15 in the above example).
  • TN is the raw number of true negatives (15 in the above example).
  • FN is the raw number of false negatives (5 in the above example).

Precision, sometimes known as the positive predictive value, is the proportion of true positives predicted by the algorithm over the sum of all examples predicted as positive. That is, precision=TP/(TP+FP).

In our example, there were 65 positives in the ground truth that the algorithm correctly labeled as positive. However, it also labeled 15 examples as positive when they were actually negative.

These false positives go into the denominator of the precision calculation. So, we get 65/(65+15), which yields a precision of 0.81.

What does this mean? In brief, high precision means that the algorithm returned far more true positives than false positives. In other words, it is a qualitative measure. The higher the precision, the better job the algorithm did of predicting true positives while rejecting false positives.

Recall

Recall, also known as sensitivity, is the ratio of true positives to true positives plus false negatives: TP/(TP+FN).

In our example, there were 65 true positives and 5 false negatives, giving us a recall of 65/(65+5) = 0.93. Recall is a quantitative measure; in a classification task, it is a measure of how well the algorithm “memorized” the training data.

Note that there is often a trade-off between precision and recall. In other words, it’s possible to optimize one metric at the expense of the other. In a security context, we may often want to optimize recall over precision because there are circumstances where we must predict all the possible positives with a high degree of certainty.

For example, in the world of automotive security, where kinetic harm may occur, it is often heard that false positives are annoying, but false negatives can get you killed. That is a dramatic example, but it can apply to other situations as well. In intrusion prevention, for instance, a false positive on a ransomware sample is a minor nuisance, while a false negative could cause catastrophic data loss.

However, there are cases that call for optimizing precision. If you are constructing a virus encyclopedia, for example, higher precision might be preferred when analyzing one sample since the missing information will presumably be acquired from another sample.

F-Measure

An F-measure (or F1 score) is defined as the harmonic mean of precision and recall. There is a generic F-measure, which includes a variable beta that causes the harmonic mean of precision and recall to be weighted.

Typically, the evaluation of an algorithm is done using the F1 score, meaning that beta is 1 and therefore the harmonic mean of precision and recall is unweighted. The term F-measure is used as a synonym for F1 score unless beta is specified.

The F1 score is a value between 0 and 1 where the ideal score is 1, and is calculated as 2 * Precision * Recall/(Precision+Recall), or the harmonic mean. This metric typically lies between precision and recall. If both are 1, then the F-measure equals 1 as well. The F1 score has no intuitive meaning per se; it is simply a way to represent both precision and recall in one metric.
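
As a minimal sketch (the hand calculations need only the four counts; the cross-check assumes scikit-learn is available), the snippet below computes precision, recall and the F1 score for the running example:

```python
# Precision, recall and F1 for TP=65, FP=15, TN=15, FN=5, checked against scikit-learn.
from sklearn.metrics import f1_score, precision_score, recall_score

TP, FP, TN, FN = 65, 15, 15, 5
precision = TP / (TP + FP)                          # 65/80 ≈ 0.81
recall = TP / (TP + FN)                             # 65/70 ≈ 0.93
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean ≈ 0.87
print(round(precision, 2), round(recall, 2), round(f1, 2))

# The same values from label vectors that reproduce the confusion counts.
y_true = [1] * 70 + [0] * 30
y_pred = [1] * 65 + [0] * 5 + [1] * 15 + [0] * 15
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred), f1_score(y_true, y_pred))
```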

Matthews Correlation Coefficient

The Matthews Correlation Coefficient (MCC), sometimes written as Phi, is a representation of all four values — TP, FP, TN and FN. Unlike precision and recall, the MCC takes true negatives into account, which means it handles imbalanced classes better than other metrics. It is defined as:

MCC=((TP*TN)–(FP*FN))/sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN))

If the value is 1, then the classifier and ground truth are in perfect agreement. If the value is 0, then the result of the classifier is no better than random chance. If the result is -1, the classifier and the ground truth are in perfect disagreement. If this coefficient seems low (below 0.5), then you should consider using a different algorithm or fine-tuning your current one.
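
A short sketch of the MCC for the running example, computed from the formula above and cross-checked against scikit-learn (an assumed dependency):

```python
# MCC for TP=65, FP=15, TN=15, FN=5.
from math import sqrt
from sklearn.metrics import matthews_corrcoef

TP, FP, TN, FN = 65, 15, 15, 5
mcc = ((TP * TN) - (FP * FN)) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
print(round(mcc, 3))   # ≈ 0.491, just below the 0.5 rule of thumb mentioned above

y_true = [1] * 70 + [0] * 30
y_pred = [1] * 65 + [0] * 5 + [1] * 15 + [0] * 15
print(round(matthews_corrcoef(y_true, y_pred), 3))   # should agree with the formula
```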

Youden’s Index

Also known as Youden’s J statistic, Youden’s index is the binary case of the general form of the statistic known as ‘informedness’, which applies to multiclass problems. It is calculated as (sensitivity + specificity – 1) and can be seen as the probability of an informed decision versus a random guess. In other words, it takes all four predictors into account.

Remember from our examples that recall = TP/(TP+FN) and that specificity, or TNR, is the complement of the FPR. Therefore, the Youden index incorporates all measures of predictors. If the value of Youden’s index is 0, then the probability of the decision actually being informed is no better than random chance. If it is 1, then both false positives and false negatives are 0.
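
For the running example, Youden’s index works out as follows (a plain-Python sketch):

```python
# Youden's J = sensitivity + specificity - 1 for TP=65, FP=15, TN=15, FN=5.
TP, FP, TN, FN = 65, 15, 15, 5

sensitivity = TP / (TP + FN)   # recall/TPR ≈ 0.93
specificity = TN / (TN + FP)   # TNR = 0.50
j = sensitivity + specificity - 1
print(round(j, 2))             # ≈ 0.43: better than chance (0), well short of perfect (1)
```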

Area Under the Receiver Operator Characteristic Curve

This metric, usually abbreviated as AUC or AUC-ROC, measures the area under the curve plotted with the true positive rate on the Y-axis and the false positive rate on the X-axis. It can be useful because it provides a single number that lets you compare models of different types, which enables researchers to make comparisons across experiments. An AUC value of 0.5 means the result of the test is essentially a coin flip; you want the AUC to be as close to 1 as possible.

Area Under the Precision Recall Curve

Area under the precision-recall curve (AUPRC) is a measurement that, like MCC, accounts for imbalanced class distributions. If there are far more negative examples than positive examples, you might want to use AUPRC as your metric and visual plot. The curve is precision plotted against recall, and the closer the area is to 1, the better. Note that this metric and plot work best when the positive class is the rarer one, so you might have to invert your labels for testing.
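
For the precision-recall curve, scikit-learn's average_precision_score is one common way to summarize the area; again, the data below is hypothetical and deliberately imbalanced, with the rare class as the positive label.

```python
# AUPRC (summarized as average precision) on hypothetical, imbalanced data.
import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 1, 0, 1])   # positives are rare
scores = np.array([0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7, 0.9])
print(average_precision_score(y_true, scores))  # closer to 1 is better
```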

Average Log Loss

Average log loss represents the penalty for wrong predictions. It measures the divergence between the predicted probability distribution and the actual distribution in the ground truth.

In deep learning, this is sometimes known as the cross-entropy loss, which is used when the result of a classifier such as a deep learning model is a probability rather than a binary label. Cross-entropy loss is therefore the divergence of the predicted probability from the actual probability in the ground truth. This is useful in multiclass problems but is also applicable to the simplified case of binary classification.
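
A minimal sketch of the binary case follows, computing the average log loss from hypothetical predicted probabilities for the positive class.

```python
# Average log loss (binary cross-entropy) on hypothetical probabilities.
import numpy as np

def average_log_loss(y_true, p_pred, eps=1e-15):
    p = np.clip(p_pred, eps, 1 - eps)        # avoid log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y_true = np.array([1, 0, 1, 1, 0])
p_pred = np.array([0.9, 0.2, 0.6, 0.8, 0.1])
print(average_log_loss(y_true, p_pred))      # ~0.23; lower is better
```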

By using these metrics to evaluate your ML model and tailoring them to your specific needs, you can fine-tune its output, gain more confidence in the results, detect more threats and optimize controls as needed.

Metrics for Regression

For regression, the goal is to measure the amount of error produced by the ML algorithm. The model is considered good if the error between the predicted and observed values is small.

Let’s take a closer look at some of the metrics used for evaluating regression models.

Mean Absolute Error

Mean absolute error (MAE) measures how close the predicted results are to the actual results. You can think of this as the average of the differences between the predicted values and the ground truth values. For each test example, we subtract the actual value reported in the ground truth from the value predicted by the regression algorithm and take the absolute value. We then calculate the arithmetic mean of these differences.

While the interpretation of this metric is straightforward, because it is an arithmetic mean it can be skewed by a few very large or very small differences. Note that this value is scale-dependent, meaning the error is on the same scale as the data. Because of this, you cannot compare two MAE values across datasets.
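
As a simple illustration, the sketch below computes MAE on a handful of hypothetical predicted and ground-truth values; note that the result is in the same units as the data.

```python
# Mean absolute error on hypothetical values (same units as the data).
import numpy as np

y_true = np.array([10.0, 12.0, 15.0, 11.0])
y_pred = np.array([ 9.0, 14.0, 15.5, 10.0])
mae = np.mean(np.abs(y_pred - y_true))
print(mae)  # (1 + 2 + 0.5 + 1) / 4 = 1.125
```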

Root Mean Squared Error

Root mean squared error (RMSE) summarizes all of the prediction error in a single value. This is often the metric that optimization algorithms seek to minimize in regression problems: when an optimization algorithm is tuning so-called hyperparameters, it seeks to make RMSE as small as possible.

Consider, however, that like MAE, RMSE is sensitive to both large and small outliers and is scale-dependent. Therefore, you have to be careful and examine your residuals to look for outliers — values that are significantly above or below the rest of the residuals. Also, like MAE, it is improper to compare RMSE across datasets unless the scaling has been accounted for, because data scaling, whether by normalization or standardization, depends on the data values.

For example, in standardization, each value is rescaled by subtracting the mean and dividing by the standard deviation, which gives data with zero mean and unit variance. If, on the other hand, the data is normalized, the scaling is done by subtracting the minimum value from each value and dividing by the quantity (maximum value – minimum value), which maps the values into the range 0 to 1. These are completely different scales, and as a result, one cannot compare the RMSE between these two datasets.
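
The sketch below illustrates this with hypothetical values: it computes RMSE on the raw data, then recomputes it after standardization and after min-max normalization to show that the resulting numbers live on entirely different scales.

```python
# RMSE on hypothetical values, then on the same values after two different
# rescalings, to show why RMSE cannot be compared across scales.
import numpy as np

y_true = np.array([10.0, 12.0, 15.0, 11.0])
y_pred = np.array([ 9.0, 14.0, 15.5, 10.0])

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def standardize(x, ref):                 # zero mean, unit variance (ref's stats)
    return (x - ref.mean()) / ref.std()

def min_max(x, ref):                     # mapped toward [0, 1] using ref's range
    return (x - ref.min()) / (ref.max() - ref.min())

print(rmse(y_pred, y_true))                                            # original scale
print(rmse(standardize(y_pred, y_true), standardize(y_true, y_true)))  # standardized
print(rmse(min_max(y_pred, y_true), min_max(y_true, y_true)))          # normalized
```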

Relative Absolute Error

Relative absolute error (RAE) is the total absolute error of the predictions divided by the total absolute error of a naive model that always predicts the mean of the ground truth values. Note that this value can be compared across scales because it has been normalized.

Relative Squared Error

Relative squared error (RSE) is the total squared error of the predicted values divided by the total squared deviation of the observed values from their mean. This also normalizes the error measurement so that it can be compared across datasets.
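
Both relative metrics can be sketched in a few lines. The version below normalizes against a naive model that always predicts the mean of the ground truth, which is a common formulation; the values are hypothetical.

```python
# RAE and RSE on hypothetical values, normalized against a mean-predicting
# baseline; both are dimensionless and therefore comparable across datasets.
import numpy as np

y_true = np.array([10.0, 12.0, 15.0, 11.0])
y_pred = np.array([ 9.0, 14.0, 15.5, 10.0])
baseline = np.full_like(y_true, y_true.mean())

rae = np.sum(np.abs(y_pred - y_true)) / np.sum(np.abs(baseline - y_true))
rse = np.sum((y_pred - y_true) ** 2) / np.sum((baseline - y_true) ** 2)
print(rae, rse)  # 0.75 and ~0.45 here; lower means better than the naive baseline
```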

Machine Learning Can Revolutionize Your Organization’s Security

Machine learning is integral to the enhancement of cybersecurity today and it will only become more critical as the security community embraces cognitive platforms.

In this three-part series, we covered various algorithms and their security context, from cutting-edge technologies such as generative adversarial networks to more traditional algorithms that are still very powerful.

We also explored how to select the appropriate security classifier or regressor for your task, and, finally, how to evaluate the effectiveness of a classifier to help our readers better gauge the impact of optimization. With a better idea about these basics, you’re ready to examine and implement your own algorithms and to move toward revolutionizing your security program with machine learning.

The post Now That You Have a Machine Learning Model, It’s Time to Evaluate Your Security Classifier appeared first on Security Intelligence.

This post appeared first on Security Intelligence
Author: Brad Harris

Artificial Intelligence (AI), C-Suite, Chief Information Security Officer (CISO), Cryptography, cyber risk, Data Protection, fraud, General Data Protection Regulation (GDPR), Incident Response (IR), Infrastructure Security, Machine Learning, Quantum Computing, regulatory compliance, Risk Management, Security Leadership, World Economic Forum (WEF),

Manage Emerging Cybersecurity Risks by Rallying Around Mutual Concerns

Global risks are intensifying but the collective will to tackle them appears to be lacking. — The World Economic Forum’s “Global Risks Report 2019”

With the start of a new calendar year, chief information security officers (CISOs) are looking for ways to set the tone for the year and have more engaged conversations with top leadership regarding cybersecurity risks. The good news is January provided such an opportunity, but it’s not what you might expect.

Every year, the world’s elite descends on Davos, Switzerland, as part of the global gathering known as the World Economic Forum (WEF). A few weeks before the event, the WEF releases its “Global Risks Report,” and this year, once again, cyber risks figured prominently. The report was based on survey responses from nearly 1,000 decision-makers from the business and government sectors, academia, nongovernmental organizations (NGOs), and other international organizations.

Cybersecurity Risks Once Again in the Top 5

The report opens with its distinctive global risks landscape diagram, and cyber-related risks fall in the top-right quadrant of global risks, both in terms of likelihood and impact. When it comes to likelihood, data fraud or theft came in fourth place after three environmental risks, with cyberattacks rounding out the top five.

When ranked by impact, cyberattacks still made it into the top 10, in seventh place, followed immediately by critical information infrastructure breakdown. The fact that data fraud or theft wasn’t in the top 10 risks by impact might indicate that markets and business leaders are more confident about the global economy’s ability to detect and respond to such an event.

This is by no means the first time that technology-related risks made it to the top of the list: Cyberattacks have appeared four times in the top five risks by likelihood since 2010 (in 2012, 2014, 2018 and 2019). However, in terms of impact, the only technology-related risk to make the top five was critical information infrastructure breakdown in 2014.

Is it symptomatic of a larger disconnect that, in the last decade, global leaders only once perceived a technology-related risk as a top-five risk in terms of impact? Do top leadership and board directors at your organization share this attitude?

A Conversation Starter for CISOs and Top Leadership

Of course, the WEF report is aimed at a global audience of business and government executives, so it might not be immediately apparent how CISOs could benefit from grabbing a copy and leafing through it. However, because technology-based risks — and more specifically, cyber-related risks — feature so prominently in the report, there is a unique opportunity to engage or re-engage top leadership and boards to discuss these issues and re-evaluate the organization’s current risk appetite. Among the topics covered in the report are many areas that CISOs should be ready to engage on, including:

  • Machine learning and artificial intelligence (AI) — How, if at all, is your organization leveraging these technologies? Is the security function engaged at the earliest part of the process to implement them?

  • Regulatory changes, such as the General Data Protection Regulation (GDPR) — Is your organization now fully compliant with the GDPR? Are there other GDPR-like regulations on the horizon that need to be on your radar?

  • Interconnectedness of cybersecurity risks — Is your organization on its way to becoming cyber resilient? How often is your organization’s resilience put to the test?

  • Quantum computing and cryptography — Who, if anyone, is keeping track of developments in quantum computing? How often is this disruptive technology being discussed, both in terms of the opportunities it presents, but also the risks to traditional cryptographic methods of protecting company secrets?

Interconnectedness Versus Resilience

If there’s one section of the report that CISOs should share with top leadership, it is the portion titled “Managing in the Age of Meltdowns” (just three pages long). As the interconnectedness of technology increases the potential for cascading failures, this section reminds us of the stakes: “When something goes wrong in a complex system, problems start popping up everywhere, and it is hard to figure out what’s happening. And tight coupling means that the emerging problems quickly spiral out of control and even small errors can cascade into massive meltdowns.”

The section covers different strategies to help deal with complex, dynamic systems and provides guidance for CISOs to review and improve the effectiveness of existing processes. Strategies include encouraging healthy skepticism and recognizing the value of clear and honest lines of reporting. CISOs should also try to “imagine failure” or, better yet, simulate a breach to practice their response. The report also reminds security leaders to perform thorough root-cause analysis, as “too often, we base decisions on predictions that are overly simplistic, missing important possible outcomes.”

Find a Rallying Point

Most CISOs know they’re more likely to be heard when aligning their messages and efforts with the concerns of top leadership. In a world of increasing global risks, security leaders must engage with all levels of the organization to truly understand what cybersecurity risks are top of mind, from the board and C-suite all the way down to entry-level analysts. Organizing around mutual concerns will help maximize security at the enterprise.

The post Manage Emerging Cybersecurity Risks by Rallying Around Mutual Concerns appeared first on Security Intelligence.

This post appeared first on Security Intelligence
Author: Christophe Veltsos

Artificial intelligence, Artificial Intelligence (AI), Authentication Systems, Biometric Security, Data Protection, facial recognition, Identity and Access Management (IAM), Machine Learning, passwords, Unified Endpoint Management (UEM),

AI May Soon Defeat Biometric Security, Even Facial Recognition Software

It’s time to face a stark reality: Threat actors will soon gain access to artificial intelligence (AI) tools that will enable them to defeat multiple forms of authentication — from passwords to biometric security systems and even facial recognition software — identify targets on networks and evade detection. And they’ll be able to do all of this on a massive scale.

Sounds far-fetched, right? After all, AI is difficult to use, expensive and can only be produced by deep-pocketed research and development labs. Unfortunately, this just isn’t true anymore; we’re now entering an era in which AI is a commodity. Threat actors will soon be able to simply go shopping on the dark web for the AI tools they need to automate new kinds of attacks at unprecedented scales. As I’ll detail below, researchers are already demonstrating how some of this will work.

When Fake Data Looks Real

Understanding the coming wave of AI-powered cyberattacks requires a shift in thinking and AI-based unified endpoint management (UEM) solutions that can help you think outside the box. Many in the cybersecurity industry assume that AI will be used to simulate human users, and that’s true in some cases. But a better way to understand the AI threat is to realize that security systems are based on data. Passwords are data. Biometrics are data. Photos and videos are data — and new AI is coming online that can generate fake data that passes as the real thing.

One of the most challenging AI technologies for security teams is a very new class of algorithms called generative adversarial networks (GANs). In a nutshell, GANs can imitate or simulate any distribution of data, including biometric data.

To oversimplify how GANs work, they involve pitting one neural network against a second neural network in a kind of game. One neural net, the generator, tries to simulate a specific kind of data and the other, the discriminator, judges the first one’s attempts against real data — then informs the generator about the quality of its simulated data. As this progresses, both neural networks learn. The generator gets better at simulating data, and the discriminator gets better at judging the quality of that data. The product of this “contest” is a large amount of fake data produced by the generator that can pass as the real thing.
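
To make the idea tangible, here is a heavily simplified, hypothetical sketch in PyTorch: a tiny generator learns to mimic a one-dimensional "real data" distribution while a discriminator learns to tell real samples from generated ones. Real attacks on biometric data are far more complex; this only illustrates the adversarial game itself.

```python
# Minimal, illustrative GAN: generator vs. discriminator on 1-D Gaussian data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))       # generator's attempt at fake data

    # Discriminator: learn to label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator label its output as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(256, 8)).mean().item())  # should drift toward ~3.0
```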

GANs are best known as the foundational technology behind those deep fake videos that convincingly show people doing or saying things they never did or said. Applied to hacking consumer security systems, GANs have been demonstrated — at least, in theory — to be keys that can unlock a range of biometric security controls.

Machines That Can Prove They’re Human

CAPTCHAs are a form of lightweight website security you’re likely familiar with. By making visitors “prove” they’re human, CAPTCHAs act as a filter to block automated systems from gaining access. One typical kind of CAPTCHA asks users to identify numbers, letters and characters that have been jumbled, distorted and obfuscated. The idea is that humans can pick out the right symbols, but machines can’t.

However, researchers at Northwest University and Peking University in China and Lancaster University in the U.K. claimed to have developed an algorithm based on a GAN that can break most text-based CAPTCHAs within 0.05 seconds. In other words, they’ve trained a machine that can prove it’s human. The researchers concluded that because their technique uses a small number of data points for training the algorithm — around 500 test CAPTCHAs selected from 11 major CAPTCHA services — and both the machine learning part and the cracking part happen very quickly using a single standard desktop PC, CAPTCHAs should no longer be relied upon for front-line website defense.

Faking Fingerprints

One of the oldest tricks in the book is the brute-force password attack. The most commonly used passwords have been well-known for some time, and many people use passwords that can be found in the dictionary. So if an attacker throws a list of common passwords, or the dictionary, at a large number of accounts, they’re going to gain access to some percentage of those targets.

As you might expect, GANs can produce high-quality password guesses. Thanks to this technology, it’s now also possible to launch a brute-force fingerprint attack. Fingerprint identification — like the kind used by major banks to grant access to customer accounts — is no longer safe, at least in theory.

Researchers at New York University and Michigan State University recently conducted a study in which GANs were used to produce fake-but-functional fingerprints that also look convincing to any human. They said their method worked because of a flaw in the way many fingerprint ID systems work. Instead of matching the full fingerprint, most consumer fingerprint systems only try to match a part of the fingerprint.

The GAN approach enables the creation of thousands of fake fingerprints that have the highest likelihood of being matches for the partial fingerprints the authentication software is looking for. Once a large set of high-quality fake fingerprints is produced, it’s basically a brute-force attack using fingerprint patterns instead of passwords. The good news is that many consumer fingerprint sensors use heat or pressure to detect whether an actual human finger is providing the biometric data.

Is Face ID Next?

One of the most outlandish schemes for fooling biometric security involves tricking facial recognition software with fake faces. This was a trivial task with 2D technologies, in part because the capturing of 2D facial data could be done with an ordinary camera, and at some distance without the knowledge of the target. But with the emergence of high-definition 3D technologies found in many smartphones, the task becomes much harder.

A journalist working at Forbes tested four popular Android phones, plus an iPhone, using 3D-printed heads made by a company called Backface in Birmingham, U.K. The studio used 50 cameras and sophisticated software to scan the “victim.” Once a complete 3D image was created, the life-size head was 3D-printed, colored and, finally, placed in front of the various phones.

The results: All four Android phones unlocked with the phony faces, but the iPhone didn’t.

This method is, of course, difficult to pull off in real life because it requires the target to be scanned using a special array of cameras. Or does it? Constructing a 3D head out of a series of 2D photos of a person — extracted from, say, Facebook or some other social network — is exactly the kind of fake data that GANs are great at producing. It won’t surprise me to hear in the next year or two that this same kind of unlocking is accomplished using GAN-processed 2D photos to produce 3D-printed faces that pass as real.

Stay Ahead of the Unknown

Researchers can only demonstrate the AI-based attacks they can imagine — there are probably hundreds or thousands of ways to use AI for cyberattacks that we haven’t yet considered. For example, McAfee Labs predicted that cybercriminals will increasingly use AI-based evasion techniques during cyberattacks.

What we do know is that as we enter into a new age of artificial intelligence being everywhere, we’re also going to see it deployed creatively for the purpose of cybercrime. It’s a futuristic arms race — and your only choice is to stay ahead with leading-edge security based on AI.

The post AI May Soon Defeat Biometric Security, Even Facial Recognition Software appeared first on Security Intelligence.

This post appeared first on Security Intelligence
Author: Mike Elgan

Artificial intelligence, Artificial Intelligence (AI), Chief Information Security Officer (CISO), CISO, Cloud Security, Cognitive Security, Internet of Things (IoT), Machine Learning, Penetration Testing, Security Intelligence & Analytics, Security Leaders, Security Leadership, Security Operations Center (SOC), Security Solutions,

Break Through Cybersecurity Complexity With New Rules, Not More Tools

Let’s be frank: Chief information security officers (CISOs) and security professionals all know cybersecurity complexity is a major challenge in today’s threat landscape. Other folks in the security industry know this too — although some don’t want to admit it. The problem is that amid increasing danger and a growing skills shortage, security teams are overwhelmed by alerts and the growing number of complex tools they have to manage. We need to change that, but how? By completely rethinking our assumptions.

The basic assumption of security up until now is that new threats require new tools. After 12 years at IBM Security, leading marketing teams and making continuous contact with our clients — and, most recently, as VP of product marketing — I’ve seen a lot of promising new technology. But in our rapidly diversifying industry, there are more specialized products to face every kind of threat in an expanding universe of attack vectors. Complexity is a hidden cost of all these marvelous products.

It’s not just security products that contribute to the cybersecurity complexity conundrum; digitization, mobility, cloud and the internet of things (IoT) all contribute to the complexity of IT environments, making security an uphill battle for underresourced security teams. According to Forrester’s “Global Business Technographics Security Survey 2018,” 31 percent of business and IT decision-makers ranked the complexity of the IT environment among the biggest security challenges they face, tied with the changing nature of threats as the most-cited challenge.

I’ll give you one more mind-boggling statistic to demonstrate why complexity is the enemy of security: According to IBM estimates, enterprises use as many as 80 different security products from 40 vendors. Imagine trying to build a clear picture with pieces from 80 separate puzzles. That’s what CISOs and security operations teams are being asked to do.

7 Rules to Help CISOs Reduce Cybersecurity Complexity

The sum of the parts is not greater than the whole. So, we need to escape the best-of-breed trap to handle the problem of complexity. Cybersecurity doesn’t need more tools; it needs new rules.

Complexity requires us as security professionals and industry partners to turn the old ways of thinking inside out and bring in fresh perspectives.

Below are seven rules to help us think in new ways about the complex, evolving challenges that CISOs, security teams and their organizations face today.

1. Open Equals Closed

You can’t prevent security threats by piling on more tools that don’t talk to each other and create more noise for overwhelmed analysts. Security products need to work in concert, and that requires integration and collaboration. An open, connected, cloud-based security platform that brings security products together closes the gaps that point products leave in your defenses.

2. See More When You See Less

Security operations centers (SOCs) see thousands of security events every day — a 2018 survey of 179 IT professionals found that 55 percent of respondents handle more than 10,000 alerts per day, and 27 percent handle more than 1 million events per day. SOC analysts can’t handle that volume.

According to the same survey, one-third of IT professionals simply ignore certain categories of alerts or turn them off altogether. A smarter approach to the overwhelming volume of alerts leverages analytics and artificial intelligence (AI) so SOC analysts can focus on the most crucial threats first, rather than chase every security event they see.

3. An Hour Takes a Minute

When you find a security incident that requires deeper investigation, time is of the essence. Analysts can’t afford to get bogged down in searching for information in a sea of threats.

Human intelligence augmented by AI — what IBM calls cognitive security — allows SOC analysts to respond to threats up to 60 times faster. An advanced AI can understand, reason and learn from structured and unstructured data, such as news articles, blogs and research papers, in seconds. By automating mundane tasks, analysts are freed to make critical decisions for faster response and mitigation.

4. A Skills Shortage Is an Abundance

It’s no secret that greater demand for cybersecurity professionals and an inadequate pipeline of traditionally trained candidates has led to a growing skills gap. Meanwhile, cybercriminals have grown increasingly collaborative, but those who work to defend against them remain largely siloed. Collaboration platforms for security teams and shared threat intelligence between vendors are force multipliers for your team.

5. Getting Hacked Is an Advantage

If you’re not seeking out and patching vulnerabilities in your network and applications, you’re assuming that what you don’t know can’t hurt you. Ethical hacking and penetration testing turn hacking into an advantage, helping you find your vulnerabilities before adversaries do.

6. Compliance Is Liberating

More and more consumers say they will refuse to buy products from companies that they don’t trust to protect their data, no matter how great the products are. By creating a culture of proactive data compliance, you can exchange the checkbox mentality for continuous compliance, turning security into a competitive advantage.

7. Rigidity Is Breakthrough

The success of your business depends not only on customer loyalty, but also employee productivity. Balance security with productivity by practicing strong security hygiene. Run rigid but silent security processes in the background to stay out of the way of productivity.

What’s the bottom line here? Times are changing, and the current trend toward complexity will slow the business down, cost too much and fail to reduce cyber risk. It’s time to break through cybersecurity complexity and write new rules for a new era.

https://youtu.be/tgb-hpIrSbo

The post Break Through Cybersecurity Complexity With New Rules, Not More Tools appeared first on Security Intelligence.

This post appeared first on Security Intelligence
Author: Wangui McKelvey