Browsing category: Biometric Security

Artificial Intelligence (AI), Authentication, Automation, Biometric Security, Blockchain, Cryptocurrency, Machine Learning, Social Engineering, Threat Detection

Don’t Believe Your Eyes: Deepfake Videos Are Coming to Fool Us All

In 2017, an anonymous Reddit user under the pseudonym “deepfakes” posted links to pornographic videos that appeared to feature famous mainstream celebrities. The videos were fake. And the user created them using off-the-shelf artificial intelligence (AI) tools.

Two months later, Reddit banned the deepfakes account and related subreddit. But the ensuing scandal revealed a range of university, corporate and government research projects under way to perfect both the creation and detection of deepfake videos.

Where Deepfakes Come From (and Where They’re Going)

Deepfakes are created using AI technology called generative adversarial networks (GANs), which can be used broadly to create fake data that can pass as real data. To oversimplify how GANs work, two machine learning (ML) algorithms are pitted against each other. One creates fake data and the other judges the quality of that fake data against a set of real data. They continue this contest at massive scale, continually getting better at making fake data and judging it. When both algorithms become extremely good at their respective tasks, the product is a set of high-quality fake data.
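
To make that adversarial loop concrete, here is a minimal, hypothetical sketch in PyTorch that trains a generator to mimic a simple one-dimensional Gaussian distribution instead of faces. The structure is the same as in deepfake systems; only the data and network sizes differ, and every name and hyperparameter below is illustrative.

```python
# Minimal GAN sketch: generator learns to mimic a 1-D Gaussian (mean 4, std 1.25).
# Illustrative only -- real deepfake GANs use deep convolutional nets and image data.
import torch
import torch.nn as nn

NOISE_DIM, BATCH, STEPS = 8, 64, 2000

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
real_label, fake_label = torch.ones(BATCH, 1), torch.zeros(BATCH, 1)

for step in range(STEPS):
    # 1. Train the discriminator: real samples should score 1, fakes should score 0.
    real = torch.randn(BATCH, 1) * 1.25 + 4.0            # the "authentic" data set
    fake = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real), real_label) + \
             loss_fn(discriminator(fake), fake_label)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator: try to make the discriminator score fakes as real.
    fake = generator(torch.randn(BATCH, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), real_label)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(1000, NOISE_DIM)).mean().item())  # should approach 4.0
```

After a couple of thousand steps, samples from the generator cluster around the real distribution’s mean and spread. Scaled up to millions of pixels and deep convolutional networks, the same dynamic is what produces convincing fake faces.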

In the case of deepfakes, the authentic data set consists of hundreds or thousands of still photographs of a person’s face. This gives the algorithm a wide selection of images, showing the face from different angles and with different facial expressions, to learn from and to judge its candidate frames against during the learning phase.

Carnegie Mellon University scientists even figured out how to impose the style of one video onto another using a technique called Recycle-GAN. Instead of convincingly replacing someone’s face with another, the Recycle-GAN process enables the target to be used like a puppet, imitating every head movement, facial expression and mouth movement exactly as they appear in the source video. This process is also more automated than previous methods.

Most of these videos today are either pornography featuring celebrities, satire videos created for entertainment or research projects showing rapidly advancing techniques. But deepfakes are likely to become a major security concern in the future. Today’s security systems rely heavily on surveillance video and image-based biometric security. And since the majority of breaches occur because of social engineering-based phishing attacks, it’s all but certain that criminals will turn deepfakes to the same purpose.

Deepfake Videos Are Getting Really Good, Really Fast

The earliest publicly demonstrated deepfake videos tended to show talking heads, with the subjects seated. Now, full-body deepfakes developed in separate research projects at Heidelberg University and the University of California, Berkeley are able to transfer the movements of one person to another. Because one form of authentication involves gait analysis, these full-body deepfakes suggest that the gait of an authorized person could be transferred in video to an unauthorized person.

Here’s another example: Many cryptocurrency exchanges authenticate users by making them photograph themselves holding up their passport or some other form of identification, along with a piece of paper with something like the current date written on it. This check can be easily foiled with Photoshop. Some exchanges, such as Binance, found many attempts by criminals to access accounts using doctored photos, so they and others moved to video instead of photos. Security analysts worry that it’s only a matter of time before deepfakes become so good that neither photos nor videos like these will be reliable.

The biggest immediate threat for deepfakes and security, however, is in the realm of social engineering. Imagine a video call or message that appears to be your work supervisor or IT administrator, instructing you to divulge a password or send a sensitive file. That’s a scary future.

What’s Being Done About It?

Increasingly realistic deepfakes have enormous implications for fake news, propaganda, social disruption, reputational damage, evidence tampering, evidence fabrication, blackmail and election meddling. Another concern is that the perfection and mainstreaming of deepfakes will cause the public to doubt the authenticity of all videos.

Security specialists, of course, will need to have such doubts as a basic job requirement. Deepfakes are a major concern for digital security specifically, but also for society at large. So what can be done?

University Research

Some researchers say that analyzing the way a person in a video blinks, or how often they blink, is one way to detect a deepfake. In general, deepfakes show insufficient or even nonexistent blinking, and the blinking that does occur often appears unnatural. Breathing is another movement usually absent in deepfakes, and hair is a frequent giveaway as well: it often looks blurry or painted on.

Researchers from the State University of New York (SUNY) at Albany developed a deepfake detection method that uses AI technology to look for natural blinking, breathing and even a pulse. It’s only a matter of time, however, before deepfakes make these characteristics look truly “natural.”
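
As an illustration of the blink heuristic only (not the SUNY Albany detector itself), the sketch below takes a per-frame eye-openness signal, as produced by any facial landmark tracker, counts blinks and flags clips whose blink rate falls far below the human norm of roughly 15 to 20 blinks per minute. The threshold values are assumptions.

```python
# Toy blink-rate check: flag a clip whose blink rate is implausibly low.
# Illustrative heuristic only, not the SUNY Albany method.
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count transitions from open to closed in a per-frame openness signal."""
    blinks, closed = 0, False
    for value in eye_openness:
        if value < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif value >= closed_threshold:
            closed = False
    return blinks

def looks_like_deepfake(eye_openness, fps=30, min_blinks_per_minute=5):
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / max(minutes, 1e-9)
    return rate < min_blinks_per_minute  # humans blink ~15-20 times per minute

# Example: a 60-second clip containing only one blink is suspicious.
signal = [1.0] * 1800
signal[900:905] = [0.1] * 5
print(looks_like_deepfake(signal))  # True
```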

Government Action

The U.S. government is also taking precautions: Congress could consider a bill in the coming months to criminalize both the creation and distribution of deepfakes. Such a law would likely be challenged in court as a violation of the First Amendment, and would be difficult to enforce without automated technology for identifying deepfakes.

The government is working on the technology problem, too. The National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA) and Intelligence Advanced Research Projects Agency (IARPA) are looking for technology to automate the identification of deepfakes. DARPA alone has spent $68 million on a media forensics capability to spot deepfakes, according to CBC.

Private Technology

Private companies are also getting in on the action. A new cryptographic authentication tool called Amber Authenticate can run in the background while a device records video. As reported by Wired, the tool generates hashes — “scrambled representations” — of the data at user-determined intervals, which are then recorded on a public blockchain. If the video is manipulated in any way, the hashes change, alerting the viewer to the probability that the video has been tampered with. A dedicated player feature shows a green frame for portions of video that are faithful to the original, and a red frame around video segments that have been altered. The system has been proposed for police body cams and surveillance video.
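
Here is a minimal sketch of the segment-hashing idea, not Amber’s actual implementation: an in-memory list stands in for the public blockchain, and the interval and byte-rate figures are assumptions.

```python
# Sketch of interval hashing for tamper-evident video (not Amber's actual code).
# A real system would anchor each digest on a public blockchain; a list stands in here.
import hashlib

SEGMENT_SECONDS = 10  # user-determined hashing interval (assumed)

def hash_segments(video_bytes, bytes_per_second=1_000_000):
    """Split raw video into fixed intervals and record a SHA-256 digest for each."""
    chunk = SEGMENT_SECONDS * bytes_per_second
    return [hashlib.sha256(video_bytes[i:i + chunk]).hexdigest()
            for i in range(0, len(video_bytes), chunk)]

def verify_segments(video_bytes, ledger, bytes_per_second=1_000_000):
    """Per-segment True/False: a 'green frame' means the digest still matches."""
    return [ours == recorded
            for ours, recorded in zip(hash_segments(video_bytes, bytes_per_second), ledger)]

original = bytes(50_000_000)              # stand-in for recorded footage
ledger = hash_segments(original)          # digests recorded at capture time
tampered = original[:25_000_000] + b"X" + original[25_000_001:]
print(verify_segments(tampered, ledger))  # the altered segment shows up as False
```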

A similar approach was taken by a company called Factom, whose blockchain technology is being tested for border video by the Department of Homeland Security (DHS), according to Wired.

Security Teams Should Prepare for Anything and Everything

The solution to deepfakes may lie in some combination of education, technology and legislation, but none of it will work without the technology part. When deepfakes get really good, as they inevitably will, only machines will be able to tell the real videos from the fake ones. That detection technology is coming, but nobody knows when it will be good enough. We should also assume that an arms race will arise, with malicious deepfake actors inventing new methods to overcome the latest detection systems.

Security professionals need to consider the coming deepfake wars when analyzing future security systems. If they’re video or image based — everything from facial recognition to gait analysis — additional scrutiny is warranted.

In addition, you should add video to the long list of media you cannot trust. Just as training programs and digital policies make clear that email may not come from who it appears to come from, video will need to be met with similar skepticism, no matter how convincing the footage. Deepfake technology will also inevitably be deployed for blackmail, including as a tool for extracting sensitive information from companies and individuals.

The bottom line is that deepfake videos that are indistinguishable from authentic videos are coming, and we can scarcely imagine what they’ll be used for. We should start preparing for the worst.

This post appeared first on Security Intelligence
Author: Mike Elgan

Artificial Intelligence (AI), Authentication Systems, Biometric Security, Data Protection, Facial Recognition, Identity and Access Management (IAM), Machine Learning, Passwords, Unified Endpoint Management (UEM)

AI May Soon Defeat Biometric Security, Even Facial Recognition Software

It’s time to face a stark reality: Threat actors will soon gain access to artificial intelligence (AI) tools that will enable them to defeat multiple forms of authentication — from passwords to biometric security systems and even facial recognition software — identify targets on networks and evade detection. And they’ll be able to do all of this on a massive scale.

Sounds far-fetched, right? After all, AI is difficult to use, expensive and can only be produced by deep-pocketed research and development labs. Unfortunately, this just isn’t true anymore; we’re now entering an era in which AI is a commodity. Threat actors will soon be able to simply go shopping on the dark web for the AI tools they need to automate new kinds of attacks at unprecedented scales. As I’ll detail below, researchers are already demonstrating how some of this will work.

When Fake Data Looks Real

Understanding the coming wave of AI-powered cyberattacks requires a shift in thinking, along with AI-based unified endpoint management (UEM) solutions that can help you think outside the box. Many in the cybersecurity industry assume that AI will be used to simulate human users, and that’s true in some cases. But a better way to understand the AI threat is to realize that security systems are based on data. Passwords are data. Biometrics are data. Photos and videos are data — and new AI is coming online that can generate fake data that passes as the real thing.

One of the most challenging AI technologies for security teams is a very new class of algorithms called generative adversarial networks (GANs). In a nutshell, GANs can imitate or simulate any distribution of data, including biometric data.

To oversimplify how GANs work, they involve pitting one neural network against a second neural network in a kind of game. One neural net, the generator, tries to simulate a specific kind of data and the other, the discriminator, judges the first one’s attempts against real data — then informs the generator about the quality of its simulated data. As this progresses, both neural networks learn. The generator gets better at simulating data, and the discriminator gets better at judging the quality of that data. The product of this “contest” is a large amount of fake data produced by the generator that can pass as the real thing.

GANs are best known as the foundational technology behind the deepfake videos that convincingly show people doing or saying things they never did or said. Applied to hacking consumer security systems, GANs have been demonstrated — at least, in theory — to be keys that can unlock a range of biometric security controls.

Machines That Can Prove They’re Human

CAPTCHAs are a form of lightweight website security you’re likely familiar with. By making visitors “prove” they’re human, CAPTCHAs act as a filter to block automated systems from gaining access. One typical kind of CAPTCHA asks users to identify numbers, letters and characters that have been jumbled, distorted and obfuscated. The idea is that humans can pick out the right symbols, but machines can’t.
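
For a sense of how such CAPTCHAs are produced, here is a minimal, hypothetical Python sketch using the Pillow imaging library. It jitters character positions, draws strike-through lines and blurs the result; real CAPTCHA services apply far more aggressive distortion, and all the parameters here are invented for illustration.

```python
# Sketch: generate a simple distorted-text CAPTCHA (illustrative, far weaker
# than production CAPTCHA services).
import random
import string
from PIL import Image, ImageDraw, ImageFilter, ImageFont

def make_captcha(length=5, size=(200, 70)):
    text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for i, ch in enumerate(text):
        # Jitter each character's position so the string is not cleanly aligned.
        x = 15 + i * 35 + random.randint(-5, 5)
        y = 25 + random.randint(-10, 10)
        draw.text((x, y), ch, fill="black", font=font)
    for _ in range(6):  # obfuscating strike-through lines
        draw.line([(random.randint(0, size[0]), random.randint(0, size[1]))
                   for _ in range(2)], fill="gray", width=2)
    return text, img.filter(ImageFilter.GaussianBlur(radius=1))

answer, image = make_captcha()
image.save("captcha.png")
print("expected answer:", answer)
```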

However, researchers at Northwest University and Peking University in China and Lancaster University in the U.K. claimed to have developed an algorithm based on a GAN that can break most text-based CAPTCHAs within 0.05 seconds. In other words, they’ve trained a machine that can prove it’s human. The researchers concluded that because their technique uses a small number of data points for training the algorithm — around 500 test CAPTCHAs selected from 11 major CAPTCHA services — and both the machine learning part and the cracking part happen very quickly using a single standard desktop PC, CAPTCHAs should no longer be relied upon for front-line website defense.

Faking Fingerprints

One of the oldest tricks in the book is the brute-force password attack. The most commonly used passwords have been well-known for some time, and many people use passwords that can be found in the dictionary. So if an attacker throws a list of common passwords, or the dictionary, at a large number of accounts, they’re going to gain access to some percentage of those targets.
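
The defensive counterpart is straightforward. Here is a small sketch that screens a candidate password against a list of commonly used passwords; the file name is a placeholder for any breach-derived word list.

```python
# Sketch: reject passwords that appear in a list of commonly used passwords.
# "common_passwords.txt" is a placeholder for a real breach-derived word list.
def load_common_passwords(path="common_passwords.txt"):
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_password_allowed(candidate, common):
    return candidate.lower() not in common

common = load_common_passwords()
print(is_password_allowed("letmein123", common))
```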

As you might expect, GANs can produce high-quality password guesses. Thanks to this technology, it’s now also possible to launch a brute-force fingerprint attack. Fingerprint identification — like the kind used by major banks to grant access to customer accounts — is no longer safe, at least in theory.

Researchers at New York University and Michigan State University recently conducted a study in which GANs were used to produce fake-but-functional fingerprints that also look convincing to any human. They said their method worked because of a flaw in the way many fingerprint ID systems work. Instead of matching the full fingerprint, most consumer fingerprint systems only try to match a part of the fingerprint.

The GAN approach enables the creation of thousands of fake fingerprints that have the highest likelihood of being matches for the partial fingerprints the authentication software is looking for. Once a large set of high-quality fake fingerprints is produced, it’s basically a brute-force attack using fingerprint patterns instead of passwords. The good news is that many consumer fingerprint sensors use heat or pressure to detect whether an actual human finger is providing the biometric data.
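
To see why partial matching weakens the scheme, consider the toy simulation below. Random bit vectors stand in for fingerprint templates, which is a deliberate oversimplification: the point is only that matching against a short partial template yields far more chance matches than matching against a full one. All sizes and tolerances are invented.

```python
# Toy illustration: partial-template matching admits many more chance matches.
# Random bit vectors stand in for fingerprint templates; not a real matcher.
import random

random.seed(1)

def random_template(bits):
    return [random.randint(0, 1) for _ in range(bits)]

def matches(a, b, tolerance=0.2):
    """Match if the fraction of differing bits is within the tolerance."""
    diff = sum(x != y for x, y in zip(a, b))
    return diff / len(a) <= tolerance

def false_match_rate(bits, trials=20_000):
    """How often a random stranger's template matches an enrolled one."""
    enrolled = random_template(bits)
    hits = sum(matches(random_template(bits), enrolled) for _ in range(trials))
    return hits / trials

print("full template, 256 bits:", false_match_rate(256))   # essentially zero
print("partial template, 16 bits:", false_match_rate(16))  # around one percent
```

Against a large user base, even a one-percent chance match per attempt means a dictionary of synthetic prints tuned to common partial patterns will eventually match someone, which is the effect the researchers exploited.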

Is Face ID Next?

One of the most outlandish schemes for fooling biometric security involves tricking facial recognition software with fake faces. This was a trivial task with 2D technologies, in part because the capturing of 2D facial data could be done with an ordinary camera, and at some distance without the knowledge of the target. But with the emergence of high-definition 3D technologies found in many smartphones, the task becomes much harder.

A journalist working at Forbes tested four popular Android phones, plus an iPhone, using 3D-printed heads made by a company called Backface in Birmingham, U.K. The studio used 50 cameras and sophisticated software to scan the “victim.” Once a complete 3D image was created, the life-size head was 3D-printed, colored and, finally, placed in front of the various phones.

The results: All four Android phones unlocked with the phony faces, but the iPhone didn’t.

This method is, of course, difficult to pull off in real life because it requires the target to be scanned using a special array of cameras. Or does it? Constructing a 3D head out of a series of 2D photos of a person — extracted from, say, Facebook or some other social network — is exactly the kind of fake data that GANs are great at producing. It wouldn’t surprise me to hear in the next year or two that this same kind of unlocking has been accomplished using GAN-processed 2D photos to produce 3D-printed faces that pass as real.

Stay Ahead of the Unknown

Researchers can only demonstrate the AI-based attacks they can imagine — there are probably hundreds or thousands of ways to use AI for cyberattacks that we haven’t yet considered. For example, McAfee Labs predicted that cybercriminals will increasingly use AI-based evasion techniques during cyberattacks.

What we do know is that as we enter into a new age of artificial intelligence being everywhere, we’re also going to see it deployed creatively for the purpose of cybercrime. It’s a futuristic arms race — and your only choice is to stay ahead with leading-edge security based on AI.

This post appeared first on Security Intelligence
Author: Mike Elgan

Access Management, Banking & Financial Services, Biometric Security, Biometrics, Fraud Protection, Identity & Access, Identity and Access Management (IAM), Identity Management, Multifactor Authentication (MFA), Passwords, Retail, Retail Industry, Retail Security, Threat Intelligence, Two-Factor Authentication (2FA), User Behavior Analytics (UBA)

Multifactor Authentication Delivers the Convenience and Security Online Shoppers Demand

Another holiday shopping season has ended, and for exhausted online consumers, this alone is good news. The National Retail Federation (NRF), the world’s largest retail trade association, reported that the number of online transactions surpassed that of in-store purchases during Thanksgiving weekend in the U.S. Online shopping is a growing, global trend that is boosted by big retailers and financial institutions.

However, according to a Javelin Strategy & Research study, many consumers remain skeptical about the security of online shopping and mobile banking systems. While 70 percent of those surveyed said they feel secure purchasing items from a physical store, the confidence level dropped to 56 percent for online purchases and 50 percent for mobile banking. How can retailers increase customer trust in online transactions?

Security Versus Convenience: The Search for Equilibrium Continues

When we register for online services, we implicitly balance security and convenience. When we’re banking and shopping online, the need for security is greater. We are willing to spend more time to complete a transaction — for example, by entering a one-time password (OTP) received via SMS — in exchange for a safer experience. On the other hand, convenience becomes paramount when logging into social networks, often at the expense of security.

[Chart: App or account types respondents cared most to protect. Source: IBM Future of Identity Study 2018]

A growing number of users are finding the right balance between convenience and security in biometric authentication capabilities such as fingerprint scanning and facial recognition. Passwords have done the job so far, but they are destined for an inexorable decline due to the insecurity of traditional authentication systems.

According to the “IBM Future of Identity Study 2018,” a fingerprint scan is perceived as the most secure authentication method, while alphanumeric passwords and digital personal identification numbers (PINs) are decidedly inferior. However, even biometrics have their faults; there are already a number of documented break-ins, data breaches, viable attack schemes and limitations. For instance, how would facial recognition fare when presented with identical twins?

The Future of Identity Verification and Multifactor Authentication

Multifactor authentication (MFA) represents a promising alternative. MFA combines multiple authentication factors so that if one is compromised, the overall system can remain secure. The familiar system already in use for many online services — based on the combination of a password and an SMS code to authorize a login or transaction — is a simple example of two-factor authentication (2FA).
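
To make this concrete, here is a minimal, hypothetical sketch of a two-factor check in Python, combining a salted password hash with a time-based one-time password (TOTP) generated by the open-source pyotp library. App-based TOTP is a close cousin of the SMS code described above; the user record and passphrase are placeholders.

```python
# Minimal two-factor check: password hash + time-based one-time password (TOTP).
# Uses the pyotp library; the user record and secret are illustrative placeholders.
import hashlib
import os

import pyotp

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Enrollment: store a salted password hash and a per-user TOTP secret.
salt = os.urandom(16)
user = {
    "password_hash": hash_password("correct horse battery staple", salt),
    "salt": salt,
    "totp_secret": pyotp.random_base32(),  # shown to the user as a QR code in practice
}

def login(password, otp_code):
    first = hash_password(password, user["salt"]) == user["password_hash"]
    second = pyotp.TOTP(user["totp_secret"]).verify(otp_code)
    return first and second  # both factors must pass

code = pyotp.TOTP(user["totp_secret"]).now()        # what the authenticator app displays
print(login("correct horse battery staple", code))  # True
print(login("correct horse battery staple", "000000"))  # False (bad second factor)
```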

Authentication factors that are not visible, such as device fingerprinting, geolocation, IP reputation, device reputation and mobile network operator (MNO) data, can contribute substantially to identity verification. Some threat intelligence platforms can already provide most of this information to third-party applications and solutions. These elements add context to the user and device used for the online transaction and assist in quantifying the risk level of each operation.

These newly available signals open the way to context-based access, which conditions access on a dynamic assessment of the risk associated with each transaction and triggers additional verification steps when the risk level becomes too great.

Existing technologies for context-based access allow security teams to:

  • Register the user’s device, silently or subject to consent, and promptly identify any device substitution or attempt to impersonate the legitimate device;
  • Associate biometric credentials with registered devices, thus binding the legitimate device, user and online application;
  • Spot known users accessing data from unregistered devices and require additional authentication steps;
  • Move to passwordless login, based on scanning a time-based QR code without typing a password;
  • Verify user presence, limiting the effectiveness of replay attacks and other automated attacks;
  • Use an authenticator app to access online services with 2FA that leverages the biometric device on the smartphone, such as the fingerprint reader, and stores biometric data only on the user’s device;
  • Use advanced authentication mechanisms, such as FIDO2, which standardizes the use of authentication devices for access to online services in mobile and desktop environments; and
  • Calculate the risk value for a transaction based on the user’s behavioral patterns.

Combining all these elements, context-based access solutions conduct a dynamic risk assessment of each transaction. The transaction risk score, compared against predefined policies, can allow or block an operation or request additional authentication elements.
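
A toy version of such a policy engine might look like the following sketch. The signals, weights and thresholds are invented for illustration and do not correspond to any particular product.

```python
# Toy context-based access engine: score a transaction's risk from contextual
# signals, then allow, step up authentication, or block. All weights invented.
from dataclasses import dataclass

@dataclass
class TransactionContext:
    device_registered: bool
    geo_matches_history: bool
    ip_reputation_bad: bool
    behavior_anomaly: float  # 0.0 (typical) .. 1.0 (highly unusual)

def risk_score(ctx):
    score = 0.0
    if not ctx.device_registered:
        score += 0.4
    if not ctx.geo_matches_history:
        score += 0.2
    if ctx.ip_reputation_bad:
        score += 0.3
    score += 0.3 * ctx.behavior_anomaly
    return min(score, 1.0)

def decide(ctx, step_up_at=0.3, block_at=0.7):
    score = risk_score(ctx)
    if score >= block_at:
        return "block"
    if score >= step_up_at:
        return "require additional authentication"
    return "allow"

print(decide(TransactionContext(True, True, False, 0.1)))    # allow
print(decide(TransactionContext(False, True, False, 0.2)))   # step up
print(decide(TransactionContext(False, False, True, 0.9)))   # block
```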

Get Your Customers Excited About Security

The aforementioned “IBM Future of Identity Study 2018” revealed clear demographic, geographic and cultural differences regarding the acceptance of authentication methods. It is therefore necessary to favor the adoption of next-generation authentication mechanisms and other emerging alternatives to traditional passwords.

Imposing a particular method of identity verification in the name of improved security can lead to user frustration, missed opportunities and even loss of customers. Instead, you should present new authentication mechanisms as more practical and convenient — that way, your customers will perceive them as a step toward innovation and progress rather than an impediment. If your authentication method feels “cool,” your users will be more excited to show it to colleagues and friends and less frustrated with a clunky login experience. You may even want to consider offering a wide range of authentication options and letting your users choose which they prefer.

Multifactor authentication is here to stay as traditional passwords lose favor with both security professionals and increasingly privacy-aware customers. If retailers can frame these new techniques in a way that gets users excited about security, the future of identity verification in the industry looks bright.

This post appeared first on Security Intelligence
Author: Pier Luigi Rotondo

Biometric Security, Data Protection, Fraud Prevention, Identity & Access, Multifactor Authentication (MFA), Password Management, Password Reuse, Passwords, Security Awareness, Two-Factor Authentication (2FA)

We Need to Talk About NIST’s Dropped Password Management Recommendations

Passwords and their protection are among the most fundamental, essential aspects of enterprise data security. They are also the bane of most users’ relationships with their enterprise devices, resources and assets. It seems no matter how stringent or lax your password policy is, the directive will be met with dissension from a significant portion of your staff. It’s frustrating for everyone — the IT department, C-suite and employees.

Recently, the National Institute of Standards and Technology (NIST) reversed its stance on organizational password management requirements. The institute now recommends banishing forced periodic password changes and getting rid of complexity requirements.

The reasoning behind these changes is that users tend to recycle difficult-to-remember passwords on multiple domains and resources. If one network is compromised, that’s a potential risk for other domains.

Are password managers the answer? Sure, they help generate great, complex passwords and act as a vault for all of our credentials. But they still require a master password — a risk similar to using one set of credentials across platforms. So where do we go from here? Are password managers safe from compromise, or are we doomed to a future of continued password problems?

Passwords: Can’t Live With ‘Em…

It’s clear that a winning formula for password management and policy isn’t one-size-fits-all. Based on my years of experience drafting and enforcing corporate password policies, most tactics fail to catch on.

Two of the best-known experts in the field — Kevin Mitnick, chief hacking officer for KnowBe4, and security pundit Frank Abagnale, made famous in the film “Catch Me If You Can” — have slightly differing opinions. But at the end of the day, their views generally echo each other.

Abagnale once told CRN that passwords themselves are “the root of all evil.” More recently, he told SecurityIntelligence that passwords “are for treehouses.”

“Many of the security issues we see today stem from passwords,” Abagnale said. “This is a 1964 technology, developed when I was 16 and still being used in 2018 — and I’m 70 years old.”

…Can’t Live Without ‘Em

Mitnick and Abagnale foresee a world in which passwords are no longer part of the security equation. But until that happens, we need to work with them. Mitnick recommended implementing simple, but long passphrases of 25 characters or more, such as “I love it when my cat purrs me to sleep.” But this is only the first step.

“The 25-character password is for the initial login to the user workstation; then you should have another 25-character password for the password manager,” he said. “The user only has to remember two pass-sentences, and the manager will take those credentials.”
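
A quick back-of-the-envelope calculation shows why length beats complexity. The arithmetic below is standard entropy math, not something from the interviews.

```python
# Back-of-the-envelope entropy comparison: short complex password vs. long passphrase.
import math

def entropy_bits(symbol_count, length):
    """Bits of entropy for `length` symbols drawn uniformly from `symbol_count` options."""
    return length * math.log2(symbol_count)

# 8 characters drawn from all 94 printable ASCII symbols.
print(f"8-char complex password: {entropy_bits(94, 8):.0f} bits")    # ~52 bits

# An 8-word passphrase drawn from a 7,776-word Diceware-style list
# (roughly 40+ characters, but memorable as a sentence).
print(f"8-word passphrase:       {entropy_bits(7776, 8):.0f} bits")  # ~103 bits
```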

The next step for those responsible for creating and enforcing security policy is to decide how often users must change their passwords. Mitnick recommended at least every quarter, but that depends on the type of company and its risk tolerance. Government and financial institutions, for instance, may want to enforce changes every 60 days.

How to Master the Fine Art of Multifactor Authentication

Both experts advise businesses to incorporate multifactor authentication (MFA) in their login policies. MFA requires users to present at least two credentials to authenticate: something they know (like a password), something they have (like a token) and possibly something they are (like a fingerprint or facial scan).

“I believe that this is the best of both worlds, where the CISO sleeps better at night knowing there is nothing static in the login process, and users are elated to login without passwords,” Abagnale said.

“MFA should be used wherever possible for any type of external access like VPN, Outlook Web Access or Citrix,” Mitnick added. He also warned that if you’re going to use two-factor authentication (2FA), you should implement the Fast IDentity Online (FIDO) Alliance’s Universal Second Factor (U2F) protocol because it can prevent a type of attack in which a user’s session key can be stolen with a phishing email.

Are Password Managers Safe?

The use of password managers is where the experts disagree. While Abagnale is doubtful about their effectiveness, Mitnick believes password managers are necessary and helpful.

“It is still so important to choose a pass-sentence [for the password manager], and to the best of your ability don’t get malware on your machine,” Mitnick said. “If you get malware on your machine with keylogger ability, it won’t matter if you have a password manager or not.”

For Abagnale, password managers are a great way to mask the issue: addressing the password problem by storing passwords.

“Some of the password vaults have been breached already, which emphasizes my former point about why passwords are bad for our security,” he said. “I think that we should move beyond static passwords, and not succumb to password vaults as our solution. It makes me nervous to store all my passwords in one place, and protect that with…a password.”

Never Could Say Goodbye

Finally, both Mitnick and Abagnale are bullish on companies like Trusona, a forward-thinking security business that hopes to crack the code on a password-less internet by focusing on the user experience. Trusona offers a range of MFA processes that don’t require a password. Abagnale is an adviser for the firm.

“Passwords will be here for a while,” said Mitnick. “The challenge companies like Trusona have is early adoption. It’s all about the market. Even though you have a technology out there, it doesn’t matter if nobody’s adopting it.”

According to Abagnale, that day may come in three to five years.

“The technology is already here, and now needs to be implemented,” he said. “There is reason to think that passwords may remain in legacy systems for years to come, as the cost of ripping them out is too high. Nonetheless, password-less logins are the way of the future, and companies would adopt this method once they realize the benefits.”

But passwords aren’t going away anytime soon. We are seeing progress, however, toward a day when authentication is much more secure. Until then, we are stuck with them, and the enterprise must do all it can not only to move the revolution forward, but to ensure that security awareness stays in harmony with password policy.

This post appeared first on Security Intelligence
Author: Mark Stone

Biometric Security, CISO, Connected Devices, Government, Immune System, Internet of Things (IoT), IoT Security, Mobile Security, Security Intelligence, Security Intelligence & Analytics, Smart Devices, Threat Intelligence, WannaCry

How Can We Make Smart Cities Even Smarter? Start With Security Intelligence

People from all corners of the globe are flocking to cities: Sixty-eight percent of the world’s population will live in urban areas by 2050, according to May 2018 projections from the United Nations.

Many of these urban areas will be smart cities, where citizens will interact directly with local governments through apps and other digital services. Top priorities for these cities are citizen engagement and the development of new services, from paying a water bill online and communicating with the mayor on social media to scheduling the use of a public facility.

But smart cities are much more than these services. They use the Internet of Things (IoT) to connect operational technology, such as smart meters, and employ artificial intelligence (AI) to make sense of all the data.

For smart cities to be truly smart, they must incorporate a solid cybersecurity posture that both protects against attacks and mitigates their impact.

Why Traditional Defenses Won’t Work in Smart Cities

Cities are complex organisms. They have residents, commerce, schools, hospitals, public works and more. Smart cities integrate related activities in a digital dance that connects, automates and optimizes daily operations. The challenge is to protect data privacy, reduce data loss and maintain safe boundaries around technology when access is opened up to proprietary systems through all these new interconnections.

But despite heightened awareness about cybersecurity, smart cities often fail to update security measures to meet the daily barrage of new threats.

The creativity and persistence of bad actors is not to be underestimated. Traditional perimeter defenses, which are designed to control traffic in and out of data centers, don’t work when information moves directly to the cloud in public spaces. Firewalls don’t work when apps are used remotely or on mobile devices. And anti-virus software? It can barely keep up. Similarly, layered perimeter defenses are not enough to protect against today’s most advanced threats.

A good example is the malware Mirai, which turned networked devices running Linux into bots used for large-scale distributed denial-of-service (DDoS) network attacks. While these attacks primarily targeted consumer devices, Mirai can also render a city’s surveillance cameras (or other perimeter IoT devices) highly vulnerable if they are not protected.

Business email and social media accounts are also potential entry points for threat actors. As cities expand their online presence and encourage employees to use social channels, the risk that users will neglect privacy settings — or that fraudsters will use their personal data to launch phishing schemes — increases.

Another issue is that government workplaces are notoriously slow to update software and technologies. The infamous WannaCry ransomware campaign took advantage of this tendency, targeting computers using out-of-date Microsoft Windows operating systems. The malware encrypted data and demanded ransom payments while installing backdoors onto infected systems.

Applying the Immune System Approach to Smart City Security

There may not be a singular solution to protect smart cities, but there are proven methods and technologies that can help prevent, detect and respond to sophisticated cybersecurity threats.

The use of biometric security in enterprise environments for authentication can help organizations validate the identity of users accessing sensitive data and systems. It’s also crucial to regularly update investigative tools and take the proper steps to follow GDPR requirements — otherwise, companies risk incurring heavy fines for noncompliance.

Perhaps most critically, IT professionals must educate all stakeholders about cyber risk and hold them accountable for good security hygiene. These activities must be consistent and involve stakeholders from all departments throughout the organization. They should also be orchestrated as part of a comprehensive security immune system that has security analytics and intelligent orchestration at its core and integrates capabilities to provide multiple layers of defense.

Think of it like the human body: When a specific organ is under attack, word of the threat makes its way to the body’s central nervous system, which then sends antibodies to gather information about the issue, prioritize response actions and execute them to cure the ailment. The security immune system serves as a framework to help analysts identify which parts of the network are affected by an incident, quickly devise a remediation strategy and take definitive action to contain and eliminate the threat.

Let’s say, for example, that most of the security events related to a particular incident are coming from an endpoint. The immune system approach enables the team to understand the vulnerability and patch it immediately with the click of a button. If the incident is part of a wider attack, this strategy offers full visibility into the threat actors’ tactics and motives.

Cities Need Smarter Tools to Keep Up With Cybercrime

With an advanced security intelligence platform at the center of this immune system, organizations can block not only specific attacks, but also variations that might otherwise evade pure correlation. These tools can digest collections of events that are potentially connected to a specific threat, helping analysts more efficiently identify opportunistic attacks.
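
As a highly simplified illustration of what digesting related events can look like, the sketch below groups raw security events by source asset and raises one correlated incident when enough of them land within a short window. Real platforms apply far richer correlation logic; every name and threshold here is invented.

```python
# Toy event correlation: group events by source asset and raise one incident
# when enough related events occur within a time window. Thresholds invented.
from collections import defaultdict

WINDOW_SECONDS = 300
THRESHOLD = 3

def correlate(events):
    """events: list of (timestamp_seconds, source_asset, event_type)."""
    by_source = defaultdict(list)
    for ts, source, kind in sorted(events):
        by_source[source].append((ts, kind))

    incidents = []
    for source, items in by_source.items():
        start = 0
        for end in range(len(items)):
            # Shrink the window until it spans at most WINDOW_SECONDS.
            while items[end][0] - items[start][0] > WINDOW_SECONDS:
                start += 1
            if end - start + 1 >= THRESHOLD:
                incidents.append((source, [k for _, k in items[start:end + 1]]))
                break  # one incident per source is enough for this sketch
    return incidents

events = [
    (100, "camera-17", "failed_login"),
    (160, "camera-17", "failed_login"),
    (220, "camera-17", "new_admin_user"),
    (500, "kiosk-03", "failed_login"),
]
print(correlate(events))
# [('camera-17', ['failed_login', 'failed_login', 'new_admin_user'])]
```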

The next step is to support this platform with the expertise to understand and act on specific kill chains, vulnerabilities and threat intelligence. To set this team up for success, organizations should consider building a cognitive security operations center (SOC) that’s easy to implement and manage and capable of responding to advanced threats around the clock.

Smart cities will always push to be more interconnected, intelligent and instrumented. Unfortunately, criminals will continue to move their malicious activities from the real world to the cyber world as smart cities progress. To make these cities even smarter, governments need to implement security controls that are integrated and orchestrated to react immediately to any possible attack on the ever-widening perimeter.

This post appeared first on Security Intelligence
Author: Domenico Raguseo