Web Security


Firefox axes add-ons, developer pushes back

Mozilla has wiped 23 extensions from its directory of Firefox browser add-ons after finding what it says were inappropriate functions in the code.

This post appeared first on Naked Security Blog by Sophos
Author: Danny Bradbury


SSL is dead, long live TLS

An attack affectionately known as “POODLE” (Padding Oracle On Downgraded Legacy Encryption) should put a stake in the heart of SSL and move the world forward to TLS. There are two interesting vulnerabilities here: POODLE itself, and the SSL/TLS version fallback mechanism. Both are discussed in detail in the initial disclosure.

POODLE

POODLE is a chosen-plaintext attack similar in effect to BREACH: an adversary who can trigger requests from an end user can extract secrets from their sessions (in this case, encrypted cookie values). This happens because the padding on SSLv3 block ciphers (used to fill out a record to a full block size) is not verifiable: it isn’t covered by the message authentication code. This allows an adversary to alter the final block in ways that slowly leak information, based on whether each alteration survives verification, revealing *which* bytes are interesting. Thomas Pornin independently discovered this and published it at StackExchange.
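To make the padding weakness concrete, here is a minimal Python sketch (illustrative only; not real record-layer code, and the 8-byte block size is just an example):

```python
import os

BLOCK_SIZE = 8  # e.g., a 64-bit block cipher such as 3DES

def sslv3_pad(plaintext_and_mac: bytes) -> bytes:
    # SSLv3 pads with ARBITRARY bytes, then one final byte giving the
    # padding length. None of this padding is covered by the MAC.
    pad_len = (BLOCK_SIZE - (len(plaintext_and_mac) + 1) % BLOCK_SIZE) % BLOCK_SIZE
    return plaintext_and_mac + os.urandom(pad_len) + bytes([pad_len])

def sslv3_unpad(decrypted: bytes) -> bytes:
    # The receiver trusts only the final length byte; the pad bytes
    # themselves are never checked. An attacker who substitutes the
    # final ciphertext block learns, from whether this step (and the
    # subsequent MAC check) succeeds, information about the plaintext,
    # one byte at a time.
    pad_len = decrypted[-1]
    return decrypted[: len(decrypted) - 1 - pad_len]
```

Contrast TLS, where every padding byte must equal the padding length, so tampering is far more likely to be caught.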

On its own, POODLE merely makes certain cipher choices no longer trustworthy. Unfortunately, these were the last ciphers that were even moderately trustworthy – the other ciphers available in SSLv3 have fallen into untrustworthiness due to insufficient key size (RC2, DES, Export ciphers); cryptanalytic attacks (RC4); or a lack of browser support (RC2, SEED, Camellia). The POODLE attack removes the remaining two (3DES and AES) from that list (it applies to SEED and Camellia as well, so we can’t advocate falling back to those either).

One simple answer is for all systems to stop using these cipher suites, effectively deprecating SSLv3. Unfortunately, it isn’t that easy – there are both clients and servers on the Internet that still don’t support the TLS protocols or cipher suites. To keep talking to these legacy systems, an entity may not be able to simply disable SSLv3; instead, it would like to speak SSLv3 with peers that support nothing better, while ensuring it uses the best available TLS version with everyone else. And that’s where the next vulnerability lies.
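As a sketch of what “stop using SSLv3” looks like in practice, a Python service of that era could pin its TLS context like so (standard-library ssl module; the option flags are the OpenSSL-backed ones):

```python
import ssl

# Negotiate the best protocol version both peers support...
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
# ...but refuse to ever settle on SSLv2 or SSLv3.
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
```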

SSL/TLS Version Selection Fallback

We’ve probably all encountered – either in real life or in fiction – two strangers attempting to find a common language in which to communicate. Each one proposes a language, hoping for a response, and, failing that, moves on to the next. Historically, SSL/TLS protocol version selection behaved the same way: a client would offer the best protocol it could, but if it hit an error – even one as simple as dropped packets – it would try again with the next-best version. And then the next best, until it reached a very weak one.
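In pseudocode terms, the insecure dance looks something like this (an illustrative Python sketch, not a real TLS stack; `handshake` is a hypothetical callable that raises on failure):

```python
# Any handshake failure, even a dropped packet, sends the client
# down to the next-best protocol version.
PREFERENCE = ["TLSv1.2", "TLSv1.1", "TLSv1.0", "SSLv3"]

def connect_with_fallback(handshake):
    last_error = None
    for version in PREFERENCE:
        try:
            return handshake(version)
        except ConnectionError as err:
            last_error = err  # try again with a worse version
    raise last_error
```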

This is a problem if there’s an adversary in the middle who doesn’t want you picking that “best” language, but would much prefer that you pick something they can break (and since we now know that all of the ciphers available in SSLv3 are breakable, merely getting down to SSLv3 is sufficient). All the adversary has to do is block every negotiation until the client and server drop down to SSLv3.

There is a quick fix: simply disable SSLv3. If an adversary then succeeds at forcing a downgrade, the connection will fail – the server will think it’s talking to a legacy client and refuse the connection. But that only solves the short-term problem of POODLE, because there are other reasons an adversary might want to trigger a protocol version downgrade today (e.g., to eliminate TLS extensions) or in the future (when TLS 1.0 ciphers are all broken). A longer-term fix is the TLS Signaling Cipher Suite Value (SCSV). This is a new “cipher suite” that encodes the best protocol version the client would have liked to use. Servers that support SCSV don’t treat it as a cipher to choose from (what cipher suites normally list); instead, if the version it signals is *worse* than the best protocol version the server supports, the server treats the connection as one that has been attacked, and fails it. A client only sends an SCSV value if it has already been forced to downgrade; it’s a way of signaling “I tried to connect with a better protocol than I think you support; if you did support it, then somebody is messing with us.”
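A server-side sketch of that check follows. The TLS_FALLBACK_SCSV value (0x5600, later standardized in RFC 7507) is real; the rest is a simplified stand-in for a real handshake implementation:

```python
TLS_FALLBACK_SCSV = 0x5600
RANK = {"SSLv3": 0, "TLSv1.0": 1, "TLSv1.1": 2, "TLSv1.2": 3}

def screen_client_hello(client_version, offered_cipher_suites,
                        server_best="TLSv1.2"):
    if TLS_FALLBACK_SCSV in offered_cipher_suites:
        # The client says it already downgraded at least once. If we
        # support something better than it is now offering, somebody
        # in the middle forced the downgrade: fail the connection.
        if RANK[client_version] < RANK[server_best]:
            raise ConnectionError("inappropriate_fallback")
    # ...otherwise proceed with normal cipher-suite selection.
```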

So POODLE should put a stake most of the way through SSL’s heart, and SCSV will help us keep it there. Long live TLS.


Shellshock Update

The Shellshock vulnerability, originally announced as one critical issue in bash that allowed an adversary to execute arbitrary code, has grown from one vulnerability to six in the last week. For background on Shellshock, we’ve collected an overview and list of the vulnerabilities; for some history on Akamai’s initial responses, read our original blog post.

Shellshock raised a lot of questions among our customers, peers, auditors, and prospects. This post addresses some of the most frequently asked questions and provides an update on how Akamai is handling its operations during this industry-wide event.

Are Akamai production servers vulnerable?  What is the status of Akamai mitigation?
Akamai’s HTTP and HTTPS edge servers were never vulnerable to any of the six currently known CVEs, including the original Shellshock vulnerability. Our SSH services (including NetStorage) were vulnerable post-authentication, but we quickly converted those to use alternate shells. With very few exceptions, Akamai did not use bash in processing end-user requests. We did use bash in other applications that support our operations and customers, such as our report-generation tools. We immediately switched shells on all applications that had operated via bash, and we are deploying a new version of bash that disables function exporting.
Akamai’s Director of Adversarial Resilience, Eric Kobrin, released a patch for bash that disables the Shellshock-vulnerable function-export feature. His code has incorporated additional upstream patches as they became available, meaning that if you enable function importing using his code, the same behaviors and protections available from the HEAD of the bash git tree are also available. His patch is available for public review, use, and critique.
We do not believe at this time that there is any customer or end user exposure on Akamai systems as a result of Shellshock.
What about Akamai’s internal and non-production systems?
Akamai has a prioritized list of critical systems, integrated across production, testing, staging, and enterprise environments. Every identified critical system has had one or more of the following steps applied:
  • Verified that the system/application was not using bash (if it was, we disabled the vulnerable feature in bash or switched shells);
  • Tested that the disabled feature/new shell operated seamlessly with the application (if not, we repeated the process with alternate shells);
  • Accepted upstream patches for all software/applications where available (an ongoing process, as vendors provide updates to their patches); and
  • Reviewed/audited system/application performance to update non-administrative access and disable non-critical functions.
Can we detect if someone has attempted to exploit Shellshock? Has Akamai been attacked?
Because the Shellshock vulnerability is a remote code execution vulnerability at the command shell, many different exploits are possible. Customers behind our Web Application Firewall (WAF) can enable our new custom rules to prevent exploits using legacy CGI systems and other application-level exploits. These WAF rules protect against exploits of four of the six current vulnerabilities – all of those that apply to our customers’ layer-seven applications.
However, because Shellshock was likely present in bash for decades, we do not expect to be able to find definitive evidence, or lack thereof, of exploits.
There have been news reports indicating that Akamai was a target of a recent Shellshock-related botnet attack (see information about WopBot). Akamai did observe DDoS commands being sent to an IRC-controlled botnet to attack us, although the scale of the attack was insufficient to trigger an incident or a need for remediation. Akamai was not compromised, nor were its customers inconvenienced. We receive numerous attacks on a daily basis with little or no impact to our customers or the services we provide.
Akamai’s Cloud Security Research team has published an analysis of some attack traffic that Akamai has seen across its customers for Shellshock. As the authors note in that article, the payloads being delivered using the Shellshock vulnerability have been incredibly creative, with Akamai’s researchers seeing more than 20,000 unique payloads. This creativity, coupled with the ease of exploiting the Shellshock vulnerability, is one of the many reasons that Akamai is keeping a close eye on all of the associated CVEs, continuing to update its systems, and developing better protections for its customers, including custom WAF rules.
Where can I find updates on Akamai’s WAF rules?
Information about our WAF rules can be found on our security site.
How will Akamai communicate updates?
We will maintain this blog with interesting news from Akamai. 
As the list of CVEs and implications of ShellShock expand, we do our best to only deliver verified information, sacrificing frequency of updates for accuracy.
Akamai is maintaining additional materials for the public on its security site, including a running tally of the bash-related vulnerabilities.
If you have questions that aren’t addressed by one of these vehicles, please feel free to contact your account team.

Environment Bashing

[UPDATE: 9/25/2014 11:30AM]

Akamai is aware that the fix to CVE-2014-6271 did not completely address the critical vulnerability in the Bourne Again Shell (bash). This deficiency is documented in CVE-2014-7169. The new vulnerability presents an unusually complex threat landscape as it is an industry-wide risk.

Akamai systems and internal Akamai control systems have been or are being urgently patched or otherwise mitigated in prioritized order of criticality.

Akamai has developed an emergency patch for today’s vulnerability which makes function forwarding conditional on the compile-time switch “FUNCTION_EXPORT”. We’re using it for systems that can’t be switched to the Almquist Shell. In the hope that it’s useful to others, and for public review, we’ve posted that patch at https://bit.ly/ShellShockPatch.

_____________

 

About CVE-2014-6271
Today, CVE-2014-6271 was made public, following its discovery last week by Stephane Chazelas. This vulnerability in bash allows an adversary who can pass commands to bash to execute arbitrary code. As bash is a common shell for evaluating and executing commands from other programs, this vulnerability may affect many applications that evaluate user input and call other applications via a shell. This is not an Akamai-specific issue; it impacts any system that uses a vulnerable bash.
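The classic test for the vulnerability is a one-liner; here is the same check driven from Python (assumes bash lives at /bin/bash):

```python
import subprocess

# Export an environment variable that looks like a bash function
# definition with trailing commands. A vulnerable bash executes the
# trailing command while importing the "function"; a patched bash
# does not.
env = {"x": "() { :;}; echo VULNERABLE"}
out = subprocess.run(
    ["/bin/bash", "-c", "echo test"],
    env=env, capture_output=True, text=True,
).stdout
print(out)  # vulnerable bash: "VULNERABLE\ntest"; fixed bash: "test"
```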
Akamai has validated the existence of the vulnerability in bash, and confirmed that it has been present in bash for an extended period of time. We have also verified that this vulnerability is exposed in ssh, but only to authenticated sessions. Web applications like CGI scripts may be vulnerable based on a number of factors, including calling other applications through a shell or evaluating sections of code through a shell.
There are several functional mitigations for this vulnerability: upgrading to a new version of bash, replacing bash with an alternate shell, limiting access to vulnerable services, or filtering inputs to vulnerable services.  Akamai has created a WAF rule to filter this exploit; see “For Web Applications” below for details.
Customer Mitigations
Systems under our customers’ control which might be impacted include not only vulnerable web applications, but also servers which expose bash in various ways.  System owners should apply an updated bash with a fix for this vulnerability as expeditiously as possible.  In the meantime, there are several workarounds that may assist.
For SSH servers:  Evaluate the users who have access to critical systems, removing non-administrative users until the systems are patched.
For Web Applications: CGI functionality which makes calls to a shell can be disabled entirely as a short-term measure; alternatively, WAF mitigations can be deployed. The following Akamai WAF protections will assist customers who have purchased the Kona Web Application Firewall or Kona Site Defender services:
  • All customers of Akamai’s web acceleration services (Ion, Dynamic Site Acceleration, EdgeSuite, Object Delivery, and similar) are protected against some attacks using this vulnerability by Akamai’s HTTP normalization.  Attack text in Host headers, HTTP version numbers, and similar will be silently discarded by the Akamai Platform.  
  • Use of the “Site Shield” feature can further enhance the HTTP normalization protection by removing attackers’ ability to connect directly to a customer web server.
  • Customers who use the new Kona Rule Set are protected against some attacks using this vulnerability if they enable the Command Injection risk scoring group.  This scoring group is being extended today to provide more comprehensive coverage by incorporating specific rule triggers for this vulnerability.  
  • We also have a custom WAF rule which provides a targeted defense against this specific vulnerability. This WAF rule operates by filtering on the four-byte attack string ‘() {‘ in the request header or body (see the sketch after this list). It can be implemented to cover both headers and query strings (for maximum safety), or headers only (in the event that query strings generate too many false positives). The custom rule is available to customers by contacting their account team. It will be available for self-service today.
  • This custom WAF rule will shortly be available for self-service provisioning for non-KRS customers.
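As a rough illustration of the kind of filtering the custom rule performs (this is not Akamai’s rule engine or syntax, just the shape of the check in Python):

```python
ATTACK_MARKER = "() {"

def flags_shellshock(headers: dict, query_string: str = "",
                     check_query: bool = True) -> bool:
    # Flag any header value carrying the four-byte Shellshock marker;
    # optionally (for maximum safety) check the query string as well.
    if any(ATTACK_MARKER in value for value in headers.values()):
        return True
    return check_query and ATTACK_MARKER in query_string
```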
Akamai Mitigations
Public-facing Akamai systems and internal Akamai control systems have been or are being urgently patched or otherwise mitigated in prioritized order of criticality. 
For many of our critical systems, Akamai first switched away from using bash to another shell, to protect those systems.  We did not make a universal switch, as the alternate shell was not completely feature-compatible with bash (not all of our systems would continue to operate with this change).
Other systems which could tolerate the downtime were disabled pending an updated bash. For Akamai web applications which couldn’t take an alternate shell and couldn’t be disabled, we blocked many Akamai-owned CGI scripts at the Akamai edge until we developed and deployed the WAF rules above.
 
Do you have any evidence of system compromises?
 
No. And unfortunately, this isn’t “No, we have evidence that there were no compromises”; rather, it’s “we don’t have evidence that spans the lifetime of this vulnerability.” We doubt many people do – and this leaves system owners in the uncomfortable position of not knowing what, if any, compromises might have happened.


OpenSSL vulnerability (CVE-2014-0224)

The OpenSSL Project today disclosed new vulnerabilities in the widely used OpenSSL library. These are vulnerabilities that can potentially impact OpenSSL clients and servers worldwide.
The most interesting is the ChangeCipherSpec injection, which would enable a man-in-the-middle attacker to force weakened keying material into a communication stream.
Akamai SSL services (both Secure Content Delivery and Secure Object Delivery) have been patched for this vulnerability. The other vulnerabilities are relatively uninteresting for our environment – we don’t support DTLS, and we don’t enable SSL_MODE_RELEASE_BUFFERS.

The Brittleness of the SSL/TLS Certificate System

Despite the time and inconvenience Heartbleed caused the industry, its impact does provide some impetus for examining the underlying certificate hierarchy. (As a historical example, in the wake of CA certificate mis-issuances, the industry looked at one set of flaws: how any one of the many trusted CAs can issue certificates for any site, even if the owner of that site hasn’t requested them; that link is also a quick primer on the certificate hierarchy.)
Three years later, one outcome of the uncertainty around Heartbleed – that any certificate on an OpenSSL server *might* have been compromised – is the mass revocation of thousands of otherwise valid certificates.  But, as Adam Langley has pointed out, the revocation process hasn’t really worked well for years, and it isn’t about to start working any better now.
Revocation is Hard
The core of the problem is that revocation wasn’t designed for an epochal event like this; it has never really had the scalability to deal with more than a small number of actively revoked certificates. The original revocation model was organized around each CA publishing a certificate revocation list (CRL): the list of all non-expired certificates the CA would like to revoke. In theory, a user’s browser should download the CRL before trusting the certificate presented to it, and check that the presented certificate isn’t on the list. In practice, most don’t, partly because HTTPS isn’t really a standalone protocol: it is the HTTP protocol tunneled over the TLS protocol. The signaling between these two protocols is limited, so the revocation check must happen inside the TLS startup, making it a performance challenge for the web as a browser waits for a CA response before continuing to communicate with a web server.
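To see why this model scales so poorly, consider a sketch of what a client-side CRL check entails (Python, using the third-party `cryptography` package; transport details simplified):

```python
import urllib.request
from cryptography import x509

def is_revoked(cert: x509.Certificate, crl_url: str) -> bool:
    # Note the cost: the ENTIRE revocation list is fetched to answer
    # a question about one certificate.
    crl_der = urllib.request.urlopen(crl_url).read()
    crl = x509.load_der_x509_crl(crl_der)
    entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    return entry is not None
```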
CRLs are a problem not only for the browser, which has to pull the entire CRL when it visits a website, but also for the CA, which has to deliver the entire CRL when a user visits one site.  This led to the development of the online certificate status protocol (OCSP).  OCSP allows a browser to ask a CA “Is this specific cert still good?” and get an answer “That certificate is still good (and you may cache this message for 60 minutes).”  Unfortunately, while OCSP is a huge step forward from CRLs, it still leaves in place the need to not only trust *all* of the possible CAs, but also make a real-time call to one during the initial HTTPS connection.  As Adam notes, the closest thing we have in the near term to operationally “revocable” certs might be OCSP-Must-Staple, in which the OCSP response (signed by the CA) is actually sent to the browser from the HTTPS server alongside the server’s certificate.
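For comparison, here is a sketch of building the OCSP question itself (again using the third-party `cryptography` package; sending the request to the CA’s responder and parsing the signed response are omitted):

```python
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

def build_ocsp_request(cert: x509.Certificate,
                       issuer: x509.Certificate) -> bytes:
    # Ask "is this specific cert still good?" -- a DER-encoded request
    # to be POSTed to the CA's OCSP responder.
    builder = ocsp.OCSPRequestBuilder().add_certificate(
        cert, issuer, hashes.SHA1()  # SHA-1 here names the cert-ID hash
    )
    return builder.build().public_bytes(serialization.Encoding.DER)
```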
One Possible Future
A different option entirely might be to move to DANE (DNS-based Authentication of Named Entities). In DANE, an enterprise places into its DNS zone file a record which specifies the exact certificate (or set of certificates, or CA which can issue certificates) valid for a given hostname. This record is then signed with DNSSEC, and a client would then trust only that specific certificate for that hostname. (This is similar to, but slightly more scalable than, Google’s certificate-pinning initiative.)
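As an illustrative sketch, matching a presented certificate against a DANE TLSA record of the form “3 0 1 &lt;hex&gt;” (domain-issued certificate, full cert, SHA-256) reduces to a hash comparison; DNSSEC validation of the record itself is assumed to have already happened:

```python
import hashlib

def tlsa_3_0_1_matches(presented_cert_der: bytes,
                       tlsa_hex_digest: str) -> bool:
    # Compare the SHA-256 of the certificate the server actually
    # presented against the digest published in the signed TLSA record.
    return hashlib.sha256(presented_cert_der).hexdigest() == tlsa_hex_digest.lower()
```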
DANE puts more trust into the DNSSEC hierarchy, but removes all trust from the CA hierarchy.  That might be the right tradeoff.  Either way, the current system doesn’t work and, as Heartbleed has made evident, doesn’t meet the web’s current or future needs.
(Footnote: No discussion here of Certificate Transparency or HSTS, both of which are somewhat orthogonal to this problem.)
This entry crossposted at www.csoandy.com.