04 April 2012

The Hidden Dangers of Short URLs

Services such as bit.ly and tinyurl, which shorten long website addresses, have grown rapidly in popularity over the past few years, spurred on dramatically by the rise of Twitter and other social media sites.
Short URLs are now so common that most users will click on http://bit.ly/ugh or http://tinyurl.com/1c2 just as quickly as http://www.google.com.  Since it is not readily apparent where any of these less conventional URLs will send the user, they should all be viewed with some degree of skepticism. 
Any of the short URLs could point to a website serving spyware or other malware just as easily as they could point to Google.  Fortunately, users don’t have to “roll the dice” and click the link to determine whether a link shortened by bit.ly or tinyurl is safe. 
Both bit.ly and tinyurl provide mechanisms for expanding and validating links.  To expand and view statistics for URLs shortened with bit.ly, simply add a '+' to the end of the URL as in:  http://bit.ly/ugh+.  To do the same for URLs shortened with tinyurl, just add the word ‘preview’ at the beginning of the link, as in:  http://preview.tinyurl.com/1c2.
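If you prefer the command line, you can also see where a shortened link points without visiting it by issuing a HEAD request and reading the Location header.  A minimal sketch using curl (both services answer with an HTTP 301 redirect):

curl -sI http://bit.ly/ugh | grep -i '^location:'
curl -sI http://tinyurl.com/1c2 | grep -i '^location:'

The Location header reveals the destination without your browser ever loading it.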
On a related topic, most users would probably grow suspicious if presented with URLs such as http://0x4a.0x7d.0x41.0x63 or http://1249722723 because these URLs “look malicious”—or at least unconventional. 
Both of these URLs are perfectly valid (and safe) and demonstrate two different obfuscation techniques: the first writes each octet of an IP address in hexadecimal, while the second writes the entire 32-bit address as a single decimal number.  These techniques are commonly used by malware authors to obscure the destination IP addresses of command and control nodes associated with their software.  Additionally, URL obfuscation techniques can sometimes be used to bypass security controls such as web proxy servers or, when combined with cleverly formatted HTML, to trick users into visiting malicious websites in much the same manner as URLs shortened by bit.ly or tinyurl.
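You can decode these forms yourself.  A quick sketch in shell that converts the decimal form from the example above back to dotted-quad notation:

ip=1249722723
echo "$(( (ip >> 24) & 255 )).$(( (ip >> 16) & 255 )).$(( (ip >> 8) & 255 )).$(( ip & 255 ))"

This prints 74.125.65.99, the same address the hexadecimal form spells out octet by octet (0x4a = 74, 0x7d = 125, 0x41 = 65, 0x63 = 99).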

So, bottom line, think (and verify!) before you click! 

30 January 2012

Real World WAF Detection and Bypass Made Easy

As more companies fall victim to hacks based on SQL Injection and the regulatory environment becomes more stringent, more and more of them are implementing network-based Web Application Firewalls (WAFs).  Shares of one Web Application Firewall maker, Imperva (NASDAQ: IMPV), are up about 40% since its November 2011 initial public offering, so the clear expectation is that this trend will continue.  For end-users, deployment of WAF technology can be a serious mitigating control that enhances the security of their data.  For penetration testers, whose job it is to evaluate the effectiveness of security controls, the presence of a WAF can be a serious hindrance to productivity.  So, as a penetration tester, how can you detect the presence of a network-based web application firewall, and how can you bypass it?
The best way to detect the presence of a WAF is to understand what threats the WAF is trying to protect against and how it will behave when it detects a threat.  Regulatory and audit frameworks almost always focus on protecting applications against the OWASP Top 10, so that's a great place to start.  That includes some fairly easy things to test, like Cross Site Scripting (XSS) and SQL Injection.  Some WAFs also, by virtue of their default policies, try to protect against e-mail collector robots, internet worms, content-gathering "leeches" and all sorts of other things.  Typically, when a threat is detected, the WAF will respond by returning a standard error message of some sort along with an HTTP response code of 200 (OK).  Since the WAF answers "OK" even when it blocks a request, the response code alone is not useful for determining whether a WAF is present; this is by design, to foil automated scanners.
So, if the response code is always 200, how do you create an automated utility to detect the WAF?  Submit multiple requests and track how the web server behaves under various conditions.  Start by making a request that you know, with high probability, will succeed.  This is typically a "GET /" using the fully qualified domain name of the host you are testing, and should (hopefully) result in an HTTP response of 200.  Make note of the HTTP response code along with the content length.  Next, submit various requests that you would expect to fail if a WAF were present, and again note the HTTP response codes and content lengths.  Then submit some requests that you would expect to generate 404 or other return codes if a WAF were not present, and capture the same information.  By comparing the HTTP response codes and content lengths returned by the various tests with the base case, you should get a good indication of whether or not a WAF is present.
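Here is a minimal sketch of this comparison approach using curl (the target URL and probe payloads are illustrative; a real tool would use many more test cases):

#!/bin/bash
# Compare HTTP response codes and content lengths against a known-good baseline.
TARGET="https://www.example.com"

probe() {   # $1 = label, $2 = path
  local code size
  code=$(curl -k -s -o /tmp/waf_body -w '%{http_code}' "$TARGET$2")
  size=$(wc -c < /tmp/waf_body)
  echo "$1: HTTP $code, $size bytes"
}

probe "Baseline (GET /)" "/"
probe "XSS probe" "/?q=%3Cscript%3Ealert(1)%3C/script%3E"
probe "SQLi probe" "/?id=1%27%20OR%20%271%27=%271"
probe "Missing page" "/no-such-page-12345"
# A WAF typically answers the probes with HTTP 200 and a canned error page;
# differences in response code or size versus the baseline suggest its presence.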
Fortunately, there are a couple of utilities that make the identification process easy.  An excellent utility called waffit helps you identify the type of device present; it is available in BackTrack or at http://code.google.com/p/waffit.  If you find that you have an Imperva WAF, you can use the utility imperva-detect, available at http://code.google.com/p/imperva-detect/, which runs a baseline test plus five additional tests against a user-specified website to give an indication of the likelihood that an Imperva WAF is present.  Both tools operate on the principles described above.  Waffit has the advantage of being more comprehensive in terms of devices supported; imperva-detect is fast (generally 2-3 seconds per host) and can be used to quickly validate coverage of a large environment that you know contains Imperva WAFs.
# ./imperva-detect.sh https://www.example.com

--- Testing [https://www.example.com] for presence of application firewall ---

Test 0 - Good User Agent...
  -- HTTP Return Code = 200
  -- Content Size Downloaded = 385
Test 1 - Web Leech User Agent...
  -- Size of content inconsistent versus Test 0 - application firewall possibly present
  -- Details:  Test 0 Size = 385 Size Recvd = 764
Test 2 - E-mail Collector Robot User Agent Blocking...
  -- Size of content inconsistent versus Test 0 - application firewall possibly present
  -- Details:  Test 0 Size = 385 Size Recvd = 764
Test 3 - BlueCoat Proxy Manipulation Blocking...
  -- HTTP Return Code = 200 -- expected 404 -- application firewall possibly present
Test 4 - Web Worm Blocking...
  -- HTTP Return Code = 200 & downloaded content size is the same -- application firewall not detected
Test 5 - XSS Blocking...
  -- HTTP Return Code = 200 -- while checking XSS blocking

--- Tests Finished on [https://www.example.com] -- 4 out of 5 tests indicate Imperva application firewall present ---

So, now that you know (or suspect) a WAF is present, how do you bypass it? 
One solution would be simply to look for an easier target.  You can perform an nmap scan of the target network looking for other IPs with services running on attractive ports (like 80 and 443) and then verify, using waffit and/or imperva-detect, which of those services have an application firewall protecting them.  An unprotected IP represents a soft target that you may flag for additional exploitation. 
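For example, a quick sweep with nmap followed by a WAF check against each discovered service (the address range and extra ports are illustrative):

nmap -p 80,443,8080,8443 --open -oG web-hosts.txt 192.0.2.0/24
./imperva-detect.sh https://192.0.2.15

Any web service that shows no sign of a WAF is a candidate for further testing.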
Another solution makes use of the ciphers supported by the web server sitting behind the WAF.  The network-based WAF typically acts as a "man in the middle" positioned between the browser and the SSL session termination point.  The WAF observes the key negotiation and, using the private keys stored within the WAF, decrypts and inspects the SSL traffic before passing the still-encrypted packets downstream to the web server or other SSL termination point.
Ephemeral mode Diffie-Hellman key agreement was designed to provide perfect forward secrecy: the session keys are derived from ephemeral values whose private components never leave the endpoints, so possession of the server's long-term private key is not enough to recover them.  As a result, if the server or device performing the actual SSL session termination negotiates an ephemeral mode Diffie-Hellman key agreement, any device acting as a passive "man in the middle" will be unable to decrypt the traffic, because it cannot usefully observe the key negotiation process.  Hence, if you can force the use of a cipher algorithm that uses an ephemeral mode Diffie-Hellman protocol for key agreement, your traffic should pass through the application firewall without being inspected.
A full list of SSL and TLS ciphers along with their OpenSSL equivalents can be found at http://www.openssl.org/docs/apps/ciphers.html.  Depending on the naming convention, the names of cipher suites supporting ephemeral mode Diffie-Hellman contain either "EDH" or "DHE" (for "Ephemeral Diffie-Hellman" and "Diffie-Hellman Ephemeral", respectively).  You can run the check_ciphers.sh script included in the imperva-detect project to see exactly which ciphers your targeted server supports.
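You can also test by hand: offer the server only ephemeral Diffie-Hellman suites and see whether the handshake completes.  A minimal sketch using the openssl command-line tool (hostname illustrative; on older OpenSSL versions, 'EDH' alone may be needed):

openssl s_client -connect www.example.com:443 -cipher 'EDH:DHE' < /dev/null

If the handshake succeeds, the server will negotiate an ephemeral suite and a "man in the middle" WAF cannot passively decrypt the session.  (Running openssl ciphers 'EDH:DHE' lists the matching suites your local OpenSSL can offer.)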
Other methods of bypass certainly exist, but they require a great deal more manual effort than those just described.  Use the device type detected by waffit to search for bypass vulnerabilities specific to that device.
Look for a weak protection profile.  Since the WAF must be integrated with the application to function properly, there is always a chance that certain parameters or URLs are not included in the list of items the WAF has "learned" or been programmed to enforce.  Manually exploring the limits of the WAF's protection profile is time-consuming, but may yield results depending on the skill of the security administrators who manage the device.
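One way to speed up the exploration is to probe a list of candidate parameters with a suspicious-looking marker value and watch for differences in how each is handled.  A minimal sketch (the URL, parameter names, and payload are illustrative):

for p in id user q search debug callback; do
  resp=$(curl -k -s -o /dev/null -w '%{http_code} %{size_download}' "https://www.example.com/page?$p=%27--probe")
  echo "$p: $resp"
done

Parameters the WAF enforces will typically come back with the canned block page; a parameter that returns the normal application response may fall outside the learned profile.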
Use social engineering.  Since security administrators must review blocked transactions and adjust the rules accordingly, entering a few seemingly innocent requests that get blocked may result in the loosening of a rule within the WAF profile.  Follow up by asking the application support personnel to request a loosening of the input validation rules on your behalf because, after all, a "valuable customer is being terribly inconvenienced" by all this ridiculous security.  This may take a few days to succeed, but the skillfulness with which your request is made and your ability to convince support personnel of the burdensome nature of the controls may make the difference.
Exploit organizational communication problems.  If you have time or can schedule the engagement in advance, wait for an SSL certificate to expire, or try to find one that just became valid.  Since the SSL certificate must also be loaded on the WAF, there is sometimes a lag during which traffic passes uninspected.  Depending on the organizational alignment and internal processes, this gap can be significant and may extend to several days or more of exposure.
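A quick way to check a certificate's validity window from the outside (hostname illustrative):

echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -dates

A notBefore date in the very recent past suggests the certificate was just replaced and the WAF may not yet hold the new private key.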
Cause the organization to drop their defenses.  Availability is easy to measure while confidentiality is more difficult.  Since customers demand availability, organizations often prioritize availability above confidentiality.    WAF devices are expensive and the task they perform is, by its nature, computationally expensive.  For this reason, WAF resources tend to be oversubscribed in most environments and may represent a point of attack, especially when presented with computationally intensive tasks. 
Network defenders:  Here is how you stop this from happening. 
1.  Use nmap from an external IP to locate all the web servers within your IP range.
2.  Use the waffit or imperva-detect tools to verify coverage of your environment.
3.  If you find gaps, resolve them promptly.
4.  Use the check_ciphers tool included in the imperva-detect project to make certain no EDH or DHE ciphers are supported in your environment.
5.  Make sure that your protection profiles are complete and accurate for the applications you are protecting.
6.  Review any newly learned URLs to make certain they are protected as soon as possible.
7.  Think carefully before modifying a protection profile.  It is better to block a few legitimate transactions than to open the door to SQLi or XSS.
8.  Listen to complaints from users and from support staff, but see item 7 above.  Be prepared with statistics to defend maintaining strict input validation controls.
9.  Integrate yourself into the certificate management / replacement process.  Ideally, you should have the new SSL certificate in place on the WAF before it is used to pass traffic.
10. Make certain your WAF resources are not unduly oversubscribed.  If you are inspecting HTTPS traffic, explore the use of SSL accelerator devices.
11. Make certain that SSL renegotiation is disabled in your environment, as it can be used to conduct a denial of service (a quick manual check is sketched below).
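For item 11: openssl s_client requests a client-initiated renegotiation when it reads an "R" on a line by itself, so you can test your own servers from the command line (hostname illustrative):

(sleep 2; echo R; sleep 2) | openssl s_client -connect www.example.com:443

Output showing "RENEGOTIATING" followed by a completed second handshake means renegotiation is still enabled; a handshake failure or dropped connection is what you want to see.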

24 January 2012

Data Destruction in the Age of Outsourcing

With more IT work being outsourced and more sensitive data being hosted by outside parties, maintaining positive control over sensitive data is a hot topic.  Positive control is required during all phases of the system lifecycle, including lifecycle events that require decommissioning of environments and destruction of data.  When choosing a hosting provider, it is important to understand all of the protections afforded to the sensitive data entrusted to the provider.  While this article is not intended to be a comprehensive guide or checklist with which to evaluate a hosting provider, I wanted to touch on one important data security topic that carries a lot of nuance in terminology and confuses many people, including security professionals:  data destruction.

For providers aligning to the Payment Card Industry Data Security Standard (PCI-DSS), the audit requirements are specific about the need to destroy media.  Requirement 9.10 states that media containing cardholder data must be destroyed when it is no longer needed for business or legal reasons.  Other types of audits have similar requirements for data destruction.  But how is data destruction accomplished?  And when is it acceptable to consider the data "destroyed"?  Many hosting providers, when faced with audit requirements such as PCI, will state that they align to US Department of Defense (DOD) guidance for data destruction.

Oh, really?  DOD guidance?  So what does that mean?  This is actually a complicated question, and one with more nuance than you might expect.  In many cases, customers and security professionals believe "compliance with DOD data destruction guidance" to imply overwriting of the data with either a 3-pass or 7-pass algorithm.  This is often attributed to Department of Defense publication 5220.22-M and represents a generally correct understanding of DOD procedures as they existed in the past.  As the threat has changed, so too have DOD procedures for data clearing and sanitization.

The National Industrial Security Program Operating Manual (DOD 5220.22-M, also called the "NISPOM") defines two different scenarios in paragraph 8-301 relating to clearing, sanitization and release of media.  Clearing is the process of eradicating data on media before reusing the media in an environment that provides an acceptable level of protection for the data that was on the media before clearing.  Sanitization is the process of removing data from media before reusing the media in an environment that does not provide an acceptable level of protection for the data that was on the media before sanitizing.  In other words, hard drives or other media staying at the same data classification level need only be cleared, but hard drives being reused at a lower classification level need to be sanitized.  To put this into a perspective that more closely resembles a typical business environment, media containing sensitive information that is leaving a secured, production data center floor would likely need to be sanitized, not cleared, since only a secured, production data center environment is likely to provide controls adequate to protect sensitive customer data.  (The NISPOM can be found at http://www.dss.mil/isp/fac_clear/download_nispom.html.)

In the DOD, the National Security Agency/Central Security Service (NSA/CSS) is the agency responsible for determining the procedures for clearing and sanitization.  Current NSA/CSS guidance on drive sanitization can be found in the NSA/CSS Storage Device Declassification Manual 9-12, located at http://www.nsa.gov/ia/_files/government/MDG/NSA_CSS_Storage_Device_Declassification_Manual.pdf.  Current guidance calls for sanitization to be accomplished with an approved automatic degausser, an approved wand-type degausser, or via incineration.  Hence, by virtue of the process required, a sanitized hard disk would not be usable at all, let alone in an environment that provided lower levels of protection than the data originally stored on the device required.  Again, to put this into the context of a typical business: if the business were to comply with current guidance, once a drive contains sensitive data it could likely never be reused outside of the secured data center environment, since only the secured data center likely provides the necessary controls.  For most businesses, complying with current DOD data clearing and sanitization guidelines as outlined in NSA/CSS Manual 9-12 would be a fairly simple, if somewhat costly, proposition, since media re-use would be quite limited once sensitive customer data is written to the media.

For most businesses, strict compliance with DOD guidelines may not be warranted.  When evaluating a hosting provider, it is better to look beyond claims of DOD compliance and focus instead on the processes used to control data within the hosting provider's environment, and to develop an understanding of the controls in place to enforce those processes.  Don't allow claims of compliance with DOD guidelines to derail your understanding of the processes and controls.  NIST Special Publication 800-88, Guidelines for Media Sanitization, available at http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf, provides both a decision-making process for determining when sanitization and clearing are appropriate and guidance on recommended technical methods.  When evaluating a hosting provider, examining data destruction in light of the NIST guidance is highly recommended, along with understanding and contractually defining your requirements around data handling and destruction.  Bottom line:  focus not on claims of compliance, but on making sure the controls are adequate given the value of the data you need to protect.