Monday, December 22, 2008

Cisco AnyConnect 2.x Local Privilege Escalation

I have been holding on to these vulnerabilities for several months now. Cisco's AnyConnect VPN client, which provides VPN connectivity to Cisco's ASA using SSL, suffers from a number of security vulnerabilities that result in local privilege escalation on the Linux and Macintosh platforms. Versions 2.3.0185 and newer reportedly have these issues remedied, but unfortunately I no longer have the time or the resources to validate the fixes.

Exploit code for the Linux and Macintosh platforms is available, however the code is in an unknown state. At one point I attempted to unify the code so that it would work regardless of where it was run, but I never got much further than the thought. Since I cannot validate the exploit code any further, it is being released "as is".

There are three vulnerabilities on both platforms and the exploit techniques are similar if not identical.

Vulnerability #1: /tmp/routechangesv4.bin is written mode 666 (world readable and writable) every time AnyConnect is launched. Because it is never executed directly by AnyConnect, we must use its world-writable goodness to achieve our goal. In this case, a simple symlink attack that creates a world-writable crontab for root, into which we inject our commands, is an easy approach.
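The symlink technique can be sketched with a harmless simulation. All paths here are stand-ins inside a scratch directory; the real attack points the link at root's crontab.

```python
import os
import tempfile

def demo_symlink_attack():
    work = tempfile.mkdtemp()                               # scratch area standing in for /tmp
    predictable = os.path.join(work, "routechangesv4.bin")  # fixed, predictable filename
    target = os.path.join(work, "crontab.root")             # stands in for root's crontab

    # Attacker: plant a symlink at the predictable path before AnyConnect runs
    os.symlink(target, predictable)

    # Victim: opens its "temporary" file without O_EXCL/O_NOFOLLOW and writes;
    # open() follows the symlink, so the data lands in the attacker-chosen target
    with open(predictable, "w") as f:
        f.write("* * * * * /bin/sh /tmp/evil.sh\n")

    with open(target) as f:
        return f.read()

print(demo_symlink_attack())
```

The same pattern applies to any world-writable file created at a predictable path: whoever controls the path before the victim opens it controls where the write lands.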

Vulnerability #2: The /tmp/Temp8-Vpn2e8 directory used as part of the Java applet is not checked for existence prior to use and, since there is no randomness in the name, the applet is easily tricked into utilizing our maliciously placed executables. When the Java applet runs, it creates and uses /tmp/Temp8-Vpn2e8 to drop vpndownloader.sh, which in turn (you guessed it!) downloads the remainder of the files needed to utilize the VPN. Exploitation is relatively straightforward -- create /tmp/Temp8-Vpn2e8 with permissions that we control, wait for vpndownloader.sh to be placed by the applet, and then swap it out with our payload before the applet executes it.
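Here is a simplified, sequential simulation of the swap; the real exploit has to win a race between the drop and the execution, and the paths below are scratch-directory stand-ins:

```python
import os
import tempfile

def demo_dropdir_swap():
    tmp = tempfile.mkdtemp()                      # scratch area standing in for /tmp
    drop_dir = os.path.join(tmp, "Temp8-Vpn2e8")  # predictable, non-random name

    # Attacker: pre-create the directory so we own it and control its contents
    os.mkdir(drop_dir, 0o755)
    script = os.path.join(drop_dir, "vpndownloader.sh")

    # Victim (the Java applet): drops its helper script into the existing directory
    with open(script, "w") as f:
        f.write("#!/bin/sh\n# fetch remaining VPN files...\n")

    # Attacker: owning the directory, replace the script before it is executed
    with open(script, "w") as f:
        f.write("#!/bin/sh\n/bin/sh /tmp/evil.sh\n")

    with open(script) as f:
        return f.read()

print(demo_dropdir_swap())
```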

Vulnerability #3: /tmp/vpn-uninstall.log is written mode 666, but I cannot recall whether this happens only at install, at uninstall, or at every launch. The exploit method is identical to the first vulnerability.

Thanks to Cisco's PSIRT for their help in getting these issues addressed.

Monday, July 14, 2008

Mitigating DNS cache poisoning with PF

By now, nearly everyone has heard about the latest round of DNS vulnerabilities, including non-technical people given the massive media coverage. If you have not heard of it, you probably wouldn't be reading this, however on the off chance that I am wrong, please read US-CERT VU#800113 and CVE-2008-1447 and then continue here. Most organizations are scrambling to find and apply fixes while some are still sitting idly debating the merits of the find.

There is nothing to debate. This is, in my opinion, an epic find. Because DNS is (in most cases) based on UDP (connectionless), the "security" is based on a handful of characteristics. Other flaws aside, a proper DNS client will only accept DNS responses that have the following characteristics:

  1. The response contains an answer to a question that the client asked
  2. The response came from a nameserver that the client originally initiated the request to
  3. The TXID of the response matches that of the request
  4. The destination port of the response matches the source port of the original request
Any attacker wishing to forge a DNS response in the hope of manipulating the behavior of a DNS client can fairly easily guess or otherwise obtain points one and two. Point three has been flawed in the past, but even with the best RNGs it is still just a 16-bit number, which gives only 65536 possible values. Point four is in theory also a 16-bit number with a few caveats, so it is only slightly less random than a true 16-bit space.
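As a quick sanity check on the arithmetic behind those last two points:

```python
txid_space = 2 ** 16            # point 3: 16-bit transaction ID
port_space = 2 ** 16            # point 4: source port (slightly less in practice)

print(txid_space)               # 65536 guesses if the source port is known or fixed
print(txid_space * port_space)  # 4294967296 (~4.3 billion) if both must be guessed
```

The difference between 65 thousand and 4.3 billion forged packets is exactly why a fixed source port matters.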

Dan's find, the full details of which are being withheld until BlackHat 2008 this August, nearly eliminates the source-port difficulty and likely also tackles, or at least reduces, the problem space of the TXID issue. I have to be honest here -- anyone who didn't notice this particular anomaly in the past should be embarrassed -- I know I am. If you were to view the state table of a firewall fronting a vulnerable DNS implementation, you'd see that the source port of all outbound DNS requests from that implementation stays fixed for an inordinately long time:

sis0 udp a.b.c.d:6489 -> 192.58.128.30:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 192.35.51.30:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 144.142.2.6:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 199.7.83.42:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 192.12.94.30:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 192.153.156.3:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 204.2.178.133:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 192.33.4.12:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 204.74.113.1:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 61.200.81.111:53       SINGLE:NO_TRAFFIC
sis0 udp a.b.c.d:6489 -> 207.252.96.3:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 163.192.1.10:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 192.31.80.30:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 216.239.38.10:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 216.239.34.10:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 64.233.167.9:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 66.249.93.9:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 64.233.161.9:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 206.132.100.105:53       SINGLE:NO_TRAFFIC

As you can see, the source port is identical for all outbound DNS requests. This port will change eventually, and honestly I am not sure of the exact details regarding its change, but I believe it is sufficient to say that it remains constant long enough for you to be concerned.

In addition to my work-related duties, I also have my own personal systems to worry about. My client systems were quickly fixed, however they remained at risk because of my setup -- all DNS requests from my network are forced through an internal DNS server that I control. I use OpenBSD almost exclusively for my routing, firewalling and other common "gateway" type functionality, and it just so happens that my DNS server runs OpenBSD too. So while my clients would be secure from this attack if they queried external nameservers directly, my internal DNS server put them at risk. I was surprised to find that OpenBSD was listed as "unknown" on the US-CERT announcement, and now, more than a week later, there is still no mention of the issue on any of the OpenBSD mailing lists or on the website.

Since my DNS server runs on a little, low-power device, the thought of having to rebuild something from source or perform other disk/CPU intensive operations makes me cringe. That's what got me thinking -- is there a way I can protect myself until a patch is available from my vendor and I have time to deal with it?

As it turns out, I believe there is. My interim solution is to use PF to provide some randomization to my outbound DNS requests.

If your DNS server sits behind a NATing PF device, the source port randomization is implicit -- see the section titled TRANSLATION in the pf.conf manpage. If your DNS server is also running on the PF device, as it is in my case, you are still vulnerable unless you configure things correctly.

The trick is to force requests coming from your DNS server, which have non-random source ports, to be processed by PF so that they too are subject to the implicit source port randomization. This is actually quite simple, and you may already be doing this without realizing it, depending on your setup. Assuming the outbound address of your DNS server (or PF device itself) is a.b.c.d, the following NAT entry in your pf.conf should be sufficient:

nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> a.b.c.d

This results in a dramatically different state table:

sis0 udp a.b.c.d:40783 -> a.b.c.d:59623 -> 192.55.83.30:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64916 -> 199.7.83.42:53       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:50591 -> 202.12.28.131:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:60017 -> 192.55.83.30:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:63472 -> 200.160.0.7:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:55603 -> 200.189.40.10:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:50749 -> 192.52.178.30:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64190 -> 192.83.166.11:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64346 -> 128.63.2.53:53       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:59970 -> 61.220.48.1:53       SINGLE:NO_TRAFFIC

As you can see, the original source port is still fixed in the internal table, however the source port that is used out there on the big bad Internet, where the risk lies, is random. To add even further randomness, if you have multiple addresses at your disposal, you can distribute outbound DNS requests across a pool of those addresses:

nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> { a.b.c.1, a.b.c.2, a.b.c.3, ... } round-robin
# or if you have a contiguous space available, use the 'random' modifier
nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> { somenetwork/mask } random

The astute reader will probably have already noticed that it is major overkill to include TCP DNS traffic in these countermeasures; however, I cannot think of any adverse side effects of doing so.

The proof is in the pudding. Utilizing the handy service setup by DNS-OARC, you can see that my setup appears "FAIR":

$  dig +short porttest.dns-oarc.net TXT 
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"a.b.c.d is FAIR: 14 queries in 10598.6 seconds from 14 ports with std dev 4249.80"

As usual, YMMV. If I am mistaken in any of my assumptions or suggestions, I would love to hear differently. Otherwise, enjoy.

Monday, July 7, 2008

Netflix Queue Randomizer

Security is where I spend the bulk of my time, however I have dabbled quite extensively in other areas. So it is no surprise when every once in a while I do something that is not security related. This is one of those times.

Netflix. Love it or hate it, but me not having a TV makes Netflix a lifesaver because I can get my dose of movies while multitasking (current movie: Golden Child). The problem I continually find is that my queue directly reflects the particular mood I was in when I was adding to it. The result is that the movies in my queue have a flow to them that is not always appealing. For example, I watched a string of westerns last week, but only because a week or two prior I watched Tombstone and then went and selected movies that Netflix said I'd like based on my 4-star rating of Tombstone.

Several months ago I wrote to Netflix asking them for the ability to randomize my queue. Sure, I can do this by hand, but with 50 movies in my queue it gets old after the first time. Since I figured this would be useful to others, it did not seem unreasonable to ask Netflix to implement it. Furthermore, the queue manipulation machinery is already there and used by the queue manager, so it makes me wonder why it was never implemented. Understandably, they probably have other things on their plate. An API, perhaps?

The result is the following Greasemonkey abomination:

// ==UserScript==
// @name           Netflix Queue Randomizer
// @namespace      http://www.netflix.com/Queue
// @include        http://www.netflix.com/Queue
// ==/UserScript==

function moveToTop(id, from) {
 GM_xmlhttpRequest({
   method: 'GET',
   url: 'http://www.netflix.com/QueueReorder?movieid=' + id + '&from=' + from + '&pos=1&sz=0&mt=true&ftype=DD',
   headers: { 'X-Requested-With': 'XMLHttpRequest' }
 });
}

function randomizeQueue() {
  var inputs = document.getElementsByTagName('input');
  var idRe = /^OP(\d{8})$/;
  var curIds = new Array();
  var curPoss = new Array();
  var newOrder = new Array();

  // find all of the movies in the queue and store their ID and current
  // position in the queue
  for (var i=0; i < inputs.length; i++) {
    if (inputs[i].type == 'hidden' && inputs[i].name.match(idRe)) {
      var matches = idRe.exec(inputs[i].name);
      curIds.push(matches[1]);
      curPoss.push(inputs[i].value);
    }
  }

  // create a new array that is the same size as the movie queue and
  // shuffle it.  It has to be done this way because QueueReorder does
  // not support moving to arbitrary positions, only the top.
  for (var i=0; i < curIds.length; i++) {
    newOrder[i] = i;
  }

  // Fisher-Yates shuffle.  A random sort comparator (the old approach)
  // yields a biased, non-uniform ordering, so instead swap each element
  // with a randomly chosen earlier one.
  for (var i = newOrder.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = newOrder[i];
    newOrder[i] = newOrder[j];
    newOrder[j] = tmp;
  }

  // now map the current movies and their positions to their new homes
  for (var i=0; i < newOrder.length; i++) {
    moveToTop(curIds[newOrder[i]], curPoss[newOrder[i]])
  }

  window.location.reload();
}

// Create a new div that contains the 'Randomize Queue' link, and then prove
// that I have nearly zero HTML foo by being unable to place the link to the
// left of the bottom 'Update DVD Queue' button
var qFooter = document.getElementById('updateQueue2');
var rDiv = document.createElement('div');
rDiv.setAttribute('class', 'd1_qt');

var rLink = document.createElement('a');
rLink.addEventListener('click', randomizeQueue, false);

var rText = document.createTextNode("Randomize Queue");
rLink.appendChild(rText);
rDiv.appendChild(rLink);

qFooter.parentNode.insertBefore(rDiv, qFooter);


Obviously for any of this to work you'll need to have Greasemonkey installed, which requires Firefox, an understanding of how to load this (Tools -> Greasemonkey -> New User Script), and a Netflix account.

Enjoy?

Saturday, July 5, 2008

Craigslist Posting Security -- Adequate

If you have not bought or sold something on Craigslist, or at minimum browsed your particular region's Craigslist section, you truly have not experienced the best that the Internet has to offer. I use Craigslist probably half a dozen times per year for legitimate reasons -- to sell something I want to make a buck on or simply cannot bring myself to throw away, or perhaps I need to buy something particularly exotic or maybe something I'm looking to get on the cheap. The remainder of the time I'm cruising Craigslist purely for entertainment purposes. The "Best of Craigslist" and "Free" sections consume the bulk of my time.

Thursday was one of those days where I was posting an item for sale on Craigslist. I received the email that contains a link to publish, edit or delete my post, and at that moment my subconscious tazed me and told me there was something of interest in that link. It was not too unlike other links I have received in the past from sites that require me to verify that I do, in fact, own a particular email address: it contains some seemingly random garbage either as part of the URI or as part of the query string. This "random garbage" is generally an MD5 checksum or similar mechanism that ensures the link cannot be easily guessed and allows all involved parties to sleep comfortably knowing that posts cannot be tampered with by anyone other than authorized parties. Poor ways of implementing this include anything that bases the MD5 on data that can be easily guessed or otherwise obtained. Obviously, if the system in question simply MD5'd the poster's email address and posting title, a little trickery would get an attacker access to the management of that particular post.
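To make that "poor implementation" concrete, here is a hypothetical sketch (names and inputs are made up) contrasting a token derived from guessable inputs with one drawn from a CSPRNG:

```python
import hashlib
import secrets

def weak_token(email, title):
    # Flawed: derived entirely from guessable inputs, so anyone who knows
    # (or guesses) them can recompute the management-URL token exactly
    return hashlib.md5((email + "|" + title).encode()).hexdigest()

# The attacker reproduces the token without ever seeing the victim's email
victim = weak_token("seller@example.com", "Free couch")
attacker = weak_token("seller@example.com", "Free couch")
print(victim == attacker)   # True

# Sound: the token comes from a CSPRNG and is stored server-side,
# so there is nothing for an attacker to recompute
sound = secrets.token_urlsafe(16)
```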

When I received the email the other day, I parsed through the past ~3 years or so of Craigslist posting emails and quickly noticed a pattern. All of the management links are of the form https://post.craigslist.org/manage/[8 digits]/[5 lower case letters or numbers]. I legitimately thought I was on to something. A few bogus posts later (which subsequently got flagged -- thanks, Craigslist overlords!) I was wondering: could it really be this easy?

As it turns out, no. It is no simple task to defeat Craigslist posting security. The first 8 digits in the path are easily obtained. In fact, they simply correspond to the posting ID which is freely available from any posting. This brings up two interesting points:

  1. This provides no security, and in reality probably was not chosen for security reasons
  2. Craigslist cannot handle more than 10^8-1 (99,999,999) posts in any one posting window, which is typically 7 days. This presents a curious DoS condition that is probably entirely impractical, however is interesting to consider.
This brings us to the last 5 characters of the URI. Another quick analysis of my posts shows that they are always 5 characters and only ever contain a mixture of numbers and lower-case letters. The mathematicians in the house have already busted out the answer on their pocket calculators, but for those not so inclined, that means there are (26+10)^5 possible values for this field -- 26 lowercase letters plus 10 digits across 5 positions, which works out to just over 60 million possibilities (60,466,176, to be exact).
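Checking the arithmetic:

```python
keyspace = (26 + 10) ** 5   # 26 lowercase letters + 10 digits, 5 positions
print(keyspace)             # 60466176 -- just over 60 million
```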

If those 5 characters were based on something that could be easily guessed or obtained, there would be cause for concern, however no correlation was determined between the 5 characters and the following characteristics:

  • Poster's email address
  • Posting title
  • Date/time
  • Post ID
This leads me to believe that it is a randomly generated string of some sort that serves as an index into a database of posts. Anyone who has ever had to develop, enforce or audit a password policy knows that a 5-character password, regardless of content, is prone to failure. In this particular case, however, is it adequate?

In my opinion, yes. Given the nature of how Craigslist posts are managed -- HTTPS -- and the relatively limited time window in which the management URLs can be accessed (7 days for most posts, 30 for a limited few), the chances of someone brute-forcing these seemingly simple 5 characters are virtually zero. Since these require HTTPS requests, even if you can pull off 1 per second, it will still take you nearly 2 years to sweep the space ((26+10)^5 / 60 / 60 / 24 ≈ 700 days). By the time you guess it, the post will have expired or been deleted, and on the off chance that you get lucky and it still exists, you will almost certainly have tripped up something on Craigslist's side and Craig Newmark himself will be on his way to your house to slap you around.
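The back-of-the-envelope estimate above, worked out:

```python
keyspace = 36 ** 5                      # 60,466,176 possible 5-character suffixes
rate = 1                                # optimistic: one HTTPS request per second
days = keyspace / rate / 60 / 60 / 24
print(round(days))                      # 700 -- roughly two years to try them all
```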

Tuesday, July 1, 2008

Update your bookmarks/feeds -- to Blogger

In case you have not noticed, things have changed a bit around here.

When it comes to things that can kill me, make me money and/or affect my reputation in one way or another, I'm a firm believer in "If you want the job done right, do it yourself." Those who know me know that there are very, very few individuals I trust with my online presence and fortunately I have never had to call upon them to lend a hand.

How does this relate to Blogger, you ask? Well, for years I had not trusted any third party to host my blog and host it right, however just recently I took a new look at Blogger and they definitely appear to run a tight ship. Whether it's the Google strings or not, I have nearly zero problems with their operations.

Most everything has been migrated permanently with the exception of a few random posts from many years ago, however those will come in time. I have a great pile of mod_rewrite foo going on right now to move people, crawlers and the like over to the new site, however it does not appear that RSS readers are going to play along. They'll happily follow the redirect, but the source for the feed remains the same.

So... update your bookmarks, else come this time next week the old blog will be gone and the redirects will be replaced with feeds to something much less pleasant. I'll happily take suggestions of convincing items to put in the old RSS feed to coerce people into moving over.

Thanks, and enjoy!

Monday, June 30, 2008

Defeating Private Domain Registration

The concept of private domain registration has probably been around longer than I think it has, but if I had to guess its rise in popularity coincides approximately with the rise in identity theft, spam and other Internet-related annoyances. Actually, you can just ask one of the largest providers of private domain registration, domainsbyproxy.com:
$  whois domainsbyproxy.com |egrep " on( |:)"
    Created on: 15-Jan-02
    Expires on: 15-Jan-17
    Last Updated on: 05-Sep-07
The idea, for those who are not familiar with private domain registration, is that you allow a third party to register or take ownership of a domain name that you are interested in, and then they give you some level of control over it. The benefit here is that the various bits of contact information typically associated with WHOIS data -- registrant, technical, billing, administrative -- are now that of the third party who now owns that domain on your behalf. Had you not used a private registration of one form or another, you'd either have to leave your "docs" at the mercy of the general public, or simply put in some bogus data and hope nobody notices. Which is worse? You be the judge.

The risks of not masking your WHOIS data in one way or another are fairly common knowledge, but to summarize, the following will likely become a problem for you at one point or another:

  • Identity theft
  • Domain theft or abuse
  • Stalkers
  • SPAM
  • Verne Troyer getting mad at you because you posted his sex tape
Nearly every registrar, no matter how seemingly small or insignificant, is offering private registration as an option lately. Just Google it and you'll see. I have some good news and some bad news, and it just so happens that it is the same bit of news. So whether this is good or bad really depends on what side you are on.

Private domain registration works.

Of course, like many things in security, the devil is in the details and things usually get tripped up in implementation rather than in specification. If you simply want to register a domain and possibly put up some witty content also hosted by the private registrar, then you'll probably be safe. However, in nearly all situations that I know of or have heard about, private domain registration is used because the owner of the domain wants to take full advantage of the domain instead of cowering in a corner. Yes, that means talking shit and/or making a buck.

What this means is that at some point you are going to start offering or utilizing some services that are oh so vital, typically HTTP, SMTP or DNS. You gotta post content, you gotta send/receive email and without a DNS server somewhere to handle all of this for you, you'd be dead in the water.

There are secure and proper ways to utilize private domain registration, offer common services like HTTP, SMTP and DNS and still not leave your goods out there for everyone to ogle. Unfortunately, this means you are probably going to have to limit yourself to the services offered by your registrar. The result is that the services will either be expensive, featureless or both. You'll get some bastardized webmail system with limited functionality and a WYSIWYG HTML editor for your site, maybe PHP-Nuke if you are lucky.

And this is where things go wrong and the point of my post begins. You've got a domain private registered somewhere but decide to actually use it. You stand up an HTTP and SMTP server somewhere and point DNS accordingly and before you know it your efforts to stay private have taken a giant leap back towards square one.

What used to be a relatively private setup is now becoming increasingly public. The three services that most any domain needs to survive -- HTTP, SMTP and DNS -- are now the soft spot in your otherwise secure underbelly.

Off the top of my head, the following are some potential points of disclosure for each of the above services. This list is by no means comprehensive, and excludes the usual gamut of security best practices for each service. Furthermore, organizations continually find new and rofl-able ways of screwing this up. That said:

HTTP:

  • Putting sensitive information in your "contact page"
  • Hosting your web content on a site whose forward or reverse DNS somehow links back to you
  • Improperly handling non-HTTP/1.1 requests and disclosing private information such as the server name
SMTP:
  • Disclosing your true(r) identity in an SMTP greeting
  • Sending information-filled NDRs for bounced or otherwise undeliverable email
DNS:
  • Disclosing your anti-spam efforts by way of SPF TXT records (hint: `host -t txt domain.com`)
  • Making some DNS server that can be tied back to you (forward or reverse DNS, WHOIS) authoritative for one or more records in your domain
  • Publishing a DNS record that resolves by way of forward or reverse DNS to something that can be tied to you.

I honestly find the DNS-related issues the most relevant here, and if you don't have access to something like dnsstuff, your local DNS friends host(1) and nslookup(1) can start getting you some dirt. Of particular interest is anything that has intelligence and/or brute-forcing capabilities built in. Fierce is arguably the best tool for this particular task.

I have seen some organizations that do seemingly everything right, however most of them inevitably have a dirty history that can be had for $39.99 by any number of organizations that offer historical WHOIS or DNS data. If their dirty laundry has been aired in the past, it can be had for a nominal fee. In most situations where this sort of research is warranted, the cost of getting a membership to such a service (if you don't already have one ;)) is insignificant compared to the cost of winning whatever battle it is you happen to be in.

It should be mentioned that there are a number of privately registered domains that do seemingly everything correctly; however, those domains are backed by some combination of well-paid and extremely technically savvy staff, a crack legal team and SEO zealots.

Well played, sir.

Friday, May 23, 2008

Temporary files -- yer doin it wrong

The number of security vulnerabilities I've discovered over the years that started from casually observing how a particular system operates is non-trivial. I don't recall where I was reading this or what the exact wording was, but it boiled down to the fact that some of the best hacker minds are those that act upon the thoughts that start with "I wonder what happens if I ..". They say that curiosity killed the cat. What about the hacker? I have this great picture of myself when I was probably 6 or 7. I had a dozen D cell batteries taped together in series, hooked up to a small DC motor I had ripped out of one of my toys. My desk or the family workbench had met its maker. And then there was the time I cut the power cord off the back of my broken alarm clock, split the leads and taped them to a dead 9V battery. A poor man's recharger, right? Wrong. A convenient way to splatter battery acid and toxic fumes all over my room.

I have this nervous habit that every time I open a terminal or change directories, I type ls. Besides an overly large bash/zsh history file, this has actually led me to stumble upon a number of temporary files, directories and other things that an application may litter in a directory as part of its normal operations. Right now, list the contents of /tmp. Aside from random files you stashed there for lack of space elsewhere, you'll almost certainly see files that were dropped there by applications that have run recently on your system.

If you have any sort of security background, you can see where this is going. The problem is that these applications don't always handle all situations carefully when it comes to temporary files. What if the file already exists? Symlinks? What if the directory is owned by another user, but is world writable? What if the filename is predictable? These are the breeding grounds for race conditions, symlink attacks and other related security vulnerabilities.
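For contrast, the safe pattern (shown here in modern Python as one illustration) is to let the OS create a uniquely named file atomically, with restrictive permissions, rather than open()ing a predictable path:

```python
import os
import stat
import tempfile

# Unsafe pattern: predictable name, follows symlinks, no exclusive create
#   open("/tmp/myapp.tmp", "w")

# Safe pattern: O_CREAT|O_EXCL under the hood, unpredictable name, mode 0600
fd, path = tempfile.mkstemp(prefix="myapp.")
try:
    os.write(fd, b"scratch data\n")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))   # 0o600 -- owner read/write only
finally:
    os.close(fd)
    os.unlink(path)
```

The exclusive-create flag is what defeats both the pre-existing-file and symlink cases: if anything already sits at that path, the open fails instead of following it.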

The result is tmpsnarl, a quick little script designed to look for and capture temporary files, directories, sockets, symlinks and the like in the hopes of being able to exploit the above mentioned vulnerabilities. I've used this tool to re-discover some of my past vulnerabilities, as well as find a few 0day race conditions that I was unaware of. I now instinctively run tmpsnarl on all systems I have shells on and the results are amusing. Give it a spin, and shoot any feedback or discovered vulnerabilities back my way.

Temporary files -- yer doin it wrong.

Wednesday, April 9, 2008

Why SSL?

Over the years I've been involved in a number of projects where SSL was needed to help secure communications between endpoints. Without fail, every time I was rolling out a certificate authority or installing an X.509 certificate on a particular node I was met with some level of resistance. Encountering someone who was pleased with this effort is truly a rare experience.

A lot of resistance comes from the fact that too many people don't like to do work or do the job right the first time. I'll admit -- a PKI requires work and is not easy. I'd even go as far as to say that it is a painful experience. Well, so is getting your authentication credentials or your identity stolen.

This all begs the question, "Why SSL?". I'm glad you asked.

By using SSL, a system is able to provide:

  • Authentication
  • Encryption

Authentication allows you, the client, to verify that the server you are speaking to is, in fact, the server you expect to be speaking with. In more complex setups, it also allows you, the client, to prove your identity to the server.

Encryption allows the two endpoints in the conversation to ensure that the data transferred cannot be tampered with and that it cannot be read by unauthorized eyes.
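In modern Python (one concrete illustration; this post predates today's TLS-everywhere defaults), both properties come out of the box with a default client context:

```python
import ssl

ctx = ssl.create_default_context()

# Authentication: the server's certificate chain and hostname are verified
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# Encryption: any socket wrapped with this context negotiates an encrypted
# TLS session before application data flows, e.g.:
#   with socket.create_connection(("example.com", 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#           ...
```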

Unfortunately, these two points alone are typically not enough to justify SSL to some people. So, practically speaking, where are the risks? As I see it, the risks are almost entirely network related.

For the sake of discussion, we'll ignore the fact that the two endpoints in a conversation could be compromised and that, SSL or not, the conversation would be exposed along with them. We'll also ignore that the way in which the encryption system was designed or implemented could be flawed, thereby exposing any supposedly encrypted communications.

A compromise of any of the following could result in unencrypted communications between two endpoints being exposed.

En-route Network Equipment

I am sure there is a survey somewhere that measures or attempts to approximate the average number of network hops a person's traffic passes through before reaching its destination, but I'd say 10 is a good guess. We've all heard the phrase "cut out the middle man", but when it comes to your personal data the saying could never be more true. When I connect to my bank, my traffic hops through at least 11 network devices that I do not control, all of which are owned and operated by at least 3 major, multi-million/billion dollar providers:

$  traceroute -q 1 -n -A  -w 1 nomansbank.com
traceroute to nomansbank.com (a.b.c.d), 30 hops max, 40 byte packets
1  192.168.0.1 [AS28513]  1.455 ms
2  208.127.144.1 [AS19817]  25.108 ms
3  66.51.203.33 [AS19817]  26.738 ms
4  63.209.70.133 [AS3356]  27.904 ms
5  4.68.102.30 [AS3356]  39.692 ms
6  4.69.137.33 [AS3356]  31.824 ms
7  4.69.132.82 [AS3356]  104.852 ms
8  4.69.134.186 [AS3356]  98.615 ms
9  4.68.17.6 [AS3356]  100.826 ms
10  4.68.63.2 [AS3356]  102.223 ms
11  216.140.9.9 [AS3356]  105.364 ms
12  65.88.122.190 [AS3356]  110.851 ms
13  *

As much as I'd like to be able to say that none of these hosts or providers could ever be compromised, they can be compromised and have been in the past. All it takes is one and your bank data, juicy IM conversations or scathing gmail messages are exposed.

At the same time, the world of security is not always about the bad guys out to get your data. In this day and age, it is safe to assume that every bit of data you transmit from your computer is monitored in some way. This monitoring could be fairly innocent, in the form of performance monitoring or debugging by a network provider along the path of your packets. Or it could be a true "Big Brother" situation where a government is monitoring, logging and analyzing all traffic. You would be sorely mistaken to trust that data captured during such monitoring will never be misused.

Just as you should fully trust that if you hurl stacks of money into the streets, people will do whatever it takes to get that money, you should also assume that when you send personal data unencrypted over the Internet, people will make every effort to intercept it. That is, of course, if they don't already have it.

DNS

The first thing that happens when you attempt to connect to a system by name is that your DNS resolver must do a forward DNS lookup to resolve that name to an internet protocol (IP) address. At that point, a connection is established to the IP address identified in the previous step, and eventually data is transferred back and forth and life is good.
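That forward lookup is a one-liner in most languages. A minimal Ruby sketch using the stdlib resolver ("localhost" is used here only so the example works without network access; a bank's hostname resolves exactly the same way):

```ruby
require 'resolv'

# Step one of any connection by name: resolve it to an IP address.
# Resolv consults /etc/hosts and then DNS, just like the system resolver.
ip = Resolv.getaddress("localhost")
puts ip
```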

Or not.

Since DNS is (generally) UDP based, forging responses is fairly easy. All you have to know is the source port of the original request, and with darn near any programming language you could forge enough fake traffic to eventually accomplish your goal. Fortunately, the designers of DNS included a 16-bit transaction ID in order to allow resolvers to match responses with requests. While not cryptographically secure, these 16 bits can make brute force approaches a real bear.

Or not.

Unfortunately, far too many DNS resolvers generate transaction IDs that are not random at all, but predictable.

The situation is totally shot to hell if an attacker has the ability to intercept traffic between a DNS client and server. At that point, a transaction ID -- cryptographically secure or otherwise -- does not matter. Intercept the original request, forge a response back to the client in the attacker's favor before the legitimate DNS server responds, and the sky is the limit.
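Building such a forged response takes nothing but the Ruby stdlib. A sketch, using the hostname and address from the examples in this post (the transaction ID here is just a value an attacker would guess or sniff off the wire):

```ruby
require 'resolv'

# A forged answer only has to match the victim's source port and this
# 16-bit transaction ID -- 65536 possibilities at most.
txid = 0x1337

fake = Resolv::DNS::Message.new(txid)
fake.qr = 1                        # mark it as a response
fake.ra = 1                        # "recursion available", looks legit
fake.add_question("nomansbank.com", Resolv::DNS::Resource::IN::A)
fake.add_answer("nomansbank.com", 60,
                Resolv::DNS::Resource::IN::A.new("1.2.3.4"))

wire = fake.encode                 # the bytes raced back at the client
puts wire.bytesize
```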

Dug Song's venerable dnsspoof (part of dsniff) has been exploiting this "flaw" for nearly 10 years now. I wrote an improved knock-off version a while back called dns_spoof that picks up where Dug's tool left off. On a system that has the ability to intercept (legitimately or otherwise) DNS requests from your target:

dns_spoof -p "A . A 1.2.3.4"

This will intercept all requests that are A record requests for any name and will return an A record of 1.2.3.4. On the victim system which now attempts to connect to their bank:

$  telnet nomansbank.com 80
Trying 1.2.3.4...

Now, if you are in control of 1.2.3.4, you are free to intercept and molest the traffic that the victim believes to be going to nomansbank.com.

ARP

In the previous section, I said that one of the first steps in a connection to a host by name is for DNS to resolve that name. This is true, but a bit misleading. The true first step is for the DNS client to arrange for its DNS request to be delivered to the DNS server. In the case where the DNS client and server are on the same broadcast domain, a simple ARP (address resolution protocol) request will return the layer 2 address corresponding to the layer 3 address of the requested name server. If the DNS client and server do not have direct layer 2 connectivity, the DNS request must be properly routed; in this case, the ARP request is made by the DNS client for the appropriate next hop router.

If you've done any serious network security work, or this article is tickling the right brain cells, hopefully you already see where this is going. Just as DNS requests can be intercepted and molested, so can ARP requests, except in the case of ARP the situation is much worse. Or better, depending on whose side you are on. If an attacker can forge an ARP response that maps the requested layer 3 address to a layer 2 address that they control before the legitimate system(s) respond, then attacks similar to those described above can continue.
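For the curious, a forged ARP reply is only 28 bytes. The following Ruby sketch just builds the payload bytes (actually injecting them onto the wire requires root and a raw packet library); all of the addresses are made up for illustration:

```ruby
# Byte-for-byte layout of a forged ARP reply: hardware type, protocol
# type, address lengths, opcode, then the sender and target pairs.
def arp_reply(sender_mac, sender_ip, target_mac, target_ip)
  mac = ->(m) { m.split(":").map { |b| b.to_i(16) }.pack("C6") }
  ip  = ->(i) { i.split(".").map(&:to_i).pack("C4") }
  [1, 0x0800, 6, 4, 2].pack("nnCCn") +  # Ethernet, IPv4, opcode 2 = reply
    mac[sender_mac] + ip[sender_ip] +   # the lie: "this IP is at this MAC"
    mac[target_mac] + ip[target_ip]
end

# Claim the gateway's IP with an attacker-controlled MAC address.
pkt = arp_reply("de:ad:be:ef:00:01", "192.168.0.1",
                "00:11:22:33:44:55", "192.168.0.100")
puts pkt.bytesize
```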

So. Again. Why SSL?

Take the super paranoid approach and always assume that the attacks on your unencrypted data are always under way. Protect your communications with SSL and the risks are largely eliminated. This is what other articles or security practitioners will preach. Unfortunately, it is only partially true.

The title of this entry should really be "Why proper SSL?". SSL is nearly useless if the PKI upon which it is built is not properly implemented. I'd actually argue that SSL deployed as part of a sloppy PKI implementation is less secure than no SSL at all. Why, you ask? If you train your users not to send sensitive data over unencrypted channels, they become lulled into a false sense of security whenever they encounter a connection secured with SSL. They inherently trust SSL connections and send sensitive data over them, with little basis on which to judge whether that SSL connection is actually secure aside from the little lock in their browser or the fact that the "S" in "SSL" stands for "Secure".

The power and security of a PKI comes largely from the trusted third party that is the CA (certificate authority). The lack of a CA or lack of a properly implemented CA is exactly what can jack up what would otherwise be a secure and properly implemented PKI.

A PKI that is implemented without a CA isn't really a PKI at all. This results in what are typically called "self-signed certificates". This is the security equivalent of a kid getting a report card and, instead of bringing it home to have his parents sign it, simply putting his own signature on it. As I said before, much of the security in a PKI comes from the CA, but how is that? When an X.509 certificate is issued, it is issued by the CA on behalf of the particular participant in the PKI that wants to be uniquely and securely identified. In plain terms, the certificate issued by the CA says that anyone who can present this certificate is in fact who they claim to be. The catch is that merely possessing the certificate is not enough. You must also possess the private key that was used when requesting the certificate in the first place.
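To make the report card analogy concrete, here is a self-signed certificate in a few lines of Ruby. The CN is the fictional bank from the examples above; note that the issuer and the subject are one and the same:

```ruby
require 'openssl'

# A "self-signed" certificate: issuer == subject, and the signature is
# made with the certificate's own key. Nobody vouches for it but itself.
key  = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version     = 2                     # X.509v3
cert.serial      = 1
cert.subject     = OpenSSL::X509::Name.parse("/CN=nomansbank.com")
cert.issuer      = cert.subject          # signing our own report card
cert.public_key  = key.public_key
cert.not_before  = Time.now
cert.not_after   = Time.now + 365 * 24 * 3600
cert.sign(key, OpenSSL::Digest.new("SHA256"))

puts cert.verify(cert.public_key)        # it vouches only for itself
```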

OK, so great. You've got an SSL certificate that was given to you by a CA. All is well, right? Sadly, no. The problem all comes back to that little word we call trust. The lack of a trusted CA comes in two forms.

The simplest is that if the CA is not trusted, there is no telling how their ship is run. They could be issuing certificates to any old bloke that wants one without actually validating his identity. They could be issuing certificates that lack even the most basic X.509 attributes, which can tarnish any PKI built upon them.

The worst thing that can happen as a result of an improperly implemented PKI is not a technological problem, but a people problem. We've all seen a browser warning that a certificate is signed by a CA we don't trust, or that the host we are connecting to does not match the hostname the certificate was issued for. The result is that users get into the habit of blatantly clicking "OK" when their browser warns them of this security issue.

Oh, it gets worse. Recall all those attacks that use DNS and ARP tricks to intercept unencrypted data? Once your users get into the habit of blindly clicking through browser warnings, those same attacks become perfectly valid ways of intercepting data encrypted using an insecure PKI. ARP spoof or forge DNS responses to get a victim to connect to a host you control, and the fact that you throw up an invalid or otherwise insecure X.509 certificate is not a problem, because these users will blindly click through most any warning message. Then their data is yours.
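On the client side, the programmatic equivalent of not clicking "OK" is to demand proper verification or refuse to talk at all. A minimal Ruby sketch (the hostname is the fictional bank from the examples above; no connection is actually made here):

```ruby
require 'net/http'
require 'openssl'

# Require the certificate to chain to a trusted CA and match the
# hostname -- there is no "click through anyway" in this code path.
http = Net::HTTP.new("nomansbank.com", 443)
http.use_ssl     = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER   # never VERIFY_NONE
# An http.get("/") would now raise OpenSSL::SSL::SSLError if the
# certificate is self-signed, expired, or for the wrong host.
puts http.verify_mode
```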

Game. Over.

Sunday, March 2, 2008

Racket -- Ruby Raw Packet Fun

This is one of those projects that I've been sitting on for a good 6+ months. Only over the last 2-3 have things really started to come together. I am happy to release Racket, a Ruby gem designed for crafting and analyzing raw packets.

Towards the end of the initial development of Racket, I caught wind of Scruby because that is what Metasploit 3 is using for much (most?) of its raw packet duties. In the TMTOWTDI spirit, I kept up development and actually think that Racket's purpose is a bit different than that of Scruby.

Installation is fairly simple:

gem install --source http://spoofed.org/files/racket racket

Documentation and examples are published but need some touching up. Among some of the more amusing/useful examples are:

  • cdp-spew: exactly what it sounds like. Creates and floods the network with random Cisco Discovery Protocol (CDP) packets
  • hsrp_takeover: passively listens for and actively performs "takeovers" for all discovered Hot Standby Router Protocol (HSRP) instances
  • tcp2udp: Listens for any tcp traffic and turns the packet back around, sending it back at the source as a UDP datagram. No point

Racket requires that you have Joel VanderWerf's BitStruct and Marshall Beddoe's PcapRub installed.

Enjoy! Comments or suggestions are welcomed.

Tuesday, January 29, 2008

Things that keep me up at night..

I've tried to cut back on the amount of personal stuff that I publish here, and limit it just to professional topics. So, no more discussions about politics, what I ate for breakfast or drunken debauchery.

This, however, rides the fine line between personal and professional. The following things make me lose sleep:

  1. Everyone's favorite layer 2 protocol, ethernet, has the destination MAC address before the source MAC address. I've heard and speculated that this is for speed. Instead of having to jump ahead another 6 bytes, small devices that have limited processing power can read in just 6 bytes and be able to make forwarding decisions accordingly. Why, though, does this practice only exist at this layer? Sure, the "source then destination" thing is burned into our brain, but what performance gains could be obtained had, say, destination IP addresses been placed earlier in IPv4?
  2. Why are UDP and TCP checksums so crazily different from, say, IP or ICMP checksums? A checksum exists so that protocols that implement it can be sanity checked -- essentially, has this layer and the stuff it is responsible for been damaged in transit? IPv4, not caring about the payload, only checksums its header values. It is up to higher level protocols to check themselves before they wreck themselves. ICMP takes a slightly more thorough approach and checksums its header values and payload. UDP and TCP, however, take a grossly different approach -- in addition to computing a checksum over their respective header values and payloads, these two also include the source and destination IP addresses in the checksum. WTF? A protocol should be able to compute its own checksum by utilizing whatever information is available to it: its header values and whatever is in its payload, if anything. Data lower in the stack is simply not available in any sort of simplistic fashion, yet TCP and UDP decide to be special.
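To make the second gripe concrete, here is a minimal Ruby sketch of the UDP checksum, pseudo-header and all (the addresses and ports are arbitrary):

```ruby
# The UDP checksum: note how the source and destination IP addresses
# from the layer below get mixed in via the "pseudo-header".
def ones_complement_sum(bytes)
  bytes = bytes + [0] if bytes.size.odd?        # pad to 16-bit words
  sum = bytes.each_slice(2).sum { |hi, lo| (hi << 8) | lo }
  sum = (sum & 0xffff) + (sum >> 16) while sum > 0xffff
  sum
end

def udp_checksum(src_ip, dst_ip, udp_bytes)
  pseudo = src_ip.split(".").map(&:to_i) +      # layer 3 reaching up...
           dst_ip.split(".").map(&:to_i) +      # ...into a layer 4 checksum
           [0, 17, udp_bytes.size >> 8, udp_bytes.size & 0xff]
  (~ones_complement_sum(pseudo + udp_bytes)) & 0xffff
end

# A UDP header (ports 53/53, length 9, checksum field zeroed) followed
# by a one byte payload, "A".
pkt = [0, 53, 0, 53, 0, 9, 0, 0, 0x41]
printf("%04x\n", udp_checksum("1.2.3.4", "5.6.7.8", pkt))
```

Contrast this with IPv4 or ICMP, where the same ones' complement sum runs over nothing but the protocol's own bytes.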

Sunday, January 20, 2008

Hawler, the Ruby crawler

Over the years I've thrown together various bits of code that have crawling functionality built into them. There was termite, used to find backup copies, renames or common temporary locations of your entire web site. There was indexfinder, used to crawl your site and find anything that looked like a directory listing. There was also htcomment, used to ferret out all of the comments found in your html.

These tools were all fairly well tested and worked quite well, but every time I dusted off the code and fixed a bug or added functionality, my CS books would scowl at me. The core crawling code was literally cut and pasted between tools. The problem with this is obvious -- a bug or missing bit of functionality in the core crawler code had to be fixed in numerous places. Design at its worst.

Starting maybe a month ago I decided to fix this problem. The result is Hawler, a Ruby gem that encapsulates all of what I deem to be core web crawling functionality into an easy to use package. The result is that I can now focus more on writing the code that is unique to each particular task and not have to worry as much about the crawler bits. Its usage is quite simple, as described in the README.

As an example of Hawler's usage, I've put together two tools that I've found quite useful so far. First is htgrep. It is exactly what it sounds like: grep for the web. How many times does the word "shot" occur within 1 hop of www.latimes.com? Let's find out, but sleep 1 second between each request (got to play nice) and utilize HEAD (-p) to only harvest links from pages that are likely to have them in the first place:

$  htgrep shot www.latimes.com -r 1 -s 1 -p  |wc -l  
43

Only 43? A peaceful day in LA! What about the distribution of HTTP error codes on spoofed.org? Use htcodemap:

$  htcodemap spoofed.org -r
Done -- codemap is spoofed.org-codemap.png

The result? Not too shabby:

What about drawing ridiculous maps of relationships within a website? Well, assuming you have enough RAM (blame graphviz/dot, not me!), enjoy htmap. As an example, here is a fairly deep crawl and mapping of spoofed.org:

$  htmap spoofed.org -r 2
Done, map is spoofed.org-map.png, spoofed.org-map.dot

The image is here.

I expect that a lot of cool tools will be born from Hawler, and I'll be sure to link and post them as they turn up. Until then, enjoy!

Comments and suggestions are very much welcome.