Sunday, December 9, 2007

OpenBSD on Soekris -- A Cheater's Guide

I've been using Soekris devices for quite some time. Basically, any time I need routing, firewalling, or some other network skullduggery that doesn't require serious CPU, I toss a Soekris box at the problem. They are great little devices -- low power, dead quiet and rock solid.

The obvious downside of a system like the Soekris is the wimpy CPU. This really is only an issue during installation and during the initial system configuration. After that, the box is a real work horse.

Below are the steps I recently used to get my NET4801 running OpenBSD 4.2 -current. The difference here is that I use qemu to make use of the considerably faster CPU on my desktop to breeze through the install and initial configuration.

  1. Download install42.iso from your local mirror
  2. Plug in your CF card that you'll use in your Soekris. Take note of what device it gets assigned
  3. Start qemu, replacing /dev/sdb with whatever device your CF is:
    qemu -hda /dev/sdb -cdrom install42.iso -boot d
  4. Install as usual. Configure your interface to use DHCP, as anything else won't work inside qemu. Set default console to com0 and set the speed to match your Soekris (9600)
  5. Finish installation. Halt. Stop qemu. Restart without the iso:
    qemu -hda /dev/sdb
  6. Once booted, edit /etc/fstab so that / is mounted with noatime, read-only. My /etc/fstab looks like this:
    /dev/wd0a / ffs ro,noatime 1 1
  7. Now put the volatile stuff into MFS so you won't wear out your CF too fast. Create an MFS directory for /var:
    mkdir /mfs
    cp -rp /var /mfs/var
  8. Similarly for /dev:
    mkdir /mfs/dev
    cp /dev/MAKEDEV /mfs/dev
    cd /mfs/dev
    ./MAKEDEV all
  9. Add the appropriate lines to /etc/fstab to ensure that /dev and /var get mounted as MFS at boot. Change the values for -s and -i as you feel necessary. This works for me on a 1G CF:
    swap /var mfs rw,-P=/mfs/var,-s=32768,noexec,nosuid 0 0
    swap /dev mfs rw,-P=/mfs/dev,-s=8192,-i=128,noexec,nosuid 0 0
  10. Now symlink /tmp to /var/tmp so that temporary files still have somewhere writable to go:
    rm -Rf /tmp
    ln -s /var/tmp /tmp
  11. Install rsync to handle synchronizing /var. This assumes you've set $PKG_PATH to your favorite local mirror:
    pkg_add rsync
  12. Add a cronjob to periodically sync any changes to /var. I prefer a weekly job. Add something like the following to root's crontab:
    1  0  */7  *  *  /usr/local/bin/rsync -az --delete /var/ /mfs/var/
  13. Finally, edit the shutdown script to sync any unsynchronized changes at shutdown time. Add the following to the end of /etc/rc.shutdown:
    /usr/local/bin/rsync -vaz --delete /var/ /mfs/var/
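
As a sanity check on the -s values in step 9: mount_mfs sizes are given in sectors, and assuming the default 512-byte sector size, the numbers above work out as follows:

```python
# Sanity-check the MFS sizes from the fstab lines in step 9.
# mount_mfs's -s flag is a sector count; assume the default 512-byte sector.
SECTOR = 512

def mfs_megabytes(sectors):
    """Convert an -s sector count to mebibytes."""
    return sectors * SECTOR / (1024 * 1024)

print(mfs_megabytes(32768))  # /var: 16.0 MB
print(mfs_megabytes(8192))   # /dev: 4.0 MB
```

16MB for /var and 4MB for /dev is plenty for a quiet box; scale -s up if your logs churn faster between syncs.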

That's it. Halt your OpenBSD installation, stop qemu and install the CF in your Soekris. Any further configuration can be done by way of sshd or the serial console, but remember that / is mounted read-only, so remount it read-write if you need to change something.


Saturday, December 1, 2007

Demystifying Craigslist Anonymization

Craigslist is one of those services that many people could not live without. Where else can you go to get free palm trees, 40 cubic yards of broken concrete sidewalk, AND get rid of that ugly couch and pick up a date all in one visit?

When Craigslist started, I'd guess there was little expectation of privacy. When you posted, you entered your "real" email address and your dirty laundry was now in the public eye. At one point they added functionality whereby you could anonymize your posting if you so desired. The functionality was quite simple: if you opted to remain anonymous at posting time, an email address within craigslist was created, and emails to this address would get relayed to your email address of choice. At some point within the last year or so, the options changed. Previously, you could choose to be anonymous or not, or even not post any email-related contact information whatsoever. You now have only two options -- anonymous or none.

As an example of how this anonymization works, I've posted to the Los Angeles Craigslist "items wanted" section seeking the much desired left handed smoke shifter. The email address will accept and relay messages to my Gmail account which I keep for these purposes. If you email and I reply, by default you would see my Gmail address, thereby ruining my anonymity. Many Craigslisters, however, are savvy enough to properly set their From: when replying to continue to mask their true identity. For example, in my .muttrc, I have the following:

alternates = .*@spoofed\.org|.*@craigslist\.org

This tells mutt that if I get email to either of those domains, it should set the From: to that of the original To:. You can accomplish something similar in Gmail with the "send mail as" setting.
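
To see what actually travels in a reply, here is a minimal sketch using Python's standard email module. The addresses are made up for illustration; the point is that the From: you set by hand and the headers your provider stamps on are independent:

```python
from email.message import EmailMessage

# Hypothetical addresses, for illustration only.
ANON = "sale-12345@craigslist.org"   # the craigslist relay address
REAL = "someone@gmail.com"           # the real account doing the replying

msg = EmailMessage()
msg["From"] = ANON                   # what a savvy replier sets explicitly
msg["To"] = "buyer@example.com"
msg["Subject"] = "Re: shifter?"
# ...but the sending service typically stamps these on its own:
msg["Sender"] = REAL
msg["Return-Path"] = REAL

# The casual reader looks only at From:; the rest tells the real story.
print(msg["From"])         # sale-12345@craigslist.org
print(msg["Sender"])       # someone@gmail.com
print(msg["Return-Path"])  # someone@gmail.com
```

Setting the From: masks you from a casual glance, nothing more.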

Unfortunately, Craigslist anonymization only provides a minimal amount of anonymity, but I suspect it serves its original purpose -- to protect the addresses of posters from being harvested by spammers. This should not come as a surprise to anyone who is familiar with how SMTP works, but aside from front-line anonymity, this service is rather trivial to abuse.

For example, if you respond to my posting about the left-handed smoke shifter, I see the following in Gmail:

Date: Sat, 1 Dec 2007 12:46:24 -0800
From: Jon Hart 
Subject: shifter?

That address forwards all correspondence to my Gmail address. When I reply, the untrained eye will see:

Date: Sat, 1 Dec 2007 12:51:33 -0800
From: Test 
To: Jon Hart 
Subject: Re: shifter?

However, unless your email service is configured exactly for this purpose, the headers will give away my true identity:

Date: Sat, 1 Dec 2007 12:51:33 -0800
Return-Path: 
Sender: 
From: Test 
To: Jon Hart 
Subject: Re: shifter?

As you can see, if you view the full, unmolested headers of my supposedly anonymous response, the From: is my craigslist relayer, but Return-Path: and Sender: give me away. There are other headers that can give you away too, most notably X-Original-From:.

I have to stress that this is not really anyone's fault. Craigslist did what you asked -- it masked your email address. Gmail and other services did what you asked -- they set your From: to your craigslist address. When you combine these two services, however, your anonymity is broken.

The lesson here is that if you are a disgruntled employee ranting about your boss, a SWF BBW ISO NSA BDSM from a generous SBM, or other forms of depravity, either create a dedicated email address that cannot be trivially traced to your true identity, or simply don't respond to any emails sent to your supposedly anonymous craigslist email.

Sunday, November 25, 2007

Event Correlation on a Budget

Log management and its wiser, older brother, event correlation, are processes that anyone in the security space is likely very familiar with. I've been dealing with them since day 0, but in the past year or more things have taken a more serious turn. Previously, logs were a last resort and the people capable of wrangling them were much revered. Now there are plenty of standards, books, products and companies that attempt to make sense of your logs, and for good reason -- they are important. Logs will alert you to situations that most traditional monitoring systems are blind to. Proper log management is also a prerequisite if legal action is ever needed. There is interesting shit in logs. Really. Look some time.

Let's be honest, though. Even wrangling the logs from your little desktop can be a complicated process -- it'll generate hundreds of log entries per day. A relatively unused server will generate upwards of a megabyte of logs per day. An active web, mail or shell server? Millions of entries, several gigabytes of logs in a single day. Now combine the logs from across your entire organization. Information overload.

There are plenty of products you can drop a pretty penny on that will, without a doubt, bring you leaps and bounds from where you very likely sit right now. Some organizations have no log management. Some have centralized logging, but very few have anything further. If you are lucky, some hotshot has a script that tails the logs and looks for very specific error messages which will save your tail.

I am a firm believer in the school of thought that before you go out and drop any sort of significant cash on a security product, you have to go out and really get your hands dirty. For me, that often means seeing what free solutions currently exist, or, worst case, roll your own.

In terms of free (as in beer) solutions, swatch, logwatch, SEC and OSSEC are among the top, the latter two being the most powerful. Swatch suffers from lacking any real correlation abilities. Logwatch has some of these capabilities but suffers from being essentially a pile of large, horrifically ugly perl scripts that parse the logs. I've written many ugly perl scripts, and I fear for anyone who is not perl savvy and has to maintain a logwatch setup. SEC and OSSEC have very similar capabilities, though OSSEC is more targeted towards host-based intrusion detection (HIDS) by way of correlating security events within logs. It is a great approach, it is just not the solution that I decided to write about.

What follows is an abridged example of how I used SEC to get some very much needed event correlation up and running in an environment that has anywhere between 500M and 50G of logs per day, depending on how you look at things and who you ask :). I say "abridged" because this ruleset is far from complete. In fact, if you take it as is and set it loose on your logs, you will get a metric crapload of emails about things you probably already know of or otherwise don't care about. The reason here is two-fold. One, I don't want to give away all of my secrets. Two, I cannot tell you what log messages you should or should not care about. That is up for you to learn and decide accordingly.

Save the snippet below as your SEC configuration file and then point SEC at some of the logs you are concerned with. It will give you a base from which you can:

  • Explicitly ignore certain messages
  • Alert on certain messages
  • Do minimal correlation on a per-host, per-service basis

Good luck and enjoy!

# ignore events that SEC generates internally

# ignore syslog-ng "MARK"s
pattern=^.{14,15}\s+(\S+)\s+-- MARK --

# ignore cron,ssh session open/close
# Nov 23 00:17:01 dirtbag CRON[26568]: pam_unix(cron:session): session opened for user root by (uid=0)
# Nov 23 00:17:01 dirtbag CRON[26568]: pam_unix(cron:session): session closed for user root
# Nov 25 16:19:30 dirtbag sshd[13072]: pam_unix(ssh:session): session opened for user warchild by (uid=0)
# Nov 25 16:19:30 dirtbag sshd[13072]: pam_unix(ssh:session): session closed for user warchild
pattern=^.{14,15}\s+(\S+)\s+(cron|CRON|sshd|SSHD)\[\d+\]: .*session (opened|closed) .*

# alert on root ssh
pattern=^.{14,15}\s+(\S+)\s+(sshd|SSHD)\[\d+\]: Accepted (password|publickey) for root from (\S+) .*
action=pipe '$0' /usr/bin/mail -s '[SEC] root $3 from $4 on $1' jhart

# ignore ssh passwd/pubkey success
# Nov 24 17:09:22 dirtbag sshd[8819]: Accepted password for warchild from port 53686 ssh2
# Nov 25 16:19:30 dirtbag sshd[13070]: Accepted publickey for warchild from port 57051 ssh2
pattern=^.{14,15}\s+(\S+)\s+(sshd|SSHD)\[\d+\]: Accepted (password|publickey) .*

# pile up all the su, sudo and ssh messages, alert when we see an error
# stock-pile all messages on a per-pid basis...
# create a session on the first one only, and pass it on
action=create $2_SESSION_$1_$3 10;

# add it to the context
action=add $2_SESSION_$1_$3 $0;

# check for failures.  if we catch one, set the timeout to 15 seconds from now,
# and set the timeout action to report everything from this PID
action=set $2_SESSION_$1_$3 15 (report $2_SESSION_$1_$3 /usr/bin/mail -s '[SEC] $2 Failure on $1' jhart)

# These two rules lump together otherwise uncaught messages on a per-host,
# per-message type basis.  The first rule creates the context which is set
# to expire and email its contents after 30 seconds.  The second rule simply
# catches all of the messages that match a given pattern and appropriately
# adds them to the context.
desc=perhost catchall starter for $1 $2
action=create perhost_$1_$2 30 (report perhost_$1_$2 /usr/bin/mail -s '[SEC] Uncaught $2 messages for $1' jhart)

desc=perhost catchall lumper for $1 $2
action=add perhost_$1_$2 $0

# These two rules catch all otherwise uncaught messages on a per-host basis. 
# The first rule creates the context which is set to expire and email its
# contents after 30 seconds.  The second rule simply catches all of the messages
# that match a given pattern and appropriately adds them to the context.
desc=perhost catchall starter for $1
action=create perhost_$1 30 (report perhost_$1 /usr/bin/mail -s '[SEC] Uncaught messages for $1' jhart)

desc=perhost catchall lumper for $1
action=add perhost_$1 $0

# These last two rules act similarly to the above sets, the only exception being that
# they are designed to catch bogus syslog messages.
desc=catchall starter
action=create catchall 30 (report catchall /usr/bin/mail -s '[SEC] Unknown syslog message(s)' jhart)

desc=catchall lumper
action=add catchall $0
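
For the curious, the overall suppress/alert/catchall flow that the ruleset above implements can be sketched in Python. This is a toy analogue only -- it glosses over SEC's contexts, timeouts and reporting actions, and the patterns are simplified stand-ins:

```python
import re
from collections import defaultdict

# Toy analogue of the SEC ruleset above: alert on known-bad lines,
# suppress known-boring ones, and lump everything else per host.
# Alert patterns are checked first, mirroring the rule ordering above.
ALERT = [
    re.compile(r"Accepted (password|publickey) for root"),
]
SUPPRESS = [
    re.compile(r"session (opened|closed)"),
    re.compile(r"Accepted (password|publickey)"),
]

def correlate(lines):
    alerts, uncaught = [], defaultdict(list)
    for line in lines:
        if any(p.search(line) for p in ALERT):
            alerts.append(line)           # SEC would mail this immediately
        elif any(p.search(line) for p in SUPPRESS):
            continue                      # known noise, drop it
        else:
            host = line.split()[3]        # "Mon DD HH:MM:SS host ..."
            uncaught[host].append(line)   # batched per host, like the contexts
    return alerts, uncaught
```

Feed it the example log lines from the comments above and you get one root-login alert plus a per-host pile of everything unrecognized, which is the essence of what the real ruleset does.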

Thursday, November 15, 2007

Comcast Information Stupidhighway

Over the past day or two I've been receiving an increasing amount of attention because of the suit against Comcast that names their bittorrent throttling and forgery, among other things, as violations of Federal computer fraud laws.

As much as I'd like to be getting this sort of publicity, I'll just come right out and say it. I, Jon Hart, am not the Jon Hart that is currently suing Comcast. In fact, I have never been a customer of Comcast's ISP business, nor do I have any intention of doing so. Furthermore, I'd become homeless on the beach here in sunny Santa Monica and just steal wireless from one of the hundreds of homes that leak 802.11 out to the sand prior to ever stooping so low as to become a Comcast customer. I've seen it done. Heck, I don't discriminate -- AT&T/Timewarner, RoadRunner, and Verizon, you know you are not innocent either.

These ISPs make a healthy profit, and can we blame them for some of their practices? Yes and no.

ISPs make money because they oversubscribe: they sell more bandwidth than they actually have available. Even if you ignore the theoretical limits of the physical medium over which Comcast's cable travels, there are some obvious problems. Comcast offers download speeds of 1-16Mbps. If all of Comcast's customers in a given city or region were pulling 8-16Mbps, Comcast would become a pool of molten silicon fairly quickly. They rely on the fact that most consumers will never push anything near their capacity, much less for an extended period of time. However, once you get enough customers doing enough ungodly things online at 2am, the chances of Comcast's backbone running into issues skyrocket. The "right" solution, if you want to keep making your profit, is to degrade the "offending" customers' service just enough that the probability of exceeding capacity, and the risk that comes with it, stays acceptably low.
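
The back-of-the-envelope math makes the incentive obvious. The numbers below are made up for illustration, not Comcast's actual figures:

```python
# Oversubscription, with made-up but plausible numbers.
customers = 10_000          # subscribers behind one regional link
sold_mbps = 8               # advertised per-customer download speed
backbone_mbps = 10_000      # actual upstream capacity (a 10Gbps link)

sold_total = customers * sold_mbps
ratio = sold_total / backbone_mbps
print(ratio)  # 8.0 -- 8Mbps sold for every 1Mbps that actually exists
```

At an 8:1 ratio everything is fine as long as average utilization stays under about 12%; the moment enough customers actually use what they paid for, something has to give.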

Comcast and other ISPs take things to a new level. It's one thing to rate limit, as this simply has to happen for a business based on oversubscription to profit. Forging TCP RST packets or, worse yet, injecting fake messages to tear down or prematurely terminate traffic (primarily P2P) is just downright dirty. Their use of Sandvine allows them to continue to profit while continually bringing on new customers at the expense of a healthy, unadulterated Internet experience. Think that's the worst of it? It isn't. There have been plenty of rumors and various bits of proof that Comcast has been deploying its own nasty brand of DNS redirection, a la Site Finder, over the past year or so.

Are there other options out there? Sure. DSLExtreme and Speakeasy offer no-bullshit connectivity. They give you bandwidth and leave you the hell alone. In fact, they encourage you to share your connectivity with others, run servers off of your DSL connection and filter none of your inbound or outbound traffic. It's like the Wild West and it is great.

Why do Comcast and others continue to exist? Because far too many Americans are easily duped by all of this "triple-play powerplay", "powerboost", "the fastest Internet", candy-coated, re-branded bullshit. Aside from the molestation that happens on Comcast, you still connect to the same Internet that I do. None of the features are unique or better than what already exists everywhere else. Comcast's "The Fan"? It's called YouTube. Sharing photos? Flickr. And for the love of God, if I see one more "portal" that reminds me of AOL's purple-panel-of-doom, I'm going to hurl.

Do yourself a favor. Support lawsuits like this. Voice your concern and rage to your ISP and talk to others about it. If all else fails, ditch Comcast.

Sunday, November 11, 2007

SSL Certificates on Sourcefire DC/3D Systems

I'm in the process of getting an internal CA off the ground, and on Thursday I found myself totally stumped for a good 6 hours with the Sourcefire systems.

I called support and asked if there was a proper, supported way to get a more official SSL certificate on the devices. To my surprise, there isn't. If you are stuck using a self-signed certificate from a CA you have no business trusting, it's almost not even worth using SSL to protect the traffic on the web-based management interface. Since it's all just Apache under the hood anyway, I asked if I could do it myself. After several minutes I was told that this was fine, but that they could not support anything SSL-related after I went down that road.

Easy enough, I thought. Generate a large private key, generate a CSR, submit the CSR, sign the CSR and drop the new key and certificate on the Sourcefire device in /etc/ssl/server.key and /etc/ssl/server.crt, respectively. Restart apache.

To my total surprise this did not work. Re-copy the correct key and certificate, just to be sure. Same deal. Run the usual certificate and key verification:

openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5

Sure enough, the MD5s are the same. What gives? Tweaking the log levels indicated that I definitely had the wrong key, but my commands above proved otherwise. I continued to get the following error message:

x509 certificate routines:X509_check_private_key:key values mismatch
unable to set private key


What could it be? A private key that was too large? I tried 4096, 2048 and 1024, and oddly enough 1024 seemed to work. I was furious. Did Sourcefire really configure and ship a system that, for whatever reason, would only function with 1024 bit private keys? I took a coffee break, as I could not fathom how this was possible.

Freshly caffeinated, I took another stab at it. I put my original, 4096 bit key and corresponding certificate back in place and then started disabling the various SSL options that Sourcefire had enabled in Apache's config until I had more information. At one point I screwed up the gcache settings sufficiently enough to kill Apache. When I fixed that and started Apache, things mysteriously started working. This was the key clue. Looking at the init script that ships with Sourcefire, a 'restart' simply sends a HUP, whereas on most other systems it does a synced 'stop' followed by a 'start'.

I have not been able to prove this, but I imagine that either Apache or gcache is caching either the private key or the certificate from its last successful start, but not the other. The result is that, despite what you see on disk, the key and certificate being used do not match. Believe what your eyes are showing you, but do not believe what Apache is telling you. A stop and a start are needed.

This is not a Sourcefire specific problem, so I hope someone stumbles upon this and it fixes their problem.

Thursday, November 1, 2007

Splunk 0+1 Day -- Good vendor relationships

A few days ago as part of my many responsibilities I stumbled upon what appeared to be a directory traversal. At first I did not believe it, partially because the specific class of vulnerability is quite dated and partially because I couldn't believe that I had been working with this product for so long and hadn't stumbled upon it.

Unfortunately, Splunk is about 170Mb of python, shell scripts and stripped binaries, so debugging this was no easy task. I actually thought I got lucky and traced the problem to an old version of TwistedWeb that Splunk uses to power the web server, but it turns out I was wrong. Furthermore, finding a python HTTP project that used an equally old version of TwistedWeb was basically impossible, so I was stranded.

At around the same time I opened an FYI ticket with Splunk giving them a heads up. Not only did they appreciate the information, they followed the TwistedWeb ticket and quickly determined that this was their bug, not Twisted's. A patch was quickly published and it is an easy update for anyone who is crazy enough to expose their Splunk server to the outside world.

This is a great example of two things.

First, old bugs die hard. This was a classic example of a URI-encoded directory traversal (%2e%2e%2f %2e%2e%2f %2e%2e%2f, aka '../../../'), which Wikipedia describes fairly well. Exploiting it requires no authentication, just an HTTP client and the ability to reach the Splunk server.
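
The mechanics are easy to demonstrate with Python's standard library. The paths below are hypothetical, not the actual vulnerable Splunk endpoint:

```python
import posixpath
from urllib.parse import unquote

# '%2e%2e%2f' is just '../' percent-encoded.
encoded = "%2e%2e%2f" * 3
assert unquote(encoded) == "../" * 3

# A naive server that decodes and joins without normalizing or
# validating walks right out of its document root (paths made up):
docroot = "/opt/splunk/share/splunk/search/"
requested = docroot + unquote(encoded) + "etc/passwd"
print(posixpath.normpath(requested))  # /opt/splunk/etc/passwd
```

Decode after validating, or normalize and re-check against the document root; doing either one in the wrong order is what keeps this bug class alive.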

Second, it is important to have a mechanism within your organization that allows security information to be channeled to the right people in a timely manner. How many times have you called or emailed vendor XYZ, only to have them refuse to speak to you unless you are a paying customer? Not addressing security issues in your products not only makes you look bad in the eyes of the security community, it also damages your brand and puts your paying customers at risk. Splunk is a great example of process done right.

Security is your friend.

Tuesday, September 18, 2007

USPS -- the biggest spammers of all

Not too long ago I moved into a new apartment. In my previous digs I had lucked out and befriended the daily postman who volunteered to filter out the junkmail and circulars. While I got the occasional flyer from the local supermarket, my mailbox was more or less unmolested.

Fast forward to today. I received the following junkmail:

  • EZ Lube (West Coast Oil change / service facility)
  • Party America. I forgot. It is now September which means send me some crap about Halloween.
  • Circulars from Albertsons, Vons, Ralphs, Wild Oats, Smart and Final, and Pavilions. 6 grocery stores.

This happens several times per week and the contents vary. Those large crinkly envelopes with 800 coupons inside are a common occurrence, as are mailings aimed at elderly citizens. Today's bundle was the recycled equivalent of 36 8-1/2"x11" sheets of paper. Take the bundle, stack it neatly and roll it tightly, and it is the rough equivalent of a 12" long, ~1/4-1/2" diameter piece of wood. If I saw some guy walking down the street throwing 3 dozen sheets of paper at every house, or lopping a branch off of every tree, it would take about half a dozen houses before I'd bust out some Chuck Norris moves and do the world a favor by running him down in my Jeep.

This has been going on basically since I moved in, and this is not the first place I've experienced this sort of blatant spamming. All it takes is one resident at some point in time to give their address, and all other future tenants at or near that address will be bombarded with every piece of imaginable snail mail spam, 5-6 days per week rain or shine.

I've actually considered submitting a change of address form for "Resident" at my current address, as most of this garbage seems to come addressed to something like that. "Valued Customer", even. I'm not sure where I'd send it to, but I think the local recycling facility would be the equivalent of /dev/null and all of this shit would go directly back to recycling.

"Just unsubscribe", you say. Right. There are dozens of companies contracted to gather data from the USPS almost entirely for the purpose of targeted mailings. Want to send mail to every postal customer in 90210? Sure, the USPS will let you do that, but it'll cost you. All postal addresses with a likely Latino name ending in "ez"? Sure. That'll be $99.95 per 500 addresses. It is a losing battle for those on the receiving end.

Imagine if something similar happened in the electronic world. The offending party would quickly be blacklisted, banned, dropped, slapped about, or otherwise publicly embarrassed within hours of the incident. Why doesn't this happen in the physical mail delivery world? At least in the United States, this is because the USPS has complete control over all aspects of mail delivery. A monopoly, if you will. Checkmate.

The strange thing, at least to me, is that there is no program that I know of from the USPS that lets you enable junkmail filtering. Let's say such a mythical beast cost $5 a month. Would you pay it? I would. In a heartbeat. Turn that $5 on its heels -- does the USPS make $5 a month from those who send the flyers to a single address? I'm going to say it costs considerably less than that. I would be very surprised if the USPS could not make a considerable profit by letting customers filter their incoming snail mail.

Are things in other countries any different? I'd like to think so. Even within the US, you can see vast differences in postal vs. package delivery. When was the last time you received snail mail spam from... DHL, UPS or FedEx? Probably never.

In the short term, I have deployed greylisting on my USPS approved mailbox. Several strips of duct-tape fastened over the entrance with a slit sufficient for your average business letter envelope to be crammed through. Anything else, tough.

Wednesday, August 8, 2007

Post BlackHat 2007 / DefCon 15

This is the trendy thing to do. It's the first full week of August and everyone is just getting back from BlackHat and DefCon -- it's blogging time. A race to see who gets up the most informative, complete synopsis of the festivities. Me? There is no race, because it's the same old shit.

OK, that's not completely fair. There were some new and cool things presented at the aforementioned cons, but I've got two gripes.

One, much of the security conference space is very, very incestuous, to the point where it is reasonable to assume that 25+% of the presentations that occur at cons in the mainland United States, or that make slashdot, have already been presented somewhere else within the past 6 months or will be recycled again before the year is up. If a given presentation doesn't strictly fall within that 25% bracket, there is a very good chance that it is just a modified or updated talk from the past year. I didn't drive 4 hours across the 110 degree desert for you to take $2k of my money only to chew my ear off for an hour on a subject that is, at least among the clued-in of the security space, almost common knowledge. NAC is broken?! If you transmit cookies in the clear but encrypt the content, you will be embarrassed?! Say it ain't so!

Secondly, this presents a unique situation for me with my employer, and for others like me. Sure, there were more than a few "new things" that we have to worry about in our day-to-day jobs, but in all honesty, many shops still have yet to master the basics. Many places that I know of or have been in are still struggling with why "security by obscurity" is a bad idea and still rely on plain-text protocols. While all of this adds more proverbial fuel to the fire, it does little to help us improve the security of our organizations.

Don't get me wrong, I was very pleased with Blackhat this year. DefCon too. 9 days in Vegas can really start to get to you, but that is why god invented Shark Week. I ended up bailing early in the hopes of re-acclimating to Los Angeles before starting work again. To those that I did not see because of my anti-social behavior, I'll make it up to you.

Saturday, June 16, 2007

Privacy Guard -- Not so private

I've been a member of Privacy Guard for longer than I can remember. While some may argue that the $100-200 yearly fee is too much, what is your privacy and identity worth to you? The biggest value to me has been tracking my credit score and running "what if" simulations to see how I can financially better myself.

Over the past several months Privacy Guard has really started to irritate me. Every month or so I'll get something that appears to be a check from Privacy Guard issued to me for some amount, typically $10-20. Upon reading the fine print you'll see that by signing and cashing the check you automatically sign yourself up for an additional year of Privacy Guard service. So instead of paying $199.99 willingly for an additional year of service, you cash the check thinking (stupidly) that it is $10 free, when it actually just gets you $10 off your service.

This is not behavior I would expect from a company that is supposedly dedicated to protecting the privacy and security of one's identity.

Just when I thought things couldn't get any more ridiculous, I get another "check" for $10 with an enclosed "2007 Opinion Poll". This poll blatantly says:

"You are being asked to participate in this opinion poll. Won't you please take a moment to share your opinions? A few minutes of your time WILL ASSIST IN PROVIDING SERVICES THAT MEET YOUR NEEDS AND DESIRES" (emphasis added)

OK, seriously, what gives? Privacy Guard is coming right out and saying that they are going to sell your information.

Some sample poll questions:

  • How often do you shop online?
  • How do you usually purchase flowers?
  • Where do you purchase your music?
  • What time of year do you prefer to vacation?
  • What's the MOST you would pay for a hotel room?
  • How often do you fly?
  • Do you purchase items from airport gift shops?
  • Which of the following activities do you enjoy?
  • Which cuisines do you enjoy?
  • How much do you pay for a bottle of wine?
  • About how many hours of television do you watch each day?
  • What is the model year of your car?
  • About how many miles do you put on your car each year?
  • Do you own your home or rent?
  • Have you ever hired a housekeeper?
  • Do you have a cell phone?
  • Do you have a computer?
  • How many pets do you have?
  • Do you play golf?
  • Which area of the country do you consider most desirable for retirement?

This makes me sick. Privacy Guard is not only obviously selling their customers' data, but is also gathering further personal information to let the people buying my information target their spam emails, snail mail and phone calls at things I am specifically interested in.

I am officially canceling Privacy Guard today and will be sending them a hand-written letter describing my disgust. I advise others to do the same. That is, of course, if you value the security of your identity. If not, by all means fill this out. Heck, while you are at it, attach a piece of paper with your mother's maiden name, the name of your first pet, your years of high school and college graduation, and the city in which you were born.

Friday, May 18, 2007

Vegas plans, ruby.

My plans for Vegas are settled -- I'll be at BlackHat USA 2007 for the week and then DefCon 15 the weekend following. If you want to meet up for consumption, imbibing or conversing, or all three, drop me a line.

In other news, I've been insanely busy with work but I've got two very cool projects that are well underway. The first is a ruby morph of various projects in other languages -- dpkt (python), NetPacket (perl) and libnet (C). I've taken the functionality of all of these and made some big improvements (IMHO). The second is a project that everyone seems to need but nobody has -- SSL certificate lifecycle management.

More news when I have time.

Wednesday, April 18, 2007

Attack the source

The Mac has started to become a "supported" platform at work, so a few months back I put in a request to get my own Mac. If I'm expected to help protect them, I figured I might as well know what I'm getting into. Late last week, and for a few hours over the weekend, I took some time to get more familiar with my new silver-clad friend.

The Mac was, at least from the network point of view, locked down right out of the box. There were plenty of things for me to do in the UI to tighten the screws. Most of it was obvious and found within minutes of poking around in the preferences. The rest can be had from the bounty that is the Internet. As it turns out, Pete has some of the same information that I used.

In short order I started to get the feeling that something was not quite right. This dirty feeling is one that I've felt before.

I'll explain my uneasiness with a question. What do the following have in common?

The answer may be obvious to some, but it is important to point out that the result of the above actions is that you'll be installing software on your system from a source that is untrusted in one way or another. I look at this particular security-oriented trust issue in a few ways.

Is the host holding the software secured against attack? In far too many cases, I'd say no -- the most popular ssh-agent for the Mac is hosted in a user's home directory at the University of Utrecht. I've worked in a University setting, and I can tell you that while it is one of the most diverse environments, from a security point of view it is a complete nightmare. Picture the worst shared-hosting facility situation that is chock-full of politics and pet projects, add a heaping scoop of financial problems, and you've got an environment that is ripe for hacking. The software hasn't been updated since 2005 and there are no MD5s, so my guess is that modification of that file would probably go unnoticed. Compromise that source and, within hours, you'd have a backdoor on machines that are otherwise fairly secure.
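Absent vendor-published MD5s, about the best a downloader can do is record a digest on first fetch and compare it on subsequent downloads. A minimal sketch using Python's hashlib (the function name and chunk size are my own):

```python
import hashlib

def file_digest(path, algo="sha256", chunk=65536):
    """Hash a downloaded file in chunks so large installers don't need
    to fit in memory. Compare the result against a digest published
    out-of-band, or one you recorded yourself on an earlier download."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

This only detects that the file changed, of course -- if the same compromised host serves both the file and the digest, you've verified nothing.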

Is there any reason for the source itself to be malicious? I'd love for the answer to be "no", but there is a considerable market in shady business like this. If you are a singular entity distributing software, there are rarely any checks and balances in place to ensure that you play nice. What's to prevent the person who distributes the software from tacking other malicious software onto the installer, or somehow putting a backdoor in the program itself? Considering that roughly 90% of Windows users either use the Administrator account for day-to-day activities or have a user account with equivalent privileges, I'd say the chance for profit is quite good. Even if the user isn't running with elevated privileges, the chances that the installer will require them for installation are quite good. This isn't a Windows-specific problem either, as much as many people in the security space like to bash it. Something similar can be said for system "administrators" rushing to get a system installed so that the latest whiz-bang Web 2.5 startup can profit. Because of the rush, or because of sloppy practices, such an admin tromps around on his RedHat system as root and then downloads a binary RPM for one of the dozens of pieces of software that RedHat proper doesn't support. If the backdoor doesn't kill him, the lack of pre/post installation scripts or dependency tracking will.

The distribution for many pieces of software that fall outside the realm of "supported" is quite different. In the open source world, if you are uber paranoid and don't trust the packages that come from Debian (why should you? ;)), the ports from FreeBSD, etc, you can download the source and perform your own checks. The code used to do the installation is an easy read, and if you are so inclined, you can read the source to the software itself and do a 5-minute sanity check. Foolproof? Hell no, but the peace of mind goes a long way. In many other circles, you don't get the source to the installer or the program itself. Instead, it is bundled as a single binary package and it's up to you to trust that the stand-alone executable that you just downloaded from a host in the Czech Republic isn't going to turn your poor Windows machine into swiss cheese.

When I'm on the dirty streets of Hollywood, I don't put my ATM card into just anything that says "ATM" on it. I don't put my personal information, especially my credit card number or god forbid my SSN, into any website unless it uses SSL from a "trusted" CA and looks reasonably well put together. I don't discuss personal matters when I get a phone call from a number I don't recognize and the voice on the other end identifies itself with simply a name. For the same reasons, I don't install or use software from untrusted sources unless appropriate precautionary measures are taken on my part.

Thursday, April 12, 2007

Open Source: A hacker's dream, a vendor's worst nightmare

I've been involved in the open source community in one way or another, to varying degrees, for basically my entire career, and I can say with a high degree of certainty that if it weren't for open source, my career would've probably taken a completely different path.

"Open" has grown to mean more than just access to the code, and now often implies the sense of community that comes with using open source software. Need some software that will cook your breakfast for you? If such a beast doesn't already exist, you can safely assume that someone out there has written most if not all of the supporting sub-functionatlity, and then its just up to you to cobble it together. Support? Sure, 24x7x365x$forseeablefuture by people around the globe who aren't doing this for the money, but rather because they are passionate and knowledgeable about the project. AI::MakeMyBreakfast doesn't do exactly what you need? No problem -- add the functionality, make improvements, and the project will happilly accept your work.

Like every story, there is a dark side to open source. The dark side I speak of is vendors who speak badly of open source. Their reasoning is never clear, and as a wise man once said -- "The proof is in the pudding". If your product is truly better than a competitor's, even if that competitor is an open source offering, prove it using solid, no-bullshit technical facts and a real-world bakeoff. The real reason they bash open source, in my opinion? Fear.

Friday, March 23, 2007

"Etherleak" -- Old dog, old tricks

On 04/27/2002, I disclosed on the Linux Kernel Mailing List a vulnerability that would become known as the 'etherleak' bug. In various situations an ethernet frame must be padded to reach a specific size or fall on a certain boundary, and this task is left up to the driver for the ethernet device. The RFCs state that this padding must consist of NULLs. The bug is that, at the time and still to this day, many device drivers do not pad with NULLs, but rather pad with unsanitized portions of kernel memory, oftentimes exposing sensitive information to remote systems or to attackers savvy enough to coerce their targets into sending such frames.
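The correct behavior is trivial once stated: if a frame is shorter than the minimum, extend it with NULs before handing it to the hardware, instead of transmitting whatever trails the buffer in memory. A sketch in Python (the constant and function name are illustrative, not from any actual driver):

```python
ETH_ZLEN = 60  # minimum ethernet frame length, excluding the 4-byte FCS

def pad_frame(frame: bytes) -> bytes:
    """Pad a short ethernet frame to the minimum length with NUL bytes,
    as the RFCs require. The etherleak bug is exactly this step done
    wrong: drivers transmitted the frame at full length but left the
    padding as whatever kernel memory followed the packet buffer."""
    if len(frame) < ETH_ZLEN:
        frame += b"\x00" * (ETH_ZLEN - len(frame))
    return frame
```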

Proof of this can be found by googling for "remote memory reading using arp", or by visiting the original posting. Admittedly, I did not realize at the time the exact cause of the bug nor the scope, but to my knowledge this was the first public disclosure.

This was ultimately fixed in the Linux kernel, but over the years the vulnerability reared its head numerous times; at the core, each instance was the same as the one I originally published. The most public of these was CVE-2003-0001, which was assigned to address an official @stake advisory that did not credit my findings.

So now, nearly 5 years later, I'm publishing the recently revamped POC exploit code that demonstrates the issue. The good news is that the bug has been addressed by numerous vendors, but it is still rampant and should be considered a threat.


Wednesday, March 14, 2007

Akamai -- A blessing in disguise?

For the past week or so I've been looped into various projects that, for one reason or another, required analysis of web logs. This little side trip has (re)taught me a number of things about large, production websites and how attackers work.

Perhaps the most intriguing was an analysis of HTTP return codes from our production web tier as compared to what Akamai had logged for the same time period. I had initially assumed that, for the really strange error codes, the numbers between what Akamai had recorded and what we saw should match. Well, I was wrong. For example, for HTTP 500 ("Internal Server Error"), despite the fact that our production web tier received 25% less traffic than what Akamai handled for us, it saw 10% more HTTP 500 errors than what Akamai saw. The reverse was true for 503s and 504s, which are arguably worse than 500s. The production web tier recorded no HTTP 504s, whereas Akamai showed roughly 3500. For 503s, the production web tier recorded roughly 10% more than what Akamai had logged.
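For anyone wanting to repeat this sort of comparison, tallying status codes out of combined-format access logs is only a few lines. A rough sketch (the regex assumes the status code sits between the quoted request line and the byte count, as in Apache's combined format):

```python
import re
from collections import Counter

STATUS_RE = re.compile(r'" (\d{3}) ')  # status follows the closing quote of the request

def status_histogram(lines):
    """Count HTTP status codes across an iterable of combined/common
    log format lines. Build one histogram per log source (origin vs.
    CDN) and compare them to spot the kind of skew described above."""
    counts = Counter()
    for line in lines:
        m = STATUS_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```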

This got me to thinking about why the error code distribution varied so widely. I could explain away some of the HTTP 503 and 504s based on global network wonkiness and with some handwaving, but the 500s and most of the 40x series codes eluded me temporarily. On the way home, it hit me and I spent the rest of my drive in a dead stare thinking about this.

The variations in HTTP error codes on Akamaized sites can be largely explained by the nature of the way most web-based attacks work. I'd venture to say that 90% of the HTTP attacks that any publicly exposed web server faces are not targeted at their host in particular, but rather that their IP address(es) were simply part of a massive attack that someone somewhere had launched in the hopes of getting lucky. So, in theory, for Akamaized sites no "valid" traffic should ever trickle its way to your web tier unless it comes through Akamai. Anything that bypasses Akamai is likely malicious and should be dropped, or at least screwy enough to warrant investigation. Preventing such attacks should be fairly easy, as Akamai clearly identifies itself by way of Akamai-Origin-Hop, Via and X-Forwarded-For headers. Sure, those headers can be forged, but we are not looking to hinder attackers that are intent on attacking our specific web presence. The downside is that you must do L7 inspection for this to work, so processing time and power can become an issue.
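A filter along those lines only needs to look at the request headers before letting traffic through to the web tier. A toy sketch of the decision (these are the header names mentioned above; the matching is deliberately naive and, as noted, forgeable -- the goal is dropping untargeted background noise, not stopping a determined attacker):

```python
AKAMAI_HEADERS = ("akamai-origin-hop", "via", "x-forwarded-for")

def came_via_akamai(headers):
    """Return True only if every header Akamai adds is present.
    `headers` maps header name -> value; comparison is case-insensitive
    since HTTP header names are. Requests failing this check never came
    through the CDN and can be dropped or flagged for investigation."""
    lowered = {k.lower() for k in headers}
    return all(h in lowered for h in AKAMAI_HEADERS)
```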

If anyone has considered this approach and/or actually implemented it, I'd like your input -- leave a comment or drop me an email.

Monday, March 5, 2007

Cisco -- Making log analysis more difficult?

Much of my work at my current job as of late has been in a consulting capacity more than anything. Instead of driving security projects or being the primary workhorse behind a given project, my role has been to provide security advice, help the given business unit prioritize a risk in the grand security scheme, and do some shepherding along its path to completion.

Not too long ago we had a fairly visible outage in one of our environments, and some groups were left scrambling in an effort to explain what happened and how it could be prevented in the future. Unfortunately, there was minimal logging going on, no correlation, no alerting, and little if any monitoring in place. I was asked to address these issues, and within a few hours had replicated the SEC log analysis and correlation setup we have in other environments.

In my haste I appear to have partially botched the configuration, and I was ignoring large swaths of FWSM syslog messages. Fortunately, none of those syslog message types have occurred since my initial screw-up, so my bacon was saved. When I discovered this earlier today, I figured now was a good time to do some more detailed filtering on the FWSM messages we get alerted on.

After many difficult google searches and trudging through Cisco's site, I found this link. "Great!", I thought, learning that FWSM (and PIX) messages are prefixed by their severity. I was this close to implementing something that would immediately alert on severity 1 and 2 messages, pile up larger amounts of 3 and 4 for bulk alerting, and can the rest. Well, Cisco apparently thought better than to make my life that easy, and tossed the notion of "severity" right out the window.
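Filtering on that prefix is straightforward in any log pipeline. Here is roughly the classifier I had in mind before discovering the ratings themselves were the problem -- message format per Cisco's %FWSM-severity-message_id convention; the routing thresholds are my own:

```python
import re

FWSM_RE = re.compile(r"%FWSM-(\d)-(\d+)")

def route_fwsm(line):
    """Classify an FWSM syslog line by its numeric severity:
    1-2 page someone immediately, 3-4 pile up for bulk alerting,
    everything else gets canned. Returns 'page', 'bulk' or 'ignore'."""
    m = FWSM_RE.search(line)
    if not m:
        return "ignore"
    sev = int(m.group(1))
    if sev <= 2:
        return "page"
    if sev <= 4:
        return "bulk"
    return "ignore"
```

The whole scheme, of course, presumes the severity numbers actually track how severe the message is -- which is exactly the assumption Cisco defeated.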

As an example of some of Cisco's severity ratings, behold the following gems:

The good news here is that I was able to warp my mind in such a way that I too could understand the logic here, and accurate alerting is now in place.

Golf clap.

Sunday, March 4, 2007

Saving bandwidth by ... removing comments?

I fixed up a number of shortcomings in htcomment the other day. As part of the development of this tool, I regularly run it against sites I frequent. I started to notice what appeared to be an inordinate amount of comments on some sites, so that got me thinking -- on average, what percentage of a site's web content is comments?

Some quick analysis gave some interesting results:

$  for site in; do
full=`lynx -source http://$site | wc -c`
comments=`./htcomment -q http://$site | wc -c`
echo "$site is "`echo "scale=4; ($comments/$full)*100" | bc`"% comments"
done

is 16.9600 % comments
is 5.5300 % comments
is 1.6200 % comments
is 15.2600 % comments
is 2.6600 % comments
is 5.8500 % comments
is 4.5900 % comments
is .7300 % comments
is 3.7300 % comments
is 0 % comments
is 0 % comments
is 0 % comments
is 2.7500 % comments
is 17.0700 % comments
is .2600 % comments
is 0 % comments
is .3400 % comments

Unfortunately these numbers are not 100% accurate -- htcomment can't differentiate between "<!-- kjdflakjfdaf -->" and just "<!-- -->", so the numbers for the sites that do have comments can be a bit skewed in some respects, but it is a good first-order approximation. It is no coincidence, in my opinion, that google, w3c and craigslist have 0 comments on their frontpage. For sites that have >5% comments on their frontpage alone, you can't help but wonder how the behavior of their site or their bandwidth expenses would change if those comments were filtered out at their edge, or never put there in the first place.

The Art of Software Security Assessment -- Page corruption

I've been reading The Art of Software Security Assessment for quite some time now. I had originally picked a particular track through the book, but when I finished that I went back through to read the remaining chapters. A week or two back I was in what I thought was an interesting chapter -- 14, Network Protocols. All of a sudden the content started to not fit together and I felt like I had already read this stuff before.

Well, sure enough, I had. My book has been owned. Pages 843 - 890 were replaced with pages 795 - 842. OMG, arbitrary memory overwrite of 47 pages!

Friday, January 26, 2007

MySpace, GoDaddy, nmap and snails

I woke up a bit early this morning and began my usual routine of feeding while I checked email, read various news bits, etc. I ran into an article on reddit about GoDaddy having to shut down a site after MySpace complained. This alone didn't catch my attention as this sort of thing happens probably hundreds of times per day.

As the caffeine started to flow, I read further into the article. I saw the name of the site that got shut down, and it rang a bell. I read further and realized that seclists is owned by Fyodor, the author of nmap, which is arguably one of the most important utilities in a security geek's arsenal. I nearly choked on my granola when everything finally clicked.

I don't use GoDaddy, but it made me realize that I should probably avoid GoDaddy in the future, and that many people in similarly edgy shoes will probably do the same.

What gets me most about this whole fiasco is not the fact that MySpace can throw its weight around and get a site axed, nor the fact that it happened to a fairly well-known individual. It's the fact that the data in question was first made public on another site over a week ago. My cat, who is barely 10 months old, has ADHD, is a vicious killer, and knows nothing other than how to sleep, eat, shit, cause trouble and get attention, could've addressed a security issue like this quicker than MySpace did. We aren't talking about a few passwords here, either. We are talking about over 56,000 MySpace credentials and a password stealing scheme that was extremely effective.

The security staff (if any :P) at myspace should be ashamed.

Monday, January 15, 2007

Dropped mail?

I figured I'd throw this out there hoping that some of the 3.4 people that read this blog may have some insight.

I've owned my domain since 2001. While I can't guarantee it, I have every reason to believe that the machine(s) used to host it have never been involved in any sort of mass-mailing, virus spewing, or other things that tend to cause mail to get dropped.

Over the past, oh... 6 months, I've seen a number of instances where mail coming from my domain has been dropped for one reason or another. In some cases, my email was in reply to one that someone else sent; when they never got my response they asked why, but I've got logs saying that the email was successfully delivered. In some cases (a couple of the big webmail providers being good examples), mail would get delivered successfully but dropped in a spam folder, which in the end means it'll go unread and eventually be deleted. In other cases, I'd initiate or respond to an email and never get a response, but I know the person on the other end would've responded had they received my email.

Like anyone that's ever posted to a public mailing list, my email address(es) and domain have been used plenty of times as part of forged spam or other nastiness. That's a risk I accept, as really it's just life on the Internet. But I can't help but wonder why my mail gets treated like garbage and all too often dropped.

I know at one point some of the big mail providers were making heavy use of SPF, so I recently configured an SPF record for my domain:

$  host -t txt
TXT "v=spf1 a mx ptr -all"

If anyone has any suggestions for things I could do (or not do...) to prevent my mail from getting dropped or otherwise demoted, please leave a comment here. Feel free to poke around, too. Examine my DNS setup and my postfix configuration. I believe it all to be sane, but I could be wrong and would love to have that proven.

Thanks in advance!

Tuesday, January 2, 2007

Can't see the forest for the trees

Yesterday was perhaps the laziest day I've had in quite some time, but it was worth it. I woke up at 10ish and spent the remainder of the day planted in front of my laptop hacking. Come midnight I realized that the ROI on my time was diminishing.

My challenge for the day was much like many of the other security adventures I've been on in the past few years -- something catches my eye, and I pursue it. This particular trait has proven quite useful in my pursuit of security success, but it is actually one that I acquired long before I even owned my own computer -- and I mean "own" as in "it belongs to me", not "0wn". Growing up I spent the bulk of my time outdoors doing various things -- riding my bikes, camping, fishing, hiking, etc. For whatever reason, I had this ability to, without even trying, discover misplaced or lost belongings. As an example, on camping trips with the scouts, I'd routinely be walking along, doing whatever needed to be done at the time, and I'd see a silhouette or something that otherwise stood out ever so slightly. It would usually turn out to be a watch, a flashlight, or some other outdoor goody. Eventually it got to the point that other people would actually suspect I was stealing these things, or they'd get pissed off at me because I would always find these treasures that would've otherwise gone unnoticed.

Yesterday's challenge was something that I had casually noticed in a packet capture several days earlier -- a DNS request for the hostname 'netmask'.

The first several hours were spent verifying that the request had, in fact, come from what application I thought it came from, and then determining what, in particular, had made that request. Eventually I realized that what I was up against was a call to system("/sbin/route somestuffhere"), where somestuffhere was taken in part from a DNS response.

After a quick POC with Dug Song's dnsspoof, I knew I could manipulate that call, but dnsspoof only does simple A record responses, so all I could wind up doing was getting /sbin/route to do something funky with this particular call. While that is certainly a vulnerability all by itself, it wasn't quite what I was looking for. It's like fishing -- it was a nice catch, but there is something bigger/better out there. I debated hacking up dnsspoof to do my bidding, but that quickly proved to be more C than I had time for. I whipped up some perl that did exactly what dnsspoof did, and then modified it to do my bidding. It will respond to regex-matched DNS A record lookups with arbitrary records -- A, CNAME, MX, etc -- that you define. Doing an nslookup on a hostname and getting back a TXT record of `/bin/id` is quite amusing.

Anyway, I'm finalizing this code and hope to make it available very soon.

Happy 0x7d7

So, rumor has it that it is now the year twenty o-seven. Personally, I'm still not even sure Christmas happened, because I know that I spent all of yesterday in just shorts and the heat wasn't on, and on what was supposedly the last day of 2006, I rode my bike outside and sweat my ass off. On the East Coast, that just doesn't happen.

A funny thing happened at work today. I was filling out paperwork for vacation and mistakenly put today as 1/2/2006, and all of my other dates were also off by one year. This sort of thing happens to just about everyone at least once this time of year. In thinking about why, I realized that a large part of it is due to the fact that, in today's society, you rarely actually write the date down. At least 75% of the time it's just automatically entered for you -- email, calendars, etc. Plus, when you actually need to know today's date, how often do you actually think it out instead of asking someone or something for the answer? In the computer world, the date is automatically printed in my zsh prompt, and if I need to know a specific date, cal tells me the rest. In the real world, say at the grocery store, you just ask the cashier what the date is.

Anyway, here's hoping 2007 is as eventfully uneventful as 2006.