
Recently in Security Category

Notes on the Terry Childs Case

This week, Terry Childs, the [San Francisco Sys Admin who refused to turn over passwords to the city network to his superiors because he felt they were incapable of properly managing it](http://www.cio.com.au/article/255165/sortingfactsterrychildscase/?fp=&fpid=&pf=1), was found guilty of felony denial of computer services. First off, I think Mr. Childs is absolutely guilty of his crime. I’ve never worked in a place where I was the only one with key access to core systems. Hell, I usually insisted on a password sheet (or a USB key with password data) stored in the company safety deposit box. Of course, I still wanted to know what the definitions in the case were, so I had to go to California’s Penal Code, whose website is pretty awful, so I’m just going to quote here:

502. (a) It is the intent of the Legislature in enacting this section to expand the degree of protection afforded to individuals, businesses, and governmental agencies from tampering, interference, damage, and unauthorized access to lawfully created computer data and computer systems. The Legislature finds and declares that the proliferation of computer technology has resulted in a concomitant proliferation of computer crime and other forms of unauthorized access to computers, computer systems, and computer data.

The Legislature further finds and declares that protection of the integrity of all types and forms of lawfully created computers, computer systems, and computer data is vital to the protection of the privacy of individuals as well as to the well-being of financial institutions, business concerns, governmental agencies, and others within this state that lawfully utilize those computers, computer systems, and data.

(c) Except as provided in subdivision (h), any person who commits any of the following acts is guilty of a public offense:

(5) Knowingly and without permission disrupts or causes the disruption of computer services or denies or causes the denial of computer services to an authorized user of a computer, computer system, or computer network.

(d) (1) Any person who violates any of the provisions of paragraph (1), (2), (4), or (5) of subdivision (c) is punishable by a fine not exceeding ten thousand dollars ($10,000), or by imprisonment in the state prison for 16 months, or two or three years, or by both that fine and imprisonment, or by a fine not exceeding five thousand dollars ($5,000), or by imprisonment in a county jail not exceeding one year, or by both that fine and imprisonment.

Okay, whew. Section 502 of the California Penal Code is kind of interesting, though I hate that computer crime has a separate set of statutes from other crimes. Is ‘denial of computer access’ really any different than, say, boarding up all the doors and windows of a person’s home (or place of business, so there is a more obvious financial burden), denying them access to that resource? I don’t think so, and I don’t think computer crime should be treated differently. Mr. Childs absolutely denied access to his superiors, who absolutely (though the jury did debate this) were authorized to use the system.

My issue with this case, aside from the double standard of computer versus physical crime, is how it was handled. Mr. Childs was held on $5 million in bail. Five million dollars. Incidentally, child molesters, arsonists, and kidnappers had their bail set at a mere one million dollars, at least in San Francisco in 2008, when Mr. Childs’ bail was set. Of course, since he’s been in jail for almost two years, his sentencing should offer credit for time served, and his additional time in jail should be very short, if any at all. And given that he’s been imprisoned since 2008, I would be surprised if, under Section 502, he received an additional fine.

But we’ll see. I have no idea what the legal basis for $5,000,000 in bail was, though this is certainly not something that was on the bail schedule. I just fear that the over-reaction in setting his bail is going to translate into an over-reaction in his sentencing.

Sys-Admins have a tendency to feel like gods of their own little domains, and Childs’ actions are highly indicative of that. I do have a small amount of respect for the sheer level of conviction displayed by the man, but it was misplaced. And now, he’s lost nearly two years of his life to that misplaced conviction. Does he really need to lose any more?

Poor Practice: E-Mailing Passwords

A couple of weeks ago, I sent out the following Tweet:

Tweet about Emailing Passwords

This prompted a short conversation with Marc Hoffman about how the practice tries to strike a balance between security and convenience, the convenience factor being that if users have their password in their e-mail, it can reduce the need for password reset requests, and that sort of thing.

In my opinion, there is no reasonable argument for convenience. Most users utilize a very small number of passwords anyway. Those who don’t usually take advantage of a password-safe application in order to keep things straight. Which is fine. I don’t consider it the real answer to the password problem (that would be OAuth), but it allows you to securely store passwords (though whether this is better security than an index card in your wallet is debatable) and manage the complexity.

E-mailing a copy of the user’s password, when you’ve already required them to enter their password into your site twice, does not help either of these use cases. Password-safe users will have already saved it to their database. Password-repeaters already know their password.

Plus, it always gives me a nagging feeling that my password is going into their database the same way it came out of that e-mail. Plain text. That may not be the case (and I always pray it isn’t), but if they’re already expressing (what I consider) a lackadaisical attitude about using my password, it doesn’t give me a whole lot of hope.

Marc feels it’s an act of balancing between convenience and security, and certainly, all security advice is a trade-off. However, I don’t see any benefit to this. The odds of a user typing their password wrong twice and needing to be reminded of it in an immediate e-mail, or of a user choosing an unfamiliar password and not storing it somewhere secure, seem very low, and the practice sends a strong message that the password isn’t something of any value. For many sites, it isn’t, but given the tendency of users to reuse passwords…

If anyone can provide me a strong use case for e-mailing a user the password they just entered into your site to create an account, I’d love to hear it.

The Problems with Responsible Disclosure

For a long time the security community has been talking about responsible disclosure. The Wikipedia entry describes the process as one that depends on stakeholders agreeing to a certain period of time before a vulnerability is released, but this can still lead to months of a vulnerability going unpatched, and with ‘responsible’ disclosure, users usually have no idea, even if the vulnerability is being exploited in the wild.

Now, in some cases, it makes sense. Dan Kaminsky’s DNS Cache Poisoning discovery had not been seen in the wild at all, and the community was able to use the opportunity to upgrade almost all DNS servers in one fell swoop. The vendors were very willing in this case.

I’m not advocating that a security researcher immediately post their findings publicly (aka, full disclosure), though I’m not opposed to that. I think that sometimes there is value to the pain some researchers express at doing the responsible thing. The vendor should absolutely be the first one notified, but in my opinion, public disclosure needs to happen faster than it tends to. If vulnerabilities aren’t posted to sites like The Open Source Vulnerability Database, then a lot of work is duplicated, but furthermore, customers aren’t even able to properly protect themselves.

Some developers will simply not attempt to fix security issues, or will propose workarounds that aren’t acceptable to all users, as in the rant linked above.

The real reason I don’t think responsible disclosure works is that many times vulnerabilities that are already in the wild aren’t being publicized properly. With full disclosure, customers can help prioritize fixes. Customers can institute workarounds that might provide them the temporary security they need. Intrusion Detection Systems can be outfitted with signatures that can help prevent live compromises. A lot of things can happen that are likely to make us safer.

Then there is the other side of the coin: disclosure doesn’t make vendors any better at software development. It doesn’t make them any less prone to the same old mistakes. At the Pwn2Own 2010 competition at CanSecWest this year, the researcher who exploited Safari to root a Mac did so for the third year in a row. In minutes. Using the exact same class of exploit that he’s been using all along. The same mistake just keeps happening.

This year, he chose not to reveal the specific exploit, instead running a session on how he scans for the vulnerabilities, with the hope that vendors will start doing this themselves, since it’s mostly done via automated fuzzing. While I’m not arguing for no disclosure, as was done in this case, at least Mr. Miller presented his techniques, so that Apple and others can finally get their act together on this all too common class of errors.
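
To give a sense of how simple the core idea is, here is a generic mutation-fuzzing sketch (illustrative only, and not Mr. Miller’s actual tooling; the seed file and target binary names are placeholders): take a known-good input, flip a few random bytes, feed it to the program under test, and keep anything that makes it crash.

    // Generic mutation-fuzzing sketch (illustrative only, not Mr. Miller's tooling).
    // Assumes a seed file and a target binary exist at the paths shown.
    const { execFileSync } = require('child_process');
    const fs = require('fs');

    const seed = fs.readFileSync('sample.pdf');   // any known-good input

    for (let i = 0; i < 1000; i++) {
      const mutated = Buffer.from(seed);          // copy the seed
      const flips = 1 + Math.floor(Math.random() * 8);
      for (let j = 0; j < flips; j++) {
        mutated[Math.floor(Math.random() * mutated.length)] =
          Math.floor(Math.random() * 256);        // corrupt a random byte
      }
      fs.writeFileSync('fuzzed.pdf', mutated);
      try {
        execFileSync('./target-parser', ['fuzzed.pdf'], { timeout: 5000 });
      } catch (err) {
        // Crash, non-zero exit, or hang: keep the input for later triage.
        fs.copyFileSync('fuzzed.pdf', 'crash-' + i + '.pdf');
      }
    }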

ChromaHash: Not As Dangerous As You Think!

The ChromaHash module I’ve submitted to the YUI3-Gallery got hit up on Reddit this week, which incidentally is the second time ChromaHash has been discussed there. And the discussion was just as negative this time around.

First, a lot of people focused too heavily on the fact that the demo screen is a confirm-password box, commenting that a simple checkbox confirming the passwords were the same would be sufficient. From those who did recognize that this was meant to be used on a login page as well, we got the usual reaction: since this gives away any information about the password at all, it’s wholly unacceptable and completely compromises password security.

Now, Mattt Thompson, creator of ChromaHash for jQuery (and whose module mine is based on), has written a pretty good post outlining why this implementation isn’t as bad as the reaction we’re getting from certain information security folks, and I’m not planning to reiterate his points (at least, not entirely), since I, as a person with great interest in information security, think that Mattt’s post is more than sufficient at making the point.

Instead, I’m going to talk a bit about some of the suggestions that have come out of the Reddit threads.

  1. Salt Password with Username

Actually, this is a reasonable option, since it would ensure that users with the same password don’t discover that. There are a few implementation-level details, since we’ll have to tell chroma-hash where to find the salting value. My initial thought is to let the salt option optionally take an array: any element of the array that is a valid CSS selector is replaced with the value property of the node it refers to, and that is appended to the other elements of the array (as strings) to get the salt value (a rough sketch appears after this list). This does mean I’ll be recomputing the salt periodically, but I think there are ways around that (subscribing to the ‘change’ event for the node, for instance). This suggestion, I think, warrants some more consideration. Though really, password collisions should be pretty rare.

  2. Could reconstruct password as user typed it.

This is really only an issue for slow typists, since in my implementation the color shifts take half a second. This ought to be user configurable, and it will be in a new release soon-ish. An alternative, whose usefulness I’m less convinced of, would be to set a delay between the last keypress and the start of the animation. I might do this one, but again, I’m not convinced it’s useful.

  3. Randomly salt password on pageload

Umm, no. This would make the tool completely worthless, since the colors for a given password would change on every page load, and the user could never learn what to expect.

  4. People are colorblind.

Yes, approximately 14.5% of the population has some color-related vision problems. But that means 85.5% of the population doesn’t. And even people who are color-blind can still glean information from ChromaHash, even if they’re more likely to encounter collisions. Plus, this is a non-essential tool, so it’s not like using ChromaHash prevents the color-blind from interacting with your site.

  5. Any information about the password is TOO MUCH INFORMATION

Actually, my favorite one of these is a guy who made a brute-forcer for chroma-hash. Now, his example is kind of bullshit because he uses a crap password, so of course it’s fast, but it completely fails to take a few things into account:

  1. We’re using MD5 on the backend, which outputs 32 hexadecimal digits, of which we’re only using 6 to 24 (which is configurable by the ‘bars’ option). The collision space, particularly if you only use one or two bars, is non-trivial.
  2. There are very few circumstances where an attacker could get the exact hex values for a ChromaHash when they wouldn’t have better mechanisms to steal your password (i.e., keyloggers). And in those cases, disabling ChromaHash (at least, the YUI3 version I wrote) isn’t very difficult, and could be wired up to a key event handler, an example of which I’ll probably have later.
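
To make the salt-with-username idea from the first suggestion a bit more concrete, here is a rough sketch of how the salt array could be resolved (the option handling and function names here are hypothetical, not the published chroma-hash API): elements that are valid CSS selectors are swapped for the value of the matching node, and everything else is treated as a literal string.

    // Sketch of the proposed salt-array resolution (hypothetical, not shipped code).
    function resolveSalt(saltOption) {
      if (!Array.isArray(saltOption)) {
        return String(saltOption || '');
      }
      return saltOption.map(function (part) {
        var node = null;
        try {
          node = document.querySelector(part);     // treat the element as a CSS selector
        } catch (e) { /* not a valid selector, fall through */ }
        return node ? node.value : String(part);   // otherwise use it as a literal string
      }).join('');
    }

    // The salt would be recomputed whenever the salting field changes, e.g.:
    // Y.one('#username').on('change', function () {
    //   var salt = resolveSalt(['#username', 'example.com']);
    //   // hash(salt + password) then drives the color bars
    // });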

It’s highly unlikely that someone will be able to get enough immediate data from this system to be able to make a reasonable attack on a password, certainly not when there are so many other, easier ways to perform that attack. And ChromaHash is configurable, both in when it goes to color and in how many bars it displays, both of which help in this situation.

Ultimately, however, passwords are a failure as a security mechanism. Most people use the same password (or a small set of passwords) everywhere, and they don’t change them very often. Not to mention the fact that a lot of people storing passwords are doing a poor job of it. I worked at an e-commerce company not too long ago where, when I took over their web presence, the passwords were sitting in the database in plaintext and access rights were driven by a cookie. Hacking this site was trivial until I rewrote it, and even then, there are a few things that I didn’t do correctly right away, like not salting the MD5 hashes I was storing in the database (or using MD5 as my hashing algorithm in the first place…).
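
For what it’s worth, the remedy I’m describing looks roughly like the sketch below: a random per-user salt and a non-MD5 digest rather than a bare MD5 of the password. (This is a minimal illustration using Node’s crypto module; the column names are arbitrary, and in practice a deliberately slow hash is preferable.)

    // Minimal sketch of salted, non-MD5 password storage (illustrative only).
    const crypto = require('crypto');

    function hashPassword(password) {
      const salt = crypto.randomBytes(16).toString('hex');    // unique per user
      const hash = crypto.createHash('sha256')
                         .update(salt + password)
                         .digest('hex');
      return { salt: salt, hash: hash };                       // store both columns; the salt is not secret
    }

    function checkPassword(password, salt, storedHash) {
      const candidate = crypto.createHash('sha256').update(salt + password).digest('hex');
      return candidate === storedHash;
    }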

I believe that ChromaHash can improve usability, since it provides immediate feedback to the user that their password is accurate, and given that many people are visual, I think that they’ll develop, rather quickly, a gut reaction to whether the hash colors look right or not. Will it work for everyone? No, but it could help most people.

We need to move beyond passwords as an authentication mechanism. I’m a big fan of the Yubikey, particularly when paired with OpenID, though just migrating toward OpenID is a huge improvement. But ChromaHash, as it stands, does not significantly weaken the nature of passwords.

Undoing a Lockout on an Android Phone

Over the weekend I ran into a major problem with my Android-based phone. While we were moving into our new condo, I had the phone in my pocket, somehow triggered the touch-screen pattern-unlock mechanism, and proceeded to accidentally lock the phone out badly enough that it was demanding my username and password to unlock the device.

Unfortunately, I apparently can’t remember that password.

Fortunately, if you have debugging over USB enabled, you can hack your way into the phone.

Yes, this is a security vulnerability, but most any device has some security problems when you have physical access to it, and at least in this case, you have to enable USB Debugging, which is only enabled by default on developer firmwares.

To perform this fix, you need to have the Android SDK, and your phone needs to appear on the output of adb devices. Google provides a Windows Driver, and most Linux distributions should work fine. Once you have the device attached, just execute the following.

$ adb -d shell
# sqlite3 data/data/com.android.providers.settings/databases/settings.db    
sqlite> update system set value=0 where name='lockscreen.lockedoutpermanently';
sqlite> .exit
# exit

The effect should be almost immediate. Press the ‘Menu’ button on your phone, and you’ll be prompted for your pattern. If you’ve, for some reason, forgotten what your pattern is, you can follow these instructions to disable the pattern prompt, which was the basis for my solution.

SSL Weaknesses Shown

This last week at the Chaos Communication Congress in Berlin, a group of international security researchers revealed research exposing some interesting flaws in the Secure Sockets Layer (SSL) certificate system we use to validate that a connection is secure.

The attack was very specific, and was based on an attack against the MD5 digest algorithm first discussed in 2004. That attack showed that it is possible on commodity hardware today to compute two blocks of data which, though different, share the same MD5 digest. This is a huge problem in cryptography, because when a file is cryptographically signed, the private key doesn’t actually sign the file itself (that would amount to encrypting the entire file with the private key); rather, it encrypts a digest of the file. This means that if you have two plaintexts that share the same digest, signing one plaintext essentially signs both.
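
To make that last step concrete, here is a minimal hash-then-sign sketch (using Node’s crypto module; the raw private-key encryption is a simplification of real X.509 signing, and the request strings are made up): the signer only ever operates on the MD5 digest, so any two inputs with the same digest necessarily receive the same signature.

    // Sketch: a signature is computed over hash(data), not over data itself,
    // so colliding inputs share a signature. Simplified; real certificate signing
    // adds padding and structure, but the hash-then-sign shape is the same.
    const crypto = require('crypto');

    const { privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

    function md5(data) {
      return crypto.createHash('md5').update(data).digest();
    }

    function signDigest(digest) {
      return crypto.privateEncrypt(privateKey, digest).toString('base64');
    }

    const legitimateRequest = 'CN=www.example.com, ...';   // what the CA means to sign
    const signature = signDigest(md5(legitimateRequest));

    // If a rogue request were crafted so that md5(rogueRequest) equals
    // md5(legitimateRequest) (exactly what the 2004 collision attack allows),
    // then `signature` is equally valid for the rogue request.
    console.log(signature);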

Now, even though MD5 has been known for nearly four years to not be cryptographically secure for generating signatures, it’s still been in relatively high use, and the researchers found several companies who were generating SSL certificates which would be trusted by default in IE and Firefox, but signing them using MD5. Most certificate vendors have migrated toward SHA-1, and we’re beginning to see migration to SHA-2, but the holdouts saw little reason to update, until recently, I suppose.

The attack involves generating two certificate requests that will produce the same MD5 digest, so that when the Certificate Authority signs the legitimate request, the signature is equally valid for the illegitimate one. There were a few problems with this. There were two parts of the signed certificate they didn’t control: first, the serial number, and second, the time the certificate started being valid. The serial number turned out to be easy, as RapidSSL, the provider they were using, assigns serial numbers incrementally, and its certificates become valid exactly six seconds after submission. With that in mind, they were able to guess what the serial number and validity time would be, given that they checked the serial number counter a few days before the attack.

They then used 200 PlayStation 3 consoles to generate a colliding request which was itself an intermediate CA (meaning the ‘rogue’ certificate could sign valid-looking certificates). There was a lot of work these researchers did to make the process of generating the collision faster, but they apparently don’t plan to release any details of that work. I understand their decision, but I’m not sure I agree with it. Perhaps in a year or so, when more people have migrated away from MD5.

So what’s this mean? Well, paired with a DNS spoofing attack it means that the attackers could redirect your bank’s website from your bank’s servers to their own, all with a valid SSL certificate. It means that a man-in-the-middle attack could be performed over a secure connection, while looking 100% valid the entire time.

Ultimately, it doesn’t change anything for most users. Most users click right through invalid security certificates, blame for which I place on the high cost of SSL certificates. Perhaps with the new EV certificates we’ll see prices on the less expensive certificates drop, but I doubt it. For those users who do pay attention, it means that they could be effectively tricked.

Luckily, this research has convinced those few MD5 hold-outs to switch to SHA-1, which will effectively render this attack impossible (at least until SHA-1 is broken), but the last thing the research revealed was the problem with revoking SSL certificates. The browser depends on the certificate to tell it where to look to see if the certificate has been revoked, but the rogue certificates don’t supply that information; the researchers had to overwrite it with random data. The solution involves running your own revocation server, but that is just not reasonable for most users.

SSL is still a good thing, and something that we should be careful to make sure we have, but it seems that it may require some fundamental changes, not only in how it’s used, but in the specification itself.

Security Conference Wrap-Up

Summer is here and with it comes a variety of hacker conferences. We’ve got Defcon, Black Hat, and my favorite, The Last HOPE (Hackers on Planet Earth, run by 2600).

Defcon is the longest-running of the conferences, having been in Vegas since 1993, and having long been an interesting mix of the hacker community and law enforcement. It’s three days of intense learning, hacking contests, games, all sorts of hacking-related stuff, and that’s just the advertised events. I’ve heard a lot of stories of people going to Def Con and seeing things like cell-phone scanning going on behind closed doors. And it’s only $120 for the conference. Cheap as shit. I’m going to have to try to go next year.

Black Hat bills itself as “The World’s Premier Technical Security Conference”, and I’ll be honest, there are some pretty intense sessions. Like the FasTrak system I discussed last week. My big problem with Black Hat is that it’s gotten to be too damn commercial, or maybe it always sort of was. It costs a few thousand to go, and that’s before Vegas hotel rates. Plus, they actually kicked out reporters for allegedly hacking. At a hacking conference. This would never happen at a real hackers’ conference. Might as well go to RSA, if you’re looking for such a watered down hacker environment.

Which brings me to HOPE. I talked about The Last HOPE a while back, expressing my dismay at possibly missing the last HOPE conference ever. Luckily, the owners of the Hotel Pennsylvania have been convinced not to raze the hotel, and The Next HOPE has been scheduled for 2010. My only complaint is that they didn’t make the obvious Star Wars joke.

Even better though, is that 2600 has made the audio of all the talks from The Last HOPE available for free download. I’m working my way through them, all 2.4 GiB (my ISP is going to be so pissed). But you can easily just pick and choose. When the video becomes available, I’ll have to buy some of my favorite sessions.

This is why I love HOPE and Def Con. They’re more open than anything else, they exist to share knowledge, and they try to do it at as low a cost as possible. They’re about teaching and they’re about knowledge. I encourage everyone to download the talks from The Last HOPE. You’re bound to learn something, and that’s ultimately the whole point.

FasTrak Easily Ruined

The [Blackhat](http://www.blackhat.com/) conference was running this week, and a large number of interesting security issues were raised (even if [Apple wouldn’t let their devs talk](http://www.fiercecio.com/story/black-hat-presentation-apple-cancelled-last-minute/2008-08-05)), but one that I found interesting was the discussion of the FasTrak system. FasTrak is an automated toll-paying system used in California’s large cities that have toll booths on their major motorways. Researcher [Nate Lawson of Root Labs](http://www.root.org/~nate/) discovered that FasTrak, which I suspect is very similar to New York City’s FastPass system, uses no authentication, and simply replies with its RFID signal to anyone who scans it.

Anyone who’s read Cory Doctorow’s Little Brother will find this familiar. Especially when matched with the next step. Unauthenticated over-the-air upgrading. That’s right, you can change the value of the chip without actually handling the chip. Awesome.

So, what’s this mean? Well, the unauthenticated read allows anyone with a reasonably powerful RFID reader to track anyone with a FasTrak in their car from any location. In Little Brother, the Department of Homeland Security (DHS) uses this system to track people all over the streets of San Francisco. And as bad as it would be for the Government to do something that broad, this system allows anyone who wants to track individual vehicles easily throughout California.

And the unauthenticated update? This makes it trivial to travel for free, as you can easily steal a valid FasTrak code, re-flash your own FasTrak, and travel on someone else’s dime. It also allows people who have an interest in masking their movements to change their FasTrak codes frequently, so that they cannot be tracked via FasTrak. Really want to create mayhem? Do what Marcus and the other Little Brothers did, and start just randomly flashing people’s FasTraks.

RFID is an inherently untrusted protocol. It gladly responds to anyone who asks for its code, and by default it doesn’t have any method to authenticate even for writes. Over-the-air writes are a dangerous idea in the first place. If someone really needs to recode their pass, they should have no problem taking it somewhere to be safely re-written over a wire, preferably using encryption to verify that the new code was authorized. Over-the-air reads, a fantastically useful thing, should require a strong challenge. This is much harder, though it could be implemented using something like a simple counter and encryption, so that the signal is encrypted and can only be decrypted by the software with the other half of the key. It’s harder, and it’s more expensive, but it’s far, far safer.
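
As a rough illustration of that counter-plus-encryption idea (the names, key sizes, and message layout below are invented for the sketch, not any real FasTrak protocol), the tag could answer a fresh challenge from the reader with an encrypted bundle of its ID, an increasing counter, and the challenge itself, so that replies can neither be read nor replayed by anyone without the shared key.

    // Hedged sketch of a challenge/response read (illustrative names and layout only).
    // The tag and the tolling back-end share a key; the tag never answers in the clear.
    const crypto = require('crypto');

    const sharedKey = crypto.randomBytes(32);     // provisioned into tag and back-end

    function tagRespond(tagId, counter, challenge) {
      const iv = crypto.randomBytes(12);
      const cipher = crypto.createCipheriv('aes-256-gcm', sharedKey, iv);
      const payload = Buffer.concat([
        Buffer.from(tagId),                       // who is answering
        Buffer.from([counter]),                   // monotonically increasing counter
        challenge,                                // the reader's fresh nonce, echoed back
      ]);
      const ciphertext = Buffer.concat([cipher.update(payload), cipher.final()]);
      return { iv: iv, ciphertext: ciphertext, authTag: cipher.getAuthTag() };
    }

    function readerVerify(response, challenge, lastCounterSeen) {
      const decipher = crypto.createDecipheriv('aes-256-gcm', sharedKey, response.iv);
      decipher.setAuthTag(response.authTag);
      const payload = Buffer.concat([decipher.update(response.ciphertext), decipher.final()]);
      const counter = payload[payload.length - challenge.length - 1];
      const echoed = payload.slice(payload.length - challenge.length);
      return echoed.equals(challenge) && counter > lastCounterSeen;
    }

    // The reader issues a fresh random challenge for every read, so a recorded
    // reply is useless later, and only software holding the key can read the ID.
    const challenge = crypto.randomBytes(8);
    const reply = tagRespond('FASTRAK-0001', 7, challenge);
    console.log(readerVerify(reply, challenge, 6));   // true only for a live response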

In addition to FasTrak falling apart, the Mifare cards created by NXP Semiconductors, and used for London’s transit among many other systems, have been found to have similar exploits. Bruce Schneier already has a fantastic write-up on this on his blog, particularly NXP’s attempt to suppress the researchers who uncovered the flaws.

Security is hard, really hard. It constantly needs to be fixed and updated, but there are certain things that should be so obviously wrong, like RFID update over-the-air, that I can’t believe people base entire businesses on such obviously flawed systems. Still, consumers have a right to know, and researchers have a right to research. Plus, by the time the researchers have figured it out and published, there is always a good chance that someone else has already figured it out too, and has been exploiting it for their own gain.

Firefox 3 and Self-Signed SSL Certificates

There has been a debate ensuing on Debian Planet since last week about Firefox 3’s new behavior for what it views as invalid SSL certificates. Having upgraded to Ubuntu 8.04 back in February, I’ve been using Firefox 3 since it hit rc1, so I can definitely relate to the problems that people are having. I completely agree with the sentiment of those who view the new behavior as a necessary evil. Self-signed SSL certificates are a potentially huge security risk. Unfortunately, they’re common as spit and most people just click right past them because they’re getting in the way of the user doing what they want.

Firefox’s new approach is pretty heavy handed. So much so, in fact, that it appears you can’t work around it without some non-trivial changes to Gecko. This probably wouldn’t be so bad, except that most users have absolutely no idea what to do when confronted with this:

Firefox 3 Invalid SSL Cert Display

I know that my wife didn’t when the wireless network of the hotel we stayed at following our wedding redirected us to a site with an Invalid SSL Certificate. Hell, it threw me for a loop the first time I saw it. Other people have, of course, reported similar experiences.

In reality, I blame the insane cost of SSL certificates. Partially, this is because the standard for SSL security in web browsers is an all-or-nothing deal. You’re either signed by a Certificate Authority (CA) in the browser’s certificate file, or you’re not. Because of this, CAs have no incentive to change the way that they offer certificates: you pay through the nose for a ‘valid’ one, or you don’t and use a self-signed ‘invalid’ one. The absolute cheapest you can get a Web-enabled certificate from Thawte is $150/year, and in that case they only identify the domain, not the user. Want your company identified for better security? That’ll be an extra $100/year. Not that most users will notice. Want the fancy green address bar (at least in newer browsers)? Be prepared to spend a whopping $800/year.

Actually, I fully support this sort of pricing model (though I think that $150/year for a domain-only SSL certificate is ridiculous), but we need better mechanisms to communicate how much the key should be trusted. The Extended Validation (EV) certificate is a huge step forward in this, but it’s still not very fine-grained, especially when many sites that need encryption, or that require it, like Microsoft Office SharePoint Server, simply can’t justify that sort of expenditure for a signed SSL certificate.

Admittedly, organizations can create their own CAs for internal use, and sign certificates all they want. This becomes impractical at some point, however, because you need to make sure that every user in your organization has the CA certificate installed. Washington State University has a CA certificate that I suspect is installed on almost every departmental computer on campus, but most of the organization simply doesn’t use it. This is likely due, in part, to the number of off-campus users, and the freedom which we provide users to bring their own hardware. My Eee PC spends quite a bit of time on the WSU network, but I don’t have the WSU CA certificate. Still, I would prefer that a lot of these self-signed sites were using the WSU certificate, as then I could install that cert and have them just work. As it stands, I have no reason to really even consider that course of action.

What we really need is for the web to be tied into a true Web of Trust. I choose the Root CAs I want to honor by signing their keys with my own, and I can assign trust to other users’ signatures, so that I can opt to trust them simply because someone I trust trusts them. Since most trust applications allow you to specify differing levels of trust, this is practically built into the encryption scheme. And I can explicitly set my trust on the Firefox key, so that I accept keys that Firefox trusts, and amazingly, my situation doesn’t really change much.

Of course, the above paragraph is a pipe-dream. The majority of encryption software is too difficult for the average user to use, and most users simply don’t care to learn. But as I’m a huge advocate for large-scale public-key encryption, I’m going to keep dreaming. In the meantime, we need a trusted Root CA who sells discounted certificates so that non-commercial entities who want (or need, which isn’t always the same thing) them can have valid ones without inconveniencing their users significantly.

There is the other side of this: perhaps Firefox is trying to annoy users, to force web developers to do what they feel is right. Microsoft did the same thing with the UAC in Vista, after all. However, if this is the case, Mozilla has made an enormous mistake. For Windows Vista, redesigning the application just a little bit can get rid of those annoying UAC boxes, and actually result in a net increase in application security. Requiring signed certificates makes the web more secure, without a doubt, but the cost involved for many organizations seems prohibitive, especially for Open Source projects that feel that they’re doing their users a favor by encrypting logins to web-based systems.

I’m glad that Mozilla is trying to do something, but I agree with those who feel that they’ve gone too far. I’d be happy if, on the first alert screen, there was a button that allowed me to trivially accept the key on a temporary basis, while still requiring the full process to add the key permanently. And ideally, I wouldn’t have to click on the “Or you can add an exception…” link to see the actual options.

Firefox 3 SSL Options Buttons

Border Laptop Searches Case Heats Up

Michael Timothy Arnold is a US citizen who was recently arrested when a search of his laptop, as he re-entered the country from the Philippines, turned up several images of child pornography. He was able to get the lower courts to honor a motion to suppress, arguing that the search was unlawful. Part of his argument was that the in-depth search of his laptop was triggered by the discovery of legal pornography in a cursory search of the system. The existence of any pornography on a digital system should not serve as probable cause for an in-depth search, but the laws regarding border searches are somewhat messy.

In a 9th Circuit Court of Appeals judgement on the case, the court recounts several cases dating from as far back as the early 1970s which establish the powers that Border Control Agents have to conduct searches. In United States v. Ramsey (431 U.S. 606, 616), it was decided that “searches made at the border… are reasonable simply by virtue of the fact that they occur at the border….” This particular interpretation of the law does bother me at a pretty fundamental level, but it is the accepted case law, and as such, is important context for the analysis of this decision. At least the Courts have acknowledged, in 1982’s United States v. Ross (456 U.S. 798, 823), that you have the same expectation of privacy at the borders whether your luggage is a handkerchief on a stick or a locked attaché case.

In fact, case law to date has basically held that Border Control Agents have the right to intrude “beyond the body’s surface,” without probable cause. However, the Supreme Court has left open the possibility that “some searches of property are so destructive that they require particularized suspicion.” And therein lies the basis of Mr. Arnold’s defense.

His claim is that the laptop and its contents are more analogous to the contents of the owner’s home (due to the amount of data it can store), or the user’s own brain (since it holds ideas, conversations, and data regarding habits). The home claim is completely false, in my opinion. Yes, the laptop can store an amazing amount of data, but it is clearly a portable closed container, more analogous to the locked attaché case mentioned above than to a home. My non-legal opinion is that, under current law, the Border Agents were completely within their rights in the initial search. Whether or not the existence of the easily found pornographic images should have triggered a full search is another issue, but the search was, under current interpretations of the law, completely justified. The 9th Circuit agrees.

As a response, the Electronic Frontier Foundation (EFF), with the Association of Corporate Travel Executives (ACTE), filed an amici curiae brief with the 9th Circuit, trying to get Mr. Arnold another appellate hearing. Given the nature of the case, one of privacy at border crossings, it makes perfect sense that these associations are filing as amici. The basis of the brief is that the searching of the laptop, by definition, is a direct search into personal information, which is different than flipping through the pages of a diary, which is an ‘incidental’ revelation of personal information. They base their argument against viewing a laptop as a traditional closed container on the fact that the device cannot be used to smuggle physical contraband into the country.

However, digital images of child pornography are still illegal. If you were to carry a stash of printed documents detailing a terrorist plot, that would be reason for detainment and serviceable evidence in court. The data on the laptop is little different from the data on paper; it is merely a different representation of that data. And while such searches may require reasonable suspicion under the 4th Amendment, more than three decades of decisions hold that the 4th Amendment simply doesn’t apply to border searches.

The Digital Age has changed things. Most people are not aware of their digital footprint, and the claims in Argument B1 of the amicus brief only go to show how little people think of their privacy in the digital age. Most people only think of the ease with which data can be copied when they’re creating those copies themselves, not when considering how easily someone in control of a system for even a short period of time can copy all the data contained within it. I agree with the EFF’s claim that copying that data does constitute a ‘seizure’, because while the government has not necessarily denied me access to it, they have taken a copy that I did not expressly authorize them to take.

Just because a laptop can contain an enormous amount of personal data does not make it inherently unique from other containers. I could fill a shipping crate with personal, confidential information, and I would not have any reasonable expectation that customs wouldn’t go through it. What needs to be analyzed to determine the legality of the search is the inherent nature of the container, not its potential contents. A laptop does not, by strict definition, contain large volumes of personal information. It usually does, but it doesn’t always. A notebook that I always carry with me can contain a lot of information that I may feel is somewhat private, but it is not special or unique compared to my Eee PC. The best argument that the EFF uses in the entire brief is that the existence of data on the laptop only proves that the machine was used for such activities, not that the person in question was responsible for that activity.

I agree with the EFF’s goal here, I really, really do. I just think that claiming that 4th Amendment rights are being violated in a circumstance where the courts have long held that the 4th Amendment doesn’t apply is foolish. As long as the border search doctrine is held, as it relates to US Citizens at least, there is no method to correct this problem. We should be lobbying Congress, not the Courts, to ensure that the 4th Amendment is made to apply, at least in some degree, to searches of US Citizens.

Acknowledging that, under the current rule of law, things are unlikely to improve, Bruce Schneier has offered his advice: we need to be more proactive in ensuring that we don’t take confidential or incriminating data across the borders. This can be accomplished several ways: by making sure that the system is clean before you cross the border, and by transferring anything you don’t want carried across over a secure link once you’re back in the country. For business, this is easy. Set up a VPN, and make ‘travel’ laptops available to people who need to travel, containing only the software they require to do their work. Any data that is required for work is taken via the VPN, and secure erasure tools are used to remove the data from the laptop. For the personal user, similar actions can be taken, by taking advantage of services that allow the storage of data (and its secure retrieval) on the Internet.

The law needs to change. It is highly unlikely that the 9th Circuit Court is going to overturn 30 years of case law, so we need to be approaching this battle from a different angle. I can see no reason myself why the 4th Amendment shouldn’t apply to US Citizens entering the country. Once I’ve proved my citizenship, I should be afforded all the rights that that citizenship guarantees me. Unfortunately, until the law is changed, I don’t see that happening.