June 2008 Archives

Whole Food Adventures: Eggs

Whole Food Adventures is a weekly discussion of the new dietary strategy that Catherine and I have embarked on: attempting to eat, as much as possible, foods with a minimal amount of processing and refinement. The theory is largely based on the work of the Weston A. Price Foundation, which promotes health by returning our eating habits to more traditional foods.

Eggs have traditionally been a staple of many people’s diets. In 1950, it was estimated that the average person ate 389 eggs per year; that number has been in steady decline ever since, largely due to concerns about the cholesterol content of eggs. Nowadays, average egg consumption is less than 180 eggs per person per year, and frankly, I think that number is likely skewed by people who don’t eat eggs at all and people who eat a lot of them.

The Weston A. Price Foundation argues that the research linking cholesterol to heart disease is deeply flawed, but even granting the popular wisdom that cholesterol is bad (and I’m not sure I’m willing to accept that cholesterol shouldn’t be restricted), why should the humble egg be so vilified?

Eggs (specifically egg yolks) are one of the most vitamin-rich foods available in nature, particularly in Vitamins A and D. Furthermore, they contain fatty acids vital to proper brain function. In China, for generations, eggs have been considered such a powerful brain food that pregnant women who could afford to do so have been known to eat ten eggs a day. Given that the Chinese are unlikely to have some sort of inborn knowledge of neurology, it seems likely that some positive correlation between egg consumption and brain function was observed in historic China.

Eggs are such a vitamin-rich food that the only common food generally regarded as having as much Vitamin D as the average egg yolk is cod liver oil, and at least eggs are far more palatable. However, there is a caveat. The humble egg is only as healthy as the chicken who laid it, and like pretty much every animal, chickens require a few things, like being able to go outside and forage.

Alas, the average chicken is not so lucky, being instead kept in a dark room, where the only interaction it has is food being dropped in front of it and its eggs being pulled from behind it. Raising chickens in confined quarters is incredibly bad for the birds; without exposure to sunlight (or at minimum a UV-B lamp), our humble chickens can’t produce any Vitamin D, and hence we can’t get any from them.

Partially because of this, many egg aficionados have turned to buying ‘free-range’ and ‘organic’ eggs. Ironically, a ‘free-range’ chicken won’t necessarily ever go outside. ‘Free-range’ only requires that the birds have room to move around and don’t live in cages. Luckily, most free-range birds have better access to light, and the eggs from these birds are considered vastly healthier than their caged counterparts.

However, they’re still raised indoors, and they’re kept from eating the things that chickens have traditionally eaten, like bugs. In fact, in some jurisdictions, organic poultry must not eat any meat at all (the USDA has no rules for ‘certified organic’), which is just plain unnatural for the birds.

The best eggs, if you can find them, come from pasture-raised chickens. You can raise the birds yourself, if you’re so inclined, or you can buy the eggs directly from someone who does. The trick, of course, is finding them for a reasonable price. At the Moscow Farmers Market, a vendor was selling them for $5 a dozen, but I know you can do better than that.

Eggs, with their richness in vitamins, really are good for you. You owe it to yourself, and frankly the chickens, to find the best source of this goodness you can. And for health’s sake, don’t forget the yolk. That’s where all the goodness is.

Firefox 3 and Self-Signed SSL Certificates

A debate has been ensuing on Debian Planet since last week about Firefox 3’s new behavior toward what it views as invalid SSL certificates. Having upgraded to Ubuntu 8.04 back in February, I’ve been using Firefox 3 since it hit RC1, so I can definitely relate to the problems that people are having. I completely agree with the sentiment of those who view the new behavior as a necessary evil. Self-signed SSL certificates are a potentially huge security risk. Unfortunately, they’re common as spit, and most people just click right past the warnings because they’re getting in the way of doing what they want.

Firefox’s new approach is pretty heavy-handed. So much so, in fact, that it appears you can’t work around it without some non-trivial changes to Gecko. This probably wouldn’t be so bad, except that most users have absolutely no idea what to do when confronted with this:

Firefox 3 Invalid SSL Cert Display

I know that my wife didn’t when the wireless network of the hotel we stayed at following our wedding redirected us to a site with an Invalid SSL Certificate. Hell, it threw me for a loop the first time I saw it. Other people have, of course, reported similar experiences.

In reality, I blame the insane cost of SSL certificates. Partially, this is because the standard for SSL security in web browsers is an all-or-nothing deal. You’re either signed by a Certificate Authority (CA) in the browser’s certificate store, or you’re not. Because of this, CAs have no incentive to change the way they offer certificates: you pay through the nose for a ‘valid’ one, or you don’t and use a self-signed ‘invalid’ one. The absolute cheapest you can get a Web-enabled certificate from Thawte is $150/year, and in that case they only identify the domain, not the user. Want your company identified for better security? That’ll be an extra $100/year. Not that most users will notice. Want the fancy green address bar (at least in newer browsers)? Be prepared to spend a whopping $800/year.

Actually, I fully support this sort of tiered pricing model (though I think that $150/year for a domain-only SSL certificate is ridiculous), but we need better mechanisms to communicate how much a key should be trusted. The Extended Validation (EV) certificate is a huge step forward here, but it’s still not very fine-grained, especially when many sites that need encryption, or run software like Microsoft Office SharePoint Server that requires it, simply can’t justify that sort of expenditure for a signed SSL certificate.

Admittedly, organizations can create their own CAs for internal use and sign certificates all they want. This becomes impractical at some point, however, because you need to make sure that every user in your organization has the CA certificate installed. Washington State University has a CA certificate that I suspect is installed on almost every departmental computer on campus, but most campus organizations simply don’t use it. This is likely due, in part, to the number of off-campus users, and the freedom we give users to bring their own hardware. My Eee PC spends quite a bit of time on the WSU network, but I don’t have the WSU CA certificate. Still, I would prefer that a lot of these self-signed sites were using the WSU certificate, as then I could install that one cert and have them all just work. As it stands, I have no real reason to even consider that course of action.

What we really need is for the web to be tied into a true Web of Trust. I choose the root CAs I want to honor by signing their keys with my own, and I can assign trust to other users’ signatures, so that I can opt to trust a key simply because someone I trust trusts it. Since most such trust applications allow you to specify differing levels of trust, this is practically built into the scheme already. And I could explicitly set my trust on the Firefox key, so that I accept keys that Firefox trusts, and amazingly, my situation wouldn’t really change much.
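
Hand-wavy as that is, the trust computation itself is simple enough to sketch. Below is a hypothetical, minimal C model of PGP-style key validity: a key is considered valid if it carries a signature from someone I fully trust, or from at least two people I trust marginally. The names, sizes, and thresholds here are made up for illustration; real implementations such as GnuPG use their own defaults.

```c
/* Minimal sketch of PGP-style key validity (illustrative only).
 * A key is valid if it is signed by one fully trusted key, or by
 * at least two marginally trusted keys. */

enum trust { TRUST_NONE = 0, TRUST_MARGINAL = 1, TRUST_FULL = 2 };

#define NKEYS 4

/* signs[i][j] != 0 means key i has signed key j.
 * owner_trust[i] is how much I trust key i to vouch for others. */
int key_is_valid(int target, int signs[NKEYS][NKEYS],
                 enum trust owner_trust[NKEYS])
{
    int marginal = 0;
    for (int i = 0; i < NKEYS; i++) {
        if (!signs[i][target])
            continue;
        if (owner_trust[i] == TRUST_FULL)
            return 1;          /* one fully trusted signature suffices */
        if (owner_trust[i] == TRUST_MARGINAL)
            marginal++;
    }
    return marginal >= 2;      /* or two marginally trusted signatures */
}
```

The appeal is that the policy lives entirely on my side: the browser would just evaluate signatures against trust levels I assigned, rather than consulting a fixed list of root CAs.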

Of course, the above paragraph is a pipe dream. The majority of encryption software is too difficult for the average user, and most users simply don’t care to learn. But as I’m a huge advocate for large-scale public-key encryption, I’m going to keep dreaming. In the meantime, we need a trusted root CA who sells discounted certificates, so that non-commercial entities who want (or need, which isn’t always the same thing) encryption can have valid certificates without inconveniencing their users significantly.

There is the other side of this: perhaps Firefox is trying to annoy users in order to force web developers to do what Mozilla feels is right. Microsoft did the same thing with UAC in Vista, after all. However, if this is the case, Mozilla has made an enormous mistake. On Windows Vista, redesigning an application just a little bit can get rid of those annoying UAC boxes and actually result in a net increase in application security. Requiring signed certificates makes the web more secure, without a doubt, but the cost involved seems prohibitive for many organizations, especially Open Source projects that feel they’re doing their users a favor by encrypting logins to web-based systems.

I’m glad that Mozilla is trying to do something, but I agree with those who feel they’ve gone too far. I’d be happy if, on the first alert screen, there were a button that allowed me to trivially accept the key on a temporary basis, while still requiring the full process to add the key permanently. And ideally, I wouldn’t have to click on the “Or you can add an exception…” link to see the actual options.

Firefox 3 SSL Options Buttons

LinuxGames and Copy Protection

I hate copy protection. I hate it. It’s typically easy to bypass, and the only ways to make it hard to bypass are such a pain in the ass for users that you end up hating the entire process. In the early days, you had to keep the manual of the game handy so you could look up a certain word on a certain page. Easy to bypass: people just built up lists of the answers.

Then the CD revolution came, and developers checked for the disc. That was easily thwarted as CD burners became more common, and the purposeful errors used as copy protection, such as SafeDisc, could be cracked with relative ease. And in the new world of digital distribution, CD-based copy protection schemes are infeasible.

Enter the world of Internet-based copy protection schemes. There is the question of how often you need to connect to the Internet: once, or multiple times. Xbox Live and Steam both use a sort of hybrid model. Xbox Live saves its downloads with an encrypted signature covering both the user who purchased the content and the ‘home’ system of that user. This encrypted signature is the key: if the signature of the user (or of the Xbox in question) doesn’t match, the user can’t play. Steam is basically the same, just minus the host key. Incidentally, the Nintendo Wii is similar, except that downloads are locked to the system alone.
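
As a rough sketch of the binding described above (purely illustrative; none of these companies publish their actual schemes), imagine the store issuing a token derived from the purchasing user and their ‘home’ machine, which the client caches and rechecks at launch. Here FNV-1a hashing stands in for what would really be a proper cryptographic MAC with a server-held secret; the names and functions are my own invention.

```c
#include <stdint.h>

/* Toy sketch of user+machine license binding (NOT a real scheme).
 * FNV-1a stands in for a real MAC; a production system would use
 * actual cryptography and a secret held by the server. */

static uint64_t fnv1a(const char *s, uint64_t h)
{
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;        /* FNV prime */
    }
    return h;
}

/* The store computes this once at purchase time. */
uint64_t issue_token(const char *user, const char *machine)
{
    uint64_t h = 1469598103934665603ULL;   /* FNV offset basis */
    h = fnv1a(user, h);
    h = fnv1a(machine, h);
    return h;
}

/* The client caches the token and rechecks it at each launch:
 * if either the user or the machine changes, playback is refused. */
int may_play(const char *user, const char *machine, uint64_t token)
{
    return issue_token(user, machine) == token;
}
```

Dropping the machine from the token gives you the Steam-style variant (tied to the account only); dropping the user gives you the Wii-style variant (tied to the console only).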

The Xbox, the Wii, and Steam can all cache user credentials, so that they don’t need to be always on the Internet in order to authenticate the games. Several online-activated games (like On the Rainslick Precipice of Darkness or Defcon) only need to connect once, caching a key delivered by the server to verify that the copy of the game is indeed legitimate. The nice part about online verification is that you can opt to check the codes periodically (Defcon does every time you go to play online), so if you have reason to believe that a serial number is being shared, you can kill it. The bad part is that if someone makes a keygen for your game, you may end up killing legitimate users’ ability to use your software.

Ultimately, I’ve tended to believe that such things are a waste of time. People will still crack the games, and occasionally these systems are designed such that they’re more of a pain to legitimate users than to illegitimate ones. So when I heard of [Linux Game Publishing’s recent plans to integrate copy protection](http://www.phoronix.com/scan.php?page=article&item=lgpcopyprotection&num=1) into their newest games, particularly a scheme that needs to validate on each load, I was concerned.

Back in the early days of Loki, this was a non-issue. None of the CD copy-protection schemes were ever going to be ported to Linux, so Loki couldn’t even consider them. Now that most people have always-on Internet connections, online verification is quickly becoming the norm. LGP, feeling that they needed to explain themselves, posted [their rebuttal](http://www.linuxgamepublishing.com/press_releases/200806241.txt).

I like Michael Simms, the owner of LGP, and my respect for the decisions he’s made with LGP caused me to wait for his response on this copy protection debacle. Having read the press release, I must unfortunately accept LGP’s decision. They performed a reasonable study, and I accept that Michael has good data backing this move.

I do want more information however. What exactly does “Contingencies are made so that if no internet connection is available, the game will never lock out legitimate customers” mean? Do I need to have an Internet connection available every so many times I start the game? What are the contingencies?

Overall, that’s really the only complaint I have with what I’ve seen so far. I like that I can install on multiple machines. I don’t believe that I should be required to buy two copies of a game so that my wife can play too. Incidentally, I do scan games’ EULAs for this these days. I like that if LGP ever goes out of business, contingencies are in place to remove the copy protection, even if LGP is not in a position to do so. I like that I always have the option to redownload a game that I’ve licensed, if my CD stops working. I wish I could tap into this now, as my copy of Majesty is scratched badly enough that I can’t install it anymore.

I hate copy protection. I hate any policy that treats legitimate customers like criminals. However, understanding the ease at which many people will copy software (particularly games), I don’t necessarily blame people for trying to protect it. My only requirement is that the method used be mostly transparent. I’ve had that with Defcon and the Penny Arcade Adventures. I’ll give LGP’s scheme a shot, as much as it saddens me that they’ve felt forced down this path.

We as consumers need to be willing to pay for entertainment. Software protection doesn’t bother me as much as media protection, since software inherently has limits on my use of it, so I’ve been more willing to accept its integration into my computing life. Ideally, we wouldn’t have to worry about copy protection, but until users decide to either pay the asking price or go without, I’m not sure it’s going anywhere. Just make sure that I don’t notice your copy protection after I activate it.

Cory Doctorow's Little Brother

Author Cory Doctorow is apparently fairly well known in the tech arena for his fiction. Frankly, I’d never heard of him until the last month or so, when I heard some people talking about his latest novel, Little Brother, a novel targeted toward young adults. Of course, what really caught my eye was that the book was being distributed under a Creative Commons license, as are all of Doctorow’s novels.

This book is a cry that many in the Technology and Security arenas have been making for years. This book is a statement against those who would have us give up liberty, with the illusion that they are making us safer.

The story takes place in the near future; exactly how near doesn’t really matter, as the story begins in a world very much like ours. It focuses on a 17-year-old boy, Marcus Yallow, who takes part in the hacker underground, his innate understanding of security making him feel superior to those around him who simply give in to the increased surveillance and suspicion leveled at them, young people in particular.

He’s a pretty standard hacker geek. He likes taking things apart, learning how they tick, modifying them to do his bidding. And he’s into ARGs, which places him out of school on one fateful day when his hometown of San Francisco is bombed, the Bay Bridge (and the BART tunnel under it) destroyed, leaving thousands dead. He and his friends are too close, and after nearly dying in the crush of the BART station, they break onto the surface, only to be arrested and detained by the Department of Homeland Security, simply because of where they were found.

Marcus quickly finds himself in deep trouble, because he refuses to give the DHS agent questioning him access to his phone, resulting in five days of detainment, interrogation, and humiliation until he breaks and gives them everything. His friends, for the most part, get out more easily, as they tried to hide less than he had. Eventually, the DHS decides to let Marcus go, though not without threats that he remains under suspicion. However, one of Marcus’ friends, his best friend, doesn’t get out of the prison. He’d been injured, and wasn’t ready for release. By the time he’d healed, he’d been detained so long that the DHS wouldn’t dare release him.

The setup is a little over-the-top, but it’s not outside the realm of possibility. Look at the stories out of Guantanamo Bay, or the in-depth discussions of whether or not certain activities constitute torture.

I’ve made no secret of the fact that I supported the election of George W. Bush in 2000, and I voted for him again in 2004. Given the choices, I still believe that it was the best choice. However, I’ve never supported the actions taken after 9/11 in the name of National Security. The PATRIOT Act, warrantless wiretapping, increased data mining: the list of activities done in the name of National Security which chip away at our liberties is long, and in the nearly seven years since September 11, 2001, it has only gotten longer.

Marcus Yallow’s San Francisco quickly becomes an intense caricature of our country today. People get picked up off the street simply because their BART travels don’t match the ‘normal’ patterns, cameras show up in classrooms, teachers lose their jobs for being ‘dissidents’, and everyone is under suspicion all the time. And most people, unsurprisingly, are so scared that they let it happen.

Marcus Yallow, having kept his incarceration secret, doesn’t. He starts up a network within the Internet where people can post anonymously and operate in a much harder-to-track manner. Through this, Marcus and the others begin making things difficult for the citizens of San Francisco by exposing the weaknesses in the DHS’s methods, causing the DHS to greatly inconvenience thousands of innocent people. The DHS identifies him as a terrorist; Marcus feels that they’re the terrorists.

The book isn’t the best writing I’ve ever read, and the audience is clearly younger than I am. However, it is still a good book. The story is entertaining, well told, and most of all relevant; I would argue that it is the most relevant book written in the last few years. I don’t necessarily agree with all of the politics, but the core message rings true: that Freedom is more important than anything, and particularly that trading Freedom for Security is dangerous and doesn’t work.

Read this book. It’s available for free. I even have it mirrored. But after you finish, if you’re anything like me at least, you’ll be looking for a place to buy it. This book should be required reading in Middle School, as we need to teach our young people the value of Freedom. Freedom is what has made this country what it is, and for the last seven years that Freedom has been under attack in an incredible number of ways. You may not agree with everything in it, but please, read this book. Pass it around. It’s that important.

Whole Food Adventures: Planting a Garden

As I’ve mentioned before, Catherine and I have rented a 400 sq ft plot down at the Pullman Community Gardens. Two weekends ago, we spent a good four hours down there planting. As a certified plant-killer who simply married a Green Thumb, I’m going to keep things pretty basic for a description of what we’ve done, and my understanding of why what we did was important.

We began by sub-dividing the plot into a set of smaller, more easily managed patches. We went with one 20’-long, roughly 5’-wide patch that ran along one of the paths, which we filled with tomatoes and peppers, with a bit of squash and cucumber in the free space at the end. We didn’t do a lot of measuring of spacing between plants, but we did plant three plants wide on the tomatoes and the peppers, so I’d guess we had a good 9” between plants. We ensured that the plants were buried just beyond their lowest stems; evidently the branching stems that are buried will begin to behave like roots. From what I understand, some people (who planted far sooner than we did) will actually plant their tomatoes on their sides, encouraging really extensive roots in the new plants.

One of the most important parts of planting a new garden is trying to avoid walking on the places you’re planting. This should be pretty obvious: if you’ve ever seen a path where a lot of people walk, the soil is pretty much always heavily compacted, and usually nothing is growing there. It doesn’t take very long for soil to become so hard that it’s difficult to grow things in, so it’s very, very important to minimize how much you walk on the softer beds. Even now, two weeks later, the paths are incredibly solid while the beds are soft and spongy; it’s amazing how fast the soil hardened up, even given the relatively high clay content and the low traffic that our plot sees.

So, understanding that it’s important to keep the beds soft, and that burying your plants just a bit deeper than their lowest stems will help with root development, will take you most of the way to getting a garden planted, especially starting late with greenhouse-started plants. For seeds, of which we planted some beets and carrots, follow the instructions on the package as far as planting depth goes, and make sure you dig little furrows. Here’s the interesting part, though: the digging leaves you with little hills, and you plant in the hills. This helps make sure that the seeds don’t drown, but more importantly that the deeper parts of the soil can get nice and wet. Incidentally, this is important for all your plants, and some people suggest digging trenches around everything you plant. We’ve opted not to do that this year.

So, once everything is planted, it’s important to make sure that it doesn’t dry out. That clay content I mentioned before is a blessing here, because while the surface of the soil may dry out, there is still usually plenty of moisture beginning a half inch down. This has been largely thanks to Catherine going to the plot daily and dumping plenty of water over it. That is particularly important the first few weeks after planting, as the plants haven’t yet adapted and expanded their root structure to really effectively pull in moisture and nutrients.

In all, we’ve got a lot of plants in the ground, and should have a good harvest this year. I was a bit afraid of the amount of work we were going to need to perform, but aside from maybe 30 minutes a day, including travel time, and the less than twenty hours we spent taking the plot from a weed-filled patch of dirt to planting all our stuff, it wasn’t nearly as much work as I’d thought. Of course, we haven’t gotten any spoils of that work yet, which is disappointing, but the bounty should be great, and the outlay of time wasn’t as severe as I’d feared.

The Underhanded C Contest

The Underhanded C Contest is a yearly contest with a simple goal: write an innocent-looking program in C that hides malicious behavior. Bonus points if the malice looks like a legitimate bug. Immediately, I can hear the cries: “WHY? Why would you write code that hides what it’s really doing?” The answer is simple: to become better programmers.

C can be a dangerous language. It’s great for bit-twiddling, it can let you overwrite the execution stack, among a horde of other dangerous things. C is literally one step removed from assembler. Of course, all these dangers are also the power of C. It’s very hard to write code that can effectively talk to hardware in many languages viewed as more advanced than C, particularly these days, as more and more languages rely on specialized runtimes which typically have their low-level functions implemented in C anyway.

The challenge grew out of the 2004 Obfuscated V Contest, a contest started by a disgruntled American voter who was upset at George Bush’s re-election. That challenge was to write a vote tabulation program that looked correct but secretly favored one candidate. Such is the basis of all the contests: write innocent-looking code that does something evil.
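
To make that concrete, here’s a toy example in the contest’s spirit (my own invention, not an actual entry): a tabulation routine whose skew looks like an innocent typo. The stray semicolon after the first `if` ends the conditional, so the indented increment below it runs on every ballot.

```c
#include <string.h>

/* Toy "underhanded" tabulator: Alice is credited with every ballot.
 * The stray ';' after the if ends the conditional, so the indented
 * increment below it executes unconditionally on each pass. */
void tabulate(const char *votes[], int n, int *alice, int *bob)
{
    *alice = 0;
    *bob = 0;
    for (int i = 0; i < n; i++) {
        if (strcmp(votes[i], "Alice") == 0);   /* <-- the "typo" */
            (*alice)++;                        /* runs every iteration */
    }
    for (int i = 0; i < n; i++) {
        if (strcmp(votes[i], "Bob") == 0)
            (*bob)++;
    }
}
```

At a glance the code reads as a straightforward pair of counting loops, which is exactly the property the contest rewards.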

These challenges have ranged from image processing to encryption tasks, with this year’s being to redact part of an image in a recoverable fashion. They provide a PPM library, conveniently enough, so all you need to worry about is making the image look redacted while keeping the data recoverable, all while making sure the code looks completely innocent. I’m going to look at competing; hell, there’s a $100 ThinkGeek gift certificate on the line. But I haven’t done any C in a year or so, so needless to say, I’m not sure I’ll do well. It should be an entertaining challenge though.


SproutCore

As part of their new MobileMe online platform, announced at the 2008 Worldwide Developers Conference, Apple has thrown its weight behind SproutCore, a web-application framework. SproutCore was originally designed by the company SproutIt for use in their Mailroom help-desk application.

Please note that while Apple has chosen SproutCore, and that choice has gained it a lot of attention, it is not an Apple technology. It’s not really a Google technology either; the project just happens to be hosted on Google Code. Hell, even on MacBreak Weekly this week, Leo and the rest of the MacBreakers got that wrong. Apple has hired the developer, and they’ve been extending the framework, but the project goes far beyond just Apple. Google’s involvement is basically just providing hosting, but Google’s interest is similar to Apple’s: both companies want to keep the Web open using open standards. Google, so they can sell ads; Apple, so that no one can lock their platform out of the market. I may not care much about the motives, but I definitely support the potential results.

SproutCore is marketed as an advanced JavaScript framework that seeks to make Flash and Silverlight unnecessary because “users hate to have to download plugins before using your application.” Frankly, I think that’s bullshit, as 95% of users already have Flash installed, and many will have Silverlight installed soon. No, the real reason to avoid Flash and Silverlight is that if you’re not a Windows user, the plugins are typically really awful. Of course, this remains to be seen with Silverlight, as it’s still in beta, Microsoft hasn’t released a Mac beta (to my knowledge at least), and the Moonlight project is still working hard to support Linux and other Unixes.

My first impression of SproutCore may have been unfairly negative. Developing for SproutCore depends on Ruby, a language I’ve felt no inclination to learn. This really has nothing to do with Ruby itself, which I applaud for successfully bringing the excellent MVC design pattern to the web via the Rails framework; mostly, I don’t want to go through the effort of learning the language without a job requiring it. I know that I could learn the language pretty quickly, if necessary. Part of my problem with SproutCore was that it appeared to me to be little more than an extension on top of Rails.

Upon further inspection, this doesn’t appear to be the case. Running the ‘sc-build’ command on a project outputs pure HTML, JavaScript, and CSS, which is good, because the debug output of SproutCore is really, really inefficient.

SproutCore can use a large amount of cool technology, including HTML5 client-side storage (which only Safari supports at this time) and some advanced JavaScript. It provides an MVC framework in JavaScript, which many people are touting as bringing “Cocoa to the web.” Not being a Cocoa programmer, I can’t speak to that, but I do acknowledge and agree with the power of MVC, and apparently Cocoa has an excellent internationalization framework which has been extended to SproutCore.

My problem with SproutCore is that it feels like it forces you into a particular development model. To its credit, like Google Gears and unlike the Objective-J framework, SproutCore does allow you to write real JavaScript to accomplish what you’re doing. However, I’m still a fan of YUI, which lets me embed its controls in any development system, and which is better at building applications that degrade in a clean fashion than many of these other systems. Admittedly, that isn’t as important anymore, particularly to Apple with Mobile Safari, but even recently I had a user who was still using Firefox 1. While I don’t intend to offer a rich experience to that user, I still want them to be able to use my application. YUI can enable this; SproutCore probably can’t.

Firefox 3 Download Available Today!

Today marks the official release of Firefox 3.0, the next-generation Firefox browser. As a Linux user and web developer, I’ve had Firefox as my platform of choice for years, particularly after discovering such excellent extensions as Firebug.

Firefox 3 sports many new features, including cross-site XMLHttpRequests, a faster JavaScript interpreter, a bookmarks/history search interface, better memory management, and a lot more. Having used the betas at home for the last few months, I certainly appreciate the improved memory usage, and stability has been pretty solid for me since RC3.

The downloads go live at 10 am Pacific Time, so head on over to the Download Page, and help the Mozilla Foundation set a world record for downloads.

Whole Food Adventures: Soap!

Okay, so this isn’t exactly edible, but it falls into the same vein of interest for me, in many ways. At this weekend’s Farmers Market, Catherine and I decided to buy some soaps from one of the natural soap vendors. This was mainly driven by the fact that I was almost out of shaving foam and had been intending to begin using shaving soap, part of my plan to eventually shave with a straight razor.

What we ended up getting was a Tea Tree Oil shampoo soap and a shaving soap, a scrubbing soap called “Pesto”, and a Rosemary Pumice soap. We paid about $16 for the four soaps, but these soaps are made with really good materials: Jojoba oil and castor oil, both of which are really good for your skin, are significant ingredients. Even after a single use of these soaps, my skin already feels better, and that’s after a day of sun damage from planting the garden.

However, my favorite part of the new soap is the shaving soap. I had wanted to buy a badger-hair brush, but those are unfortunately hard to find these days. Luckily, I knew that Spartan Cutlery, in one of Spokane’s malls, carried boar’s-hair brushes. Unfortunately, they were only available in shaving kits that included a twin-blade safety razor. Luckily, I’d been planning on buying one of those anyway, as its replacement blades cost at most half as much as the modern Gillette replacement blades. Still, aside from using a fresh blade, I shaved with my old Gillette Mach3 along with the boar’s-hair brush and the Tea Tree Oil shaving soap.

Shaving soap is a little different to use. As I don’t have a good mug for the soap, I’m just using one of our smaller ice cream bowls, but a bit of warm water in with the soap is all it really takes to build a nice solid lather on the brush. Brush that onto your face the same way you would the foam, except you don’t end up with thick goop all over your face. From there, shave normally: cold water over the blade to wash it out, and a cold-water rinse when you’re done. This was honestly the best shave I’ve had in years. Hell, I shaved last night, and as I write this more than twelve hours later, I’m still less shaggy than I normally am after this amount of time.

Natural soaps aren’t necessarily very expensive, and the way you feel once you’re done using them makes the slightly greater investment completely worth it.

Border Laptop Searches Case Heats Up

Michael Timothy Arnold, a US Citizen, was recently arrested when a search of his laptop, as he reentered the country from the Philippines, turned up several images of child pornography. He was able to get the lower courts to honor a motion to suppress, arguing that the search was unlawful. Part of his argument was that the in-depth search of his laptop was triggered by the discovery of legal pornography in a cursory search of the system. The existence of any pornography on a digital system should not serve as probable cause for an in-depth search, but the laws regarding border searches are somewhat messy.

In a 9th Circuit Court of Appeals judgement on the case, the court recounts several cases, dating from as far back as the 1970s, that define the powers Border Control Agents have to conduct searches. In the 1977 case United States v. Ramsey (431 U.S. 606, 616), it was decided that “searches made at the border… are reasonable simply by virtue of the fact that they occur at the border….” This particular interpretation of the law bothers me at a pretty fundamental level, but it is the accepted case law, and as such is important context for the analysis of this decision. At least the Courts have acknowledged, in 1982’s United States v. Ross (456 U.S. 798, 823), that you have the same expectation of privacy at the border whether your luggage is a handkerchief on a stick or a locked attaché case.

In fact, case law to date has basically held that Border Control Agents have the right to intrude “beyond the body’s surface” without probable cause. However, the Supreme Court has left open the possibility that “some searches of property are so destructive that they require particularized suspicion.” And therein lies the basis of Mr. Arnold’s defense.

His claim is that the laptop and its contents are more analogous to the contents of the owner’s home (due to the amount of data it can store), or to the user’s own brain (since it holds ideas, conversations, and data regarding habits). The home claim is completely false, in my opinion. Yes, the laptop can store an amazing amount of data, but it is clearly a portable closed container, more analogous to the locked attaché case mentioned above than to a home. My non-legal opinion is that, under current law, the Border Agents were completely within their rights to conduct the initial search. Whether or not the existence of the easily found pornographic images should have triggered a full search is another issue, but the search was, under current interpretations of the law, completely justified. The 9th Circuit agrees.

In response, the Electronic Frontier Foundation (EFF), along with the Association of Corporate Travel Executives (ACTE), filed an amici curiae brief with the 9th Circuit, trying to get Mr. Arnold another appellate hearing. Given the nature of the case, one of privacy at border crossings, it makes perfect sense that these associations are filing as amici. The basis of the brief is that the search of a laptop is, by definition, a direct search of personal information, which is different from flipping through the pages of a diary, where any revelation of personal information is ‘incidental’. They base their argument against viewing a laptop as a traditional closed container on the fact that the device cannot be used to smuggle physical contraband into the country.

However, digital images of child pornography are still illegal. If you were to carry a stash of printed documents detailing a terrorist plot, that would be reason for detainment and serviceable evidence in court. The data on the laptop is little different from the data on paper; it is merely a different representation of the same information. And while such searches may require reasonable suspicion under the 4th Amendment, more than three decades of decisions hold that the 4th Amendment simply doesn’t apply to Border Searches.

The Digital Age has changed things. Most people are not aware of their digital footprint; the claims in Argument B1 of the amicus brief only go to show how little people think of their privacy in the digital age. Most people only think of the ease with which data can be copied when they’re creating those copies themselves, not when considering how easily someone in control of a system for even a short period of time can copy all the data it contains. I agree with the EFF’s claim that copying that data does constitute a ‘seizure’: while the government has not necessarily denied me access to it, they have taken a copy that I did not expressly authorize them to take.

Just because a laptop can contain an enormous amount of personal data does not make it inherently different from other containers. I could fill a shipping crate with personal, confidential information, and I would not have any reasonable expectation that customs wouldn’t go through it. What needs to be analyzed to determine the legality of the search is the inherent nature of the container, not its potential contents. A laptop does not, by strict definition, contain large volumes of personal information; it usually does, but it doesn’t always. A notebook that I always carry with me can contain a lot of information that I may feel is somewhat private, but it is not special or unique compared to my Eee PC. The best argument the EFF makes in the entire brief is that the existence of data on the laptop only proves that the machine was used for such activities, not that the person in question was responsible for that activity.

I agree with the EFF’s goal here, I really, really do. I just think that claiming 4th Amendment rights are being violated, in a circumstance where the courts have long held that the 4th Amendment doesn’t apply, is foolish. As long as the border search doctrine holds, as it relates to US Citizens at least, there is no method to correct this problem through the Courts. We should be lobbying Congress, not the Courts, to ensure that the 4th Amendment is made to apply, at least to some degree, to searches of US Citizens.

Acknowledging that, under the current rule of law, things are unlikely to improve, Bruce Schneier has offered his advice: we need to be more proactive in ensuring that we don’t take confidential or incriminating data across the borders. This can be accomplished in several ways: by making sure that the system is clean before you cross the border, and by transferring anything you don’t want taken over a secure link back into the country. For businesses, this is easy. Set up a VPN, and make ‘travel’ laptops available to people who need to travel, which contain only the software they require to do their work. Any data that is required for work is retrieved via the VPN, and secure erasure tools are used to remove the data from the laptop before crossing. For the personal user, similar actions can be taken, by taking advantage of services that allow the storage (and secure retrieval) of data on the Internet.

The law needs to change. It is highly unlikely that the 9th Circuit Court is going to overturn 30 years of case law, so we need to be approaching this battle from a different angle. I can see no reason myself why the 4th Amendment shouldn’t apply to US Citizens entering the country. Once I’ve proved my citizenship, I should be afforded all the rights that that citizenship guarantees me. Unfortunately, until the law is changed, I don’t see that happening.

Apple Attempts to Corner Smartphone Market

At the World Wide Developers Conference, Steve Jobs announced a new iPhone with 3G support and GPS. Neither of these was particularly surprising; people have been upset about the lack of both technologies since day one. Not that that stopped a large number of them from shelling out $600 for the phone in its first few months.

No, what was surprising was the announced pricing: an 8 GiB phone for a mere $199 (with a two-year contract through AT&T), or the 16 GiB for $299. This is a price point that has finally gotten a few people who were still unwilling to take the iPhone leap to seriously consider taking the plunge. Now it’s no longer the price of the hardware that gives pause, but the $70/month cell phone plan that has to go with it.

So, why the price drop? Well, in addition to the features that matter to the Enterprise, like Exchange Integration, the iPhone was still a little expensive. By allowing the providers to subsidize the cost, more people are likely to buy iPhones, further eating into the share of Blackberries and Palms. With the recent release of the SDK, and the launch of the iPhone Apps store, Apple stands to make ~30% off the sales of every application delivered to an iPhone. And developers seem to be excited about it. Not me, so much, since the only Mac I own is PowerPC, and there is a good chance Apple is preparing to give me the finger anyway.

No, I think the real reason is that Apple is trying to cement their position in the market before the Open Handset Alliance can get a single Android phone on the market. HTC has promised a few phones by the end of the year, but as it stands, Android needs some movement.

In my opinion, Android is the superior developer platform. It’s open, all applications are created equal, and unlike the iPhone it allows more than one application to be memory-resident at a time. Plus, it doesn’t have the same restrictions on it that the iPhone does. I fully believe that Android, despite its insistence on Java, is the more developer-friendly platform. And the Java insistence is not such a burden in the mobile realm, which is mostly dominated by J2ME developers.

Apple may be the purveyors of all that is hip and modern, but without developers, a platform can’t really succeed. The adoption of a Unix core for Mac OS X, along with a more advanced UI toolkit, has been largely responsible for the growing success of the Mac platform. However, if you’ve got the majority share, people won’t have a choice but to target your platform.

The End of HOPE

Hackers On Planet Earth, 2600 Magazine’s periodic hacker conference, is preparing for its final gathering. This isn’t because 2600 doesn’t want to run the conference anymore, but rather because the historic home of HOPE, New York’s Hotel Pennsylvania, is slated for demolition, to be replaced by an office high-rise. Luckily, the 89-year-old hotel still has some friends, who don’t intend to let the developers tear down this historic structure without a fight.

Though I’ve been to New York on multiple occasions, I’m not familiar with the Hotel Pennsylvania, so I can’t really comment on the significance of the place. However, it is distressing how often we tear down older buildings. If the Hotel truly has structural issues, that’s one thing, but it doesn’t sound like that is the case. Due to the potential destruction of the Hotel, 2600 has chosen to name this year’s HOPE The Last HOPE, indicating that if the Hotel Pennsylvania is destroyed, any new conferences will not carry on that name. That is, of course, assuming they can find another venue they can afford.

Hackers On Planet Earth is a conference aimed at the True Hackers, people for whom technology is a passion, people driven to dig deep into the guts of technology and figure out how it works. It’s this tendency to dig into the inner workings of systems that has helped get the word so vilified, particularly as companies have tried to commoditize their technology. Ultimately, though, it’s the Hackers who tend to drive technology to the next level, the people who push technology and develop new technologies just for the sake of the technology. Many of them have become socially active after years of persecution because of their misunderstood love of technology.

Not that some Hackers haven’t broken laws: stolen information, trespassed in computer systems, and (inadvertently or otherwise) caused denials-of-service. However, the response has often been blown way out of proportion by a media and a populace who don’t understand the technology that now pervades their lives. In this week’s TWiT, Randal Schwartz (author of O’Reilly’s Learning Perl, among others) comments that in the state of Oregon the law is written such that it’s basically illegal to use a computer for anything. Of course, the laws are intended to be used only against “bad” people, but that’s kind of distressing when one takes into account the average person’s tendency to give “bad” the same definition as “different”.

HOPE this year has some really interesting sessions. Steven Levy, author of Hackers: Heroes of the Computer Revolution, is giving the keynote. The nametags will have RFID tags, and giant screens will be present to indicate who is going where and with whom they’re socializing. There will be Capture the Flag games, a Hackerspace Village, Segway racing, and more. All of it promises to be an excellent time for those people who love to get their hands dirty on the guts of technology. Speakers will be covering topics ranging from building an IDS to botnets, phishing, and a lot more.

I desperately want to go this year. I wanted to go last year, but it didn’t work out, and this year may well be my last chance. Had it not been for the wedding so recently, I’d be in a much better position financially to kip off to New York City for a few days and attend. As it stands, I’m going to see what I can do to make this happen. With any luck, I’ll be seeing folks at HOPE.

Death and Family

Saturday, May 31, Virginia Waggoner, lifetime resident of Mattoon, IL, succumbed to her second bout of cancer. She was 86. Her son, my new father-in-law, chose not to tell Catherine and me until June 2, so that Catherine could still be happy and enjoy the wedding. And so, less than 48 hours after exchanging vows, I was placing my new wife on a plane to return to her father’s homeland to attend the funeral.

It was an emotional time, as both death and marriage are wont to be. Catherine had wanted dearly for Virginia to be able to come to the wedding, and we were not aware of the extent to which her health had slipped, though cancer can often kill a person incredibly fast.

The funeral, I was told, was a nice ceremony, and a large portion of the Waggoner clan attended, both those who’ve never left Illinois and others who traveled for the occasion. I’ve always found it interesting how death tends to bring together family members who often haven’t seen one another in decades.

I remember when my Grandfather died in 1996, his funeral had a large number of attendees that I have no memory of, but who remembered me from when I was very, very young. However, it was never the memorial that stuck out in my mind; it was the time after the memorial, when quite a lot of us went to my grandmother’s. The evening was interesting. We didn’t have a wake involving a lot of alcohol or anything, as most of us were far too young for that, but everyone just sat around telling stories, laughing, bonding.

Being barely thirteen at the time, and never having taken the time to get to know my grandfather as I should have, I had little to contribute. Still, it was the sense of family that I took away from that day that has stuck with me most meaningfully. I get the impression that Catherine’s experience of the last week was similar.

I come from a family where the default position on death is to joke about it. We did it at my Grandfather’s wake, and for the ensuing eight years during which his ashes spent much of their time in my Grandmother’s closet, and while I was not present at my Grandmother’s passing, I’ve no doubt that the family got together to celebrate her life through laughter.

Catherine and I have had terrible luck thus far with life events and Grandmothers. Mine passed within weeks of us getting to know one another, and hers passed just at the time of the wedding. Luckily, neither death was particularly tragic, as they were both older women in failing health, though it’s still strange to not have them around any longer.

One of the few regrets I have in my life was not taking the time to get to know my grandfather better before his passing. I was young at the time, though that hardly feels like an excuse any longer. If you still have the opportunity, I encourage you to spend time with those people in your lives who simply feel like constant elements. They won’t last forever. And when they do pass, shed tears if you must, but don’t forget to laugh, at least a little.


JavaScript: The Good Parts

O’Reilly just published a new book by well-known JavaScript guru Douglas Crockford. JavaScript: The Good Parts is an excellent introduction to using JavaScript in an effective, maintainable manner that will serve you well into the future.

Crockford pulls no punches about JavaScript in this book. He freely admits that it was released before it was ready, and that there are a number of poorly considered features, and some that are just plain dangerous, especially in the wrong hands. He describes JavaScript as “Lisp in C’s clothing,” which is an apt description. JavaScript has hallmarks of both a Functional and a Procedural language, but the biggest dangers come from trying to use the Functional elements as if they were Procedural, something which is far too easy to do.
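The trap is easy to see in a short sketch (the object and names here are my own invention, not code from the book). A method's `this` is bound at call time, so an inner function called normally does not see the object the outer method was invoked on:

```javascript
// A sketch of the kind of trap Crockford describes: an inner function
// does not inherit the 'this' of the method that encloses it.
var counter = {
    value: 42,

    broken: function () {
        function helper() {
            // 'this' here is the global object (or undefined in strict
            // mode), NOT 'counter' -- so this lookup does not find value.
            return this.value;
        }
        return helper();
    },

    fixed: function () {
        var that = this;        // the classic "var that = this" workaround
        function helper() {
            return that.value;  // the inner function closes over 'that'
        }
        return helper();
    }
};

console.log(counter.fixed());   // 42
```

The fix works because `that` is an ordinary variable captured by the closure, whereas `this` is re-bound on every function call.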

JavaScript: The Good Parts was written for your average Procedural programmer (i.e., most of us). It isn’t about the DOM, and it isn’t about writing Web Applications. It’s about using JavaScript as an effective programming environment. Crockford begins by running through the grammar of the language, giving a thorough overview of all of its features.

This is followed by a discussion of JavaScript’s object model. JavaScript’s objects are dangerous because the language looks enough like a classical Object-Oriented language to lull you into false assumptions. The prototype system is unlike any other object system, and while it can be used as if it were classical, doing so tends to be incredibly slow. Luckily, JavaScript objects, due to JavaScript’s functional nature, are incredibly powerful, allowing complex data models to be built easily, and every object can be extended to add new functionality. In fact, a fair amount of the book consists of convenience methods added to the core objects, providing functionality that probably should have been there in the first place.
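In that spirit, here is a sketch of my own (not an example lifted from the book) of how a missing string method can be grafted onto every string through the prototype. In 2008, JavaScript had no built-in trim; the guard avoids clobbering a native implementation where one exists:

```javascript
// Augmenting a core object's prototype so every string gains a method.
// The 'if' guard is the polite form: only add the method if the
// environment doesn't already provide one.
if (!String.prototype.trim) {
    String.prototype.trim = function () {
        return this.replace(/^\s+|\s+$/g, "");
    };
}

console.log("   padded   ".trim());   // "padded"
```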

Most useful to me was the discussion of inheritance. In JavaScript, variables are global by default. Worse, members of an object are all public, so there isn’t any obvious means of keeping variables private. Luckily, JavaScript has an answer for that: the closure. By writing a function that creates and returns an object, you can declare variables inside the function which will be accessible to the functions inside the object you return. Better yet, they remain accessible as long as the object you return is in scope.
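A minimal sketch of that pattern (the names are mine) might look like:

```javascript
// The closure pattern for private state: makeCounter returns an object
// whose methods can see 'count', but nothing outside the function can.
function makeCounter() {
    var count = 0;              // private: visible only to the closure
    return {
        increment: function () {
            count += 1;
            return count;
        },
        current: function () {
            return count;
        }
    };
}

var c = makeCounter();
c.increment();
c.increment();
console.log(c.current());       // 2
console.log(c.count);           // undefined -- no direct access to count
```

Because `count` is a local variable of `makeCounter` rather than a member of the returned object, the only way to touch it is through the two methods that closed over it.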

JavaScript has its weaknesses. But it is the language of the web, and I don’t see that changing. Crockford’s book is a great introduction to how to use JavaScript effectively. Just because JavaScript has a reputation as a toy language doesn’t mean that real engineering isn’t possible in it. The direction the web has taken in the last few years has really proved that JavaScript is a powerful language, with powerful tools. You just need to use it correctly.

If you’re touching the web at all, you owe it to yourself to buy this book and read it. Hell, read it a few times. It gets better each time you read it.

Just Got Married

Two days ago, Catherine and I swore vows to join each other in Marriage. We were joined by just over 120 of our family and friends.

It was a good party and a really nice ceremony, and it was great to see everyone. We’re happy to be through it, not only to finally be married, but also to be done planning; the event was a year in the making, and we had a lot of fun.

Regular posting will likely resume on Thursday.

Whole Food Adventures: Chicken Stock

Stock is a general term for any liquid made by boiling the bones of an animal for an extended period, to convince them to let go of their delicious gelatin. As we’d made a pair of Cornish Game Hens for dinner last week, we had plenty of bones to toss into a pot and boil. By contrast, Broth is the term typically used when only meat is boiled, though it is perfectly reasonable to use meat when making your stock.

Between the meat and fat bits left on the bones of our hens (and the half a hen that Catherine didn’t eat), we were probably a bit below the recommended 2–3 lbs of chicken (no guts, thank-you-very-much) to the 1 gallon of water that we started with; however, the stock still smells absolutely delicious.

Making stock is dead simple: 1 gallon of water, a few pounds of chicken (a small roaster would be ideal), a few carrots, some celery, and some onion. Toss all of those in a pot, bring them to a boil, then turn the heat down and simmer overnight. It’s that simple. The only downside is that you wake up to a house smelling of chicken soup, which for me isn’t the most appetizing breakfast smell.

The next morning, or afternoon if you’d prefer, scoop out as many of the chicken and vegetable bits as you can with a slotted spoon, saving any chicken meat for some sort of shredded chicken application, then pour the stock through a strainer into a large bowl, which you can put in the fridge. A few hours later, there should be a solidified layer of fat at the top of the bowl, which can easily be lifted off before you separate the stock into jars or other containers. Keep it in the fridge for the short term, or in the freezer for the long term.

Chicken Stock is great for any chicken-based dish that requires a fair amount of liquid. Soup, Chicken and Dumplings, Stuffing: all of these can benefit from homemade stock, and, unlike with most commercial stocks, you get to choose how much sodium is in your stock.

Sure, making stock takes a lot of time, but 90% of that time is completely unattended. It’s easy, useful, and ultimately healthier than the pre-packaged solutions.