May 2008 Archives

Indiana Jones and the Kingdom of the Crystal Skull


Hearing about a new Indiana Jones movie all those months ago, I was caught up in a strange mix of excitement and anticipated disappointment. Indiana Jones is the absolute best Pulp Adventure series made in recent history. Pulp had its heyday, but even in the 80s, when Indy was King, it was a fading art. Since The Last Crusade, we've seen basically nothing in the way of good Pulp movies.

Sure there was Sky Captain and the World of Tomorrow, but while a good time, the movie simply lacked the charm (or is that a respectful lack thereof?) of Indiana Jones. Henry “Indiana” Jones Junior is the ideal everyman. Strong, athletic, educated, eloquent, clever, and cunning. And yet he’s not unstoppable. Indy gets beat up. He loses. He makes mistakes. He’s a hero because he doesn’t give up, and through his perseverance he always manages to come out ahead.

Kingdom of the Crystal Skull is an excellent movie, with which Lucas and Spielberg have tried to lay the foundation for many more movies. Harrison Ford is himself nearly two decades older than he was in The Last Crusade, which works, since the story takes place at least 15 years after that movie.

The plot is pure Indy. Indy begins the movie captured by the KGB; since the movie is set in 1957, the Nazis just aren't a relevant enemy anymore. They take him to Warehouse 51, the mysterious government warehouse where we saw the Ark of the Covenant stashed away at the end of Raiders. Indy helps the Russians uncover a mysterious crate, which contains a highly metallic humanoid shape in a weird casing. Indy attempts to stop the Russians, but his friend turns on him and Indy is forced to let it go, though not without a great fight.

Due to a run-in with J. Edgar Hoover's FBI, Indy ends up losing his job at the university and prepares to head off to Europe to teach, only to be stopped by a young man named Mutt Williams, who seems to know Dr. Harold Oxley, a character Indy apparently knew around the time of Raiders, but who has never appeared on film before.

Mutt, played by Shia LaBeouf, convinces Indy to go to South America to try to save Oxley and Mutt's mother, who happens to be Marion from Raiders. That's right, Indy finally gets reunited with the love of his life. Not surprisingly, Mutt turns out to be Indy's son. That's right, Shia LaBeouf has been selected to take up the Fedora and the Bullwhip, in continuation of the storyline. I'm still not sure how I feel about that, but I did enjoy the Mutt character, and I know that I'll see a movie centered around Mutt, if one is made.

As he has since 1981, Indiana Jones embodies the Pulp genre. There are big fights, strange enemies, and impossible challenges that our hero manages to overcome. It's a good ride, and definitely worth the price of admission. I wasn't a big fan of the climax of the movie, or the resolution, as I felt they stepped well outside Indy's penumbra, but I wasn't irritated enough to not enjoy the movie.

So, if you're looking for a big blockbuster to spend some time on, you could do a lot worse than Indiana Jones and the Kingdom of the Crystal Skull. For comic book fans, Dark Horse Comics is printing a comic version of the story in two parts, and a trade paperback was released alongside issue one, so if you prefer that format, you can get it at your local comic shop. It follows the same basic story, but the telling is a bit different, and it looks to be a good read.

Cool Projects: Open Street Map

OpenStreetMap is a community mapping project that began back in April of 2005. They seek to create and provide a completely royalty-free map for use in GPS systems and web-based mashups, not unlike Google's own.

Why are they bothering? Because pretty much every map available for projects like this is expensive, and oftentimes has artificial errors. By utilizing increasingly available GPS units, open standards, and a wiki-style interface, users can upload path data and aid in the updating and creation of new maps.

The project was initially begun for the UK, for which virtually no free maps existed. However, they’re working on eventually mapping everywhere, having already imported large amounts of public domain US road data, as well as data for the Netherlands. Add that to the data being submitted every day by users, and the maps have great potential to be accurate, not only for driving, but footpaths as well.

OpenStreetMap is doing well, and around it an infrastructure of map editing tools and open-source GPS and route planners is forming, projects which would have struggled greatly without the availability of such data. Wikipedia, despite its flaws, has proven to be a reasonably successful project. Admittedly, the information on it is sometimes flawed, and there is the occasional vandalism, but overall it's a reasonable source of basic information, and of pointers to other sources. OpenStreetMap, with its more easily vetted information, could prove to be a far more successful community project, particularly as GPS becomes more common.

OSM isn’t perfect, but I’ve found the maps to be reasonably accurate in my areas, and I’m planning to see if I can contribute.
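If you're curious what contributing actually involves: OpenStreetMap accepts GPS traces as GPX files, which are just XML lists of timestamped points. Here's a minimal sketch of producing one in Python; the coordinates are made up for illustration.

```python
# A made-up walking trace near Pullman, WA: (latitude, longitude, UTC time).
points = [
    (46.7313, -117.1796, "2008-05-28T18:00:00Z"),
    (46.7315, -117.1801, "2008-05-28T18:00:10Z"),
    (46.7318, -117.1807, "2008-05-28T18:00:20Z"),
]

# Write a bare-bones GPX 1.1 track of the sort the OSM upload page accepts.
with open("walk.gpx", "w") as gpx:
    gpx.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    gpx.write('<gpx xmlns="http://www.topografix.com/GPX/1/1" '
              'version="1.1" creator="homebrew-logger">\n<trk><trkseg>\n')
    for lat, lon, when in points:
        gpx.write('<trkpt lat="%s" lon="%s"><time>%s</time></trkpt>\n'
                  % (lat, lon, when))
    gpx.write('</trkseg></trk>\n</gpx>\n')
```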

Penny Arcade Adventures: On the Rainslick Precipice of Darkness

Penny Arcade Adventures: On the Rainslick Precipice of Darkness is a new action RPG released by Hothead Games in collaboration with the Penny Arcade webcomic guys. It takes place in the Penny Arcade universe, starring Tycho, Gabe, and a character of your own design.

Admittedly, I am not a big Penny Arcade fan. I purchased the game largely because it looked funny, it was based on a 1920s horror-pulp story line, and it has a Linux version. In fact, if you're using Windows, Mac, or Linux, I'd suggest you head over to Play Greenhouse now and download the demo. Xbox Live users should be able to find a demo there as well.

The story starts out simply: your character is standing outside his home when a mysterious voice begins talking to him (or her) and begs him to clean up his yard with the rake. While he does that, a GIANT ROBOT appears and crushes his house before walking away, quickly pursued by Gabe and Tycho of the Startling Developments Detective Agency.

Your character pursues Gabe and Tycho, learning how to play the game, and fighting the beginnings of a horde of ‘Fruit Fucker’ robots, the name of which pretty clearly describes what you’re dealing with. When you finally catch up with Gabe and Tycho, you join the agency, to get revenge on the Robot, and your adventure sets off in earnest. On the way, you’ll uncover a horrible plot involving Pagan Mimes, fight a horde of filthy hobos, and try to find a place to live.

The game has a decent length for the $20 price tag. A straight play-through took just under ten hours, but as I found virtually none of the special music tracks, artwork, and collectibles, I'm likely to go back and search for those things. The game is amusing and well-polished, though it's likely unlike any other action RPG you've played in its mechanics.

Those mechanics made the game feel kind of strange. Whenever battle ends, the entire party is fully healed, and you can only have a single status effect on each party member at a time. One of my favorite features is the blocking mechanism, which allows you to time a button hit with the enemy's health bar flashing to get either a Block, Partial Block, Missed Block, or the elusive Counterattack. The special attacks feature a similar mechanism, but I do wish there had been a bit more variety in these. Each character's special attacks operate basically the same way, becoming harder with better attacks only because of the time limits and the increasing number of things to accomplish.
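As a rough illustration of that timing mechanic (the windows below are invented, not the game's actual values), the whole thing boils down to classifying how close your button press lands to the flash:

```python
# Hypothetical timing windows, in seconds from the flash; purely illustrative.
def classify_block(offset_seconds):
    distance = abs(offset_seconds)
    if distance <= 0.05:
        return "Counterattack"   # the elusive near-perfect press
    if distance <= 0.15:
        return "Block"
    if distance <= 0.30:
        return "Partial Block"
    return "Missed Block"

print(classify_block(0.04))    # Counterattack
print(classify_block(-0.22))   # Partial Block
```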

The game is simply a good time, and I believe it is worth the $20 asking price. There are three more episodes, which I expect will each hit that price point, though I do wish that they’d decided to offer a “Season Pass” like Sam & Max at a slightly reduced rate. That probably won’t stop me from buying the rest of the series.

For a fun, humorous, adult-oriented adventure, I'd suggest buying Penny Arcade Adventures. Download the demo first; it gives a pretty complete feel for the tone of the game. But you, too, can be a mewling babe on the Rainslick Precipice of Darkness.

Whole Food Adventures: Preparing a Garden

Part of the idea of whole foods is trusting the source of your food, knowing that it has been handled in a responsible, natural manner from seed to stem. The best means to do this, by far, is probably just to grow the food yourself. Of course, some people simply like gardening, and frankly that's a better reason to embark on such an endeavor than any other.

With that in mind, Catherine and I have joined the Pullman Community Gardens, and got our very own 400-square-foot plot to plant however we see fit, provided we don't use any chemical herbicides or fertilizers. How I wish we could have used herbicides…

We’d gotten kind of a late start on the process, as the Community Gardens website was last updated in 2006, and we didn’t know who to contact regarding getting ourselves a plot. Luckily, a few weeks ago there was a little event at the gardens, and the current plot coordinator, a gentleman by the name of Tobin, was there to talk to people who were interested in doing a bit of gardening. By the end of that week, we had ourselves a 20’x20’ plot that hadn’t been used in several years.

For those of you whose knowledge of plants, like mine, ends around the point of "green is good", let me describe what that means. We now had a 400-square-foot patch of clay-like dirt that was covered in weeds and had a pretty nasty grass infestation along two of the four edges. How, oh how, I wish I'd thought to take the camera.

Given that we lacked tools for attacking such a problem, we took ourselves to the hardware store on Friday, buying the necessary hand equipment to get our patch of dirt into shape: one pointed shovel (spade), one dirt rake, one spading fork. With these tools in hand, we spent the next several hours attacking the dirt, trying to manually extract the vile grass from our long-neglected plot, because while we could happily till under most of the weeds, the grass would be a constant problem if we didn't dig it out (or so I was told).

This is because most grasses spread by what are known in the plant world as rhizomes. Rhizomes are underground feeder stems that form a vast, vile network of nasty tufts of grass, each tuft being thick and hard. If you don't pull the roots and the rhizomes, that grass will be right back within a week. Incidentally, this is why many people put underground plastic edging around garden plots: it stops the rhizomes from invading, and therefore, no more grass problem. We haven't done this yet, but that may change.

Anyway, after about two hours, we decided that we were done, as the mosquitoes had come down in force, and we looked at our plot to discover that the ground was still hard as bricks and we'd cleared less than 10% of the unwanted vegetation. In all, a highly disappointing day. But we'd resolved that on Saturday, we'd be prepared to take the rototiller to the patch and prepare it for planting.

The next morning, we headed into Moscow to go to the market, where we got yet more plants for the garden, a list which I'll address in a later post. On our way back to Pullman, we stopped at the Tri-State to buy ourselves a new weapon of clay destruction: a mattock. This combination garden hoe and pickaxe is probably my favorite piece of dirt-destruction equipment. Finally, we had a tool with which we could loosen the thick dirt and actually remove some weeds. Using a mattock is basically as simple as the description makes it: raise it overhead, and bring it down hard to the ground, using the handle as a lever to break the dirt apart. Come behind that with the spading fork or a shovel, and the weeds practically pull themselves.

With our new tool, we breezed through the rest of the plot, taking only another two hours to get almost all of the weeds out, or at least to take things to a point where we decided to put the rototiller to the test. Heading over to the local machine rental shop, we grabbed a 5 HP rototiller, brought it back to the plot, and fired it up.

Remember how I've been stressing the clay-like consistency of the soil? That's because the rototiller was more inclined to skip over the surface of the dirt than actually dig in. That's not entirely true, as we'd softened the dirt up a bit with the mattock, but that was an uneven softening, resulting in me having to fight the tiller to try and keep it going in a straight line. Luckily, Catherine found a pickaxe in the tool shed, and so, between her softening the surface dirt a bit with the axe and me straining to keep the tiller moving in a straight line, we managed, within an hour, to go over the plot a single time.

In some places, I knew I'd barely cleared three inches of topsoil (which I was assured was excellent). That was simply unacceptable. So, after taking a break due to my aching muscles from the bucking tiller, I fired it back up and put it back to work. The second and third passes were considerably easier, requiring almost no pickaxing and considerably less fighting to keep the tiller in a straight line.

For those of you who've never handled a rotary tiller before, I can pretty much only describe it like this: imagine holding three or four big-ass hungry dogs on leashes back from a pile of steaks. They want to go one direction, you want them to go another, and they're going to fight you every step of the way. Needless to say, I was, and still am, sore as hell. Admittedly, I'm not the pinnacle of human fitness, but even when I was working grounds at a county park, the tiller was always one of the most tiring jobs.

Since we’re leaving town for a week today, we decided to hold off planting. Instead, we covered the recently disturbed dirt with black plastic, hoping to keep the weeds from taking hold again, until we can plant next week. I’ll be writing up another post on that, detailing what we’re planting and how we’re doing it.

OpenID: Stop the Username/Password Glut

I'm going to admit it: I'm becoming an OpenID fanboy. OpenID, for those who haven't heard, is a fairly new open-standards movement to help deal with the fact that we're required to create new accounts for virtually every site on the Internet. I know that I was recently dissuaded from posting a comment on an Ars Technica article because I simply didn't want another damn ID.

I understand why people want login credentials. Hell, I require that I vet every unauthenticated comment on this very blog, simply because if I didn't, there would be a ton of random spam all over the place. I experimented with reCAPTCHA support, but I've had a lot of trouble with the Movable Type plugin that embeds the reCAPTCHA form, so I gave up on it. I plan to revisit the CAPTCHA issue soon on this blog, but in the meantime Movable Type 4 ships with OpenID support built in, and commenters authenticated via OpenID get their comments posted immediately.

Jeff Atwood had a good post today on using OpenID for his forthcoming stackoverflow.com project. He has a great series of screenshots of what the OpenID login process looks like, but the basics are these:

  1. When you see a page letting you login with OpenID, you enter your OpenID URL.
  2. You get redirected to your OpenID provider, who takes your password
  3. You can choose to authorize the requesting site, to simplify future logins

And that's it. Three steps, and you're authenticated. Setting up the account with the OpenID provider can be a bit heavier, but it's something that only needs to be done once. And, if you don't want to remember your OpenID? You can set a series of tags on your homepage to tell OpenID requestors who to talk to in order to authenticate you. It's simple, and as long as you can edit HTML, you can easily add this to the header. The following code is specific to myopenid.com, but other providers should offer similar functionality.

<meta http-equiv="X-XRDS-Location" content="http://www.myopenid.com/xrds?username=dinglebert.myopenid.com" />
<link rel="openid.server" href="http://www.myopenid.com/server" />
<link rel="openid.delegate" href="http://dinglebert.myopenid.com/" />
<link rel="openid2.provider" href="http://www.myopenid.com/server" />
<link rel="openid2.local_id" href="http://dinglebert.myopenid.com/" />

In fact, if you were to visit http://dinglebert.myopenid.com/ (if dinglebert existed as a user, that is), those links to the openid.server and openid2.provider appear on that page! All you're really doing is cutting out the middleman and providing the address of the web service that your OpenID uses. Yahoo!, who provides OpenIDs to everyone with a Yahoo! account, even goes so far as to allow its users to use yahoo.com as their OpenID address. Simply enter yahoo.com, and Yahoo! will verify your credentials and approve access to the OpenID site. This is still technically a beta service, so Yahoo! does require you to opt in, but the process is less painful than any other option, if you have a Yahoo! ID already.
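For the curious, here's a rough sketch of what a relying party does with those tags: fetch your page, find the provider URL, and send you there. It uses Python's standard html.parser and only illustrates the discovery step; it is not a real OpenID library.

```python
from html.parser import HTMLParser

class OpenIDDiscovery(HTMLParser):
    """Collect the OpenID <link> endpoints out of a homepage's HTML."""
    def __init__(self):
        super().__init__()
        self.endpoints = {}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel = attrs.get("rel")
        if rel in ("openid.server", "openid.delegate",
                   "openid2.provider", "openid2.local_id"):
            self.endpoints[rel] = attrs.get("href")

homepage = '''
<link rel="openid.server" href="http://www.myopenid.com/server" />
<link rel="openid2.provider" href="http://www.myopenid.com/server" />
'''

parser = OpenIDDiscovery()
parser.feed(homepage)
print(parser.endpoints)  # which server to send the authentication request to
```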

Some people are claiming that OpenID isn't secure. I argue that OpenID is only as secure as your OpenID provider. But let's go through the main points in the linked blog post.

  1. Phishing Weaknesses - The attack is basically identical to standard e-mail phishing: trick the user into going to a fake login page that looks like the real deal, and steal their credentials. Since the site you're trying to authenticate to is responsible for sending you to your provider, this is actually trivial to accomplish. However, it is also trivial to avoid. Your provider should have an SSL login page, and if the certificate doesn't match the page, don't log into it. Easy. Admittedly, most users will click right through these security warnings, but that has been improving. Plus, phishing filters in browsers have been improving, which provides a bit more aid in mitigating the damage.
  2. Privacy Issues - Some people are worried about having all of your online activities tied to a single ID. For one thing, if your username were ever recycled by your provider and snatched up by somebody else, then the new person would have access to your old sites. The resolution is easy: don't recycle IDs. Storage is cheap, and while I may be disappointed that I can't be foxxtrot at every site I go to, I understand that IDs aren't always unique. With OpenID, all I require is a URL, and that can be unique. And despite one blogger's claims, OpenID will not destroy anonymity, certainly not any more than it is already destroyed. Most people use the same usernames from site to site anyway; you can easily track someone by following their username. If the poster on a different site isn't me, odds are they aren't going to write as I do. There is credence to concerns about a compromised OpenID, via credentials stolen by a keylogger or whatnot, but I'll address those shortly.
  3. Trust Issues - My provider is where the trust comes from. And with OpenID, you can choose to only accept certain providers. Of course, no other standard authentication answers this need very well either. E-mail is inherently insecure, and if my e-mail gets cracked, or can be observed en route, then all the mechanisms used today to validate identity based on ownership of an e-mail address go right out the window. However, there is an answer to this, one that can (and does!) utilize OpenID; more on that later.
  4. Too Complex - Okay, I’ll go with this one. It is harder to set up an OpenID the first time, and there are potential issues involved in using OpenID, particularly if your provider is having problems, or if there is a routing issue from the authorizing site to the provider. Unfortunately, I don’t think this one is answerable.
  5. Too Many Cooks - Another good point. More people want to provide OpenID than accept it. I can’t argue this one, but this is a social, not a technical flaw in OpenID.

The article needs to be taken with a grain of salt, however, as the writer is trying to sell people on his own Credentica U-Prove service, which provides a similar single sign-on experience utilizing strong encryption. It looks like a fine product, but there is no reason to claim that something similar can't be done with OpenID. Plus, I suspect that his system is susceptible to issues 4 and 5 above. There is one place where U-Prove is superior to OpenID: U-Prove has restrictions in its architecture that make it very difficult to determine whether accounts logged in to two different sites actually belong to the same person. I suspect that this could still be determined without breaking the encryption by watching the IPs that users log in from, which could eventually yield enough IP address information to make a strong guess, but U-Prove does offer that protection better, and you'd need access to multiple sites' access logs to determine anything.

However, I don't believe that most people are interested in keeping their identities completely anonymous online. Facebook and MySpace seem to be strong evidence for that idea. If you don't want certain activity associated with your OpenID, don't use your OpenID for that activity; it's that simple. I will always allow anonymous commenting on this blog (I only filter for spam). Anonymous commenting is important, and OpenID in and of itself will not destroy that.

As for trust: if you can trust my provider to authenticate me correctly, you can trust that my ID is who I say I am. That can be accomplished via multi-factor authentication. On the weekly Security Now podcast with Leo Laporte and Steve Gibson, Steve spent a few weeks fawning over the YubiKey. The YubiKey is a cryptographically secure authentication token generator that can be used by an OpenID provider to verify its users. How's this work? A secret key is stored in inaccessible memory on the key, and your OpenID provider (or the validation service it relies on) holds the corresponding secret needed to verify codes from that key. When prompted for your password by your YubiKey-enabled provider, simply hit the button on the USB YubiKey, and it will output a cryptographically secure, one-use-only password, which your provider will then verify and approve.

That's two factors, people: first the OpenID URL, which is easy to find, but also a non-repeatable YubiKey code, which only you should have. This system could be augmented further, as the RSA SecurID fobs are, with a small bit of data that only you know, though that isn't completely necessary. Lost your YubiKey? Have your OpenID provider revoke access to it until you've replaced it. Now, it's up to my YubiKey-enabled provider to verify my identity, and you can trust my identity because of my provider.
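The "one use only" part is the interesting bit. Here's a minimal sketch of how a provider might enforce it, assuming each code decrypts to a usage counter that must always move forward; the decrypt function and key store here are invented stand-ins, not Yubico's actual API.

```python
# Hypothetical replay protection for one-time passwords.
# aes_decrypt() and the key store are stand-ins, not a real validation API.
last_seen_counter = {}   # key ID -> highest counter accepted so far

def verify_otp(key_id, otp, aes_decrypt):
    """Accept the code only if it decrypts cleanly and its counter is fresh."""
    payload = aes_decrypt(key_id, otp)          # None if the code is garbage
    if payload is None:
        return False
    if payload["counter"] <= last_seen_counter.get(key_id, -1):
        return False                            # replayed or stale code
    last_seen_counter[key_id] = payload["counter"]
    return True
```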

OpenID isn't a silver bullet, and it isn't appropriate for every situation, either. It is a good solution in the intermediate term, though, particularly when tied to something like the YubiKey or an RSA SecurID. If you're accepting authentication via OpenID, I believe you will likely want the ability to accept only certain OpenID providers, but even then, you significantly reduce the number of usernames and passwords people need to know, and that alone is a goal worth seeking.

Microsoft: "We'll do ODF. OOXML, that'll have to wait...."

Microsoft announced yesterday plans to implement ODF support in Office 2007, which presumably will be ready for an SP2 release in at least half a year. Interesting, but not surprising, was the other announcement: Microsoft won't be releasing an ISO-compliant version of the OOXML format until Office 14 ships, which I doubt we'll see until at least 2010, if not later.

There is plenty of speculation about Microsoft's decision on this. I tend to agree with the impression that the OOXML spec finally passed by ISO is simply too different from what Microsoft offered in Office 2007, but they require the ability to save a standardized XML format to appease governments, particularly in Europe. This development makes me hopeful, first that the OOXML passed by ISO may actually be a decent format (unlikely), but also that it gives ODF a window to gain some mind share. ODF certainly won't be the default, and the Slashdot crowd is positive Microsoft will gimp it somehow.

I'm not convinced Microsoft will. The clients who are demanding ODF would certainly notice, and if the ODF support in Office was lacking, they'd be far more likely to switch to Open Office (or something similar) than continue working within the confines of an application which didn't meet their needs. With the number of governments beginning to require that documentation be stored in standardized, documented formats, Microsoft can't afford to have a bad ODF implementation if their implementation of OOXML is still years out. I would be apt to encourage people to look at Open Office anyway, as it does a great job of opening legacy MS Office files, and the more recent versions even do well with the Office 2007 OOXML files. The biggest hurdle has always been that Open Office ODF files (the default file type) wouldn't open in MS Office; on my Eee PC, Asus even went so far as to make MS Office formats the defaults for saving.

This is, in my mind, the best part about Microsoft implementing this. I'll be able to send people ODF files and have a reasonable expectation of them working. I don't think this will necessarily lead to more people using Open Office, however. Open Office has always had support for MS Office formats (though that support has improved greatly over recent months), and most people stuck with Microsoft. As with Linux users, Open Office users are a small percentage of office suite users, and Open Office users outside of the Linux platform are a smaller percentage still.

Any ODF mind share gained by this move from Microsoft will very likely be squashed in Office 14, with a compliant OOXML implementation that will undoubtedly be made the default format, and that most people will use simply because it's the default. Still, it gives people a choice, an option, and this option allows the embracing of a truly open standard, which may help open the door to others. Here's to waiting for Office 2007 SP2.

Pullman's Mandarin House


Catherine and I have been pretty disappointed by the Chinese food in Pullman since we moved down here almost a year ago. There are four Chinese restaurants, but three of them are owned by the same family, and the food is really pretty bad. It's just too Americanized, so it's pretty sugary without much to balance it. We'd been meaning to try the last place in town for quite some time, but our bad experiences had made us not terribly excited to bother trying.

How foolish we were.

The Mandarin House in downtown Pullman is a real Chinese restaurant. It's not as good as Spokane Valley's Peking Palace, not much is, but it is easily the best Chinese in Pullman. The Almond Chicken I was given had this fantastic crispy tempura which crunched nicely and was smothered in a great gravy that was thick enough to easily cover all the chicken, but not the consistency of library paste, like some restaurants' Almond Chicken. The Chow Mein and Egg Rolls were delicious, the egg rolls just crunchy enough to hold together nicely. Frankly, none of the food we had, from the Egg Foo Yong to the Kung Pao Chicken, was remotely disappointing. Not only was the food better than the alternatives, it was just plain good.

What I can't figure out is why the place is always so empty. They've got a $6.50 lunch special, and the food is at least as inexpensive as the alternatives, but it's just so much better. There is a possibility that the food may be a bit more authentic than most people are accustomed to, but Mandarin Chinese is really nothing strange. Hell, mostly it's meat and vegetables with gravy. Or perhaps it's just a newer restaurant, and people haven't learned about it yet.

If you’re looking for a great Chinese dinner, and you don’t want to spend too much for it, do not go to the Emerald. Do not go to New Garden. Go to the Mandarin House. Parking may not be quite as easy, but it’s worth walking a few blocks. I can’t say that about the alternatives here in Pullman.

Whole Food Adventures: Mayonnaise

Mayonnaise is a condiment most people have simply relegated to the nearly tasteless goo that they spread on their sandwiches. However, it doesn’t need to be this way. Homemade Mayonnaise can taste absolutely amazing, and is ridiculously easy to make.

You do need to be careful, however. While mayonnaise can be made with either a food processor or blender, or a whisk, the recipes used for the two methods differ slightly, but very importantly. I made the error of working a food processor recipe with a whisk, and while I've got a mayonnaise that is fantastic in tuna salad, it's not so great for normal sandwiches.

The recipe is simple. Take one whole egg and one egg yolk, add a pinch of salt, a tablespoon of lemon juice, and a teaspoon of mustard (preferably a Dijon-style) to your food processor or blender, and blend them for just a moment to build a simple emulsion. Then, slowly add 3/4 cup of good vegetable oil (olive or safflower, please) while blending. The emulsion will continue to thicken, and should turn a nice white color.

Don't do the blending at a very high speed, as you're likely to over-whip the mayonnaise, which can result in separation, which is not tasty. Proportions are important in this recipe, so please do take measurements and don't just guess; you will likely end up disappointed. If you want your mayonnaise to last a while, you can add a tablespoon of whey, which should allow you to refrigerate your mayo for months, useful if you make a lot at once.

As for last week's Kombucha, our black tea kombucha was a bit tart and nicely carbonated. Really quite tasty. We've already started a half-gallon batch, and have a secondary zoogleal mat sitting in the fridge, though I don't know what we're going to do with it yet.

High School Arrest?

One school district in Texas has an interesting mechanism for combating chronic truancy among students: GPS trackers on the offenders.

This really is a direct outgrowth of the misguided "No Child Left Behind" act, which places increased pressure upon the schools to ensure that minors at least earn a high-school diploma, even if the reason a student is failing is that they really just don't care. The programs used to date involve cheapening teaching methods by placing large amounts of focus on standardized tests, and bizarre programs like this new one in Texas, which almost always fail to address the real problems plaguing America's public education system: namely, that most students simply don't care, and they lack the support infrastructure at home that they need to help them succeed. Note that I'm not blaming the schools for the average student's failure; admittedly there are teachers who severely disadvantage their students just by showing up to work every day, but they are, in my experience, the minority.

Most people don't succeed because they don't want to. Or rather, they don't want to work hard enough to do what it will take for them to succeed. I'm college educated and did well in high school without ever trying, but I did not do nearly as well as I could have, simply because I didn't put enough effort into it. It wasn't until I was graduating from college with my Bachelor's degree in Computer Science that I started to actually feel ready to be a good student. That probably explains why my GPA didn't quite reach that 3.0 level. It wasn't that I didn't want to succeed; doing well in my classes just wasn't important enough to me early on to do a little more and ensure that success.

And that was in College. In High School, where any student with a decent capability for retention and comprehension is practically guaranteed to do well, it can be really hard to find the motivation to do any of that work. I was lucky, in that my High School offered plenty of honors programs and programs that stressed critical thinking and problem solving more than rote memorization. I’m not sure how those programs have fared since the introduction of the WASL.

As much as I was bored by high school, I still attended every day. I'm sure this was at least partly related to fear of reprisal from my parents, but regardless, I never bothered skipping, though looking back I can completely understand why some people did. So Bryan Adams High, largely as an attempt to keep truants from failing out, has begun a program where they get court orders to issue GPS devices to habitual truants, so that the school can know where these kids are and make sure they're at school. I suspect that the choice from the judge was the GPS and school, or juvenile detention.

The reference in the title likening this to house arrest isn't entirely fair, as the students can remove the GPS when school isn't in session, so it's not quite the same as the ankle bracelet house-arrest people are required to wear, but it still has some interesting connotations. For one thing, I would definitely head straight home after school to get rid of the GPS, rather than risk having it on me and having all my movements in my free time tracked. Apparently, an earlier version of the program actually used ankle bracelets, which a state senator has now likened to "slave chains", so the future of the program is definitely in question.

As with many such programs, the initial focus here, getting kids to go to school so that hopefully you can cram some knowledge into their skulls, is an admirable one. And the removable GPS is a reasonable compromise between giving students their own time and making sure they're in school. The problem I see is that many times programs like these get expanded far beyond their original scope and intent. Truancy is a crime, though I don't believe as serious a one as No Child Left Behind makes it, and there should be repercussions for truants. This approach has met with some success, as noted in the New York Times article linked above, and GPS-based probation for truants is certainly preferable to incarceration. If the program remains a temporary probation, I think that good could come from it. However, someday someone will try to expand it. And even then, is the gain really worth the cost?

Whole Food Adventures: Kombucha

Kombucha is an ancient drink, made by fermenting tea with acetobacters and yeast. Most people refer to the culture as a "mushroom", but biologically speaking, it's actually a zoogleal mat. The process of making Kombucha has roots traced back to China as early as 250 CE (though this is only the first recorded reference), and the process is closely akin to that used to make wine-based vinegars, though typically the tea is not allowed to ferment long enough to become full vinegar.

Making the tea is amazingly simple. You take a small sample of old Kombucha as a starter; the starter seems mostly to keep the culture alive between batches, but a lot of what I read suggests that at least a pint of starter should be used when you want a total of a gallon of Kombucha, and others seem to suggest as much as a quart. Due to the small amount of starter culture that came with our mat, we're beginning with a quart, also so we can determine if we actually want to continue to drink this.

So, why are we drinking Kombucha? Well, there are claims that it aids in liver detoxification, but virtually no scientific study of the drink has been done. We do know it hasn't caused toxicity problems in rats, though researchers have expressed that care should be taken with Kombucha and other medications, though frankly anything that you consume could potentially mess with medication. Ultimately, though, we're looking at it as a nice tart, carbonated drink that doesn't contain a large amount of sugar. Modern soft drinks contain some pretty nasty stuff, so having a nice refreshing alternative can't possibly hurt.

I didn't get any pictures of the making, but literally, it involved adding some water and sugar to a pot, bringing it to a boil, and then adding tea bags (we used four tea bags for a quart). We used a basic Lipton black tea, and there was only about a third of a cup of sugar in the water as it boiled. Once the tea was near room temperature, we added it to a quart mason jar and poured in the culture we bought off the Internet. Within 24 hours, the culture had sunk near the bottom of the jar, and I suspect in a day or two we'll see a new culture forming on the surface. This is one of the cooler parts of Kombucha: we'll always have a culture, since we get a new one for each batch we brew.

Feelings on the brew will be next week.

As for last week’s post, on the Buttermilk, I believe we made an error when we made the Buttermilk the first time. I think we over-filled our Mason jar, so we weren’t able to see when the Buttermilk had begun to “separate” as the instructions said. Because of this, we left it on the counter for almost four days. It still was usable, but it was considerably thicker than I believe it was intended to be. As an explanation of its consistency, it poured like a thick, viscous ooze, and if I put a knife into it, the knife had virtually no residue on it when I pulled it out.

That’s not to say it wasn’t good stuff. We made Buttermilk Biscuits and Pancakes over the last week, and while soaking the flour in the buttermilk was basically a lost cause, since the buttermilk didn’t flow and penetrate the flour, they were still really tasty, and the slight tartness of the buttermilk was a fantastic flavor in these baked goods. We’re ready to start a new batch, and this time, I think things will go much better.

Increasing Data Effectiveness

Scott McPherson, CIO of Florida's House of Representatives and former CIO of Florida's Department of Corrections, has a new post on Computerworld where he addresses the continued need for data integration for law enforcement in this country. While I am generally against compromising privacy for gains in security, I believe that most of what Mr. McPherson is talking about are perfectly reasonable changes that we should be calling upon the government to finance, as they are our best chance at true security, and for the most part they require virtually no sacrifices on our part.

To sell his point, McPherson tells the story of Mohammad Atta, one of the 9/11 hijackers, and his run-ins with the law shortly before that infamous attack. Basically, he'd been cited, failed to appear in court, had a warrant issued in his name, and got pulled over again, but because he was in the next county he received only another citation, because the officer had no way to find out about the warrant filed in the neighboring county. If Atta had been pulled over in a neighboring state, I think we'd still like the officer to know about outstanding warrants, but the next county? Certainly an arrest should have been made.

Had Atta been arrested, strong evidence of the plot may have been discovered that would have potentially allowed the entire plot to be foiled. Of course, further communication problems within the intelligence community (namely the CIA and the FBI) likely would have served as a further detriment to anything actually being caught, but that is for another post, at another time.

So what would this require? A fairly basic, standardized architecture of services which would allow queries to be made by officers, even in the field, and which would return any relevant information. What does that require? A standard method of communication, which luckily the Department of Justice has created. And companies are already working on solutions in this space. ANalyze Soft did a presentation at Boise Code Camp this year about their project providing consistent data integration for the Idaho Dept. of Corrections. McPherson has wonderful things to say about Appriss' JusticeXchange products, including how much it helped the Florida DoC track repeat offenders and probation violations. Using JusticeXchange, corrections officers can be notified if an absent parolee gets arrested across the state or across the county, and take action on the issue before the parolee ever sees a judge.
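To make the idea concrete, here's a hypothetical sketch of what a field query against such a shared service could look like from the client side. The URL, endpoint, and fields are invented for illustration; they don't reflect the actual DOJ standard or any vendor's product.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical shared warrant-lookup service; the URL and fields are invented.
BASE_URL = "https://warrants.example.gov/api/v1/search"

def outstanding_warrants(name, dob):
    """Ask every participating jurisdiction for warrants matching this person."""
    query = urllib.parse.urlencode({"name": name, "dob": dob})
    with urllib.request.urlopen(BASE_URL + "?" + query) as response:
        return json.load(response)

# An officer at a traffic stop would then see warrants filed by any
# participating county in a single call, not just their own records.
```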

Most people don't commit crimes. We just don't. But there is a lot of evidence to suggest that the people who do tend to commit more than one. Perhaps laws or policies will become necessary that limit how long infractions show up on criminal records like this, particularly as that information becomes more easily shared. However, the benefits of sharing this data are enormous, and we need to be funding these sorts of projects. Small towns don't need riot gear nearly as badly as they need good means to communicate crime data with other policing agencies across the nation.

Interesting, but more frightening, was the discussion of Hank Asher's MATRIX system. Now, Hank Asher is a big name in the data mining industry, having earned quite a fortune by examining personal information for patterns. He chose, in 1999, to turn this focus toward law enforcement, which is where MATRIX was born. MATRIX analyzed criminal and other governmental records (licensing, etc.) looking for 'anomalous' behaviors that could potentially be related to terrorist activity. Incidentally, this is fairly similar to what Visa and the other credit card providers do to try to find fraudulent activity on your credit card.

It is unclear whether or not MATRIX tapped into commercial data, though it almost certainly could have. The [ACLU did a lot of research on MATRIX](http://www.aclu.org/Privacy/Privacy.cfm?ID=14240&c=130), trying to determine what information it was pulling in. Ultimately, these probes led to MATRIX losing its funding because of its similarities to DARPA's Total Information Awareness project, which sought to perform an amazing amount of discovery on hundreds of millions of Americans, the vast majority of whom were not engaged in anything resembling terrorist activity. Congress ended up killing TIA, and MATRIX followed, due to its similarity in function.

So, how do I feel about MATRIX, and technologies like it? In general, I think it's almost certainly going to be too broad in scope. I don't really need anything keeping track of my purchasing habits and correlating that data with my phone records and browsing habits. These are the reasons I'm generally against data mining. Visa isn't watching my transaction history for my own good, I can always report fraud myself; they're doing it so that they can better target their ads at people like me, or to sell advertising information to others. The sort of data mining in the first part of this post was okay: it was tracking criminals engaged in criminal activity. It wasn't tracking everything else those people were doing, or tracking the rest of us at all.

Associating data across many, many databases can be a very, very powerful tool, but I disagree with Mr. McPherson that just because people are data mining, we need to just suck it up and deal, as he basically says in the comments. Large-scale data mining like MATRIX had the possibility to prevent terrorist attacks; Asher demonstrated that after the fact with the 9/11 attackers. However, is the potential loss of liberty, and the additional harassment that could come to honest Americans who happened to fit the profile MATRIX was looking for, worth it? I'm not convinced.

Developers Tiring of Windows?

I've not much cared for Windows programming for years. I read Charles Petzold's Programming Windows, 4th Edition years ago, and I was absolutely astounded at the large number of inconsistencies and bizarre decisions that appear to have been made in the API. Even as a beginning programmer, which I was at the time, I knew that something about the Win32 API was seriously messed up.

For me, my first glimpse of true computing nirvana was the first time I installed Linux. I was a subscriber to the Maximum PC/Boot magazine, which sent out monthly CDs with software. One month, they included Debian 1.2 (Rex) on the CD, which happened to just about coincide with the building of my first personal computer, one that I didn't need to share with the rest of the family. I'd already partitioned my massive 6.2 GiB hard drive for a secondary OS, and I proceeded to install Debian. This was before apt, when the only option was dselect. And dselect would easily let you force it to do system-breaking things. I installed all the software off the disk, including dozens of conflicting packages that couldn't operate together. A book on Linux and a brief stint on Red Hat later, I was back on Debian by '99, and went Linux full time in 2001 when I reached college.

The birthing pains on Linux were rough, but the idea of a completely free OS was really interesting to me, though I was definitely more interested in the free-as-in-beer aspect at the time. However, I suddenly found myself at a system with a full development environment and really excellent documentation available. Once I discovered the joy of the GTK+/GNOME GUI, and how superior widget packing is to canvas-based layout for GUIs, I was forever against the Windows API. I've used it, certainly, but I've always kept in mind that there is something better.
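For anyone who has never seen the packing model, here's a minimal sketch using PyGTK (the Python 2-era bindings for GTK+ 2): widgets go into boxes that work out their own geometry as the window resizes, rather than being nailed to pixel coordinates.

```python
import gtk  # PyGTK, the Python bindings for GTK+ 2

window = gtk.Window()
window.set_title("Packing demo")
window.connect("destroy", gtk.main_quit)

# A vertical box lays its children out top to bottom; no pixel coordinates.
vbox = gtk.VBox(spacing=6)
vbox.pack_start(gtk.Label("Widgets are packed, not positioned"), expand=False)
vbox.pack_start(gtk.Button("This button grows with the window"),
                expand=True, fill=True)

window.add(vbox)
window.show_all()
gtk.main()
```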

On Ars Technica, Peter Bright has begun posting a series of articles claiming that Windows is going to fade away because applications on Windows suck, largely because developing applications for Windows sucks. In his first article, he lays out why he feels that Mac OS X is the second coming, and I can completely relate to many of the reasons he presents from a developer's perspective, though I've done no programming on Mac OS X yet. Windows simply isn't fun to develop for. For the majority of developers, for whom programming is simply a job, this really isn't a problem. However, the people for whom this is a problem tend to be the people leading the industry, and as better and better alternatives to Windows arise, and market penetration of those alternatives increases (which it is, slowly but surely), developers will begin to be lured away.

Mr. Bright's argument amounts to the fact that a very large portion of the Windows API is still tainted by decisions made inside Windows 25 years ago, decisions that at the time may have been perfectly reasonable but today impose bizarre and unnecessary restrictions on users. Additionally, Microsoft is so concerned with backwards compatibility that the system has almost as many workarounds as it does actual rules. Working at an institution with more special cases than actual rules, I am particularly pained by this method of doing things. It's a difference in culture, though. In the Windows world, a program written for Windows 1.0 has a decent chance of compiling and running on Windows Vista. In the Mac world, a fair number of applications developed for Mac OS X 10.4 required refactoring to work on Mac OS X 10.5. Apple wants to provide the best API possible, and if doing so breaks an application which depends on broken behavior, then so be it. For Microsoft, this was simply unacceptable.

In my opinion, many Linux libraries have reached a better compromise for this problem. The API should be stable between patch-level releases, and only minor refactoring should be necessary for minor releases. If the major version changes, odds are a significant amount of the application will need to be reworked, but it is easy to have multiple versions of a library available to be linked against. And developers can be as specific or as general about which version they're linking against as they want. Do you require a specific patch level? Link against a file matching that patch level (if you don't fix your app, which you should). Only require a certain minor release? Just link against that, and all will be well, regardless of how many patches are released for it. I still have a couple of apps on my computer which link against GTK+ 1.2 instead of the 2.x series, and while I can kind of tell that they're different, they run perfectly fine.

Microsoft's best chance to correct the mistakes of the past and forge a new way forward was squandered in the .NET Framework. Windows.Forms, in particular, was a huge letdown for me, as it's little more than an incremental update to Windows GUI programming. Yes, I do like .NET in many respects, but the Microsoft APIs are often very frustrating to use. And the .NET bindings outside of the core .NET Framework (in other words, most of the Microsoft namespace) are remarkably poorly documented. Want to know how to access the permission model in SharePoint? Prepare to spend the next several hours digging through painfully inadequate documentation.

Bright [really dislikes .NET](http://arstechnica.com/articles/culture/microsoft-learn-from-apple-II.ars). I tend to disagree with him about most of his complaints. I think .NET offers a lot, and is quite a step up from the C and C++ APIs. It's got a fast VM, the specs were open, and the API began pretty clean and clear. For the most part, the core API has remained pretty solid, though the Microsoft extension APIs can be pretty weak. I still can't quite figure out why they didn't rename the DTS namespace SSIS when they changed the product name.

There is a forthcoming article which should explain what Bright thinks Microsoft needs to do to protect their ecosystem, but I think he may be a bit premature in how dire he makes the situation out to be. Users really fear change, and there are many examples of superior technology failing once turned over to consumers. Pressure on Microsoft is good, though. It will take pressure to force Microsoft to improve, but a product also needs to be good to apply that pressure. The industry is changing, and I'm not sure Microsoft can change with it, though I'm not willing to call them out of the game.

Continued Adventures in Whole Food: Buttermilk

Last week Catherine and I made homemade yogurt, which I've been using primarily as a topping for my cereal in the morning. It's just a bit tart, but still really tasty, and the best part of yogurt is that making more is basically as simple as adding more milk. I'll talk about yogurt soon.

This week, however, I'm going to be talking about cultured buttermilk. We bought a buttermilk culture off of Etsy, and making the buttermilk is painfully simple. We took a mason jar, filled it with a quart of whole milk, and added the culture, which amounted to about a 1/4 cup. As we opened the culture, Catherine's first reaction was that it 'smelt like feet'. Unfortunately, I agreed with the sentiment. Fortunately, the smell does not carry.

Since it is still relatively cool here in Pullman, we left the cultured milk on the counter, covered with a flat dishtowel to keep stuff out while ensuring that pressure wouldn't build in the jar, as the process is basically bacteria eating lactose to create lactic acid. We ended up leaving the culture on the counter for the next four days, though in retrospect, the culture probably began to separate sooner, and we could have (possibly should have) put it in the fridge (with a real lid) a day or two earlier. The important part of this fermentation period is that you don't let the culture exceed ~80 degrees Fahrenheit, though around here in the summer, that might be hard.

So, why bother? Well, the unique acidity of Buttermilk really lends itself well to baking. There is nothing quite like the taste of real buttermilk pancakes, or biscuits for that matter. It really seems like Buttermilk in lieu of normal milk will pretty much always improve a baked good, though I haven’t fully tested this theory yet. I suppose Cookies may not always benefit, but most breads and other leavened baked goods will likely benefit.

We’re planning to do some baking this week, so I should be able to report back soon with the results.

Adobe Fires Salvo in Web Applications Battle

The battle between Adobe's Flash and Microsoft's Silverlight for control of Rich Internet Applications (RIA) saw an interesting development yesterday with the founding of Adobe's Open Screen Project. It seems that Adobe is taking an open-source stance, and it should be interesting to see how that stands up against the Mono Project's Moonlight.

In the Flash corner, we have a mature product, which has existed on the Internet for 10+ years. It's a well-known programming model, with an install base of ~98% of computers on the Internet. However, in that time, it's developed a bad reputation due to numerous abuses of the technology, its history of accessibility issues, and the tendency of web developers to use Flash elements which add little to the overall application. That was the Flash of yesterday, however. Now, Flash is used largely for video (YouTube, for instance) and application development. And with AIR, Flash applications can be run on the desktop, complete with an SQLite database backend.

Silverlight is the new game in town. It targets the same sorts of uses: video and application development. The great part about Silverlight is that you can use any .NET language, while Flash is limited to ActionScript, and you have access to a large part of the .NET library. If you know .NET, picking up Silverlight should be easy. Unfortunately, you won't be able to run the exact same code base on the desktop. The Mono Project has built Moonlight such that Silverlight applications can run as desktop applets, which Mono seems to plan to use primarily for desktop widgets.

Typically, I’ve been more excited about Silverlight. This largely has been due to the fact that I could write code for it using C#, and now through the DLR, I can use a whole slew of other languages as well. Looking at the two now, I still like the .NET roots of Silverlight, but I also like how Adobe is making Flash more open. Still, most of my time in .NET is spent using Mono, so the freedom of Flash almost seems moot.

I think this is a great step for Adobe and Flash, but frankly their real saving grace, if they have one, is in AIR. Of course, since Silverlight is built around the Windows Presentation Foundation and Windows Communication Foundation, Silverlight code can also be easily converted to run on the desktop. I haven't used Flash, and my Silverlight experience is minimal, but these two technologies are pretty evenly matched. Silverlight is a bit ahead in my mind because of its ability to use more than a single language, but Adobe isn't in a position to be called out of this game just yet.

Who's Afraid of a Bit of Information Disclosure?

Per Enrico Zini, the Italian tax men and women decided it would be a wonderful idea to post every single Italian's reported income, allegedly with their name and address as well! Apparently the site was having connection issues due to the flood of people trying to verify this after the Reuters story, but for the Italians, it looks like the best they can hope for is that the people trying to see it were just curious, and weren't making any plans for that data.

At least that data has been taken down now, but the Minister of the department tried to somehow blame his decision on Americans. Admittedly, our government has made some pretty gross violations of personal privacy in the last seven years, and some of that was people clamoring for those liberties to be taken in the name of safety. Still, nothing that has happened in the US has been anywhere near this level of disclosure. In this country, it’s bad form to even ask what a person earns. Apparently, in Italy, for at least a few hours, you could simply go look. Companies must have loved dealing with the fallout from that.

Enrico's comments about the plaintext CAPTCHA were pretty amusing. Typically, when trying to meet accessibility requirements, most people use an audio CAPTCHA. I'm not even sure why they tried to obfuscate the text using extra <span> tags, since any screen reader would need to be able to read the text in order for it to be "accessible", so clearly the text is already machine-readable. Periodic exercises in futility seem to be common in both IT and government worldwide, I suppose.
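Just to illustrate how little that kind of obfuscation buys, here's a rough sketch of reassembling span-chopped text with Python's standard library; the sample markup is invented, not taken from the actual site.

```python
from html.parser import HTMLParser

# Invented example of the "obfuscation": the code is merely chopped into
# <span> fragments, which any parser puts back together trivially.
obfuscated = '<span>4Y</span><span>7</span><span>QK</span>'

class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

collector = TextCollector()
collector.feed(obfuscated)
print("".join(collector.parts))  # -> 4Y7QK
```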

The fact that the Italian government would provide a functional road map to identity thieves, kidnappers, burglars, and other detestable portions of society is appalling. The closest thing to this level of purposeful information disclosure I can think of in the US is the Social Security Death Index, which at least has a reasonable rationale behind it: it allows people to verify that an SSN doesn't belong to a dead person before allowing its use. Unfortunately, many lenders don't even bother checking. At least they haven't so far; perhaps that's about to change.

We are forced to give a lot of personal information to the government and companies all the time. Had an Italian corporation done this with their customer files, then that company would be facing enormous civil and likely governmental fines and other punishments. The director who approved this disclosure deserves, at the very least, to lose his position in a disgraceful manner. He violated the trust of every single Italian, who implicitly trusted their government to treat their tax information confidentially. Hopefully, the Italian people take action.