August 2008 Archives

Cool Projects: Laconica

By now, most people have probably heard of Twitter. And if you pay much attention to the stuff on this page, you’re aware that I’m a member of Twitter, and that I tweet occasionally. Twitter is what’s known as a microblogging platform, micro because you only get ~140 characters to put your post in. What’s interesting about Twitter is that it’s evolved into more than just a blogging platform. In fact, many people use it almost more like a chat room, with threads of conversation being woven among each other. Plus, Twitter is a real Web 2.0 sort of technology, with a plethora of APIs to put data into and get data out of Twitter. Most of my experience with Twitter never even touches the Twitter website.

But some people haven’t been happy with Twitter. Some because Twitter has had stability issues. Some because features, like IM, that Twitter pushed heavily have been disabled for some time. For others, it’s simply that Twitter, for all its openness, is a closed system.

Enter Laconica.

Laconica is an open-source microblogging platform, but it’s more than that. Laconica is a framework to build a network of microblogging communities. What does that mean? Well, Twitter is monolithic. Twitter, even though they don’t appear to have a business model, wants everyone to have to go to Twitter in order to microblog. This means that Twitter has thousands of users with no common thread to tie them together. Functionally, Twitter has become a set of communities within the larger Twitter community, which, given the sheer size of the system, is probably related to why Twitter has had stability issues.

Laconica tries a different approach. With Laconica, many sites can run Laconica communities. This makes the Public Tracker (a particularly useless feature of Twitter) actually useful, because the people on the Laconica server I’ve chosen to join share some sort of common interest. Currently there are a bunch of Laconica servers, the biggest of which is Identi.ca. I myself am a member of the TWiT Army, being a fan of the TWiT Netcast Network, and liking the technical bent of many of the members of the Army.

I hear you asking, “But won’t that just create a lot of Microblog Islands?” Absolutely not, I say. The best feature of Laconica is the ability to subscribe to users who aren’t actually on the same server as you are. If I see an Identi.ca user I want to follow (like, say, the guy responsible for Laconica), I just subscribe, give the site my profile address, and immediately the Laconica server I’m a member of will pull that person’s updates into my friend feed. If I decide to move Laconica servers, I can take all my friends and contacts with me, something Twitter would make very difficult, but which is a key part of Laconica’s design.

Laconica is about Microblogging in a free and open way. Is it going to kill Twitter overnight? No, but I do suspect that many people are going to shift as individual communities come up that are more interesting to them. The ability to follow the public feed in a meaningful way has caused me to cut my Twitter usage significantly, and I’ve heard the same from quite a few others. Plus, the code is open, so if a feature isn’t available (like, say, the ability to specify a JavaScript callback like Twitter allows for the sidebar tweets on this site), I can hack it in and make it available to everyone else.

What’s most important, though, isn’t the openness of the code so much as the openness of the data. Since I can take my Laconica friends with me whenever I leave a particular site, I never really lose anything if I choose to move sites, and my friends can continue to follow me even after I’ve left that particular community for a different one. It’s about freedom. It’s about community.

I’d seriously suggest listening to the Laconica episode of FLOSS Weekly. If you don’t get excited about OpenMicroblogging from that, then you probably aren’t going to get excited about microblogging in general. Oh, and if you’re interested, follow me.

Dynamic Loading of Print Stylesheets with YUI

By and large, people don’t want to send to the printer the exact same thing they see on the screen. The screen display is often full of ads in difficult locations, and has sidebar and navigation information that isn’t relevant to the printed page. For this reason alone, it makes a lot of sense to offer a print-optimized stylesheet for users who opt to print your content, and luckily this has become pretty easy. Virtually every browser in use today supports the ‘media’ attribute on the link tag (the tag used to include remote stylesheets), which, if set to ‘print’, will only be applied when the user goes to Print, or Print Preview, one of your pages.

The problem with this is that the print CSS still follows the loading rules that all stylesheets are beholden to, meaning that if the print stylesheet is going to override anything, it needs to be the last stylesheet loaded, even though it won’t be used to display anything on screen. On a site like the new WSU Catalog, where we display different CSS based on the options for the page the user is visiting, specifically the campus of interest, this leads to an interesting problem. Namely, Firefox 2 does not allow you to set the media attribute on a link tag dynamically once the CSS has loaded.

So, what do I mean by this? Well, consider the following code, using the Yahoo User Interface (YUI) Get Utility:

    var urls = [
        "http://designer.wsu.edu/template/css.aspx?key=11f14af0axx081",
        "http://www.wsu.edu/navigatewsu/gen2/print.css"
    ];

    // Load both stylesheets, then flip the print stylesheet's media
    // attribute once YUI reports that the nodes are in the document.
    YAHOO.util.Get.css(urls,
    { onSuccess: function(o) {
        for (var i = 0; i < o.nodes.length; i += 1) {
            // The print stylesheet is the one with 'print' in its URL.
            if (o.nodes[i].href.indexOf('print') > 0) {
                o.nodes[i].setAttribute('media', 'print');
            }
        }
    }
    });

In the above JavaScript, I’m loading a WSU Dynamic CSS Template, as well as the official WSU Print template. Once the CSS is loaded, I find the print CSS, and set its media attribute to print. This works in IE 6/7 and Firefox 3. However, on Firefox 2, the print CSS is applied even when not in printing mode. Not willing to stop there, I ran a test where I built the <link> tag directly and inserted it with the media attribute already set, and sure enough it worked.
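For reference, the test looked roughly like this; a minimal sketch rather than the exact Catalog code, building the link element with the media attribute already in place and appending it to the head:

    var link = document.createElement('link');
    link.setAttribute('rel', 'stylesheet');
    link.setAttribute('type', 'text/css');
    link.setAttribute('media', 'print');
    link.setAttribute('href', 'http://www.wsu.edu/navigatewsu/gen2/print.css');
    // Because media="print" is present before the node enters the document,
    // Firefox 2 honors it.
    document.getElementsByTagName('head')[0].appendChild(link);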

This implied to me that YUI could be extended to support Firefox 2 in this manner. The question was how? I see two answers to that question.

The first would be to allow an optional object associated with each URL: an associative array of attributes to apply. The second would be to allow the object that Get.css takes to accept a set of attributes to apply. The first one is potentially confusing; the second requires more code. Ultimately, this probably means the second one is better. Also, both sets of changes require hosting the modified Get utility yourself, as you can’t simply replace the necessary code at run-time.

But, for the sake of example, let’s talk about what the first one would require. To the developer, the interface would look like this:

    var urls = [
        "http://designer.wsu.edu/template/css.aspx?key=11f14af0axx081",
        ["http://www.wsu.edu/navigatewsu/gen2/print.css", { "media": "print" }]
    ];

    YAHOO.util.Get.css(urls);

It’s not much of a change, and the code is fully backwards compatible with the old way, but it’s not quite as obvious what the intent is. For the second method the external interface would look like this:

    YAHOO.util.Get.css("http://designer.wsu.edu/template/css.aspx?key=11f14af0axx081");
    YAHOO.util.Get.css("http://www.wsu.edu/navigatewsu/gen2/print.css",
    { "attributes": { "media": "print" }
    });

Method 1: URLs with Attributes

The necessary code changes are visible in the following unified diff:

--- get-debug.js    2008-06-18 08:18:27.939931500 -0700
+++ get-debug-changed.js    2008-08-28 11:34:24.478309800 -0700
@@ -87,15 +87,15 @@
      * @return {HTMLElement} the generated node
      * @private
      */
-    var _linkNode = function(url, win, charset) {
-        var c = charset || "utf-8";
-        return _node("link", {
+    var _linkNode = function(url, win, custom_attrs) {
+           var attrs = custom_attrs || { };
+           attrs.charset = attrs.charset || "utf-8";
+        return _node("link", lang.merge({
                 "id":      "yui__dyn_" + (nidx++),
                 "type":    "text/css",
-                "charset": c,
                 "rel":     "stylesheet",
                 "href":    url
-            }, win);
+            }, attrs), win);
     };

     /**
@@ -106,14 +106,14 @@
      * @return {HTMLElement} the generated node
      * @private
      */
-    var _scriptNode = function(url, win, charset) {
-        var c = charset || "utf-8";
-        return _node("script", {
+    var _scriptNode = function(url, win, custom_attrs) {
+           var attrs = custom_attrs || { };
+           attrs.charset = attrs.charset || "utf-8";
+        return _node("script", lang.merge({
                 "id":      "yui__dyn_" + (nidx++),
                 "type":    "text/javascript",
-                "charset": c,
                 "src":     url
-            }, win);
+            }, attrs), win);
     };

     /**
@@ -245,12 +245,18 @@


         var url = q.url[0];
+        var custom_attrs = {};
+        if (lang.isArray(url)) {
+           custom_attrs = url[1];
+           url = url[0];
+        }
+        custom_attrs.charset = q.charset;
         YAHOO.log("attempting to load " + url, "info", "Get");

         if (q.type === "script") {
-            n = _scriptNode(url, w, q.charset);
+            n = _scriptNode(url, w, custom_attrs);
         } else {
-            n = _linkNode(url, w, q.charset);
+            n = _linkNode(url, w, custom_attrs);
         }

         // track this node's load progress

It’s a fairly small number of code changes, but it does change the internal structure fairly significantly. Some of these changes are unavoidable in trying to solve this problem; others may be avoidable.

Method 2: Attributes for All

The diff in question:

--- get-debug.js    2008-06-18 08:18:27.939931500 -0700
+++ get-debug-otherchanges.js   2008-08-28 12:41:30.457239000 -0700
@@ -87,15 +87,16 @@
      * @return {HTMLElement} the generated node
      * @private
      */
-    var _linkNode = function(url, win, charset) {
+    var _linkNode = function(url, win, charset, attrs) {
         var c = charset || "utf-8";
-        return _node("link", {
+        var a = attrs || { };
+        return _node("link", lang.merge({
                 "id":      "yui__dyn_" + (nidx++),
                 "type":    "text/css",
                 "charset": c,
                 "rel":     "stylesheet",
                 "href":    url
-            }, win);
+            }, a), win);
     };

     /**
@@ -106,14 +107,15 @@
      * @return {HTMLElement} the generated node
      * @private
      */
-    var _scriptNode = function(url, win, charset) {
+    var _scriptNode = function(url, win, charset, attrs) {
         var c = charset || "utf-8";
-        return _node("script", {
+        var a = attrs || { };
+        return _node("script", lang.merge({
                 "id":      "yui__dyn_" + (nidx++),
                 "type":    "text/javascript",
                 "charset": c,
                 "src":     url
-            }, win);
+            }, a), win);
     };

     /**
@@ -248,9 +250,9 @@
         YAHOO.log("attempting to load " + url, "info", "Get");

         if (q.type === "script") {
-            n = _scriptNode(url, w, q.charset);
+            n = _scriptNode(url, w, q.charset, q.attributes);
         } else {
-            n = _linkNode(url, w, q.charset);
+            n = _linkNode(url, w, q.charset, q.attributes);
         }

         // track this node's load progress

This diff is shorter, and the API is more consistent with the Get utility’s old behavior, as well as with the rest of YUI. This is probably the better answer to this problem. In fact, the only downside is that a second call to Get.css is required, and is that really that much of a negative?

Method 3: YUI ain’t Broken

Okay, there is actually a means to accomplish this without changing YUI, and I would be remiss to not mention it. It looks something like this.

In the Header:

    <link id="print_css" media="print" type="text/css" charset="utf-8" 
        rel="stylesheet" href="http://www.wsu.edu/navigatewsu/gen2/print.css" />

In the Script:

    YAHOO.util.Get.css("http://designer.wsu.edu/template/css.aspx?key=11f14af0axx081", { "insertBefore": "print_css");

This works fine. I only have two problems with it.

  1. I cannot dynamically choose my print CSS with this method
  2. It does not future-proof the Get utility for future optional attributes

Ultimately, I like Method 2. I think it’s the most flexible, while also the most compliant with current YUI standards. I’ve submitted these patches to Yahoo!, so we’ll see if they go anywhere, and I’m going to look into YUI 3 to see about extending this functionality there.

Fun with Internet Explorer and JavaScript

Last week, Adrian Reber (known in the Fedora community) posted about why he feels Internet Explorer is important. The tone of the post is pretty snooty, but I find myself often in a similar position. I use Firefox most of the time. At work, where I have to use Windows, I have Internet Explorer available, but it’s pretty rare I fire it up. Firefox is simply the better browser.

But IE is the more common browser on the web, by a pretty significant margin. Certainly, Firefox has made significant progress on the marketshare front, but it still holds a definite minority share. Firefox does, however, have some great tools for web development. By using a combination of the Web Developer Toolbar and Firebug, I’m able to put out CSS and JavaScript far faster than I would be able to without them, particularly when it comes to the JavaScript automation. But these tools are only available on Firefox, and they can’t really emulate the issues that are encountered when trying to run with IE.

Yesterday, I published a new backend for the Washington State University Catalog, which is based on the ASP.NET MVC Framework. It’s not 100% bug-free, but it works well, and I’m actively monitoring it to fix the issues that arise. However, before I published it yesterday, I had to work through a series of issues to make the site work correctly in Internet Explorer. Below is a description of several of those JavaScript issues, and their resolutions.

Form Element Detection

The DOM makes handy references for parts of a form available, based on the name attribute of the input elements of the form. For instance, take the following HTML:

    <form name="test_form" id="test_form">

        <input name="text1" type="text" />
        <input name="button" type="submit" />
    </form>

This creates a simple form with two input elements. If you hold a reference to the form (either by getElementById, or document.forms, or whatever), you can reference the sub-elements by their names. For instance:

    var frm = document.getElementById("test_form");
    frm.text1.value = "Fill in the Form";

Will place the text “Fill in the Form” into the text1 element of test_form. There’s only one problem with this: IE doesn’t add such references for dynamically created input elements. On the General Catalog Academic Calendar, I allow users to select a campus that they’re interested in. This is because we have calendar data for campuses which do not have full Catalog data prepared yet, so it’s a reasonable compromise. However, to build that option, I dynamically create the campus select box. In Firefox, when I set the name attribute of the input element and insert the code, it just works. I ran into an issue, however, where the function which performs the JavaScript manipulation of the form was getting called twice. Here is the basic code as I’d originally written it:

// form, catalog, and yts, plus the connect/event shortcuts
// (YAHOO.util.Connect / YAHOO.util.Event), are defined elsewhere in this file.
var OnFormLoad = function() {
    if (form.AC_Select) {
        form.removeChild(form.AC_Select);
    }

    if (catalog.campus === "General" && !form.CampusSelect) {
        var cs = document.createElement('select');
        cs.setAttribute('name', "CampusSelect");
        form.insertBefore(cs, form.YearTermSelector);
        connect.asyncRequest('GET', catalog.base_url + "/AJAX/ValidCalendarCampuses",
            { success: function(o) {
                try {
                    var campuses = YAHOO.lang.JSON.parse(o.responseText);
                } catch (e) {
                    form.removeChild(cs);
                    return;
                }
                var option;
                for (var i = 0; i < campuses.length; ++i) {
                    option = document.createElement('option');
                    option.appendChild(document.createTextNode(campuses[i]));
                    option.setAttribute('value', campuses[i]);
                    cs.appendChild(option);
                }
                event.addListener(cs, "change", OnYearTermSelector_Change);
            },
            failure: function(o) {
                form.removeChild(cs);
            } }, null);
    }
};
event.addListener(yts, "change", OnYearTermSelector_Change);
event.onAvailable(form, OnFormLoad);

This uses the YUI to register the onAvailable event to the form, and call the function, which simply removes the Select button, registers the onChange event for the YearTermSelector box, and creates the Campus select box, before using AJAX to get the contents and fill in the box. Simple enough, right?

Unfortunately, it doesn’t work in IE. Internet Explorer does not add form child elements to the form object when they are created dynamically like this. Not only that, but it doesn’t remove references on the form when the child element is removed from it. This means that form.AC_Select will always evaluate as truthy, because IE never nulls it out, and form.CampusSelect will always be falsy, since IE never initializes it. In order to make this work, I had to create a boolean variable outside the scope of the function, and set it in the function to make sure the work only happened once. Like this:

var OnFormLoadRan = false;
var OnFormLoad = function() {
    if (!OnFormLoadRan) {
        form.removeChild(form.AC_Select);
        if (catalog.campus === "General") {
            // Build Campus Select box
            // removed for brevity.
        }
        OnFormLoadRan = true;
    }
};
event.onAvailable(form, OnFormLoad);

Since the OnFormLoad function is located within another function, this puts the OnFormLoadRan variable in a function-level closure, which means nothing outside the function which contains OnFormLoad can touch it. This is kind of hacky, but it works, and one could argue that the lower level of dereferencing is more performant. It may be, but the code is less clean, and that’s my big problem with it.

Inserting Options into a Select List

Another UI trick I use pretty heavily on a few of my forms is the creation of a “dummy” record in a select box that is set as the default, which allows me to take advantage of the “OnChange” event of those select boxes more effectively. The example page I’ll use for this is the Academic Unit selector. This page will create the “** Select an Academic Unit **” option and append it to the top of the list, setting it as the default option. Now, whenever the OnChange event fires, I can be reasonably sure that the user has selected something valid.
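As a hedged sketch of why that helps (this handler is hypothetical, not the actual Catalog code): with the dummy option sitting at index 0 as the default, the change handler only has to check that the selection has moved past it.

// Hypothetical change handler: the dummy option occupies index 0, so any
// selection past index 0 is a real Academic Unit.
var OnAU_ID_Change = function() {
    if (form.AU_ID.selectedIndex > 0) {
        var unit = form.AU_ID.options[form.AU_ID.selectedIndex].value;
        // safe to go fetch data for 'unit' here
    }
};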

I was initially doing the addition of this special element as follows:

var OnFormLoad = function() {
    if (form.AU_Select) {
        form.removeChild(form.AU_Select);
        form.AU_Select = null;
        var no_select = document.createElement('option');
        no_select.appendChild(document.createTextNode("** Select an Academic Unit **"));
        form.AU_ID.add(no_select, form.AU_ID.firstChild);
        if (unit_info.textContent === '') {
            form.AU_ID.selectedIndex = 0;
        }
    }
};

The add function is the one in question. The W3C standard for the DOM declares that the add function takes the option element to add to the select box, and, optionally, the option element that you wish the new element to precede. Generally speaking, you’ll either omit the second argument, to append to the end, or you’ll use the firstChild reference as I have, to insert at the beginning of the list. Actually, this might be somewhere where IE’s API is a bit more sensible, but it is against the standard and it does require special coding to work around.

In IE there are two methods to fix this, both surrounding that add function call. You can either use add the way IE declares it, where the second argument is the index of the option you want to insert before, as follows:

try {
    form.AU_ID.add(no_select, form.AU_ID.firstChild);
} catch (ex) {
    form.AU_ID.add(no_select, 0);
}

Or, you can use the more general DOM manipulation method, insertBefore:

form.AU_ID.insertBefore(no_select, form.AU_ID.firstChild);

Which you choose depends on which you prefer, and I’m not sure what the best answer is. insertBefore is more direct, but if you’re using insertBefore to insert children, I’d suggest using appendChild to add new elements to the end of the list, for consistency’s sake. Being consistent really is the more important part.

But there was one more bug in the first version of the function that you may or may not have seen.

Default Values for Contents

Yep, when I make that comparison of the unit_info.textContent to the empty string, I am depending on browser specific behavior.

So, to review, the code is this:

if (unit_info.textContent === '') {
    form.AU_ID.selectedIndex = 0;
}

Which works great in Gecko- and WebKit-based browsers. However, IE does not give textContent an empty-string default for empty divs; in fact, IE doesn’t implement textContent at all (it has innerText instead), so on IE, unit_info.textContent is undefined, and it does not strictly equal the empty string like I’m checking. Plus, even if I allow type coercion by not using the triple-equals operator, undefined and the empty string are not equal. Luckily, undefined is falsy, so changing the if statement to the following works fine:

if (!unit_info.textContent) {
    form.AU_ID.selectedIndex = 0;
}
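Alternatively, if you’d rather not lean on textContent at all, a small helper that falls back to IE’s innerText does the same job. This is just a sketch; getText isn’t part of the Catalog code:

// Hypothetical cross-browser helper: prefer textContent, fall back to IE's
// innerText, and treat a missing value as the empty string.
var getText = function(el) {
    return el.textContent !== undefined ? el.textContent : (el.innerText || '');
};

if (getText(unit_info) === '') {
    form.AU_ID.selectedIndex = 0;
}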

JavaScript is a poorly defined language, with a lot of implementation-level details that we need to be aware of. It would be fantastic to have a more standardized platform, and we’re slowly working our way there, but somehow I doubt we’ll ever be truly free of these sorts of niggling implementation details.

Whole Food Adventures: Barley

Barley is one of the top cereal grain crops in the world, in fourth place behind corn, rice, and wheat. However, it is probably one of the most underrated of all the cereal grains, particularly in the US, where the vast majority of it is used as feed for livestock, and most of what isn’t used for animal feed is used in the production of beer. Sweet, sweet beer.

Don’t get me wrong, beer is a fantastic thing, and so is livestock, or at least the meat that comes from livestock, but barley is a very tasty grain that simply doesn’t get the recognition it deserves. Barley comes in a few different varieties, probably the most popular being pearl barley, the kind you’ve most likely seen in your beef and barley soup, but I’m here to advocate that you resist the lure of pearl barley, which gets its white color from having the nutrient-rich bran coat removed, and stick with the healthier dehulled barley.

So, what is barley good for, besides beer and animal feed? Well, barley can be used just about anywhere that rice can, and with similar results, though a different flavor. I wouldn’t try barley sushi, but aside from that, I’d feel free to experiment. There are some very common preparations, such as the aforementioned beef barley soup, and there is my personal favorite, barley salad, whose recipe I’ll share shortly.

Beef barley soup is a classic soup that is very easy to make. Brown some beef in a pot (cheap meat is good for this), then add water, onions, celery, carrots, barley, and whatever else you want, and let it cook. Easy and fairly fast, but worth it, particularly on a late fall day.

But my favorite barley recipe is a simple barley salad, that I typically construct as follows.

Serves 2 People

1 c Barley

1/4 large onion, julienned

1/2 fennel bulb, chopped

Several slices of bacon

While the barley simmers, cut the bacon into smaller pieces and fry it up. Reserve some of the bacon drippings. Turn the heat down to low, add the onion and a pinch of salt to the drippings, and let it sweat.

Once the barley is done cooking, rinse it in cool water to bring it down to a cool temperature, then put it in a bowl. Add the bacon bits, fennel, and onion to the barley and toss the salad. Serve with a lemon and oil dressing.

It’s delicious, and fast, and simple. And it may just help you discover how great a grain this is.

On Politics and Technology

Seven days ago, John McCain’s campaign posted McCain’s technology platform to his website. As expected, reading through it, it sounds pretty good. A lot of tax breaks for R&D, and technology investment. He wants to “Preserve Consumer Freedoms”. He commits to pursuing “High-Speed Internet Access For All Americans”. He commits to patent reform to reduce the costs of challenging bullshit patents.

Unfortunately, there are quite a few places where this policy falls short. For one, the policy is clear that McCain does not support Net Neutrality, the principle that service providers should not be able to choose what content and applications are allowed across their networks. Regrettably, the actions of many service providers in the nation have proven that Network Neutrality is vitally important. This has turned into a battle between consumers and providers, and I am immensely disappointed that McCain has thrown his hand in with the providers, or more accurately the lobbyists for the providers. This opposition to Network Neutrality fits in well with McCain’s stated goal to “Protect The Creative Industries From Piracy”. However, while unlicensed copying (piracy is far too strong a term) is a problem, lawmakers’ attempts to regulate the behavior through law have been heavy-handed, ineffective, and immensely negative for consumers. I’m looking at you, DMCA.

I would go on, but frankly, Stanford Law Professor Lawrence Lessig does a better job of addressing this issue than I could ever hope to. In the following video, he provides his analysis of McCain’s technology platform, and why he thinks it falls short.

I don’t agree with everything Mr. Lessig has to say. For one thing, I think he ascribes far too much to President Bush in regard to the technological failures of the last eight years. But there is the fact that the Bush Administration has expressed very little interest in technology, particularly after the ‘Bubble’ burst back in 2000, which was more a function of Clinton-era apathy towards the technology market than anything else. However, whatever the cause, this country has slipped, and that is largely because of the increased restrictions placed on users by providers. To date, the Government has done nothing to stop it, and it seems likely that they are, in fact, going to embrace this behavior.

This is one place where I believe that Obama has the upper hand. His technology platform recognizes the need to protect the openness of the Internet. His proposed policy recognizes the need for truly Open Government, an idea which hasn’t been particularly well codified, but ultimately comes down to making the government more transparent, and therefore accountable, to the people. After the last eight years, I suspect many people are interested in that. There is the argument that too much openness can impede the functions of government, but ultimately, so does too much secrecy, as the secrecy impedes the ability of voters to make the best educated decisions about who will lead the nation.

Is this issue of technology enough for me to vote for Barack Obama? I’m not sure. I agree with Lessig that this issue is vitally important to the continued success of this country. However, I’m not convinced that Obama is what he says he is: a maverick, and a vector for change in Washington. According to GovTrack.us’ analysis of Obama’s time in the Senate, he is merely a rank-and-file Democrat. His voting for Telecom Immunity, which he’d sworn to oppose, shows that he is not immune to giving in to special interests. And moving away from technology, I agree less and less with Obama’s policies.

I want change in Washington, DC. I really, really do. But it’s simply not going to happen with either of these candidates. McCain is an honorable man who has surrounded himself with slime and morons, and he makes the mistake of listening to them. Obama is like most Democrats, a sweet exterior which hides something different underneath.

Robert Steele gave a talk at The Last HOPE about what he calls the Earth Intelligence Network (http://www.oss.net/EIN). The audio is available for free download, and I’d suggest listening. In the talk, Steele is highly critical of both candidates, focusing on how this is not going to be an election for change, and outlining work that’s beginning to truly bring about change. It’s needed. We, the People, just need to do it.

Sudden Movement on the Android Front

It hasn’t been long since my criticism, and plenty of other people’s, of Google’s last six months of silence on the subject of Android. Google claimed that they didn’t want to take developers away from moving forward in order to prepare for release, but as I said last time, it was a concern because they were being more open with the few people who’d won the Android Developer Challenge I.

However, with the recent FCC approval of HTC’s Dream, and the impending release that this foretells, Google has finally started to move again on Android. Android 0.9 Beta was released yesterday, and having only had a brief chance to play around with the new emulator, it’s fairly exciting. The new UI is clean, and seems pretty intuitive. Even cooler, they released the source to the old dashboard UI, which goes a long way to show that truly, almost every piece of infrastructure on the phone can be easily replaced if you don’t like it. Awesome.

However, it’s not all chocolate and roses. Some analysts are claiming the HTC Dream will come pre-installed with Google’s advertising software. Currently, I have to view these rumors as unsubstantiated, as neither Google nor any Open Handset Alliance member has said anything to that effect, and the SDK doesn’t contain anything along those lines. However, Google’s CEO has made it very, very clear that he feels that mobile advertising is the future of the company. And no doubt he’s right. He thinks advertising could eventually pay completely for the mobile phone.

But at what cost? The only reason this would work is that mobile phones know basically everything about you: where you go, how long you spend there, who you talk with (and again, for how long), where you go online, what you say, what you search for. More than enough information for the giants of data mining to target you pretty directly. The privacy invasion implicit in this sort of world is really quite disconcerting, however.

Still, due to the current openness of Android, and the future openness once the source is fully released, any undesirable parts of the platform can be excised. Admittedly, I dislike that the default may well become insane tracking, but at least a way out will be available. Mobile phones are expensive, far more expensive in the US than they need to be. As users, we’ve allowed a service-provider culture that thrives on double billing and price gouging to develop. That’s going to take a lot of work to rectify at this point, but I don’t think giving up privacy to advertisers is the way to do it.

Whole Food Adventures: Canning

Continuing our series on improving self-reliance and depending less on industrial foodstuffs, both for health and economic reasons, I’d like to talk a bit about canning. Catherine and I (well, mostly Catherine) spent the majority of Sunday this weekend processing the 25 pounds of fresh peaches we’d bought at the Moscow Farmer’s Market, which incidentally cost only $17.50 for the case. Farmer’s Markets are awesome.

The problem, of course, with buying 25 pounds of peaches is that you then have 25 pounds of peaches. Even if you have a large family, this is a ridiculous amount of fruit, and if you just try to eat it all, your entire family is likely to be sick of peaches by the time you’re all out. The majority of the canning process applies to any fruit, of course, but we had peaches, so that is what I’m planning to address today.

One thing that is fairly consistent with all fruit is the need to skin it; luckily peaches, and other soft-fleshed fruits, make this pretty easy. To skin a peach, it’s best to employ a blanch. A blanch is a simple process of putting a fruit or vegetable in boiling water for a short period of time (how long depends on the fruit or vegetable), and then removing it and putting it in an ice bath. With peaches, this interval is between 20 and 30 seconds. Once in the ice bath, it’s easy to pluck the peach from the icy water and simply wipe the skin off with a paper towel. Perfectly skinned peaches in no time at all. This is best done with two people, one handling the boiling phase, and the other skinning the peaches.

Once you’ve got your skinned peaches, the time has come to process them. Generally this begins with slicing the peaches, but from there it really depends on what you’re trying to do. For the Peach Butter, we put them in the food processor and processed them until we had mostly a paste. You don’t want it completely smooth, only mostly. Then, add the peaches to a pan, with some sugar and boil for a little while. Preserves and Jam are both similar, though they involve chunkier chops, and the addition of pectin to the mix as a thickening agent.

I’d give recipes, but to be honest, I’m not 100% sure we’ve figured it out yet. Catherine used the recipes from the Ball Blue Book of Preserving, but the bit of jam that I tasted (which was the dregs on the bottom of the pan) was insanely sweet, having had 7.5 cups of sugar added to about 4 pounds of peaches. This may well be related to the freshness of the peaches we were using. Alton Brown explains in the Season 10 episode of Good Eats, “Peachy Keen”, that once a peach has been plucked from the tree, its sugar content is locked, but it will continue to soften. This suggests that peaches sold in your average large supermarket are picked early, so that they’re at a good level of firmness by the time they reach the store, though they’ll never be as sweet as more local peaches. Of course, I could be off-base, and the jam may be the perfect level of sweetness, but I suspect that fresh peaches from local growers should probably have less sugar added than peaches bought at the supermarket. They’re just naturally sweeter because the time to market is so much shorter.

Anyway, on to the actual canning process. First, you need a collection of jars, and lids with rubber locking seals. These are inexpensively found at most hardware stores, but expensively found at most supermarkets. Buy smart. You’ll also want a canning rack for each size of jar you intend to can, and a canning pot, both of which are fairly inexpensive. Canning is a simple matter of bringing the jars to temperature, since adding hot liquid to cold glass is begging for a huge mess. Plus, boiling the glass will sanitize it. Place the hot jars in the canning rack, which rests conveniently on the side of your canning pot, which should have near-boiling water in it. Fill the jars, then screw the lids on, and drop the rack. Heat processing depends on the size of the jars, and your altitude, but is usually in the neighborhood of 15 minutes. Use your jar lifter to pull the jars out of the water, and set them on the counter to cool. As they cool, you should hear the sound of the rubber seals sucking shut. This is a good sign.

Needless to say, I’ve mentioned at least three pots of boiling liquid in the above description. This is a hot process, and given that yesterday was one of the hottest days of the summer, it was kind of unpleasant. But then, we were canning for close to ten hours, so as long as you don’t marathon it, and spread the work out over a few days, it should be bearable. Unfortunately, canning is a summer/early fall activity, since that is when the food you want to preserve is available.

After twelve hours or so, the jars should be cool enough to store. One thing to note is that you should remove the locking ring, as it can hide problems, and make sure that none of the lids open easily. If any do, you can still re-heat process them at this stage, but later on, an unsealed jar is likely a sign of botulism, and you should stay away. Unless you like paralytic death, I guess.

So please, look into canning. With practice, it should go fairly quickly, and be reasonably easy. It takes time, but most of that time is spent waiting for water to boil, so other activities can be pursued while you wait. Plus, you know exactly what’s in your food, and it can often be cheaper. It’s a matter of priorities, certainly, but it’s worth considering. In the near future, I hope to get some old recipes for canning, both from Catherine’s Great-Grandmother, and perhaps my own, so that I can provide better recipes. In the meantime, the Ball Blue Book is pretty good, and again, not terribly expensive.

How Not To Run a User-Feedback Session

Early this week we pushed out an initial Alpha version of our current project at work (an online system to assist in Schedule Proofing), and yesterday we met with the people who will be the principal users of the system to get feedback. Unfortunately, the meeting wasn’t nearly as productive as we’d hoped.

In many organizations, developers aren’t allowed to conduct these meetings. We don’t have that luxury (and neither do most companies), but the principle is really solid. Developers want to educate users on how to use their software, but when trying to get user feedback, education is exactly what you don’t want to provide.

I wasn’t the meeting organizer, so I left the running of the meeting to the other developer on the project, who incidentally has also done most of the UI work. The role I took was to sit back and take notes, so that we would know what we needed to improve. The meeting organizer began the meeting innocuously enough, asking for whatever feedback the users had. The first item was a feature that was already implemented, and rather than waiting for the users to finish providing their feedback, the meeting lead immediately jumped into a miniature training session on the UI. He walked the two users we were talking with through the entire UI, both elements that were already implemented and elements that we were planning.

Because of this, the first half of the feedback meeting turned into a training meeting. I did nothing; I’d already decided that the meeting wasn’t my responsibility to run, and continued to take notes on what was being said. However, at this point, the feedback we were getting had changed in an incredibly unhelpful manner. The feedback we were getting was now about things the users really liked the sound of, and perhaps one or two issues which were generally more a matter of us misunderstanding business-layer needs, rather than the UI feedback we were hoping for.

User feedback sessions are meant for exactly that: getting feedback from the users. If the user doesn’t do 90% of the talking, you’re likely approaching the meeting wrong. Because of the impromptu training session our meeting turned into, we got basically no feedback on the usability of the application; once we had taught the users how to use the application, things made sense. Problems they had had initially were no longer problems. We learned basically nothing about what we needed to improve.

Due to the specialized nature of the application, and our easy access to the people who will be using it, we are in a nice position in that we can plan to train the users. However, there is always turnover, and the more discoverable the application is, the less ongoing training will be required. But to figure out where the users struggle, we have to let them talk.

Developers are terrible at this. Developers want to help users they see struggling, but that struggle is exactly what the developers need. Let your users struggle, then find out why they struggled, and what you can do to prevent it. Resist the urge to help the user. You may be helping them now, but you really aren’t helping yourself in the long run.

Microsoft .NET v3.5 SP1 Firefox Extension?

Yesterday, I installed the SP1 of Visual Studio 2008, which included the .NET v3.5 SP1 as well. I was a bit surprised when the installer requested I close Firefox, but the Silverlight API installer wanted the same thing, so I just complied and let it go to work. Needless to say, when I finally got done installing the SP1 (about two hours later), I was awfully surprised to be greeted with this:

[Image: MS-Framework.png]

I was surprised, since I didn’t recall being asked to install any Firefox extensions, and I was pretty annoyed about it, so I just decided to tweet about it, telling the rest of Twitter, “VS 2008 SP1 installs a Firefox Extension without telling me about it. NOT COOL”. Surprisingly to me, about 4 hours later, I started to hear from the firefox_answers people on Twitter, who hadn’t heard of this yet, and wanted more information. Once I told them about the extension, they were a bit annoyed, tweeting back “@foxxtrot Ugg. Not asking is really lame. I’ll forward that on to the Firefox add-ons team and let them follow up with Microsoft.”

So far, the primary annoyance has been that Microsoft never asked before installing this extension. Since I was a bit fuzzy about what it was for, I decided to go ahead and look it up as well. The first problem I ran into was that the extension wasn’t installed in either my profile’s extension folder or the Firefox install directory’s extension folder. Frankly, I didn’t even know that was possible, so I was a bit confused. Luckily, the extensions.cache file in my profile pointed me in the right direction. The extension had been hidden away at “C:\Windows\Microsoft.NET\Framework\v3.5\Windows Presentation Foundation\DotNetAssistantExtension”. The hunt, it was on.

But first, how did this install there in the first place? And how did it end up in my profile? Well, it turns out that Mozilla implemented a registry hack to make it easier for third-party developers to do exactly this. If you open up regedit and go to “HKEY_LOCAL_MACHINE\SOFTWARE\Mozilla\Firefox\Extensions”, you’ll see that programs can drop values in this registry key that will cause extensions to be automatically installed in every instance of Firefox on the system. Something similar exists for Thunderbird as well. But hey, at least the feature is documented. I’m not sure why this is allowed, but it is, and I think it’s best people know all the ways that extensions can be added to their browsers.
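For the curious, the mechanism looks roughly like the following from an installer’s point of view. This is a WSH JScript sketch with a made-up extension ID and path, not what Microsoft’s installer actually runs:

// Sketch only: the value name is the extension's ID, and the value data is
// the folder (or XPI) containing the extension; Firefox picks it up on the
// next start and installs it for every profile on the machine.
var shell = new ActiveXObject("WScript.Shell");
shell.RegWrite(
    "HKLM\\SOFTWARE\\Mozilla\\Firefox\\Extensions\\sample@example.com",
    "C:\\Program Files\\SampleApp\\FirefoxExtension",
    "REG_SZ");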

Moving on to what the extension actually does. It has two parts. First, it modifies the User-Agent of the browser to add .NET Framework information to it. With the “Report all installed versions of the .NET Framework to web servers” option not checked, my User-Agent becomes this:

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 \ 
Firefox/3.0.1 (.NET CLR 3.5.30729)

If I do check that box, I end up with this:

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.1) Gecko/2008070208 \
Firefox/3.0.1 (.NET CLR 2.0.50727; .NET CLR 3.0.30618; .NET CLR 3.5.21022; .NET CLR 3.5.30729)

An amazing amount of information to be dumping across the wire to every single website I visit. This isn’t even fucking Silverlight related, and that’s the only reason I can think of why a web server would ever need to know what versions of .NET I have installed. Ever. These values come out of the registry at “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform”.
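And it’s not just servers that can see this; any page’s JavaScript can read the same thing straight out of navigator.userAgent. A trivial sketch:

// Pull the advertised CLR versions back out of the User-Agent string.
var clrVersions = navigator.userAgent.match(/\.NET CLR [\d.]+/g) || [];
// e.g. [".NET CLR 2.0.50727", ".NET CLR 3.0.30618", ".NET CLR 3.5.30729"]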

I can only assume it has something to do with the ClickOnce stuff that Microsoft is pushing with the Windows Presentation Foundation. So, what is ClickOnce? In a nutshell, Microsoft is trying to change the way that software is installed on Windows. Currently it relies on ‘setup.exe’ files, or .msi files. While those will never really go away (particularly .msi, which is great in the Enterprise), Microsoft has apparently decided that the better way to go is to tap into all that web hosting that everyone has access to these days.

Basically, a ClickOnce file is a special XML file with the ‘application’ extension, and the MIME type application/x-ms-application. Oh, and the Firefox extension will ignore the MIME type if the file has the ‘application’ extension. Wouldn’t want to make people configure their web servers correctly, now would we? The application file contains basic information about the application, including a cryptographic signature to ensure…something. The keys aren’t required to be registered with anyone, so the key really just proves the app was built with ‘approved’ tools. .NET does this a lot, actually. All assemblies must be signed, but that signature just means someone had the ability to sign something. It provides very little additional security. I suppose you could probably opt to blacklist certain keys, but keys are easy to generate, so really it’s kind of a waste of time. My guess is that it’s the first step to cryptographic software registration, but that might just be the foil hat I’m wearing.

Once the file is downloaded, it’s immediately run through the PresentationHost application, which will likely download the necessary assemblies and immediately start to run the application. It’s pretty simple.

But what are the security implications? Absolutely any time you download an application, it’s a potential security risk. Only download software from sites and developers you can trust. Ultimately, the idea of ClickOnce doesn’t bother me too much. It’s a more convenient way to download software, and frankly, Java has been doing this for years with Java Web Start. What does bother me is the fact that Microsoft will send anything with a .application extension to PresentationHost. This makes it pretty easy to build bad .application files and upload them, though that was never impossible, as the MIME type could have been changed anyway.

So, ClickOnce is a security risk, but it’s not much more of a security risk than downloading anything off the Internet. If it’s riskier at all, it’s because the easier something is, the less people think about it. Though you can make people think too hard, too; making something easier should be balanced against security concerns. Ultimately, my biggest problem was that I didn’t sign up for a new Firefox extension to be installed. I was installing .NET. Plus, I don’t particularly want ClickOnce. For one thing, the only WPF app I am at all aware of is BabySmash, Scott Hanselman’s “Learning WPF Project”. For another, I would far prefer taking a few more steps to be sure of what I’m getting and from where, before I install it.

Security Conference Wrap-Up

Summer is here, and with it come a variety of hacker conferences. We’ve got Defcon, Black Hat, and my favorite, The Last HOPE (Hackers on Planet Earth, run by 2600).

Defcon is the longest running of the conferences, having been in Vegas since 1993, and having long been an interesting mix of the Hacker community and Law Enforcement. It’s three days of intense learning, hacking contests, games, all sorts of hacking related stuff, and that’s just the advertised events. I’ve heard a lot of stories of people going to Def Con and seeing things like cell-phone scanning going on behind closed doors. And it’s only $120 for the conference. Cheap as shit. I’m going to have to try to go next year.

Black Hat bills itself as “The World’s Premier Technical Security Conference”, and I’ll be honest, there are some pretty intense sessions, like the FasTrak one I discussed last week. My big problem with Black Hat is that it’s gotten to be too damn commercial, or maybe it always sort of was. It costs a few thousand to go, and that’s before Vegas hotel rates. Plus, they actually kicked out reporters for allegedly hacking. At a hacking conference. This would never happen at a real hackers’ conference. Might as well go to RSA, if you’re looking for such a watered-down hacker environment.

Which brings me to HOPE. I talked about The Last HOPE a while back, expressing my dismay at possibly missing the last HOPE conference ever. Luckily, the owners of the Hotel Pennsylvania have been convinced not to raze the hotel, and The Next HOPE has been scheduled for 2010. My only complaint is that they didn’t make the obvious Star Wars joke.

Even better, though, is that 2600 has made the audio of all the talks from The Last HOPE available for free download. I’m working my way through them, all 2.4 GiB (my ISP is going to be so pissed). But you can easily just pick and choose. When the video becomes available, I’ll have to buy some of my favorite sessions.

This is why I love HOPE and Def Con. They’re more open than anything else, they exist to share knowledge, and they try to do it at as low a cost as possible. They’re about teaching and they’re about knowledge. I encourage everyone to download the talks from The Last HOPE. You’re bound to learn something, and that’s ultimately the whole point.

Windows Communication Foundation on IIS Deployment

At work we’ve been putting together a new schedule proofing experience for our campus (and possibly the rest of the University system) which would allow the schedule proofers to do all their work via a web-based interface. As we’re a primarily Microsoft-based house, the entire system is being built upon newer Microsoft platforms, for better or worse (usually the latter). We’ve been building the system using Silverlight 2.0 for the front end, Windows Workflow for the middle-layer, and Windows Communication Foundation for the communication between the two.

WCF is an interesting technology, because it makes producing web-based or application-based services pretty simple, and the framework modifies its behavior based on how you deploy it. Need XML-based output? It’ll do that. Binary output more your style? Feel free. If you want it, WCF can do JSON as well. The technology is handy, because it makes it so that you can focus on the implementation details of your web service, rather than worrying about the intricacies of SOAP or JSON data exchange. The technology is compelling enough that the Mono project founded Olive to bring WCF to Mono.

The technology has a lot of really cool potential, but it suffers from some inherent design flaws that are hideously unfortunate. First, it seems frighteningly difficult to put more than one WCF web service in an IIS instance. This has had all sorts of implications for people doing ASP.NET in shared web host environments, requiring unnecessary complexity to work around a bad behavior. While our problem is a little different, and I’ll get into it in a moment, I’m a bit confused as to how the solution linked above, which is to define a custom ServiceHostFactory object, even works, since the Factory attribute doesn’t seem valid, at least according to my VS 2008 instance. I’m not going to pursue that direction, however, as our issue stems from a slightly different, but I would argue far more common, position.

We currently have three web services designed. All three have their own SVC files in our web project, and all three are properly defined WCF services. They all work properly on the local test server, and they were each created as their own service because they each deal with different sets of data. Two are used to query certain data systems, and in the interests of proper code separation, as well as the potential for reuse, we wanted to keep them separate. Even the old ASP.NET Web Services would have allowed this. Not WCF, though.

[Image: WCF_Error.png]

That’s right, try to host more than one WCF service in a single IIS application, and an exception is thrown. Great work, Microsoft. I probably wouldn’t be so annoyed if Microsoft wasn’t trying to defend their position on this.

Wenlong Dong posted in the thread linked above: “Unfortunately the behavior is by design. WCF does not support multiple IIS bindings for the same protocol (here it is HTTP) for the same web site. This is for simplicity, especially it did not seem to be an important scenario. Is this a very important scenario for you? Can’t you host different services in different web sites (with different ports of course)? If this does block you, we may think about revisiting this issue again.”

SIMPLICITY?!? DIFFERENT WEB SITES?!? WITH DIFFERENT PORTS?!?

You have got to be fucking kidding me.

Damn it, Microsoft, as far as I can tell, if I’m hosting in IIS, I have to reference the service by its full URL anyway (typically ending in .svc), so the service already has a uniquely identifying endpoint. That’s really all you should care about: that some sort of a URL can identify the location of the service. If I want to host them all on the same port, but with different URLs, why on earth should that matter to your framework? The URL is already telling you which code to run; how can you possibly have any confusion over this?

Luckily, Dan Meineck, a .NET Web Developer from the UK, has come up with a solution. Is his workaround complicated? Yes. Is it unreasonable? Yes. Does it work? Apparently.

The solution boils down to this:

  1. Using .NET’s partial classes, put all your web services in one class, but keep each discrete bit in a different file; each file should only implement the WCF interface that it defines.
  2. Modify your Web.Config (or App.Config) file, and in the system.serviceModel section define endpoint blocks for each of your services; you can specify the specific contract interface on each endpoint, so that only the methods you want are available on that endpoint.

Ultimately, this provides the exact behavior we want, but it’s really not very clean, and forcing users into this particular model is confusing and pointless. I understand Microsoft feels that the interface specifies the behavior, and is therefore the important part of the definition, but this decision will make it far more difficult for me to integrate a web service from one project into a second project, and frankly if this was simpler for anyone (even Microsoft from an implementation standpoint) it suggests to me that there are deeper design issues in the way that WCF works.

I’m refactoring the code today, to match Mr. Meineck’s suggestions, but it just seems so unnecessary, and pointless. Please fix this, Microsoft.

FasTrak Easily Ruined

The Blackhat conference was running this week, and a large number of interesting security issues were raised (even if Apple wouldn’t let their devs talk), but one that I found interesting was the discussion of the FasTrak system. FasTrak is an automated toll-paying system used in California’s large cities that have toll booths on their major motorways. Researcher Nate Lawson of Root Labs discovered that FasTrak, which I suspect is very similar to New York City’s E-ZPass system, uses no authentication, and simply replies with its RFID signal to anyone who scans it.

Anyone who’s read Cory Doctorow’s Little Brother will find this familiar. Especially when matched with the next step. Unauthenticated over-the-air upgrading. That’s right, you can change the value of the chip without actually handling the chip. Awesome.

So, what’s this mean? Well, the unauthenticated read allows anyone with a reasonably powerful RFID reader to track anyone with a FasTrak in their car from any location. In Little Brother, the Department of Homeland Security (DHS) uses this system to track people all over the streets of San Francisco. And as bad as it would be for the Government to do something that broad, this system allows anyone who wants to track individual vehicles easily throughout California.

And the unauthenticated update? It makes it trivial to travel for free: you can easily skim a valid FasTrak code, re-flash your own FasTrak, and travel on someone else’s dime. It also lets people with an interest in masking their movements change their FasTrak codes frequently, so that they cannot be tracked via FasTrak at all. Really want to create mayhem? Do what Marcus and the other Little Brothers did, and start randomly re-flashing strangers’ FasTraks.

RFID, as deployed here, is an inherently trusting protocol: a tag gladly responds to anyone who asks for its code, and by default it has no way to authenticate a request, even for writes. Over-the-air writes are a dangerous idea in the first place. If someone really needs to recode their pass, they should have no problem taking it somewhere to be safely rewritten over a wire, preferably with a cryptographic check that the new code was actually authorized. Over-the-air reads, a fantastically useful thing, should at least require a strong challenge. That is much harder, though it could be implemented with something like a simple counter and encryption, so that the signal over the air is encrypted and can only be decrypted by software holding the other half of the key. It’s harder, and it’s more expensive, but it’s far, far safer.
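
To make the “strong challenge” idea concrete, here’s a rough, purely illustrative sketch of a challenge-response read. This is not how FasTrak or any real transponder works, the key handling is hand-waved, all of the names (ChallengeResponseSketch, TagRespond, ReaderVerify, SharedKey) are made up, and I’ve used a keyed hash (HMAC) rather than the counter-plus-encryption scheme mentioned above. The point is just the shape: the reader sends a random challenge, the tag answers with an HMAC of that challenge under a shared secret, and so there is never a static code to skim or replay.

    using System;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;

    // Illustrative only: a real tag would do this in hardware, with a
    // per-tag key provisioned at manufacture, not a hard-coded string.
    class ChallengeResponseSketch
    {
        // Secret shared between the tag and the tolling authority's readers.
        static readonly byte[] SharedKey = Encoding.ASCII.GetBytes("demo-key-not-a-real-one");

        // "Tag" side: never broadcasts a static ID; it only answers challenges.
        static byte[] TagRespond(byte[] challenge)
        {
            using (var hmac = new HMACSHA256(SharedKey))
                return hmac.ComputeHash(challenge);
        }

        // "Reader" side: issues a fresh random challenge and checks the answer,
        // so a recorded response can't simply be replayed later.
        static bool ReaderVerify()
        {
            var challenge = new byte[16];
            new RNGCryptoServiceProvider().GetBytes(challenge);

            byte[] response = TagRespond(challenge);

            using (var hmac = new HMACSHA256(SharedKey))
                return hmac.ComputeHash(challenge).SequenceEqual(response);
        }

        static void Main()
        {
            Console.WriteLine(ReaderVerify() ? "tag accepted" : "tag rejected");
        }
    }

A real tag would also need tamper resistance and some way to rotate keys, which is a big part of why this approach is harder and more expensive than what these vendors shipped.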

In addition to FasTrak falling apart, the Mifare cards made by NXP Semiconductors, and used for London’s transit system among many others, have been found to have similar flaws. Bruce Schneier already has a fantastic write-up on this on his blog, particularly on NXP’s attempt to suppress the researchers who uncovered the flaws.

Security is hard, really hard. It constantly needs to be fixed and updated, but there are certain things that are so obviously wrong, like over-the-air RFID updates, that I can’t believe people base entire businesses on them. Still, consumers have a right to know, and researchers have a right to research. Plus, by the time the researchers have figured a flaw out and published it, there’s a good chance someone else has already figured it out too, and has been exploiting it for their own gain.

Linux Hater

I’ll admit, I’m occasionally a bit of a Linux Apologist. I have been known to downplay faults in Free Software, and just deal with them most of the time. I haven’t paid for any Microsoft software in a decade, and though I’ve often taken the opportunity to get free (legitimate) licenses for Microsoft software, I’ve always been fine going without.

I think I’m better than most, in that I at least recognize that most users don’t want to put up with some of the problems that I encounter, and I don’t begrudge anyone their choice to use Windows. It’s not the choice I would make, but I’m not the one making it. I’m the same way with iPods. I won’t buy one, since they don’t support the formats I want to use, and much of the platform is built on DRM (this applies to the iPhone as well), but just because the technology doesn’t fit my requirements doesn’t mean it doesn’t work for the vast majority of the population. I don’t like it, but there it is.

But like I said, Linux has problems. My system has some bizarre issues that I lack the time, ability, or inclination to fix. My wife’s laptop, where she’s been happy with the Ubuntu installation, has other issues. Ubuntu is great in that it mostly just works, but sometimes “mostly” is pretty annoying. Like Jeffrey Stedfast’s problems with PulseAudio. To date, I’ve mostly just dealt with it, submitting patches where my time and inclination allowed, but much of the philosophy around Free Software starts to run into trouble once the software becomes a commodity.

This is where the Linux Hater’s Blog comes into play. On the site, some anonymous blogger rants, raves, and curses his way through a variety of major problems in both the Linux and Free Software communities. And I’d absolutely suggest reading it. I, and others, have taken to viewing the blog as a pile of bug reports: problems that need attention before Linux will ever see mainstream usage. My wife manages because she has me to help her. My parents wouldn’t, because they live too far away for me to offer significant aid.

The Linux Hater, who posts simply as “me”, is clearly someone who is passionate about computing, and yes, even Linux. If he didn’t want Linux to succeed, I don’t see why he’d bother with such fervent bile. And I firmly believe that with the right support, Linux can be the premier desktop Unix. Or at least, one of the Linux distributions can. We’re already starting to see Ubuntu falling into that place. Amazon’s MP3 music store offers its downloader client for Linux on Ubuntu, Debian (which is very similar to Ubuntu), Fedora, and OpenSUSE, because these four distributions offer a stable environment in which to operate.

The Open Source community is made up of all types of users, and we all have different views and priorities. But we all want to see the platform succeed. The technology is cool, but the rest of the bits need to be put together. It’s that other stuff that the Linux Hater focuses on. It’s not enough to be cool, you’ve got to work, and you’ve got to be responsive to problems. That is where Linux has traditionally failed, and that is where people like Linux Hater need to call people out.

I’m going to work much harder myself to try to fix these failures, and I’m hoping to land a job soon that will give me more time to do so. Linux could be the premier Unix environment. It isn’t yet, but it could be, and that is what we need to work toward.

Whole Food Adventures: Bread

Bread is delicious, and it really can be pretty good for you. The problem is that most breads sold in the modern world are chock full of preservatives, high-fructose corn syrup, monosodium glutamate, you name it. Just check the loaf of bread you’re eating: odds are it contains any number of things that will leave you scratching your head, wondering why they’re in your bread. Often, the bread you can buy directly from a grocery store bakery is going to be better, but typically the only guarantee is a reduced amount of preservatives.

Catherine and I did some price checking, and we’ve found that a loaf of bread from the Moscow Food Co-op is just about the same price as the healthiest bread we can find at the grocery store, plus as Co-op members, we get every 11th loaf of bread free. It’s not much, but it’s nice.

However, you can’t really talk about bread without getting into the topic of grains. I’ve discussed whole grains before, and I can’t think of anyone who believes white bread is actually good for them, at least compared to wheat bread. Even so, many people argue that lacto-fermentation is the only way to really get all the nutrition a grain contains. The suggestion, then, is to use buttermilk, yoghurt, or some other fermented milk product as the base liquid in your bread, and to let the dough sit for at least 12 hours, preferably 24.

As part of this, I’ve made a batch of yoghurt dough using our food processor and its kneading blade (it’s plastic, and shorter). The recipe is simple:

    1/2 lb butter, room temperature
    1 cup yoghurt
    1 tbsp salt
    3 1/2 cups whole wheat flour or spelt

Begin by creaming together the butter and yoghurt in your mixing bowl (a food processor, stand mixer, or hand mixer all work), then add in the salt and the flour. I ended up having to add a bit more yoghurt to get a nice ball to form, but after a few minutes I had a nice ball walking around the inside of my food processor, which told me I was done. I moved it out to another bowl and put a cloth over it.

What’s this dough good for? Well, it’s supposed to make a good tart crust, possibly even for a pie, though I’m not sure it would be the best pie ever made. Or a pizza crust. Really, it should work well in any thin, partially bread-like application. Just use a bit of white flour when you’re rolling it out to keep it from sticking. Note that this is an unleavened dough, so it won’t rise. The book we’ve been working from since we started this experiment has some pretty nasty things to say about yeast-risen breads, and to be honest I’m not sure I agree with her sentiments. But I’ll be doing more research on the issue and coming back with a post on risen breads, since unrisen breads just don’t work as well for sandwiches.

In the meantime, think a bit more about your breads, read the label, and try to buy from the bakery. Not only will it almost always be healthier, it probably tastes better too. You can go a long way in food by spending just a little more for quality stuff.

Little Brothers Take Action

God Bless America. Those of us lucky enough to have been raised in the United States often forget just how damn lucky we are. Even with the ever-increasing mandates of the Department of Homeland Security, such as their recent claim that border agents can search, confiscate, and retain electronic devices indefinitely, even from American citizens, we still have it better than a lot of other countries. Even the Supreme Court has decided that people detained by the government, for any reason, do not lose their rights.

London is covered in cameras, which have done almost nothing to help solve crimes, yet the UK government intends to expand the program. In one of the videos below, the police officer in question mentions that in most of the world, most police “interviews” start physically. Our government, particularly over the last eight years (though the trend goes back much further), has worked hard to further regulate our behavior. But again, at least I’m an American, and don’t live somewhere else in the world.

Luckily, people are fighting back. In Cory Doctorow’s latest novel, Little Brother, a group of people in San Francisco begin recording the government’s bad behavior and posting it on the Internet. In the book, these people call themselves Little Brothers. I love that name. And with the proliferation of cell phones with cameras (including video), it is easier than ever for us to watch the watchers.

Partially, this comes down to knowing your rights, and respecting those rights. The Fifth Amendment to the Constitution of the United States of America provides us the right to avoid bearing witness against ourselves. The Fifth Amendment was not designed to protect the guilty; it was designed to protect the innocent, though it has developed a bad reputation over the years. But don’t take my word for it. Professor James Duane of Regent University School of Law in Virginia Beach, Virginia, makes the case far better than I can possibly hope to. And he’s got the Supreme Court on his side.

Not convinced by Professor Duane? Officer George Bruch, of Virginia Beach, agrees with everything Professor Duane says.

It’s not that the cops are trying to screw people, but trying to navigate the modern legal system is analogous to trying to walk across a minefield, while being shot at, while wearing a blindfold. Everyone has broken the law at some point. Usually the laws involved are minor, and usually people don’t even break them knowingly.

The system needs correction, but the only way it’s going to be fixed, short of a full-scale revolution, is if we, the people, know our rights and expect them to be honored. These Little Brothers have helped keep certain abuses of power in check, and some of the abusers have been punished. We can keep the government accountable only by making its abuses known. We must be proactive if we want the system to change. We must remain cognizant of our rights. We must watch the watchers.