The last few weeks have been very long, but still very, very good. Almost exactly two months ago, I found myself with the opportunity to leave my employer of nearly four years, Washington State University. WSU proved to be an excellent incubator for me over those years, but I know full well I had been ready to go for months, even before beginning my job search in earnest. At some point, I had begun to feel that the University had become an impediment to my further professional growth, and I increasingly found myself strongly disagreeing with the direction of the institution's higher leadership, which seemed to be continually making decisions that were neither sustainable nor fiscally responsible for a state-run institution.

While I was ready to move on, the decision to seek a new job was also strongly driven by a new opportunity my wife had created for herself. Due to an unfortunate situation with her advisor in her graduate program, she decided to stop pursuing a Ph.D. at WSU, instead finishing her Master's degree in Zoology with plans to complete her Ph.D. elsewhere. Earlier this year, she was given an excellent opportunity at the University of Louisiana at Lafayette, working under Dr. Darryl Felder, a researcher focusing on decapod crustaceans. It is an amazing opportunity that will have us moving, six days after this post, to the heart of Cajun country.

As for me, I have spent the past six weeks or so working for Meebo. Meebo has been in a period of really aggressive growth over the first half of this year, with a half dozen people joining the front-end JavaScript team alone, including myself. It's been an exciting place to be, and though Meebo's engineering is based in Mountain View, California and New York City, I have been lucky enough to be brought into the company as one of the first full-time remote engineers.

Working remotely is definitely a change, though my experience in the open source community over the last decade or so, and especially on the YUI project over the last three years, had taught me a lot about working with people you communicate with primarily via e-mail and chat. Still, it has been an adjustment, as my desk is ten feet from my bed, and as my fellow YUI contributor and recent Meebody, Tony Pipkin (@apipkin), recently tweeted:

New office attire: basketball shorts and a plain white t

At Meebo, I have transitioned to being a pure JavaScript programmer. When I need a server-side component, I pass those tasks off to someone else, which is a bit awkward. I have to be a lot more proactive about making sure that my server-side counterpart is aware of my requirements early enough that they can be scheduled. And since I'm not in Mountain View, I need to communicate clearly and with written specifications, because ambiguous language can result in the wrong thing being implemented.

I've been assigned to the Ads product at Meebo, which means that anywhere you go with the Meebo Bar, when the ad pops up, that's code I now own running. I have long been convinced that the best model we have for monetizing content at scale is ad sales (it doesn't scale down to, say, the size of this blog, however), though there is an incredible amount of nuance to that business that I had no idea existed. Comments for another post, however.

In six days, Catherine and I will watch as everything we own gets loaded onto a truck, before we follow it out of town for a drive across the country with our two cats. The kind of change we're looking at has grown to be incredibly intimidating, even though it's exciting. I start work on the 25th, right after we get down there (and, incidentally, possibly a week before our possessions arrive; a 6-14 day delivery window is really inconvenient).

I’ll be looking to get involved in a developer group down in Lafayette, and I’m looking forward to getting familiar with the area. And I definitely plan to start blogging regularly again come August.

Confessions of a Public Speaker

Scott Berkun1 is a former Microsoft engineer who severed ties with the mothership some years back to become a full-time public speaker. His book, Confessions of a Public Speaker2, is something of a manifesto on how he became comfortable making that change and how he has found success.

Scott acknowledges that he feels comfortable giving away these secrets and hard-earned lessons simply because he knows most people will never do the work necessary to become a truly great speaker. I know that I'm not apt to immediately. I feel that presentation is important, and I often present at regional Code Camp events and for WSU's Application Developer's group, something I began doing in part because I was tired of going to events and sitting through talks that I felt had little value, either due to poor presentation or because I felt I knew more than the presenter.

More than that, I think it's important to share information within our community. Between techniques and tools, we are certainly spoiled for choice, but without discussion and presentation, the majority of people developing software today have no chance to be exposed to any idea that isn't backed by a marketing budget (this is a notable problem in the .NET and Java communities, but that's another post).

If you've done anything with public speaking in the past (and odds are you've taken a class at some point that touched on it), you've no doubt heard much of this advice before. Practice, prepare more content than you think you need (but be ready to cut content on the fly if necessary), practice, learn to harness your nervous energy, practice, show up early, and so on. However, Berkun goes into this material at a depth I'm not sure I've ever seen before.

He debunks common myths, like "people would rather die than speak in public," by showing where such myths came from and the inherent ridiculousness of such statements. He presents many cases from his own career of things not going well at all, like giving a presentation at the same time as, and across the hall from, Linus Torvalds, whose crowd was overflowing into the hall while Berkun had fewer than a dozen people sitting in his enormous conference hall.

While Berkun does stress that preparation is the key to making any public speaking gig succeed, it’s the flexibility to deal with surprises that makes the best speakers as good as they are. Quick thinking doesn’t trump preparation, but it’s necessary sometimes to avert disaster.

One bit of advice I'd like to share is what to do in the circumstance where almost no one shows up to hear you speak. Berkun suggests getting what crowd there is to move and sit near one another, so that you can at least pretend the space is smaller than it truly is, while also making it easier to engage directly with the audience, perhaps turning the session into an informal directed conversation rather than a full-blown presentation with slides.

It is clear that Berkun came from technology, and he still primarily speaks about tech, but despite his background and the career examples that reflect it, this is absolutely a general public speaking book, and I think it's accessible to anyone who wants to improve their public speaking, even if you're not interested in turning it into a career.

References: 1. 2.

Pyrrhic Victories are still Victories

This past weekend, President Barack Obama was able to announce1 to the world that, nearly ten years after taking responsibility for the single worst day of attacks on American soil in our history, Osama Bin Laden had been killed.

I am not going to discuss the morality of killing Bin Laden. We are a nation that practices the death penalty, and whether the operation that finally caught up with him was meant to capture or to kill, he was going to die. At least this way, we will not be subjected to a mockery of a trial like the one the world was given following the capture of Saddam Hussein2.

Some have made much of the fact that Bin Laden had been at the compound in northern Pakistan, assaulted on Saturday, for some time: certainly long enough that JSOC3, the special forces unit directly answerable to the President and designed around this sort of mission, had time to build a replica of the compound and train in it for a full month. But I think there is a simple answer to why, in the end, this turned out to be relatively easy.

Osama Bin Laden was a great many things, but I do not believe he could ever have been called a fool. He chose to attack in the manner he did in 2001 because he knew that Al-Qaeda could in no way stand up against the American armed forces. By taking principal responsibility for the attack, he put himself in our line of fire, began living on borrowed time, and prepared himself for martyrdom.

For a time, there was clearly value in remaining alive to release statements and further antagonize the West, but in order to be a martyr, it would eventually become necessary to die.

This is why I believe he had lived so long in northern Pakistan with a relatively small retinue of defenders. Against a strike team like JSOC's, five militants would stand about as well as twenty, but it would be easier to live, and to live in relative comfort, with fewer. No doubt there were other activities in progress, training new lieutenants for instance. Plus, a martyr does not walk into death; he must wait for it to find him.

However, even if you don’t believe, as I have come to, that this death was, in some way, prepared for by Bin Laden, there is no true victory in his death for us as a nation.

Ten years ago, we as a nation had certain harsh realities thrust upon us. It was demonstrated that we were not as insulated as we'd believed from the realities of global politics, or from the terrible truth of the distrust and resentment created by our government's historic policy of convenient involvement in other nations' affairs. The world did not change that day, but Americans' view of our place in it certainly did.

And what do we have to show for it today? The death of an enemy whom 66% of Americans age 13-17 couldn't even identify (which, if anything, shows just how irrelevant Bin Laden had become). The legacy of the PATRIOT Act. The formation of the TSA. The erosion of our civil rights. Three Middle Eastern wars, two of which were justified by links to the September 11, 2001 attacks.

I do not mean to downplay the actions of JSOC and the SEAL team responsible for Operation Geronimo. It was a well-executed military action, particularly given that our only loss was a single helicopter to mechanical failure. They executed their assigned mission, by all accounts, professionally and expertly.

I am not even terribly concerned by the lack of respect for the deceased shown by the decision to dump the body into the sea, given that attempts were made to follow all other tenets of Islam surrounding the handling of the dead. The concern that Bin Laden's grave would become a place of pilgrimage for extremists was an understandable one, though the claim that no one could be found to take the body is absurd, given the size of the Bin Laden family.

I do not wholly misunderstand the jubilation felt by many at the news, particularly in the city of New York. I myself was in New York City, standing on the roof of the World Trade Center, in July of 2001. When those towers fell, I was awash in a surreal feeling over that experience. But I didn't know anyone who lost their life in the attack. I haven't watched the growing health problems of the first responders. And I certainly haven't lived with a daily reminder of the tragedy in the form of a damaged skyline and a gaping hole at ground zero.

No, I do not begrudge the celebration, especially in New York.

And while this will no doubt drive many in Al-Qaeda to a new level of fervor, improved communication and analysis of intelligence, disruption of what command structure Al-Qaeda has, and a new proactiveness and willingness to respond among Americans have greatly reduced their chances of success. Combined with the real and reasonable security improvements, it would be untrue to say we have nothing to fear, but the risk is likely better mitigated today than at any time in our past.

I just can't help but think that, until we as a nation decide that we will not trade essential liberty for the illusion of security (and much of it has been illusion), that enemy has still won. Terrorism is not defeated by killing terrorists. It is defeated by refusing to be terrorized.

Notes: 1. 2. 3.

Thoughts on Void Safety in JavaScript

A thread recently arose on the es-discuss mailing list1 regarding the idea of adding an 'existential operator'2 to JavaScript. This was in line with some thinking I'd been doing lately, but which I was unable to fully formulate in time to join the discussion on that thread. Now that the thread has been dead for a while, I'm choosing to use this forum to put down my thoughts before I decide whether to start a new thread or revive the old one.

The argument for an ‘existential’ operator is an interesting one, with the initial poster, Dmitry Soshnikov proposing the following:

street = user.address?.street

// equivalent to:

street = (typeof user.address != "undefined" && user.address != null)
     ? user.address.street
     : undefined;

An interesting proposal, and functionally similar to what I had been considering proposing as void-safety in the language. Let's first define what I mean by 'void-safety', a term I first read in the interview with the creator of Eiffel3 in Masterminds of Programming4. To be void-safe, a simple rule would be added to the language: any attempt to access a property on an undefined value would itself evaluate to undefined. In other words, it would behave like the example above, but without the '?' character, and it would apply to all accesses of any value.
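
To illustrate those semantics, void-safe access can be emulated in today's JavaScript with a hypothetical helper (a sketch only, not a language feature):

```javascript
// Hypothetical helper emulating void-safe property access: any access
// through an undefined or null value yields undefined instead of a TypeError.
function safeGet(obj, path) {
    return path.split('.').reduce(function (cur, key) {
        return (cur == null) ? undefined : cur[key];
    }, obj);
}

var user = { address: null };
safeGet(user, 'address.street'); // undefined, where user.address.street throws
```

Under the proposed rule, the plain expression `user.address.street` itself would behave this way, with no helper or operator required.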

I oppose Dmitry's suggestion to address this issue through syntax, as I think such an addition would have a lot more value as a fundamental change to the language than as an optional change requiring a potentially obscure bit of syntax. Plus, this proposal is derived directly from CoffeeScript5, which is a cool language, but one designed to translate directly into JavaScript, meaning its solutions have to work within JavaScript's limitations.

Either of these solutions helps simplify a common pattern of data access, especially prevalent with configuration objects. However, there is at least one question I've raised in my own head that has led to a bit of reluctance to post this to es-discuss. Imagine the following, fairly common pattern:

function action(config) {
    config || (config = {});
    config.option || (config.option = "default");
    // ... do something ...
}
With void-safe JavaScript, you could check for the existence of config.option without raising a TypeError; however, the assignment would still raise a TypeError, because undefined is not a value to which a property can be added. In essence, the first line is still required, so void-safety provides no win here. But then, the existential operator isn't really useful in this case either.

It is an interesting idea to be able to say 'if config doesn't exist, create it and then append the option property to it', but that has the potential to create brittle code by accidentally initializing values as objects that should have been created as functions (and then had properties set), or other imaginable bad side effects. And while it may be nice to consider functionality like the Namespace function within YUI, such a thing should always be opt-in.
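
For comparison, an opt-in helper in that spirit might look like this minimal sketch (modeled on the idea of YUI's namespace utility, not its actual implementation):

```javascript
// Minimal sketch of a namespace-style helper: the caller explicitly opts in
// to creating intermediate objects along a dotted path.
function namespace(root, path) {
    return path.split('.').reduce(function (cur, key) {
        if (typeof cur[key] === 'undefined') { cur[key] = {}; }
        return cur[key];
    }, root);
}

var app = {};
namespace(app, 'config.option'); // app.config and app.config.option now exist
```

The key difference from implicit creation is that nothing is initialized unless the developer asks for it, which avoids the accidental-object problem described above.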

There is one place where the existential operator provides functionality that I'm not sure I'd want to see in void-safe JavaScript, and that is around functions. When applied to a function call, the existential operator will not call the function, and will just proceed past it in the property chain (potentially raising a TypeError later).

At first, I felt that applying void-safety to functions was a bad idea, one likely to cause brittleness in programs. To some extent, I still feel that way, as JavaScript functions are allowed to have side effects. The question then becomes: is it better to raise a TypeError by trying to execute undefined, stopping script execution, or to continue on, having essentially completed a no-op? Plus, with JavaScript's variable inference, where a typo can result in a new variable, there are times you'd want that error raised, especially during debugging.

Of course, variable creation by inference is disabled in strict mode, and several of these potential threats are caught by tools such as JSLint6, which can be integrated into your workflow more easily today than ever before.

The concern for me, therefore, is that the behaviour I want in development will not necessarily be the behaviour I want in production. Where void-safety comes in the most useful is likely in the processing of a JS object pulled in from an external source, be that a web service, iframe, or WebWorker, where the simplified data access with increased safety is potentially very useful.

I seem to remember seeing Brendan Eich and the Mozilla team (I'm not sure how involved the rest of the ECMAScript community is yet) discussing a new 'debugging' namespace for JavaScript, though I'm having trouble finding the source. I think void-safety could be a good flag in this environment. By default, void-safety would be on: it makes scripts safer, as the browser won't abort script execution as frequently. But developers could turn it off in their own browsers, allowing for more powerful debugging.

I'm still on the fence about this proposal. It can make data lookups simpler and safer without adding new syntax, which is a win. But there are definitely circumstances where it can hide bugs, making a developer's life more difficult if it can't be disabled. I do think I will raise the issue on es-discuss, as it at least warrants discussion by that community, and it may be that there are good historical reasons not to change this behaviour that others, who have been buried in these problems longer than I have, will be familiar with.

References: 1. 2. 3. 4. 5. 6.

Introducing connect-conneg

Content negotiation is the practice of using metadata already included in the HTTP specification1 to customize what your web server returns based on client capabilities or settings. It has been oddly absent from a lot of major sites, with the Twitter API2 requiring you to specify the format of the response as part of the URI instead of using the HTTP Accept header, and returning the French representation regardless of the Accept-Language header (to be fair, does localize).

While there are benefits (largely security benefits, unfortunately) to owning your domain in every country code, it is cost-prohibitive for many organizations, and your customers are already telling you what language they want your content in. Admittedly, many sites may wish to offer a way to override the browser's language settings, but this should be handled via user configuration, not the URI.

Where I find content negotiation to be most useful is in the space of language customization. My elder sister is getting married soon, and guests are coming from the US, Italy, and Mexico, which has required all the web-based material to be made available in all three languages. For her main wedding site, the top-level sidebar looks like this:

[Image: the wedding site's sidebar, with three links to the English, Italian, and Spanish versions of the content]
Here, we have three links that are functionally identical, each taking the user to a localized version of the page content, but hiding that content behind at least one extra link and exposing the user to the fact that all three languages are available, something they probably do not care about. With the CMS they are using, this is the best solution I can see. The fact is, most CMSes do a terrible job of supporting multiple-language content, but that is an issue for another post.

However, for their RSVP system that I am building on NodeJS3 using ExpressJS4, I didn’t view this as an acceptable solution.

Express does make one nod to content negotiation in the form of its 'request.accepts'5 method, which enables the following:

if (request.accepts('application/json')) {
    // ... send a JSON representation ...
} else if (request.accepts('text/html')) {
    response.render(templateName, data);
}
However, this implementation, in many ways, misses the point. The MIME types in the Accept header (or the language codes in the Accept-Language header) can carry what are called 'q-values': numbers between 0 and 1 that indicate preference order. Consider these two header options:

  1. Accept: application/json, text/html
  2. Accept: application/json;q=0.8, text/html

What this tells the server is that a response in either JSON or HTML is acceptable, but in the first case JSON is preferred, while in the second HTML is preferred. The above code ignores this preference: using Express's accepts method, I've decided that if the client accepts JSON at all, that's what I'm sending, even if it would prefer a different representation I offer.

For acceptable media types, this matters less, but for languages it's very important. Almost every user will have English as one of their accepted languages, even though for many it won't be their preferred one. This is why q-value sorting is so important.

Connect-conneg6, which is available on GitHub right now, is pretty simple so far, but I have plans to add helper methods for common activities. Basic usage for languages when using the Connect or Express frameworks is as follows:
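
As a sketch of the pattern (illustrative only, not the actual connect-conneg source), a language middleware in this style parses the Accept-Language header, sorts it by q-value, and attaches the result to the request:

```javascript
// Sketch of a connect-style language middleware: parse Accept-Language,
// sort by q-value (missing q defaults to 1 per RFC 2616), and attach
// the ordered list to the request object.
function language(req, res, next) {
    var header = req.headers['accept-language'] || '';
    req.languages = header.split(',')
        .map(function (part) {
            var pieces = part.trim().split(';');
            var m = /q=([0-9.]+)/.exec(pieces[1] || '');
            return { value: pieces[0].trim(), q: m ? parseFloat(m[1]) : 1 };
        })
        .sort(function (a, b) { return b.q - a.q; })
        .map(function (entry) { return entry.value; });
    next();
}

// With Connect or Express, this would be registered via app.use(language);
// simulated here with a plain request object:
var req = { headers: { 'accept-language': 'en;q=0.8, fr' } };
language(req, {}, function () {});
// req.languages is now ['fr', 'en']
```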

The language, acceptableTypes, and charsets middleware are statically exported functions, built using the same method exposed as conneg.custom. Each pulls the relevant HTTP header and sorts its values per the rules in RFC 2616. The sorted lists are mapped to the following properties on the request object:

  1. Accept-Language -> languages
  2. Accept -> acceptableTypes
  3. Accept-Charset -> charsets

These are exposed as separate methods so that you can 'mix and match'; for my own use, I only care about languages right now. Frankly, I can't imagine a circumstance today where you'd want any charset other than UTF-8, but it's there for completeness.

What I haven't implemented in connect-conneg just yet are the helper methods that determine what the 'right' thing to do is. For languages, I'm using the following method right now:

function getPreferredLanguage(acceptedLanguages, providedLanguages, defaultLanguage) {
    defaultLanguage || (defaultLanguage = providedLanguages[0]);

    var index = 999;
    acceptedLanguages.forEach(function (lang) {
        var found = providedLanguages.indexOf(lang);
        if (found !== -1 && found < index) { index = found; }
        found = providedLanguages.indexOf(lang.split('-')[0]);
        if (found !== -1 && found < index) { index = found; }
    });

    if (index === 999) { return defaultLanguage; }
    return providedLanguages[index];
}
At the moment, I'm still thinking through how this will be implemented as library code. The above certainly works, but I'm not sure I understand the structure of Connect well enough yet to build this in the most efficient way. For languages, the provided and default languages could (and probably should) be defined on the object created by connect; at that point, should I present the whole list, or just the language the client wants? And how do I deal with different endpoints having different content-negotiation requirements?

I will continue to hack on this, and I'm going to try to get it on NPM soon, though the git repo is installable via NPM if you clone it. So please: look, file bugs, make comments, changes, pull requests, whatever. I think this is a useful tool that helps provide a richer usage of HTTP on top of the excellent Connect middleware.

Links: 1. 2. 3. 4. 5. 6.

Why I Use YUI3

The JavaScript community is an interesting one. It grew up around a language which is unique in that, as Douglas Crockford1 says, no one bothers to learn it before using it. Its success is indicative of how good a language it is, once you get past the DOM and a few of its less-well-considered features. And that flexibility has been amazing in terms of innovation. Look at the plethora of modules available for the barely year-old NodeJS2, the dozens of script loaders and feature shims, and the many libraries for DOM abstraction like YUI3 and jQuery4.

I therefore find it interesting that when Crockford was on Channel 95 Live at MIX 20116 yesterday and suggested YUI, the response was so much surprise and nascent criticism from the many, many jQuery proponents inside the Microsoft developer community. The comment is hardly a surprise, and not because the Crock-en7 works for Yahoo! He's not on the YUI project, and while I'm sure he participates in code reviews, his name does not appear in the commit history of either YUI2 or YUI3. He has, however, been critical of jQuery and its creator, John Resig8, in the past, often making snide remarks about 'ninjas'.

I am not defending Crockford's criticisms, or seriously claiming that words from the mouth of Douglas should be taken as gospel. Admittedly, Douglas is a mythic figure these days; he is very smart and has done great work creating and promoting best practices that led directly to today's golden age of JavaScript.

I am also not trying to say "don't use jQuery", though I tend to think you shouldn't. My concern is the apparent bifurcation of the JavaScript community into 'people who use jQuery' and 'everyone else'. Part of the reason I am a bit anti-jQuery is that most people I know who are heavy users of the library don't actually write much JavaScript; they mostly perform copy-paste programming of other people's code, and often don't develop much understanding of the language or its abilities. Incidentally, they like it that way.

I started out in JavaScript doing pure DOM work, and it was everything that makes people hate JavaScript (when really they usually hate the DOM and its inconsistent implementations). My needs, however, were very basic (I wasn't even doing XHR at that point), so it worked. The JavaScript I wrote at that time also wasn't very good, looking back. Like the users Crockford describes, I didn't really bother to learn the language. I had done plenty of Java and C++ in my university work, and JavaScript's visual familiarity led me to a lot of assumptions that were simply untrue.

Eventually, I needed a bit more. I required a date selection widget for a new project, and had also been reading a lot of the performance tips shared by Yahoo, which led me to YUI2. YUI2 felt quite a bit like pure DOM work, so it was familiar, and it provided a good set of high-quality widgets that did everything I needed and quite a bit more. Though I started using YUI2, and read JavaScript: The Good Parts10, YUI2 definitely had some major weaknesses, which led to the negative attitudes many people seem to hold toward YUI to this day. The library was verbose, and deciding which components you needed could be difficult, even with the Configurator tool. And good luck writing your own widgets: it was time-consuming and immensely repetitive due to the lack of any sort of standard framework.

But these weaknesses were all identified, and by the time I'd started using YUI2, YUI3 was already in its design phase. When its first previews were released, I knew it was something special. It brought Loader, a tool I was intimately familiar with from YUI2, into the forefront, making it simple to use. It defined a set of building blocks that promised to make widget creation, perhaps not trivial, but dramatically easier. It integrated CSS selectors, the killer feature everyone was so excited about in the jQuery world. It provided a plugin and custom event architecture that allows for easy composition and customization in a way I hadn't seen in any other library.

To this day, many of the widgets in YUI2 haven't been released in the core of YUI3 (though many have Gallery11 counterparts of varying levels of functionality and quality), which many people see as a weakness. However, this is similar to how other projects operate, where the UI widgets are a separate project from the internal core, and that's fine. The fact is that there are more people building cool things for YUI3 than there ever were for YUI2, and those who have worked in other libraries almost all say they find it easier and faster to build their code in YUI3 than in the other options available.

It is sometimes frustrating that the tools I want don't always just exist, or aren't quite right, but I have found very few problems that I haven't been able to prototype in at most a few hours of work using YUI3, including the first run of my attempt to rethink multiselect. Of course it takes longer than a few hours to polish an idea and make it shine, but rapid prototyping is immensely useful. Plus, for a well-polished widget that does, say, 75% of what I require, the framework makes it easy to extend the behavior I need without directly modifying the code of the core widget. There are exceptions to this flexibility, but they are definitely the minority in my experience.

I don't anticipate that this will win any converts. I have shown no code. I have made comments that will likely offend someone. This post is more a collection of my thoughts on how I ended up using this particular library, and why, when I leave my current position, I'll continue to advocate for YUI wherever I end up. I am not so inflexible as to refuse other options, but I like to use tools that I know are a good idea, and not just ones that look like it12.

  9. Resig is working on a book Secrets of the JavaScript Ninjas

Building a YUI3 File Uploader: A Case Study

Off and on for the last few weeks, I've been trying to build a file uploader taking advantage of the new File API1 in modern browsers (Firefox 4, newer versions of WebKit). It's up on my GitHub2, and unfortunately, it doesn't quite work.

My first revision attempted to complete the upload by Base64-encoding the file and hand-building a multipart MIME message, declaring the encoding with the Content-Transfer-Encoding header. This resulted in the NodeJS3 server, using Formidable4 for form processing, saving the file out as Base64. At first, I considered this a bug, but per the HTTP/1.1 RFC (2616)5:

19.4.5 No Content-Transfer-Encoding

HTTP does not use the Content-Transfer-Encoding (CTE) field of RFC 2045. Proxies and gateways from MIME-compliant protocols to HTTP MUST remove any non-identity CTE (“quoted-printable” or “base64”) encoding prior to delivering the response message to an HTTP client.

Proxies and gateways from HTTP to MIME-compliant protocols are responsible for ensuring that the message is in the correct format and encoding for safe transport on that protocol, where “safe transport” is defined by the limitations of the protocol being used. Such a proxy or gateway SHOULD label the data with an appropriate Content-Transfer-Encoding if doing so will improve the likelihood of safe transport over the destination protocol.

The reason for this seems to stem from the fact that HTTP is a fully 8-bit protocol, while MIME was designed to be more flexible. One of the CTE options is '7bit', which would complicate an HTTP server more than most would like. Why 7-bit? ASCII6. ASCII is a 7-bit encoding of the English alphabet. It was eventually extended to 8 bits with the 'extended' character set, but in the early days of networking, a lot of text was sent in 7-bit mode, which made sense, since it amounts to a 12.5% reduction in data size. These days, when best practice is to encode HTTP traffic as UTF-8 instead of ASCII (or other regional character sets), the problem is largely gone.

I still take issue with the exclusion of Base64 encoding. Base64 is 8-bit safe, and while it makes the files larger, it had seemed a safe way to build my submission content from JavaScript, which stores its strings in Unicode.
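For reference, the hand-built message from that first attempt looked roughly like this (a sketch, not my actual code; the boundary string, field names, and content type are all illustrative):

```javascript
// Build a multipart/form-data body by hand, carrying the file payload as
// Base64 and labeling it with Content-Transfer-Encoding, the header that
// RFC 2616 says HTTP does not use.
function buildMultipart(boundary, fieldName, fileName, base64Data) {
  return [
    "--" + boundary,
    'Content-Disposition: form-data; name="' + fieldName +
      '"; filename="' + fileName + '"',
    "Content-Type: image/jpeg",
    "Content-Transfer-Encoding: base64",
    "",
    base64Data,
    "--" + boundary + "--",
    ""
  ].join("\r\n");
}

var body = buildMultipart("----boundary42", "upload", "photo.jpg",
  "ZmFrZSBieXRlcw=="); // Base64 of "fake bytes"
```

A server that honors RFC 2616 will treat that body as literal Base64 text, which is exactly what Formidable did.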

And I wasn’t wrong. My next attempt, based on a blog post about the Firefox 3.6 version of the File API [7], tried reading the file as a binary string and appending that into my message. This also failed, but more subtly. The message ended up a few bytes too long, and some hexdump analysis suggests those extra bytes came from certain bytes being expanded from one byte to two under UTF-16 rules. Regardless, the image saved by the server was unreadable, though I could see bits of data reminiscent of the standard JPEG headers.
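The expansion is easy to demonstrate. A JavaScript string stores UTF-16 code units, and a “binary string” holds each file byte as one code unit; any byte of 0x80 or above turns into two bytes when the string is re-encoded for the wire. A quick sketch, using Node’s Buffer to stand in for the browser’s serialization:

```javascript
// Three "bytes" held in a JS string: two high bytes (like those found in
// a JPEG header) and one ASCII byte.
var binary = String.fromCharCode(0xff, 0xd8, 0x41);
console.log(binary.length); // 3 code units

// Re-encoding as UTF-8 expands each byte >= 0x80 into two bytes.
var encoded = Buffer.from(binary, "utf8");
console.log(encoded.length); // 5, not 3: the file bytes are now corrupt
```

Every high byte in the file gets silently doubled, which matches the “a few extra bytes” I was seeing in the hexdump.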

A bit more looking brought me to the new XMLHttpRequest Level 2 [8] additions, supported again in Firefox 4 and Chromium. Of particular interest was the FormData object introduced in that interface. It’s a simple interface, working essentially as follows:

var fd = new FormData(formElem);
fd.append("key", "value");

It’s simple. Pass the constructor an optional DOM FORM element, and it will automatically append all of its inputs. You can call the ‘append’ method with a key and a value (the value can be any Blob/File), and then send the FormData object to your XHR object. It will automatically be converted into a multipart/form-data message and uploaded to the server using the browser’s existing mechanism for serializing and uploading a form. If I have a complaint, it’s that in Chrome, at least, even if you’re not uploading a file, the message is encoded as multipart instead of a normal POST body, which seems a bit wasteful to me, and hints that the form data isn’t being passed through the same code path as a normal form submission.
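A slightly fuller sketch (the field names are made up; in the browser you would append a File pulled from an `<input type="file">`, and FormData also happens to be a global in newer Node versions, which is how this snippet can run outside a browser):

```javascript
var fd = new FormData();
fd.append("description", "vacation photo");

// In the browser: fd.append("photo", fileInput.files[0]);
// A Blob works anywhere FormData does:
fd.append("photo", new Blob(["fake image bytes"], { type: "image/jpeg" }),
  "photo.jpg");

// get() is a later addition to the interface, used here just to inspect:
console.log(fd.get("description")); // "vacation photo"

// Then hand the whole object to XHR and let the browser build the
// multipart/form-data body and boundary itself:
//   xhr.open("POST", "/upload", true);
//   xhr.send(fd);
```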

It is at this point that YUI3’s io module fails me. Let me start by saying that io is great for probably 99% of what people want to use it for. It can do cross-domain requests, passing off to a Flash shim if necessary. It can do form serialization automatically. It can do file uploads using an iframe shim. And while it was designed to be reasonably modular, only loading these additional features at your request, this apparent ‘modularity’ is, from a user perspective, actually hard-coded into the method call. For instance, for the form handling, we currently have this:

if (c.form) {
    if (c.form.upload) {
        // This is a file upload transaction, calling
        // upload() in io-upload-iframe.
        return Y.io.upload(o, uri, c);
    }
    else {
        // Serialize HTML form data into a key-value string.
        f = Y.io._serialize(c.form, c.data);
        if (m === 'POST' || m === 'PUT') {
            c.data = f;
        }
        else if (m === 'GET') {
            uri = _concat(uri, f);
        }
    }
}

This code example is meant purely to suggest that io is currently designed in a way that is a bit inflexible. In fact, 90% of the logic in io occurs in a single method, and while there are events you can respond to, including a few fired before data is sent down the wire, you’re unable to modify any data used in this method from your event handlers. So, if this method does anything counter to what you’re trying to do, you’re forced to reimplement all of it. And, of course, the method does something counter to my goals:

c.data = (Y.Lang.isObject(c.data) && Y.QueryString) ? Y.QueryString.stringify(c.data) : c.data;

io-base optionally includes querystring-stringify-simple, so there is a very high likelihood that it will be present. And having my FormData object serialized by this method results in all of my data magically disappearing. It is unacceptable to me to tell users of my file-upload module that they must turn off optional includes (though for production builds you probably should anyway, but that’s another discussion).
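What I’d want instead is a guard that leaves FormData untouched and only stringifies plain objects, something like this sketch (`prepareData` is my name, not YUI’s, and the inline stringifier is just a stand-in for `Y.QueryString.stringify`):

```javascript
function prepareData(data) {
  // Leave FormData alone so the browser can serialize it as
  // multipart/form-data itself.
  if (typeof FormData !== "undefined" && data instanceof FormData) {
    return data;
  }
  // Stand-in for Y.QueryString.stringify on plain objects.
  if (data !== null && typeof data === "object") {
    return Object.keys(data).map(function (k) {
      return encodeURIComponent(k) + "=" + encodeURIComponent(data[k]);
    }).join("&");
  }
  // Strings and other primitives pass through unchanged.
  return data;
}

console.log(prepareData({ a: 1, b: "x y" })); // "a=1&b=x%20y"
```

One extra instanceof check in io would be enough to keep the optional stringifier from eating the FormData object.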

IO being so inflexible makes sense, in some ways. It’s a static method, not an instantiable object, so configuration can be difficult; the only way to add extension points would be to send them in via the configuration object, which complicates things in other ways. The io module, it seems, requires a reimagining.

And we’ve got something. Luke Smith has put together a code sketch of a potential future for IO [9], which breaks things out in an exciting fashion. For my file upload, I can declare a Y.Resource pointing at my endpoint, set some basic options when declaring the resource, and post multiple messages to it. It actually shortens my code quite a bit, though before I push it into the gallery I still need to look at a shim of some sort for those browsers which lack an implementation of the File API and XHR Level 2, since I want it to work across all A-Grade browsers.

Unfortunately, the code there is just a proposal; it doesn’t actually work yet. But I’m excited about it, and I’m going to try to get it at least partially functional. For now I haven’t started, because I wanted to touch base with Luke first about what expectations there were around the API, and there are a few important ones (though I don’t think they’ll keep me from getting things mostly working). Hopefully I’ll have this working in Firefox 4 and Chrome very soon, and then I can start on the shims necessary to support less-capable browsers.

References: 1. 2. 3. 4. 5. 6. 7. 8. 9.

Open Government

The prospect of government transparency is very important to me. I firmly believe that the best way to protect the integrity of our union is for the populace to take a more active role in its own governance. This is why I have always been such a supporter of GovTrack, though I have had to become increasingly selective about which events I track (I limit myself to the activity of my own legislators, a few classes of issues, and the occasional specific bill). It was the key part of Obama’s electoral platform that I supported, though there was plenty about his candidacy that I did not. We, the people, require more data to be able to participate meaningfully in our government.

Incidentally, the Open Government book, a collection of essays from a wide variety of people trying to improve the public’s interaction with government, was driven in large part by the promises Obama made during his campaign, and the efforts begun shortly after his election. The book was published just over a year ago, and at that time nearly every contributor felt that the efforts to date were disappointing. I doubt many people’s minds have changed much in this regard.

Now, I work for a state institution, and I’ve made it a goal to make our data more accessible. I understand that it’s hard, but the data I expose is unquestionably public, has been available in the past (I’m just trying to make it better), and I face far fewer roadblocks in my work than I suspect most people do. The federal transparency efforts, though, have been wrought with delays and missed deadlines. Part of this is that much of this data has sat behind paywalls, since providing it used to require people to physically copy and mail the information; with the new directives, that income, which I suspect had become something of a profit center for many departments, will be drying up.

A great many of the essays in this book are from people associated with projects like GovTrack, which take government information (either freely available, or sometimes behind paywalls, which they then digitize) and often analyze the data to show connections that may not be directly obvious. Several such sites show voting records alongside campaign contributions, and how the two may be related, and do a reasonable job of not editorializing on what they present.

My favorite technology that I read about was RECAP, a Firefox plugin (I’ve considered porting it to Chrome) that detects when you’re browsing PACER, the online database of US federal court documents used when researching case law. PACER costs about 8 cents per page, with a maximum cost of $2.40 per document. While that’s not an exceptional amount of money, the number of documents someone may need to pull can really add up, particularly for a non-profit legal defense firm. With RECAP, when you request a PACER document, it checks the RECAP database, serving the document for free if it exists; if it doesn’t, and you buy the document, it will be uploaded to RECAP automatically. Even for-profit legal firms can benefit from this, by reducing their research costs (and hopefully passing that on to clients).

This is a really interesting book, but like others of its ilk (collections of essays on a similar topic, Beautiful Code being one example), this is not a book meant to be read cover to cover without breaks, as far as I’m concerned. As someone who wants to write a review, this puts me in an awkward position. By the end, I was bored, and not inclined to say much nice about the book. Hell, the only reason the tone of this review is so positive is that I finished reading the book almost two months ago and have had time to reflect on it.

The book got boring by the end because everyone contributing to it had similar ideas about why openness in government is important, so I kept reading the same points repeated time and again in almost every single essay; not just single sentences, but often whole paragraphs felt paraphrased and redundant. To be clear, I don’t know how one would ‘fix’ this issue in a compilation such as this, but when reading straight through, I know it’s detrimental to my experience.

Still, I think the work these people are doing is interesting and important, and there are plenty of resources I’m now aware of that I wasn’t before, plus a lot of great discussion about the challenges in the data and the way it’s collected. It’s absolutely worth a read, but it’s absolutely unnecessary (and I’d say inadvisable) to read it cover to cover.

The Productive Programmer

Neal Ford’s The Productive Programmer [1], published by O’Reilly Media, claims to teach you the tricks of the best programmers in the industry. The first half of the book gives specific tips and tricks for various applications and tools across Windows, Mac, and Linux. The second half discusses techniques one can use to learn or familiarize oneself with a new tool in order to ultimately improve productivity.

I’ll be frank: I don’t think this book is worth it.

The first half covers too many tools across too many platforms. It isn’t able to cover any one tool terribly well, and its attempts to cover a given class of tools felt unfocused and messy.

The advice overall is sound. Learn the tools you’re using. Focus on tools that eliminate the need to move your hands from the keyboard. Automate tasks, often by building scripts.

Maybe it’s just because I’ve been developing on Unix for the past decade, and grew up on the DOS command line before that, but most everything in this book felt obvious. Hell, I was playing around with Beagle [2] (now defunct) for desktop search in its very earliest days, well before Google had their own desktop search product [3]. I live at the command line, even when I’m on Windows, as I am at my current day job.

I am not trying to brag. It’s just a familiarity I’ve gained, one that even in college I recognized as grossly missing in a lot of my classmates, who were bound to their GUIs and their mice. For me, the advice was all obvious.

Part two, which covers software best practices, like not overbuilding, doing proper testing, learning multiple languages, and using continuous integration, is also great advice, but I can’t help thinking that other books cover these topics better. Admittedly, this book isn’t trying to be comprehensive or definitive on any of them, but in its general coverage it seems to fall short.

In retrospect, I am not the target for this book. I read plenty of blogs and other books on the subject of software development and general computing. Ben Collins-Sussman claims that there are two kinds of programmers: the 80% who just plug away, and the 20% who really excel and care [4]. Jeff Atwood makes the claim that the 20% are largely the people who actually take the time to read this sort of material [5]. I am not so proud as to claim myself to be in that 20%. I think I could be some day, and I know that I am above 50%, but I am reluctant to accept the idea of a hard 80/20 split.

This book isn’t bad. It’s fairly entertaining, and it’s got a lot of good information. But it’s almost too introductory. If you feel like you’re trying to get your feet underneath you, and that you really, truly want to be at the top of your field (as you should), then by all means pick up this book. It’s worth it. But if you’ve spent a bunch of time studying the material, reading the blogs, and most importantly, working on techniques to make you faster and better at your day-to-day tasks, then odds are you won’t get too much out of this book.


Boise Code Camp 2011

A few weekends ago, on February 26th, the fifth Boise Code Camp [1] was held at the Boise State University campus. It is the third Code Camp in Boise I have attended, and sadly it was reduced to a single day because the organizers felt they didn’t have enough submissions. Since I didn’t submit a talk this year, I suppose I’m at least partially to blame for that, but either way it was still a solid event.

There was substantially less JavaScript talk this year than in years past, the only talk strictly on the subject being an introductory jQuery session. Of course, had I submitted a talk, it would have been an introductory talk on YUI3, meaning we still wouldn’t have had much in the way of advanced JavaScript topics. There is still definitely a big audience for introductory JavaScript concepts in the greater developer community, but I’d love to see more advanced talks at this sort of event.

But in spite of the small number of JavaScript talks, there were still plenty of web talks at the conference this year. The first talk I went to was one attempting to show the basics of what a ‘Monad’ is in functional programming [2]. I say attempting because I had a hard time drawing much from the talk. That might be because it was the first talk of the day, but I think it’s more that describing monads is generally made more difficult than it ought to be.

It did occur to me, however, that at the most basic level, a monad can be described as a container holding a homogeneous collection of data, whose methods are designed to support chaining commands together into a pipeline. Incidentally, this is very much how working with DOM nodes in jQuery or YUI3 works, though I’m pretty sure neither library would describe itself as ‘monadic’, and it’s probably not wholly accurate, but I think it provides a working definition to help get someone started on investigating this concept.
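To make that concrete, here is a toy chainable container in plain JavaScript (the names are mine, and this is only monad-flavored, not a formal monad):

```javascript
// Each method returns a new wrapper, so calls pipeline left to right,
// much like chained jQuery/YUI node-list operations.
function Box(items) { this.items = items; }
Box.prototype.map = function (fn) { return new Box(this.items.map(fn)); };
Box.prototype.flatMap = function (fn) {
  // fn returns an array per item; flatten one level into a new Box.
  return new Box([].concat.apply([], this.items.map(fn)));
};
Box.prototype.value = function () { return this.items; };

var result = new Box([1, 2, 3])
  .map(function (n) { return n * 2; })          // [2, 4, 6]
  .flatMap(function (n) { return [n, n + 1]; })
  .value();
console.log(result); // [2, 3, 4, 5, 6, 7]
```

The point isn’t the implementation; it’s that every step hands back the same kind of container, which is what makes the pipeline style work.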

Second hour, I attended Glenn Block’s [3] talk on WCF and REST, which was really interesting. I had used WCF in .NET 3.5, where it was an improvement over the older web-service mechanisms .NET provided. The new WCF, however, is amazingly customizable. Content negotiation is nearly trivial; Glenn showed off an easy way to generate vCard files based on the Accept headers sent by the client. Luckily, there is a reasonable parallel of this talk from MVC Conf [4] this year [5]. Having recently built a simple RESTful service in ASP.NET MVC, the tooling WCF provides is really interesting to me; plus, it’s open source and available now [6].

After lunch, I attended a talk on F# on the web given by Ryan Riley [7]. Ryan has built a clone of Ruby’s Sinatra [8] in F#, which reminded me a bit of Express.js [9] from Node.js, in that the app is its own server and is based on routing paths to commands. F#, particularly with its asynchronous processing, allows for very clean code when speccing out a web service. It’s still a work in progress, but definitely something to at least watch. Implied callbacks in async processing are pretty cool.

I attended Ole Dam’s leadership talk, which was really inspiring, but the slides don’t seem to be posted (unfortunately), and it’s hard to describe. The short version is that becoming a good leader requires work and care, and most of the leadership advice available is pretty terrible. I won’t say much more about it, but Ole apparently gives these talks all over the place for a relatively low cash outlay, so if given the opportunity to hear him speak, I’d suggest taking advantage.

Finally, I attended a talk on web performance measurement, covering the metrics that Google uses. They have some JS on their homepage that measures how long things like image and script loading take to begin and end, and reports that back to the server. It was interesting, but I think I preferred what the Flickr guys mentioned in their YUIConf talk [10], in that they measure only what they care about, which in Flickr’s case is when the images are loaded and when the scripts are loaded. They just don’t care about the rest. I was expecting more out of this talk than I got, since it was a really high-level look at Google’s JavaScript without much discussion of how to actually improve those numbers. I am, however, excited about the Web Timing specification [11] from the W3C, implemented in Internet Explorer 9. That should be really interesting to have.

Overall, the event wasn’t as valuable to me this year as in years past, but it was still an excellent event, particularly for one that is free to attendees. If nothing else, it’s a great opportunity to meet up with people that I only see once a year or so.