March 2011 Archives

Open Government

The prospect of government transparency is very important to me. I firmly believe that the best way to protect the integrity of our union is for the populace to take a more active role in their own governance. This is why I have always been such a supporter of GovTrack, though I have had to become increasingly selective about which events I track (I limit it to the activity of my own legislators, a few classes of issues, and the occasional specific bill). Government transparency was the key part of Obama’s electoral platform that I supported, though there was plenty in his candidacy that I did not. We, the people, require more data to be able to participate meaningfully in our government.

Incidentally, the Open Government book, a collection of essays from a wide variety of people trying to improve the interaction between the public and government, was driven in large part by the promises made by Obama during his campaign and the efforts begun shortly after his election. The book was published just over a year ago, and at that time nearly every single contributor felt that the efforts to date were disappointing. I doubt many people’s minds have changed much in this regard.

Now, I work for a state institution, and I’ve made it a goal to make our data more accessible. I understand that it’s hard, but the data that I expose is unquestionably public, has been available in the past (I’m just trying to make it better), and I have far fewer roadblocks to the work I’m doing than I suspect most people do. The federal transparency efforts, though, have been fraught with delays and missed deadlines. Part of this is that much of the data has been behind a paywall in the past, since distributing it required people to physically copy and mail the information. Under the new directives, that income, which I suspect had become something of a profit center for many departments, will be drying up.

A great many of the essays in this book are from people associated with projects like GovTrack, which take government information (either freely available, or sometimes behind paywalls, which they then digitize) and often analyze the data to show connections that may not have been directly obvious. A pair of such sites show voting records and campaign contributions, and how the two may be related; both do a reasonable job of not editorializing on what they’re presenting.

My favorite technology that I read about was RECAP, a Firefox plugin (I’ve considered porting it to Chrome) that detects when you’re browsing PACER, an online database of US Federal Court decisions used when researching case law. PACER costs about 8 cents per page, with a maximum cost of $2.40 per document. While that’s not an exceptional amount of money, the number of documents that someone may need to pull can really add up, particularly for a non-profit legal defense firm. With RECAP, when you request a PACER document, the plugin checks the RECAP database and serves the document for free if it exists; if it doesn’t, and you buy the document, your copy is uploaded to RECAP automatically. Even for-profit legal firms can benefit from this, by reducing their research costs (and hopefully passing the savings on to clients).
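The flow is simple enough to sketch in a few lines. This is my own loose illustration of the idea, not RECAP’s actual code; every name here (fetchDocument, buyFromPacer, the archive) is hypothetical:

```javascript
// Sketch of the RECAP lookup flow: check the shared archive first,
// and only pay PACER (then contribute the copy back) on a miss.
function fetchDocument(docId, recapArchive, buyFromPacer) {
  if (recapArchive.has(docId)) {
    // Someone already paid for this document; serve the free copy.
    return { source: 'recap', doc: recapArchive.get(docId), cost: 0 };
  }
  // Otherwise buy it at roughly $0.08/page, capped at $2.40,
  // and add the copy to the shared archive for the next person.
  const doc = buyFromPacer(docId);
  const cost = Math.min(doc.pages * 0.08, 2.40);
  recapArchive.set(docId, doc);
  return { source: 'pacer', doc: doc, cost: cost };
}

const archive = new Map();
const first = fetchDocument('doc-1', archive, function (id) {
  return { id: id, pages: 50 }; // a hypothetical 50-page filing
});
// first purchase hits the $2.40 cap; a second fetch of 'doc-1' is free
```

The point is that every paid download makes the archive better for everyone after you, which is what makes the economics work for non-profits.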

This is a really interesting book, but like others of its ilk (collections of essays on a common topic, Beautiful Code being one example), it is not, as far as I’m concerned, a book meant to be read cover to cover without breaks. As someone who wants to write a review, this puts me in an awkward position. By the end, I was bored, and not inclined to say much nice about the book. Hell, the only reason the tone of this review is so positive is that I finished reading the book almost two months ago and have had time to reflect on it.

The book got boring by the end because everyone contributing to it had similar ideas about why openness in government is important, so I kept reading the same points repeated in almost every single essay, and not just in single sentences: often whole paragraphs felt paraphrased and redundant. To be clear, I don’t know how one would ‘fix’ this issue in a compilation like this, but when reading straight through, it was detrimental to my experience.

Still, I think the work these people are doing is interesting and important, and there are plenty of resources I’m now aware of that I wasn’t before, as well as a lot of great discussion about the challenges in the data and the way it’s collected that I hadn’t been aware of. It’s absolutely worth a read, but it’s unnecessary (and I’d say inadvisable) to read it cover to cover.

The Productive Programmer

Neal Ford’s The Productive Programmer1, published by O’Reilly Media, claims to teach you the tricks of the best programmers in the industry. The first half of the book pursues this goal by giving specific tips and tricks for various applications and tools across Windows, Mac, and Linux. The second half discusses the techniques one can use to learn and become familiar with new tools in order to ultimately improve one’s productivity.

I’ll be frank: I don’t think this book is worth it.

The first half covers too many tools on too many platforms. It isn’t able to cover any one tool terribly well, and its attempts to cover a given class of tools felt unfocused and messy.

The advice overall is sound. Learn the tools you’re using. Focus on tools that eliminate the need to move your hands from the keyboard. Automate tasks, often by building scripts.

Maybe it’s just because I’ve been developing on Unix for the past decade, and grew up on the DOS command line before that, but almost everything in this book felt obvious. Hell, I was playing around with Beagle2 (now defunct) for desktop search in its very earliest days, well before Google had their own desktop search product3. I live at the command line, even when I’m on Windows, as I am at my current day job.

I am not trying to brag. It’s just a familiarity I’ve gained, one that even in college I recognized as grossly missing in a lot of my classmates, who were bound to their GUIs and their mice. For me, the advice was all obvious.

Part two, which covers software best practices, like not over-building, doing proper testing, learning multiple languages, and using continuous integration, is also great advice, but I can’t help thinking that other books cover these topics better. Admittedly, this book isn’t trying to be comprehensive or definitive on any of them, but even in its general coverage, it seems to fall short.

In retrospect, I am not the target for this book. I read plenty of blogs and other books on the subject of software development and general computing. Ben Collins-Sussman claims that there are two kinds of programmers, the 80% who just plug away, and the 20% who really excel and care4, while Jeff Atwood makes the claim that the 20% are largely the people who actually take the time to read this sort of material5. I am not so proud as to claim myself to be in that 20%. I think I could be some day, and I know that I am above 50%, but I am reluctant to accept the idea of a hard 80/20 split.

This book isn’t bad. It’s fairly entertaining, and it’s got a lot of good information. But it’s almost too introductory. If you feel like you’re trying to get your feet underneath you, and that you really, truly want to be at the top of your field (as you should), then by all means pick up this book. It’s worth it. But if you’ve spent a bunch of time studying the material, reading the blogs, and most importantly, working on techniques to make you faster and better at your day-to-day tasks, then odds are you won’t get too much out of this book.


Boise Code Camp 2011

A few weekends ago, on February 26th, the fifth Boise Code Camp1 was held at the Boise State University campus. It was the third Code Camp in Boise that I’ve attended, and sadly it was reduced to a single day because the organizers felt they didn’t have enough submissions. As I didn’t submit a talk this year, I suppose I’m at least partially to blame for that, but either way it was still a solid event.

There was substantially less JavaScript talk this year than in years past; the only talk strictly on the subject was an introductory jQuery talk. Of course, had I submitted a talk, it would have been an introductory talk on YUI3, meaning that we still wouldn’t have had much in the way of advanced JavaScript topics. There is definitely still a big audience for introductory JavaScript concepts in the greater developer community, but I’d love to see more advanced talks at this sort of event.

But in spite of the small number of JavaScript talks, there were still plenty of web talks at the conference this year, though the first talk I went to was one attempting to show the basics of what a ‘Monad’ is in Functional Programming2. I say attempting because I had a hard time drawing much from the talk. That might be because it was the first talk of the day, but I think it’s more that describing Monads is generally made more difficult than it ought to be.

It did occur to me, however, that at the most basic level, a Monad can be described as a container holding a homogeneous collection of data, whose methods are designed to support chaining commands together into a pipeline. Incidentally, this is very much how working with DOM nodes in jQuery or YUI3 works, though I’m pretty sure neither library would describe itself as ‘Monadic’, and the description probably isn’t wholly accurate, but I think it provides a working definition to help someone get started investigating the concept.
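To make the chaining idea concrete, here is a toy wrapper in the spirit of a jQuery or YUI3 node list. This is only my loose illustration of the pipeline notion, not a formal monad and not either library’s real API:

```javascript
// A toy chainable wrapper: each method returns a new wrapper around
// the transformed collection, so calls compose into a pipeline.
function Wrapped(items) { this.items = items; }
Wrapped.prototype.filter = function (fn) {
  return new Wrapped(this.items.filter(fn));
};
Wrapped.prototype.map = function (fn) {
  return new Wrapped(this.items.map(fn));
};
Wrapped.prototype.value = function () { return this.items; };

// Commands chain left to right, much like $('li').filter(...).addClass(...)
const result = new Wrapped([1, 2, 3, 4])
  .filter(function (n) { return n % 2 === 0; })
  .map(function (n) { return n * 10; })
  .value(); // [20, 40]
```

The key property is that every step hands back the same kind of container, so the next step always knows what it is working with.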

In the second hour, I attended Glenn Block’s3 talk on WCF and REST, which was really interesting. I had used WCF in .NET 3.5, and it was an improvement over the older web-service mechanisms that .NET provided. The new WCF, however, is amazingly customizable. Content negotiation is nearly trivial; Glenn showed off an easy way to generate vCard files based on the Accept headers sent from the client. Luckily there is a reasonable parallel of this talk from MVC Conf4 this year5. Having recently built a simple RESTful service in ASP.NET MVC, I find the tooling that WCF provides really interesting, plus it’s Open Source and available now6.
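The content-negotiation idea itself is framework-agnostic and easy to sketch. The following is a simplified stand-in of my own (it ignores q-value weighting and wildcards, and the renderers are made up), not WCF’s mechanism:

```javascript
// Pick a representation by walking the client's Accept list in order
// and returning the first media type we have a renderer for.
function negotiate(acceptHeader, renderers) {
  const requested = acceptHeader.split(',').map(function (t) {
    return t.split(';')[0].trim(); // drop q-values for this sketch
  });
  for (const type of requested) {
    if (renderers[type]) return { type: type, body: renderers[type]() };
  }
  return { type: 'text/plain', body: 'unsupported' };
}

const renderers = {
  'text/x-vcard': function () { return 'BEGIN:VCARD\nEND:VCARD'; },
  'application/json': function () { return '{"name":"Glenn"}'; }
};
const res = negotiate('text/x-vcard;q=0.9, application/json', renderers);
// res.type === 'text/x-vcard', so the client gets a vCard body
```

A real implementation would honor q-values and `*/*`, but the shape of the dispatch is the same.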

After lunch, I attended a talk on F# on the web, given by Ryan Riley7. Ryan has built a clone of Ruby’s Sinatra8 in F#, which reminded me a bit of Express.js9 on Node.js, in that the app is its own server and is based on routing paths to handlers. F#, particularly with its asynchronous processing, allows for very clean code when speccing out a web service. It’s still a work in progress, but definitely something to at least watch. Implicit callbacks in async processing are pretty cool.
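The routes-to-handlers style is what makes these frameworks read like a spec. Here is a minimal sketch of the idea in the Express.js vein; the names are my own, not Ryan’s framework or Express’s actual API:

```javascript
// A tiny router: route declarations map "METHOD path" keys to handler
// functions, and dispatch looks up the handler for an incoming request.
function Router() { this.routes = {}; }
Router.prototype.get = function (path, handler) {
  this.routes['GET ' + path] = handler;
  return this; // returning this lets declarations chain fluently
};
Router.prototype.dispatch = function (method, path) {
  const handler = this.routes[method + ' ' + path];
  return handler ? handler() : '404 Not Found';
};

const app = new Router()
  .get('/', function () { return 'hello'; })
  .get('/about', function () { return 'about page'; });
// app.dispatch('GET', '/') → 'hello'
```

In a real framework the handler would receive request and response objects, and the routing table would support path parameters, but the declaration-as-specification feel is the same.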

I attended Ole Dam’s leadership talk, which was really inspiring, but the slides don’t seem to be posted (unfortunately), and it’s hard to describe. The short version is that becoming a good leader requires work and care, and most of the leadership advice available is pretty terrible. I won’t say much more about it, but Ole apparently gives these talks all over the place for a relatively low cash outlay, so if given the opportunity to hear him speak, I’d suggest taking advantage.

Finally, I attended a talk on web performance measurement, covering the metrics that Google uses. They have some JavaScript on their homepage that measures how long things like image and script loading take to begin and end, and reports that back to the server. It was interesting, but I think I preferred what the Flickr guys mentioned in their YUIConf 2011 talk10: they measure only what they care about, which in Flickr’s case is when the images are loaded and when the scripts are loaded. They just don’t care about the rest. I was expecting more out of this talk than I got, since it was a really high-level look at Google’s JavaScript without much discussion of how to actually improve those numbers. I am, however, excited about the web timing specification11 from the W3C, implemented in Internet Explorer 9. That should be really interesting to have.
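The Flickr-style approach boils down to recording a handful of timestamps you actually care about and computing deltas from navigation start. Here is a sketch of that arithmetic; the mark names are my own invention (in a page you would capture them yourself, or pull comparable fields from the new timing API):

```javascript
// Compute the only two numbers this approach cares about: how long
// after navigation start the images and the scripts were ready.
function pageMetrics(marks) {
  return {
    imagesReady: marks.imagesLoaded - marks.navigationStart,
    scriptsReady: marks.scriptsLoaded - marks.navigationStart
  };
}

const m = pageMetrics({
  navigationStart: 1000, // millisecond timestamps captured in the page
  scriptsLoaded: 1400,
  imagesLoaded: 1850
});
// m.scriptsReady is 400ms, m.imagesReady is 850ms; beacon those two
// numbers back to the server and ignore everything else
```

The appeal is exactly that there is nothing else: two numbers per page view, each tied to something a user actually notices.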

Overall, the event wasn’t as valuable to me this year as in years past, but it was still an excellent event, particularly for one that is free to attendees. If nothing else, it’s a great opportunity to meet up with people that I only see once a year or so.