June 2010 Archives

Weight Is Not Health

Recently, Scott Hanselman, software technologist and diabetic, has been periodically tweeting details of daily weigh-ins, in an effort to use the community to hold himself accountable as he tries to lose a few pounds. And he is far from the first. Jason Calacanis was doing this some months back with the help of a wifi-enabled bathroom scale. I’m not sure it was that particular scale, but it was a similar product.

Now, I’m big. My weight is around 325 pounds (give or take a few), but I’m about six feet tall and broad shouldered, so I think I wear it better than many people at 325 pounds would. I ended up with a large share of my genetics from my paternal grandmother. She had a big family, and while I’m noticeably taller than her and her siblings, I still have their barrel-like body shape.

It’s not going to go away, but a few months back I began working out regularly, using the Student Recreation Center at Washington State University, which is both convenient and probably cheaper than any other local facility. However, while all my clothes fit noticeably looser than they did when I began, my weight has remained nearly constant. This is with at least four, and often five, visits to the gym per week.

While I do look forward to being thinner, the core reason for beginning an exercise program, and for modifying your eating habits, should be general health. For most of us, getting somewhat thinner/lighter should be a side effect of better health, not the goal.

One of the benefits at the SRC is that most of the cardio machines are equipped with USB interfaces that log progressive data about the state of the workout: current speed, heart rate, calories burned (or an estimate, at least), and a host of other data depending on the type of machine I’m on. With a chest-strap heart rate monitor (a free checkout), I can ensure that the measurements are reasonably accurate (occasionally the strap starts erroneously reporting my heart rate at around 225 beats per minute).

Of course, the only place I can use this data right now is the eFitness website that UREC works with, but I’m working on reverse engineering the data format. Ultimately, these are the metrics that matter more. Am I able to work out harder over time?

We need to stop focusing on one of the less important characteristics of health, our weight, and start focusing more on those other indicators, which largely come down to being able to perform at a higher level than before. Plus, daily weight fluctuations are less than useless: they’re apt to be largely water-based, or dependent on the size of your last meal. I suspect the real reason people look at weight is that it’s an easy metric to get; it’s convenient. However, it’s simply not an accurate measure of health, which is the real end goal we should have in mind.

Not Mono is Not a Feature

Recently, it was announced that in Ubuntu 10.10 (Maverick Meerkat), F-Spot is going to be replaced with Shotwell. I’ve used F-Spot for a while, mostly by virtue of it being the default photo manager in Ubuntu, but I’m not particularly attached to it; I don’t yet have an opinion on Shotwell either. This post is about how almost every story I’ve seen on the subject has greatly stressed that Shotwell isn’t based on Mono.

Not being based on Mono is not a feature.

I understand our community’s distrust of Microsoft. I’ve been there. I get it. I’ve been that person. However, while Microsoft may have created .NET, of which Mono is a free reimplementation, that does not mean that Microsoft is going to kill Mono. I’d argue that, at this point, Microsoft needs Mono for .NET more than Mono needs Microsoft. Mono has proven to be a major incubator of new ideas that Microsoft has rebuilt into the core of .NET (consider Mono.AddIns, which I believe is a direct precursor to MEF), and Microsoft is rebuilding their C# compiler in managed code so that they can support a C# REPL, like Mono can today. Microsoft has benefited greatly from the Mono community and open source .NET, and they’ve given a lot back, in the form of the DLR, MEF, ASP.NET MVC, and a lot more.

And, if Mono needed to, they could drop the Microsoft Interop stuff and be a fully ECMA-compliant CLR implementation, with good support on Linux through great libraries like GTK#, and on the Mac through the upcoming MonoMac work.

Further, Microsoft has even gone so far as to validate the Mono project through Moonlight, and they’ve had over ten years to move against the project if they wanted to. There is no reason to expect them to start now, when Microsoft is consistently giving back to Mono and others.

Still, the community goes apeshit raging against Mono, complaining about the use of Tomboy and F-Spot in Ubuntu. .NET is a great platform, with a solid programming experience. It’s a hell of a lot easier to avoid common programming mistakes in .NET than it is in C or C++ (both languages I like). It’s solid technology. There is no technical reason to be opposed to .NET; it’s purely political. And while Microsoft may be hard to trust in patent terms with regard to Linux and the operating system stack, there is not only no evidence that Microsoft will target Mono or open-source users of .NET, there is plenty of active evidence going the other way.

So please, while there may be plenty of reasons that F-Spot is inferior to Shotwell, F-Spot being written in .NET using Mono shouldn’t even be on the list of its ‘flaws’. That makes about as much sense as me refusing to use something like Vym or Scribus because they’re written in Qt and not my favored GTK+/GNOME.

How Important is the Phone in Smartphone?

Yesterday evening, as a lark I tweeted out the following:

If I was AT&T and I had to make selective QOS decisions, I’d drop iPhone calls first. What are they gonna do? Buy another phone? Doubt it.

Mostly, I was joking. I don’t think AT&T should do this. As a long-time AT&T customer (though I’ve never owned an iPhone), I think the general state of the AT&T network is abysmal. This is a company that desperately needs to invest in infrastructure, but who seems to be going about it in the slowest way possible.

And the general idea came from an incident during Hurricane Katrina, whereby Something Awful, a web community I was heavily involved with at the time, was shut down for…well, let me just quote Richard Kyanka, founder of Something Awful.

Something Awful, one of Zipa / DirectNICs most bandwidth-intensive customers, had to be taken offline so their other 8,000 customers could stay up. This is comparable to shooting the morbidly obese 900-pound fat guy taking up space in a nuclear fallout shelter so 20 other people can fit into the area he was consuming. And, judging by the rescue footage of New Orleans, this is a scenario we might very well encounter in the near future.

The Zipa / DirectNIC guys were doing an amazing job, but in the end, SA was simply too big to deal with as their OC-3 lines started dropping off like flies. The story of the saga was posted by Rich shortly after SA was brought back up, including some unfortunate incidents with PayPal, which is, incidentally, why I’ll never use PayPal for fundraising. Ever.

The other reason I made the comment is that, to this day, I hear iPhone owners complaining about poor service from AT&T in their regions. Those complaints largely stem from the fact that AT&T simply doesn’t have the capacity to deal with the load, especially in busy areas, but I firmly believe that there is also something seriously wrong with either the iPhone’s radio, antenna, or baseband drivers. In our region, I’d had connection problems with my Android Dev Phone 1 until AT&T installed a second tower nearby; immediately, my connection problems ceased to exist. There are a few places where I can’t get a signal, but they’re places that are surrounded by metal or concrete (or both), so it makes sense that they have reception problems.

Several of the iPhone users I know saw things get much better, but they still have complaints, and I don’t think the answer is as simple as continuing to make fun of AT&T’s failures as a carrier. The iPhone 4, with its very nice looking antenna integrated into the case, might do better for reception than today’s iPhone, but that remains to be seen in the field.

But it raised an interesting thought for me. I’ve been on AT&T for several years, and it’s been good enough for me. My Android phone, a device designed for T-Mobile’s network, didn’t pull me away from AT&T, largely because what I wanted was an unlocked handset with the engineering firmware. Plus, I actively wanted to stay on a GSM network, so non-GSM providers have been deal-breakers for me. I like being free to swap out a SIM card to keep using my service.

However, at this time, I’m really watching the coming Android devices to see where they land, and debating whether I should take my wife’s and my business to T-Mobile to get better devices, though today they don’t have anything too exciting.

My point is that, while I’m pretty committed to the Android platform, I’m not tied to my cellular provider. iPhone users, at least in the US, don’t have that option. So, it leaves me asking, just how important is the ‘phone’ part of our smartphones?

Why is it an acceptable joke that our phones are good for almost anything…except making calls?

Given that iPhone users are so devoted to their phones, even through constant complaints about poor service, why not further reduce their service quality, if it better serves the rest of the customer base?

Ultimately, smartphones and their ilk change the way we interact with data in an immensely personal way. I think they’re a strong step forward toward a day of truly personal computing. However, they depend on a strong connection, not only for making calls, but for data. Yes, almost all of these phones support Wifi, but the point is to have an always-on data connection, and there are definitely huge areas outside of Wifi clouds (or at least open and available Wifi clouds). It’s interesting how willing we all are to put up with lousy data service for devices that are mostly useless without that ongoing connection.

YUI Grids on Movable Type

As anyone who’s read my blog before will notice, over the weekend I upgraded my blog template to use YUI Grids and YUI3 for the JavaScript. By switching away from the MT templates (or, the templates that were standard when I installed the first versions of MTOS 4), I was able to reduce the HTML pageweight by damn near half. The old templates were really div-heavy, and had a ton of extra markup. Mostly, the decision was driven by a desire to redo the visual feel of my blog, and I felt that I may as well rewrite under YUI Grids while I was at it.

I chose to use the YUI 2.7 CSS Reset-Fonts-Grids, since I was more familiar with the Foundation in 2.x, and I don’t think things changed much for Reset and Fonts in 3.x anyway. Actually, rewriting the HTML templates was pretty simple, and I’m planning to share them, if I can figure out how to share them easily (the templates don’t seem to export well).

As for how I got the color scheme, I used the excellent Color Scheme Designer, specifically with this key, and I’m pretty happy with it. This color scheme designer is awesome, including letting you simulate how the colors would appear to a color-blind user. If you don’t have this bookmarked yet, you should, especially if you are, like me, not much of a designer. It provides me with a collection of colors that I can play with and tweak to try to get things right, and I do think I’ve got some tweaking I still need to do.

For the JavaScript, I used YUI 3, mostly trying to redo the absolutely necessary functions from the Movable Type JavaScript, and I’m pretty sure I got it right. I further plan to migrate the Y! Buzz, StumbleUpon, and Flattr sections to be dynamically added via JavaScript. The Flattr button in particular has proven to be really slow, so I need to move that to deferred loading.
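
Roughly, the deferred loading I have in mind would look something like the sketch below, using YUI 3’s Y.Get to pull the widget script in only once the page has rendered. The script URL and the #flattr-placeholder element here are stand-ins of my own, not Flattr’s actual embed code.

// Sketch of deferred widget loading with YUI 3. The URL and element id are placeholders.
YUI().use('node', 'event', function (Y) {
    Y.on('domready', function () {
        // Fetch the third-party button script only after the page has rendered,
        // so a slow external host can't hold up the initial page load.
        Y.Get.script('http://example.com/flattr-button.js', {
            onSuccess: function () {
                Y.one('#flattr-placeholder').setContent('Flattr this');
            },
            onFailure: function () {
                // If the widget host is down, the rest of the page is unaffected.
            }
        });
    });
});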

I also upgraded to the latest version of the Syntax Highlighter script from Alex Gorbatchev, which is a ton better, and didn’t require any of the hacking I did to the old version to let it generate standards-compliant markup. I may do some deferred execution work on that, and try to build it to load only the brushes needed, but I haven’t determined how best to do that (or if it’s worth it).

In all, I’m really glad to have the visual refresh done, and mostly to be done with the rebuilding of my skins. I think things are greatly improved in this way, and I hope that people enjoy the new look and feel. I also plan to do some more posts in the coming weeks detailing some of the JS work that I’ll be doing on the blog, so that other people can take advantage of those techniques.

Electricity from Algae

Mike Thompson, a British designer working in the Netherlands, recently published the design for a solar-powered lamp that uses water, CO2, and blue-green algae to charge a small battery, which then powers a small LED when light is needed.

I was pretty excited when I saw the story cross my feed reader. In high school, I spent some time doing a bit of research on a similar battery, using blue-green algae and a purple chemical whose name I cannot recall, which would strip the electrons from chloroplasts as they were excited by a photon of light. The reaction was stimulated by heat as well; you’d get a sharp rise in the current drawn from the reaction. However, the chemical had a saturation point, and after a while it couldn’t draw any more electrons. We did some work to see if there was a way to refresh the reaction, but got nowhere.

So, seeing something that functions almost identically to work I was dabbling in over ten years ago was pretty interesting.

Now, reading the PDF linked above, the mechanism for this is a lot more involved. It requires inserting 30-nanometer wires into a chloroplast, and it’s able to produce about 0.6 milliamps per square centimeter. That’s not much, but if all it’s going to do is charge a battery to power a small lamp, then it doesn’t need to be much. I don’t remember the exact output of the reaction I was working on all those years ago, but I do remember it producing more current, though for an immensely shorter time.

However, once done, you have a system that requires only that you add water and CO2. And, if you read the document, a large part of the reasoning behind this is to make people more cognizant of their energy usage by making the generation of that energy more personal.

I think this is a really interesting project, but I’m not sure it’s sustainable. Cory Doctorow, in his new book For The Win, claims several times that all the gold ever mined on Earth would form a block no larger than a regulation tennis court, but that the certificates for gold sold amount to roughly double that. Gold has value because it’s rare. Its current value, according to Yahoo! Finance, is over $1200 per ounce.

Admittedly, the amount of gold in a 30-nanometer wire is minuscule, and if anything is going to stand in the way of this process, it’s more likely to be the difficulty of building the device than the cost of the gold needed to build it. However, the general idea of making people more cognizant of their energy use decisions is valuable, and I’m in favor of any research that focuses on generating energy in novel ways; I think algae could be a really interesting source of electrical power if we can find an easier way to recover that energy. I’m not sure this method can be done at scale, but in a world where 2.6 billion people still defecate openly, by streams, rivers, or lakes, generating energy on a small scale to provide light into the evening is certainly not going to hurt anyone.

A Micropayments System That Might Actually Work

I’ve been interested in the micropayments space for quite a while. The web is full of content that is great and helpful, but the barriers to actually paying for it are pretty high. Using something like PayPal, as many sites do, is problematic because PayPal’s fees are high, and it takes a LOT of clicks (and typing) to make the donation.

My feeling has been that what we need is a company that lets you put a few dollars in an account, and then, with a few clicks, donate a few cents to whatever you’re interested in supporting. It appears, however, that someone may have beaten me to the punch. Flattr is a recently released project based out of Sweden that takes this micropayments problem and makes it dead simple for the end user.

How does Flattr work? You begin by setting up a Flattr account, at this time either by requesting an invite or getting an invite code from a friend. Then you seed your Flattr account with funds; it’s worth funding several months at a time to avoid being too heavily dinged on processing fees. Choose how much money you want to give per month, from €2 to €100. From there, you just keep an eye out for “Flattr This” buttons, which you can see at the bottom of this post (and every post on my blog), or in my sidebar. Your monthly Flattr budget will be split evenly among everything that you’ve Flattred.

If you’re a content provider, then Flattr does require that you Flattr things every month in order to qualify to earn any revenue, which I see as more of a way to seed the system. I don’t see it as a big problem; you should always be able to find something to Flattr. If there is a weakness to Flattr right now, it’s that the beta status, and particularly the slow ramp-up of the invite system, weakens Flattr’s ability to reach critical mass and usefulness. I’ve put Flattr up on my blog in hopes of earning even a small amount of revenue (my goal is to make my blog at least pay its own expenses). However, I suspect most of my readers have never heard of Flattr, and even now, having read about it, that doesn’t make it trivial for you to sign up and begin using it.

Some people I’ve spoken to about Flattr feel that the equal disbursement Flattr does every month isn’t ideal; they want to be able to add ‘weight’ to their Flattrs, giving some people more than others each month. Frankly, I think that overcomplicates the problem. (Flattr only allows you to Flattr something once per month, and that something can be a blog or an individual entry. Flattring an entry doesn’t preclude you from Flattring another entry, or the blog, however.) The beauty of Flattr is that you just don’t need to think about what you’re donating. I’ve allocated €2 per month for the time being, and if I want to Flattr something, I don’t need to decide how much to give. It’s a one-click process, requiring no more thought than ‘That was awesome, I want to donate to that.’ It’s this unthinking simplicity, and the ease of budgeting a static amount per month, that makes Flattr appealing to most users.
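
For what it’s worth, the even split really is as simple as it sounds. As a rough illustration (my own sketch of the mechanics, not anything Flattr has published), a month’s budget works out like this:

// Illustrative only: my understanding of Flattr's even-split disbursement, not their actual code.
function splitBudget(monthlyBudget, flattredThings) {
    var share = monthlyBudget / flattredThings.length;
    return flattredThings.map(function (thing) {
        return { thing: thing, amount: share };
    });
}

// A €2 budget spread over 8 clicks that month: each creator gets €0.25,
// and I never had to decide on an amount for any single click.
var disbursement = splitBudget(2.00,
    ['post-a', 'post-b', 'blog-c', 'song-d', 'post-e', 'post-f', 'comic-g', 'post-h']);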

That said, I don’t think that Flattr is the end of the story of Micropayments on the web. It seems great for many uses, but I think there is room for those more…thoughtful…disbursements where you do want to decide how much to donate. However, I think Flattr has a lot of potential, and I sincerely hope that they can continue to grow and thrive.

Simple Web Debugging in Fiddler2

Fiddler2 is a great tool for doing web inspection on Windows, and it also provides excellent tooling for modifying data in HTTP requests. On Linux, I usually use WebScarab or Paros, both of which are cross-platform Java tools, but when I’m on Windows I find Fiddler to be the better project.

To provide a simple example of using Fiddler2 for web debugging, I’m going to walk through the workflow for the Doctor Who: The Adventure Games geolocation check, and discuss how it can be examined using Fiddler (as I was on Linux, I used tcpdump and wireshark, but that gave me a TON of noise). First, install and start Fiddler. I would probably immediately go to the File menu and make sure that ‘Capture Traffic’ is turned off, so that it doesn’t overwhelm you immediately. Then, go to the Filters tab on the right part of the screen:

Fiddler BBC.co.uk Filter

You’ll then want to set the options the way that I have above. Set the Host Filter to show only the following hosts, and put www.bbc.co.uk in the list. Next, filter to only the /doctorwho/tag/api/geo/isukrequest URL path. Finally, we want to break on the response, so set it to break responses with Content-Type: text/plain. This will ensure that we can intercept the response for any geolocation request to this particular API.

Fire up the installer and watch your Fiddler window until you see the following show up in the Web Sessions window:

Fiddler BBC.co.uk Breakpoint

Click on that line, and then choose the “Inspectors” tab, instead of the “Filters” tab on the right. You should then see the following on the bottom half of the view:

Fiddler BBC.co.uk Response Playback

From the drop-down select the “200_SimpleHTML.dat” option, then click on the “TextView” option, and change the content to 1. Click Run To Completion, and you should be done.
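
As an aside, if you get tired of clicking through the breakpoint for every request, the same rewrite can, as far as I can tell, be scripted in Fiddler’s Rules > Customize Rules… file (FiddlerScript). Something along these lines, added inside the existing OnBeforeResponse handler, should do it, though treat it as a sketch I haven’t run rather than a tested rule:

// FiddlerScript sketch: force the geolocation answer to '1' for every matching request.
if (oSession.HostnameIs("www.bbc.co.uk") &&
    oSession.uriContains("/doctorwho/tag/api/geo/isukrequest")) {
    oSession.utilDecodeResponse();      // strip any chunking/compression before editing the body
    oSession.responseCode = 200;        // the real answer comes back as a 403
    oSession.utilSetResponseBody("1");  // '1' is the 'yes, this is a UK request' answer
}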

Fiddler2 is an awesome tool for web developers, but it, or another web proxy, should also be a core tool in any security toolset. Seeing how an application responds to simple tweaks of its input or output is really interesting, and is at the root of most security research. If Fiddler has any apparent weakness, it’s that I don’t see a good way to automate fuzzing responses and the like; however, that may just be something I’m not seeing.

Installing Doctor Who: The Adventure Games Outside Britain

This weekend saw the launch of Doctor Who: The Adventure Games, a series of downloadable adventure games starring The Doctor and Amy Pond which coincides with the currently running series of Doctor Who.

Many people world-wide (and in Britain) have been disappointed with their attempts to install the game, since the installer calls out to the BBC to determine whether or not you’re in the UK. I have a friend in London who can’t install the game because of this check, which just seems laughable to me.

Knowing that the geolocation check in the installer was almost certainly related to a web call, I set up tcpdump to capture all the network traffic on my machine while I attempted the install. I then loaded the resulting pcap file into wireshark, and filtered out all the traffic not going to the BBC netblock with the following filter rule: ip.addr >= 212.58.224.0 && ip.addr <= 212.58.224.255.

This showed a single HTTP request to the following URL: http://www.bbc.co.uk/doctorwho/tag/api/geo/isukrequest

The response was a 403 with a 0 in the body.

I then decided to test a frighteningly simple theory. I started up the nginx instance on my machine, dropped a file named isukrequest containing 1 in the /var/www/nginx-default/doctorwho/tag/api/geo directory on my Ubuntu 10.04 machine, added a line containing 127.0.0.1 www.bbc.co.uk to my /etc/hosts file, and began the install.

Note: Actually, I had to disable the /doc location section in the /etc/nginx/sites-available/default file, as it matched the /doctorwho request and completely messed it up.

The install worked perfectly, and I was able to launch the game in Wine 1.2 (from the Wine PPA) on Ubuntu 10.04 while running in Wine’s Virtual Desktop mode. I haven’t tried full-screen just yet, and the option to let DirectX programs lock the cursor doesn’t seem to work very well, but the game is playable. I’ll have a more thorough review later this week, once I’ve had an opportunity to play through it.

I expect the BBC will eventually release a version that isn’t UK-exclusive. Some people feel that they’ll charge for it, since many UK residents feel the reason they’re getting the game at no charge is that it was developed using their TV Licensing fees. That may well be true, and if the game gets released outside of the UK with a charge associated with it (there has been no word from the BBC about this possibility, to the best of my knowledge), I would encourage people to consider paying for it. However, I would encourage the BBC to use this series of games purely as a way to drive interest in this season of Doctor Who, which has been, by far, the best since the show relaunched.

I was also surprised by just how easy circumventing this was. There was no encryption. No handshake. No reverse engineering was required, just a tiny bit of observation of the traffic on the wire, and setting up a web server on your own system. The ‘attack’ on this system is completely trivial, not even running afoul of anti-reverse-engineering provisions in certain laws (which I disagree with). A simple challenge-response handshake would have made this task at least somewhat challenging, and would have put the check under the protection of those anti-reverse-engineering clauses.
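
To be clear about what I mean by a challenge-response handshake, here’s a rough illustration of the idea, written as Node.js-style JavaScript with a made-up shared secret; it’s a sketch of the general technique, not anything the BBC actually does:

// Illustrative challenge-response sketch, not the BBC's actual scheme.
var crypto = require('crypto');

// In a real deployment this secret would be baked into the installer (and obfuscated).
var sharedSecret = 'hypothetical-shared-secret';

// Server side: send a random nonce alongside the yes/no verdict.
function makeNonce() {
    return crypto.randomBytes(16).toString('hex');
}

// Both sides: the signature ties the verdict to this particular nonce.
function signVerdict(nonce, isUk) {
    return crypto.createHmac('sha256', sharedSecret)
                 .update(nonce + ':' + (isUk ? '1' : '0'))
                 .digest('hex');
}

// Client side (the installer): reject any response whose signature doesn't match,
// so a spoofed web server that simply answers '1' no longer works.
function verifyResponse(nonce, isUk, signature) {
    return signVerdict(nonce, isUk) === signature;
}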

LINQ to JavaScript

I’ve been using LINQ in C# since shortly after Visual Studio 2008 was released. It was an awesome addition to the language, allowing you to query a database model in a manner identical to how you deal with in-memory collections. Admittedly, LINQ is basically an entire data-munging framework built around various classic functional programming constructs such as map and reduce.

If LINQ has a problem, it’s that writing a new provider can be really difficult, since it requires a pretty deep understanding of .NET expression trees; however, most people won’t ever need to do that. Plus, LINQ is one of the primary reasons why extension methods were added to .NET, and even if LINQ weren’t as great as it is, extension methods are something to get excited about in a statically-typed language.

Anyway, for manipulating pure data, LINQ is a wonderful tool, and Chris Pietschmann has released a version of LINQ for JavaScript. And it’s under the flexible Microsoft Reciprocal License, so it’s easy to download and use. It’s about 7k raw, with very clean code, and about .7k after minification and gzip, for a minimal impact on page weight. And, if you’ve ever written LINQ, the syntax is basically identical to LINQ’s lambda syntax.

var myList = [
  {FirstName:"Chris",LastName:"Pearson"},
  {FirstName:"Kate",LastName:"Johnson"},
  {FirstName:"Josh",LastName:"Sutherland"},
  {FirstName:"John",LastName:"Ronald"},
  {FirstName:"Steve",LastName:"Pinkerton"}
];
var exampleArray =
    JSLINQ(myList)
        .Where(function(item){ return item.FirstName.match(/^Jo/); })
        .OrderBy(function(item) { return item.FirstName; })
        .Select(function(item){ return item.FirstName; });

/**
 * exampleArray = [ "John", "Josh" ]
 **/

Now, the blurb that Chris wrote for JSLINQ is a bit wrong. He says the following:

LINQ to JavaScript (JSLINQ for short) is an implementation of LINQ to Objects implemented in JavaScript. It is built using a set of extension methods built on top of the JavaScript Array object. If you are using an Array, you can use JSLINQ.

The important distinction is that JSLINQ also works on array-like objects, such as HTMLCollections. Also, I first read the ‘extension methods’ comment to mean that he was modifying Array.prototype. Thankfully, that’s not the case, which is why you need to wrap your collection in a JSLINQ() call first.

Personally, I think the ability to query the DOM with this is less useful than the ability to query over data sets, especially since YUI3 already provides pretty rich support for that (using the CSS selectors built into modern browsers). However, the ability to take an existing list and filter it down using LINQ-style expressions can be very useful. Note that YUI3’s NodeList is not an array-like object, so when querying the DOM there are basically two options.

Both of the following examples will get all the anchor tags on a given page which reference internal links (links to the same domain as the page):

JSLINQ(Y.NodeList.getDOMNodes(Y.all('a')))
    .Where(function(item){ return item.href.match(location.host); })
    .Select(function(item){ return item; });

Y.all('a[href*="' + location.host + '"]');

The second example does require the ‘selector-css3’ submodule to be loaded (it isn’t by default), but it’s a far cleaner syntax for querying the DOM, so the JSLINQ examples that query the DOM aren’t necessarily the most useful. However, if you need to do any client-side filtering or munging of data, LINQ is a really nice syntax for doing that: SQL-like, but still unique to your environment. If you’re already familiar with .NET 3.5 and LINQ in that world, then I’d suggest looking at JSLINQ for your projects; it’s familiar, and it looks as though it should work well.
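
As a quick sketch of that data-munging case (the data below is made up, and I’m only using the methods shown above), filtering and reshaping a plain list of objects looks like this:

// Made-up sample data; the point is filtering and ordering plain objects rather than DOM nodes.
var entries = [
    { title: "Entry A", tag: "linux",      published: "2010-06-01" },
    { title: "Entry B", tag: "javascript", published: "2010-06-15" },
    { title: "Entry C", tag: "javascript", published: "2010-06-08" }
];

// Just the JavaScript-tagged titles, oldest first.
var jsTitles =
    JSLINQ(entries)
        .Where(function(item){ return item.tag === "javascript"; })
        .OrderBy(function(item){ return item.published; })
        .Select(function(item){ return item.title; });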

The Power of Vim

I am a reformed emacs user.

Used it for years, on a daily basis, as my sole programming environment. Eventually, I got tired of it. It took forever to start up (I didn’t just leave it open); it wasn’t always available, like when booting from a Live CD or something; and it was disturbingly overkill for any simple edit to a configuration file. Unfortunately, the default editors that were (and are) on most Linux distributions are dinosaurs like pico or nano, which are downright unpleasant to use.

Oh, sure, I’d tried Vi before, but it had seemed so obtuse. A modal text editor? Why would I want modes on my text editor? But over time my love for emacs waned. Doing simple things like copying text required mastery of obscure chorded commands, and hardly an incantation existed that didn’t require a half dozen modifier keys. It was the desire for a text editor that I didn’t mind opening for quick edits, one that I didn’t feel I needed to marry just to learn to use, that originally drew me to vim.

Soon, switching between entry modes was second nature. The first time I wanted to search, and found it no further away than the Perl regex language…I knew I couldn’t go back.

Then, I found myself out in the workplace, a free software hacker trapped in a Microsoft ghetto (it’s a big ghetto, don’t get me wrong). Sure, I could have installed Vim on Windows, and I did, but I’ve always found that Visual Studio makes for a painful tool when trying to use other editors. Sure, others have done it, but it wasn’t worth it for me. However, recently, I’ve decided to refocus on learning vim. In part, because I’m tired of hearing people brag about features of their editors that Vim has had forever, but also because those skills finally translate into my day job, now that there are bindings for Visual Studio 2010 to behave like vim.

Still, there is much to learn. Rob Conery, a former Microsoftie who’s since gone out on his own, has started posting Vim Kata on his blog, an extension of the code katas that have gained popularity lately; each is a set of exercises you can repeat to get a little better at certain vim commands. Plus, Drew Neil is doing weekly vimcasts, which lack the repetition of the katas, but provide great information and more explanation than Conery does.

Lately I’ve also been looking at modules for vim to make it more complete. To help with my recent YUI3 presentation, I started using SnipMate, a port of TextMate’s snippets to vim, creating a custom snippets file for that. I’ve installed the Gist script, to make working with Github Gists easier. I’ve been learning to configure vim so that I automatically get the file formatting rules I want for whatever platform I’m working with.

I won’t say that Vim isn’t a complex editor, and that the modal editing takes some getting used to. However, I will say that since I’ve really worked to learn Vim, I’ve found there is a logic to everything it does, and that my programming has probably gotten faster. Certainly, there is something to be said for the tools in Visual Studio which aid in refactoring, or IntelliSense, which helps make sense of an immensely complicated API. However, ultimately these editors are just text editors, and I’m increasingly convinced that no editor mangles text as efficiently as vim.