November 2009 Archives

Chromium OS is not the OS for You and Me

Wednesday morning I was looking over my Twitter feed and saw a tweet from Scott Hanselman, seeming to complain that Chromium OS was little more than a bootable web browser, which more than suggested to me that Scott simply didn’t “get it”.

Chromium OS is being sold to us as nothing more than a bootable browser, because from Google’s perspective, that’s all that most users need. And there is definitely something to that.

I know a lot of users who use their computers as nothing but a gateway to the Internet. They do all their gaming in the browser (usually via something like Facebook), their e-mail is web-based, and they quite literally have little or no need for anything but a browser.

Chromium OS is not a geek OS, though. I live in vim, at the command line, and through ssh sessions. For me, Chromium OS is a non-starter, and while I might try it on my Eee PC, possibly contribute, and definitely suggest it for other people, it will never replace Ubuntu for me. And I suspect that Hanselman will never be happy with it either. And that’s fine, Chromium OS wasn’t designed for us.

What Chromium OS does represent is sort of a full-circle attitude toward computing. In the early days, it was all about mainframe computers and dumb terminals. It wasn’t until the late 1970s and early 1980s, when prices on hardware started to drop, and the PC began to enter the home, that people started to question the need for the monolithic computing environments that had persisted until then.

Now we have the same thing: people want to take their data with them wherever they go, so computing is starting to migrate toward the “cloud”, so that data can always be available. Google does this with any number of services, from GMail to Google Contacts to Picasa and the like. Microsoft is also going down this path, offering monolithic services for users to take advantage of while Microsoft holds their data for them.

More and more, the role of the computer, for many users, is becoming little more than a gateway to the Cloud, and all that sort of user needs is a web browser.

Myself, I don’t believe we’ll ever be completely cloud based, I think it’s more likely that as data density and interfaces get better, we’ll be able to carry all of our data with us, and have it be accessible across everything. But, for this moment, where people are entrusting more and more data to the cloud, Chromium OS is all the OS that a lot of users will need, especially on netbook class devices.

Minor Thanks: Symphony of Science

On last week’s Science Advocacy post, I included a video from the Symphony of Science, but given that they just released a new video on Monday, I figured this would be a great opportunity to give some thanks for the work this composer is doing.

The new song, entitled “Our Place in the Cosmos”, features clips of Carl Sagan, Richard Dawkins, Michio Kaku and Robert Jastrow, and it’s pretty awesome.

Now, this really isn’t quite what I meant when I said that we need a new voice for Science in America, but it’s still awesome work that I hope will catch some people’s attention and imagination, because at this point, every little bit helps.

Sustainable Holidays and Life in General

This week is, of course, the Thanksgiving Holiday, and we, like many, are traveling to visit family. Since we are not running the holiday, and there will likely be over two dozen people at dinner, I’m sure there are some decisions being made that might not have been otherwise. Rather than a sustainably raised heritage turkey, I expect to see a Tyson-brand frankenbird grace our table. I have little doubt that many of the vegetables we’ll have will be purchased with little consideration to where they came from and how far they had to travel to reach our plates, let alone the means used to grow them. I’m sure there will be food prepared from boxes containing ingredients that only an industrial chemist could love.

Of course, none of this will stop me from eating any of it. I’m a pragmatist, and for me, spending the time with family for Thanksgiving and the coming Christmas is more important than these high ideals that I try to live by the rest of the time. Sure, I’ll try to plant the seeds of change, and perhaps nudge things, but ultimately the holiday is more important.

But, we still need to do our part, particularly where what is frugal and what is sustainable cross paths. We’re visiting my in-laws this weekend, and we’ll be borrowing a vehicle from them that is several miles-per-gallon more efficient than our pickup for the trip, at least until we can get our pickup fixed, which should help its efficiency immensely. And what we eat outside of the big feast will, at least half the time, not have the same kinds of concerns.

Christmas, to me, is the bigger violator of sustainable practices, since it’s a holiday that revolves entirely around consumption of everything, not just food. Of course, people are already asking what we want for Christmas, but as I sit in our new condo and look around at the stuff we still have in boxes because we’re not sure where to put it, I’m hard pressed to come up with anything. Now, the new place isn’t really any smaller than our last place, it’s just laid out in a way that favors living space over storage, where the last place took the other direction on that tradeoff.

Ultimately, we want or need very little new stuff. What stuff I can think of are things that would replace the need for other things, though those tend to be more expensive, or household gadgets that would make it easier to make more of my own food from scratch (rolling out noodles by hand sucks).

We had, for a while, considered buying a larger condo than we did, and while we certainly could have afforded to do so, we’re left thinking now that all the extra space would have done for us is let us fill it with more stuff. Even when we have kids, I’m not sure I’d want to see our primary living space (ie, the house) exceed 2000 square feet on the highest end (though I would want an outbuilding for a workshop).

I’m giving a lot of thought to trying to start a hackerspace in Pullman, because more and more I want the ability to indulge in creative instinct, but I don’t always have the time or money to buy the tools that I would need myself. What I really want, these days, are things that’ll make it easier to indulge that creative impulse, to learn. Ultimately, those are the gifts that would be most valuable to me, those that would allow me to create, and to create gifts for others that will hopefully mean something to them as well.

Science Advocacy

I’ve always been really interested in Science, and while my career has taken me to computers, and software in particular, I still try to keep up, at least in a superficial sense, on what is going on in scientific research. In the last few years, this has involved getting a crash course on modern evolutionary theory, since my wife is a researcher in that field, but more than that, it’s a subject that (miraculously) has been the center of an increasing debate in the last few decades, so evolution is something that anyone with an interest in science should at least have a basic understanding of.

Today, at least in the United States, there seems to be a war on science, at least in the public eye. We have scientific principles that have decades of evidence and research backing them up, that some people claim are simply wrong, even though their entire argument is based on the fact that the body of knowledge can’t yet explain everything. We have states that have passed laws to counteract scientific consensus.

Maybe ‘war on science’ is too conservative a claim. This is pretty much a war on common sense at this point.

But, when you look at the scientific community, it’s clear why these problems exist. Scientists suck at selling their ideas and work to non-scientists; hell, even explaining them can be a challenge. But, I’ve talked about this before.

This is about the need for advocates. If not the scientists themselves, then those of us who follow what’s going on in scientific research, and who are willing to take the time to learn things well enough to explain them. We need bloggers and podcasters and everyone else to take the time to have reasonable discourse with people who deny scientific consensus, to find out why, and to respectfully inform people why the consensus is what it is.

The scientific community has, regrettably, lost its two greatest advocates to the public in the last fifteen years, and both died very young. Stephen Jay Gould is responsible for a large body of modern evolutionary theory, from punctuated equilibrium to heterochrony and beyond. By all measures, a highly accomplished scientist. But more than that, he was a prolific writer of material that could be marketed to non-scientists, and he spent a lot of time on television from the 1980s on, including a guest spot on The Simpsons. Now, he’s been criticized by some in the scientific community for not always presenting the cutting edge of evolutionary theory to the public, but it’s the nature of science for scientists to disagree with one another. The main thing is that Gould was able to address the issue of evolution intelligently and approachably for the general public.

And then there was, arguably, the most famous astronomer who ever lived: Carl Sagan. Sagan took the sort of advocacy Gould was doing to a whole other level (or actually, Gould never quite managed to reach Sagan’s level of advocacy), including the often-rebroadcast PBS series, “Cosmos”, where Sagan talked to people about the origin of life and the universe. He appeared with Johnny Carson on The Tonight Show, and was recently the subject of an xkcd cartoon.

Sagan, if only for his advocacy, is a legendary figure in science, and one of the best advocates that science has ever had, and we’re desperate for a new advocate. I leave you with this mash-up of footage from Sagan’s “Cosmos”, and hope that we’ll see that advocate soon.

This video, and others, are courtesy of The Symphony of Science.

The Problem of Monetizing Content on the Internet

Last week, News Corp CEO Rupert Murdoch announced that News Corp was seriously considering blocking all of their content from Google. Mind you, this is the same man who’s been talking about putting a pay wall up around Hulu, or at least parts of it.

Let me address the Hulu thing first, since I’m an avid Hulu user. If Hulu goes pay, I won’t say I’ll never use it, but I will only pay for Hulu if they dramatically increase the available content. I would pay for Hulu if I could then watch any episode of any show at any time. And if the subscription rate was right, I wouldn’t even bitch if the commercials (as they stand today) stay. That said, if the price is too high, or the amount of available content doesn’t improve, I’ll leave Hulu, sadly perhaps, but I will.

Murdoch’s argument is that content has value, and we, as consumers, need to be willing to pay fair market value for it. I agree, but Murdoch needs to understand that, at least as it relates to their television outlets, or even their traditional web-based businesses, users are fully accustomed to getting this content for free. At the end of the day, even users who pay a monthly fee for cable service tend to view the programming as free content, if nothing else because the sheer amount of content we get for a relatively low price on cable makes any individual piece of content seem far less valuable. Even in the newspaper and magazine realm, the subscription charges mostly cover delivery of the content, not generation of the content.

Music, movies and books don’t have this problem of people not wanting to pay for the content, because that content has never really been ad-supported, so we have no expectation of getting it for little or no cost, save the cost charged by the provider (ie, 99 cents a song). Admittedly, the price of this content has been dropping, but this seems to be at least in part because there is so much more content than there used to be. Cory Doctorow often talks about the information economy, and how we as a country have moved away from manufacturing to generating information. This low apparent value of content is a side effect of this world we’ve created. Generating content in the form of video and music is easy, just look at YouTube, Vimeo, and so on. But as we get more content, the content competes with itself, and eventually, prices have to drop. That’s the nature of the information markets. On the other hand, it’s easier than ever to reach more people, so selling these low-cost copies of content is easier than ever.

But publishing, text-on-page publishing like newspapers (of which blogs are a cousin) and television have never worked this way. They’ve always been ad-supported in the US, so the idea has always been that they were functionally without cost. But now, Rupert Murdoch and the rest of News Corp are trying to put up a pay wall around content that people have always been given for free, and it’s going to result in a massive blowback.

For some reason, Internet Advertising has always had problems monetizing. And this is amazing to me. I can make certain assumptions about the readership of my blog. They’re into technology (these days probably web and MS Web Platform technologies), and/or they have an interest in sustainability and DIY type stuff. These are two fairly small segments, and could be targeted fairly effectively in the sense of those topics, especially since I try to keep the sustainability stuff to one day.

Hulu knows an INSANE amount about my viewing habits. They know what shows I’m subscribed to, how often I watch them, what my viewing patterns are, but they don’t seem to be taking advantage of that. Their advertisers advertise on certain programs, and I’m reasonably sure that pretty much everyone watching Late Night with Jimmy Fallon on Hulu lately has been watching Verizon Droid ads to go with it. Why not take advantage of the profile that Hulu has on me (even just in the form of my subscriptions and non-subscription viewing), and try to customize their advertising to me? I mean, with Hulu, the advertisers can get hard statistics on exactly how many viewers they have, while in current distribution channels, all they can get are weirdly calculated estimates that seem to me to be nearly impossible to trust (except for things like American Idol, where people texting in to vote provides good clues as to viewership).

Do users need to think about paying? Yes. But content providers need to realize that we’re in the most content-rich society ever created by humanity, and that means that they’ve got a lot to compete with, both from the old houses, and the new upstarts. If they’re going to hit a wide market, they need to keep prices low, and they need to keep content availability high. I don’t know how many shows I haven’t started watching on Hulu because I couldn’t start at the beginning.

Fixing a Leaky Toilet Flapper


Just before we started moving into our new condo, I noticed that our toilet was continually running, and that unfortunately, it was a fairly fast leak. Step one, was to determine the source of the leak. Our toilet has a standard flapper-flush-valve, so the leak was almost certainly with the flapper.

[Image: flapper.JPG]

Incidentally, we have really hard water in our area, which is why the toilet tank looks so filthy. It’s vaguely metallic smelling, and it’d be a bitch to clean, but the inside of the tank isn’t such a concern for me.

After turning off the water and draining the tank, the leak stopped immediately, confirming that, sure enough, the flapper wasn’t sealing properly anymore. Excellent, because the alternative was that the tail piece was leaking, and that would be a lot more of a pain in the ass to fix. Plus, the flapper is a sub-$5 part, depending on which one you buy. I opted for one from a brand named ‘Corky’ which was about $4, but they’re all pretty much the same.

Actually, the only difference between assemblies is whether the flapper attaches via a ring that slips all the way down the overflow pipe, or via small clips on a plastic ring already built onto the flapper. Most flappers on the market today are designed to be used in either system; plus, they tend to be bigger than the one that was on our toilet.

Flapper Comparison

Our toilet had the clips on the side, so the ring has to go, but luckily a utility knife is enough to remove the ring and the back piece and install the new flapper, which hooks directly onto the same attach points as the old one. Turn the water back on; you might need to flush once to ensure a good seal between the flapper and the valve seat. It’s a pretty simple, painless fix, something that doing yourself will cost $5 and fifteen minutes, but would probably cost $60 for a plumber to come out and fix (based on estimates of plumber time).

OpenSource .NET Concurrency

In talking a bit more about the concurrency issue I mentioned yesterday, I only alluded to one .NET threading concurrency library, Microsoft’s forthcoming PLINQ. Go figure that that same day Miguel de Icaza would link to another group that’s been working on this problem for years (and whom I should have remembered sooner).

MindTouch works on web-based enterprise collaboration tools built on .NET, but they’ve also developed a threading and concurrency model that, according to the MindTouch presentation at the recent Monospace conference, vastly outperforms the existing thread pools in .NET. I’ve got a project I’m doing design work on that could benefit greatly from strong thread pooling, so I’m looking forward to downloading the Core version of MindTouch, and seeing what I can do with it.


The Future of Systems Programming? Google's Go

The big G announced a new language being worked on internally today, that they hope will revolutionize systems programming. I was highly focused on systems programming when I started, and it’s still an interest, so I was curious what Go would have to offer beyond what was available today in other languages.

What I’ve seen is very interesting. It’s similar in syntax to something like C, which is little surprise considering that Ken Thompson is a contributor, but like a lot of what Google’s been working on lately, it’s designed with concurrency and multi-core in mind. This is hugely important, since processors aren’t getting any faster, but we are seeing more and more cores on each processor.

Even modern languages, like C#, only provide easy concurrency support at the library level, and those libraries are mostly research projects which haven’t quite hit general availability yet. Having this built into a language as a consideration from the base level of the design stands to make certain classes of tasks much easier.

I’m probably going to have more Go posts in coming weeks as I play around with the language some more, and try to learn what it can do, but for now, the introduction video from the Google Tech Talk is a must watch.

ChromaHash: Not As Dangerous As You Think!

The ChromaHash module I’ve submitted to the YUI3-Gallery got hit up on Reddit this week, which incidentally is the second time ChromaHash has been discussed there. And this time around, the discussion was just as negative.

First, a lot of people focused too heavily on the fact that the demo screen is a confirm-password box, commenting that a simple checkbox confirming the passwords were the same would be sufficient. Of those who did recognize that this is meant to be used on a login page as well, the general reaction was that since it gives away any information about the password, it’s wholly unacceptable and completely compromises the password’s security.

Now, Mattt Thompson, creator of ChromaHash for jQuery (and whose module mine is based off of), has written a pretty good post outlining why this implementation isn’t as bad as the reaction we’re getting from certain information security folks, and I’m not planning to reiterate his points (at least, not entirely), since I, as a person with great interest in information security, think that Mattt’s post is more than sufficient at making the point.

Instead, I’m going to talk a bit about some of the suggestions that have come out of the Reddit threads.

  1. Salt Password with Username

Actually, this is a reasonable option, since it would ensure that users with the same password don’t discover that fact. There are a few implementation-level details, since we’ll have to tell chroma-hash where to find the salting value. My initial thought is to make the salt option optionally take an array: for any element of the array that is a valid CSS selector, I’ll take the value property of the node it refers to and append it to the other elements of the array (as strings) to get the salt value. This does mean I’ll be recomputing the salt periodically, but I think there are ways around that (subscribing to the ‘change’ event for the node, for instance). This suggestion, I think, warrants some more consideration. Though really, password collisions should be pretty rare.
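The selector-based salt could be sketched roughly like this. To be clear, the names here (resolveSalt, saltedInput, the lookup callback) are mine for illustration, not the actual chroma-hash API:

```javascript
// Resolve a salt from an array of parts. lookup(part) should return a
// node's value when the part is a CSS selector that matches something,
// or null/undefined otherwise (in a browser, it would wrap
// document.querySelector); unmatched parts are kept as literal strings.
function resolveSalt(parts, lookup) {
  return parts.map(function (part) {
    var value = lookup(part);
    return (value !== null && value !== undefined) ? value : String(part);
  }).join('');
}

// The salted string is what would then be run through MD5 before the
// digest gets mapped onto color bars.
function saltedInput(password, parts, lookup) {
  return resolveSalt(parts, lookup) + password;
}
```

In the browser, the lookup might be `function (sel) { var n = document.querySelector(sel); return n ? n.value : null; }`, re-run whenever the username field fires a change event.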

  2. Could reconstruct password as user typed it.

This is really only an issue for slow typists, since in my implementation the color shifts take half a second. This ought to be user-configurable, and it will be in a new release soon-ish. An alternative, whose usefulness I’m not as convinced of, would be to set a delay between the last keypress and the animation beginning. I might do this one, but again, I’m not convinced it’s useful.

  3. Randomly salt password on pageload

Umm, no. This would make this tool completely worthless.

  4. People are colorblind.

Yes, approximately 14.5% of the population has some color-related vision problem. But that means 85.5% of the population doesn’t. And even people who are color-blind can still glean information from chromahash, even if they’re more likely to encounter collisions. Plus, this is a non-essential tool, so it’s not like using chromahash prevents the color-blind from interacting with your site.

  5. Any information about the password is TOO MUCH INFORMATION

Actually, my favorite one of these is a guy who made a bruteforcer for chroma-hash. Now, his example is kind of bullshit because he uses a crap password, so of course it’s fast, but it completely fails to take into account a few things:

  1. We’re using MD5 on the backend, which outputs 32 hexadecimal digits, of which we’re only using 6 to 24 (which is configurable by the ‘bars’ option). The collision space, particularly if you only use one or two bars, is non-trivial.
  2. There are very few circumstances where an attacker could get the exact hex values for a chromahash, when they wouldn’t have better mechanisms to steal your password (ie, keyloggers). And for those cases where it could be, disabling chromahash (at least, the YUI3 version I wrote) isn’t very difficult, and could be wired up to a key event handler, an example of which I’ll probably have later.

It’s highly unlikely that someone will be able to get enough immediate data from this system to be able to make a reasonable attack on a password, certainly not when there are so many other, easier ways to perform that attack. And ChromaHash is configurable, based on when it goes color as well as how many bars it displays, which both would help this situation.
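To make the first point concrete, here is a rough sketch of the bar-slicing math (hexToColors is my name, not the actual module’s API): a full MD5 digest is 32 hex digits (128 bits), and each bar consumes 6 of them as one 24-bit RGB color, so n bars expose only 6 × n digits of the digest.

```javascript
// Slice an already-computed 32-digit MD5 hex digest into CSS colors,
// one 6-digit (24-bit) color per bar. Computing the digest itself is
// assumed to come from an MD5 library and is out of scope here.
function hexToColors(digest, bars) {
  var colors = [];
  for (var i = 0; i < bars; i += 1) {
    colors.push('#' + digest.substr(i * 6, 6));
  }
  return colors;
}
```

With a single bar, even a perfect color reading reveals 24 of the 128 digest bits, leaving on the order of 2^104 digests that map to the same color.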

Ultimately, however, passwords are a failure as a security mechanism. Most people use the same password (or a small set of passwords) everywhere, and they don’t change them very often. Not to mention the fact that a lot of people storing passwords are doing a poor job of it. I worked at an e-commerce company not too long ago where, when I took over their web presence, the passwords were in the database in plaintext and access rights were driven by a cookie. Hacking this site was trivial until I rewrote it, and even then, there are a few things that I didn’t do correctly right away, like not salting the MD5 hashes I was storing in the database (or using MD5 as my hashing algorithm in the first place…).

I believe that ChromaHash can improve usability, since it provides immediate feedback to the user that their password is accurate, and given that many people are visual, I think that they’ll develop, rather quickly, a gut sense for whether the hash colors are right or wrong. Will it work for everyone? No, but it could help most people.

We need to move beyond passwords as an authentication mechanism. I’m a big fan of the Yubikey, particularly when paired with OpenID, though just migrating toward OpenID is a huge improvement. But ChromaHash, as it stands, does not significantly weaken the nature of passwords.

YUI3 Gallery - Community Driven Development

Dav Glass has probably become one of my favorite people on the Internet. The work he’s done in the last year or so to make the Yahoo! User Interface library more of a community driven project has been really awesome, between the new forums, the transition to GitHub, and now reaching a new level with the YUI3 Gallery.

I’ve been using YUI3 since shortly after it was released, quickly recognizing the benefits of YUI3’s advanced architecture over YUI2, going so far as to submit a bit of code, in the form of the collection module, and the work I’ve done on the Chroma-Hash module for YUI3. And that’s just the code I’ve made public.

So, I was really glad to be able to begin contributing to the Gallery, having already uploaded two modules, with one or two in mind now that Dav’s mentioned the Gallery API. The great part about Gallery is that, if your module is free, licensed under the YUI BSD license, and you’ve signed a Contributors License Agreement, you can have it pushed to Yahoo!’s content distribution network. Global, high availability serving for JavaScript widgets? Hell yes.

The best part of the gallery is that it’s a dead simple way to make modules widely available for use, even if those modules will never really make sense in the context of the YUI framework itself, like my Chroma-Hash module, Luke Smith’s Konami Event, or Dav Glass’ quick and dirty port of the Simple Editor, but still might be useful to other people today. Plus, it’s an incubator for code that might someday make sense in the core (Dav’s YQL module being an example).

Yahoo!, and specifically the YUI team, have demonstrated exactly how a corporately backed open source project should run, as well as being a great example of why git (or really any distributed source control system) is so great for collaboration, especially for a growing team.

And it’s just one way to contribute. Dav does a great job outlining all the methods that exist today, and some still to come.

Cloning XML Nodes from one Document into Another

Microsoft’s .NET Framework was designed in part to offer a powerful architecture for working with the Web. In the early 2000’s, when .NET was first being designed, that meant having strong support for handling and manipulating XML. Visual Basic.NET even includes support for XML literals. The result of this need was the System.XML Namespace, which contains all the methods and objects you’ll need to manage an XML document.

Almost. First, System.XML is kind of a pain to use, but that’s mostly because parsing XML with a statically typed platform is pretty rough, and frankly, the only languages where working with XML is usually even reasonable are dynamic, unless the XML format is very, very strongly defined, such as in SOAP.

System.XML, however, has no built in mechanism for cloning an element from one document (or document fragment) into another. Attempts to do so (even to copy simple attributes) will always result in exceptions because the elements were derived from different documents.

Recently, I was writing a simple set of MSBuild tasks to do XML file replacement on arbitrary XML files after reading a fragment in from a different document. This was in response to a problem I had using the web.config file replacement tasks from the Web Deployment Projects. Namely, the web deployment project’s task works by using the System.Configuration tools to do the section replacement, and that was causing me problems when referencing a non-GAC assembly in one of the pieces I was replacing when doing a build to production. Their task needs the file to appear to be a valid web.config, and errors out if it thinks it’s found a problem. Mine doesn’t care what you’re replacing with, so long as it’s XML.

My first attempt was something like this:

// This XML Doc won't be repeated, since the method signature won't change.
/// <summary>
/// Replaces a section of an XML document with the document fragment found in a given file
/// </summary>
/// <param name="document">The XmlDocument object to modify in place</param>
/// <param name="section">The XPath referencing an XmlElement (or set of elements) to replace</param>
/// <param name="filename">The path to the file containing the Xml Fragment to replace with</param>
void ReplaceSection(XmlDocument document, string section, string filename) 
{
    var sections = document.SelectNodes(section);
    var fragment = new XmlDocument();
    fragment.Load(filename);

    foreach (var s in sections.Cast<XmlNode>())
    {
        document = s.ParentNode.ReplaceChild(fragment, s).OwnerDocument;
    }
}

As stated above, this is going to throw an exception, because the fragment comes from a different document than the one I’m trying to replace into. So, it became necessary to clone the fragment into a new node, but this is non-trivial.

void ReplaceSection(XmlDocument document, string section, string filename) 
{
    var sections = document.SelectNodes(section);
    var fragment = new XmlDocument();
    fragment.Load(filename);

    var newNode = document.CreateNode(fragment.FirstChild.NodeType, 
                                       fragment.FirstChild.Name, 
                                       fragment.FirstChild.NamespaceURI);
    newNode.InnerXml = fragment.FirstChild.InnerXml;

    foreach (var s in sections.Cast<XmlNode>())
    {
        document = s.ParentNode.ReplaceChild(newNode, s).OwnerDocument;
    }
}

XML Parsing engines are great, so that InnerXml property is a lifesaver, since otherwise I’d have to recursively clone the entire fragment tree in order to do the replacement.

Keen-eyed observers will note that this is also incomplete, since it doesn’t take into account Attributes on the fragment element, and will, in fact, not include them at all. XmlAttributes have the same weaknesses as XmlElements regarding your ability to simply replace them with ease, so yet more code is required to clone them.

var newNode = document.CreateNode(fragment.FirstChild.NodeType, 
                                   fragment.FirstChild.Name, 
                                   fragment.FirstChild.NamespaceURI);
newNode.InnerXml = fragment.FirstChild.InnerXml;

foreach (XmlAttribute attribute in fragment.FirstChild.Attributes)
{
    var newAttribute = document.CreateAttribute(attribute.Name);
    newAttribute.Value = attribute.Value;
    newNode.Attributes.Append(newAttribute);
}

This is an awful lot of code to have to write simply to convert an Xml Fragment into a form that .NET will allow me to inject into a new document. But, now that it’s written, it can be pretty easily wrapped up in an extension method you can reuse!

public static XmlElement CloneNodeIntoDocument(this XmlDocument doc, XmlDocument fragment)
{
    var newNode = doc.CreateNode(fragment.FirstChild.NodeType, 
                                 fragment.FirstChild.Name, 
                                 fragment.FirstChild.NamespaceURI);
    newNode.InnerXml = fragment.FirstChild.InnerXml;

    foreach (XmlAttribute attribute in fragment.FirstChild.Attributes)
    {
        var newAttribute = doc.CreateAttribute(attribute.Name);
        newAttribute.Value = attribute.Value;
        newNode.Attributes.Append(newAttribute);
    }
    return (XmlElement)newNode;
}

As always, the code samples on my blog are BSD-Licensed, feel free to use and remix them, just give credit where due.

New Thoughts on Carbon Sequestration

Generally speaking, I think that most people in the environmental movement tend to focus a bit too heavily on the issue of carbon emissions, often to the exclusion of other issues. For instance, we’re supposed to use compact fluorescent lamps, because they use less power, while ignoring the mercury used in production (given that most people don’t properly recycle the bulbs, this is a huge heavy-metals problem waiting to happen). Same thing with hybrid cars, like the Toyota Prius. So little of current environmental thinking even begins to consider the long tail that, while we’re busy putting out this fire, we’re practically pouring gasoline on the next one.

Which is why it’s so awesome to see real work going on that could potentially solve a lot of problems, like this talk from July 2009 at TED by Rachel Armstrong on work she’s involved in: literal nanotechnology that creates these microscopic, almost-alive, protist-like things that can create limestone reefs in the ocean. She proposes using this technology to save cities like Venice, which has been sinking into the sea for centuries.

If this works, and I do have some concerns about the ecological impact (namely, how does this system stop growing), it stands to be absolutely amazing, allowing us to create reefs which not only shore up our buildings, but also sequester carbon and serve as habitat for wildlife. Really fascinating.

But more interesting in the short term, because it definitely seems that the implications are far simpler, is some work being done by Gary Lewis of BioAgtive Technologies, where they’ve designed a tractor kit which takes tractor emissions and uses them to fertilize a field. The Australian farmer in the story linked figures he’s saving a half-million Australian dollars per year on fertilizer costs. Plus, he’s taking an output that he’d have anyway and utilizing it in a productive manner.

It’s this sort of environmental work that really excites me, because it seems like something that will bring around real, long-term, meaningful change in environmental thinking. Incidentally, this is part of why I love TED. The talks are fascinating, and tend to focus on things that you’re not likely to hear about elsewhere.