January 2009 Archives

Crowdsourcing by Jeff Howe

Jeff Howe subtitles his August 2008 book “Why the Power of the Crowd Is Driving the Future of Business”, and from a business perspective I think Howe really understands the way the Internet is changing business. While I don’t necessarily agree with all of Howe’s points, the core principles described in Crowdsourcing are well articulated, and there are probably few who understand this phenomenon better than Howe himself.

Crowdsourcing is, in short, the practice of using a community (the crowd) of Internet-connected individuals to create value for your brand. There are a ton of examples, some of which Howe includes, others he doesn’t touch on. For anyone who has been on the ’Net for any period of time, particularly since they were young, there is something stupidly obvious about Howe’s book. But Howe isn’t writing for people my age or younger (I consider myself to be near the upper age limit of the so-called Digital Natives); he’s writing for people just a bit older, who see what is happening but find themselves unable to quantify (or perhaps entirely understand) what is going on.

Even for me, someone on the edge of these two worlds, the change is striking: I was almost thirteen before we had the Internet (dial-up) at home. Sure, I’d been involved (at least on the edges) in the BBS scene before then, and I’ve been using computers since I was five years old, but the idea of being always online with a fast connection still, on some level, amazes me.

Being able to see this issue from both sides, I enjoyed Howe’s book. Did it amaze me? Not really, aside from some specific examples that I’d not heard of before, like Innocentive or iStockPhoto. Clearly, Crowdsourcing is powerful, and I see little reason to expect it to go away any time soon.

I will say, however, that if you expect this book to be a guide to building a business based on this phenomenon, don’t expect any easy steps. Howe doesn’t dance around the fact that crowdsourcing businesses are hard to build, because the crowd’s interests can be hard to gauge and some tasks are hard to break down into pieces small enough to make crowdsourcing work.

Howe doesn’t make the distinction, but I think it’s important to identify that Open Source and Crowdsourcing, while similar, are not really the same thing, at least in my opinion. To me, crowdsourcing is more like iStockPhoto or NASA’s Clickworker program: while a very active community can grow around them, the individual contributions of the members tend to be very small. In Open Source, or at least the model inspired by Open Source, contributors tend to be more committed and tend to work on projects longer. In my opinion, Wikipedia fits this model well, because most of the edits are made by a relatively small group of committers, just like in Open Source software development.

The world is changing, and no longer will a business be able to present its product in a take-it-or-leave-it fashion. Community and dialogue are central to the crowdsourcing movement, and even traditionally Business-to-Business companies will be deeply affected by this change. However, there is a lot of room for wealth to be generated, if only we can figure out how. Sure, most people are going to fail in this new market (but then, most people already fail in the current market), but the biggest successes generate wealth within the community, not just from the community.

ASP.NET MVC Release Candidate 1

As per both ScottGu and Phil Haack, ASP.NET MVC has its first release candidate today, not to mention its first major bug disclosure. Actually, I consider the bug to be more of an issue with ASP.NET in general, but c’est la vie.

I’ve been using ASP.NET MVC since the CTP 2 days, at least for a few projects, and I really want to congratulate Phil Haack’s team for getting this out. I’ve really appreciated being able to write truly ‘webby’ applications using C#. While I’m still a fan of other MVC frameworks like Django and Catalyst, it’s great to have access to a solid MVC framework in .NET. Plus, even though Microsoft endorsed jQuery in this product, I can still use YUI with it cleanly.

Now, I haven’t looked much at FubuMVC, which was created in response to perceived failings in ASP.NET MVC, but I’ve been really happy with ASP.NET MVC so far. I do really want to see the MVC framework released under the MS-PL or something similar, but while the code is technically available today, I don’t expect to see it get any more open than it is. Which is unfortunate.

I’ve always hated ASP.NET because it tries way, way too hard to make the Web not behave like the Web, and is a nightmare to debug. MVC is a great middle ground in this respect, because you have access to the ASP.NET controls if you want, but it doesn’t abstract away the nature of the Web in a difficult or painful manner. If you’re looking to do a new web project, and you know you’re using .NET for it, I’d encourage you to consider ASP.NET MVC. Now, I just need to figure out if the Release Candidate will work on Mono.
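To give a flavor of what I mean by ‘webby’, here is a minimal sketch of an MVC controller. It isn’t code from any of my projects, and the Product model is made up purely for illustration.

using System.Web.Mvc;

// Hypothetical model type, included only to keep the sketch self-contained.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : Controller
{
    // GET /Products/Details/5 -- the default route maps the URL segments
    // directly to this action and binds "5" to the id parameter. The action
    // hands a model to a view and returns the result as a plain response,
    // with none of the ViewState or postback machinery of WebForms in the way.
    public ActionResult Details(int id)
    {
        var product = new Product { Id = id, Name = "Sample product " + id };
        return View(product);
    }
}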

The Risk-Takers, The Doers, The Makers of Things

I try not to talk politics much on this blog, partially because I suspect my own political leanings do not mesh cleanly with the majority of my readership, but also because I don’t believe most of what I believe, or write about, needs to be a political issue. Of course, I recognize that everything is a political issue, but that doesn’t mean I believe it needs to be.

With that in mind, I will say that I am not excited about the Barack Obama Presidency. While his stated agenda on Government Transparency, Technology, and particularly Net Neutrality falls very much in line with my own feelings on those issues, I disagree on many other aspects, not least the Economy and the failure to identify the real problem facing health care today. I’m still not convinced, but as the election is over, and the man has been sworn in, I’ve little choice but to give him the benefit of the doubt. He may well prove me wrong.

That said, during the Inauguration, one comment in particular stood out to me.

Rather, it has been the risk-takers, the doers, the makers of things—some celebrated but more often men and women obscure in their labor, who have carried us up the long, rugged path towards prosperity and freedom.

This, I think, is probably the most important thing said during what was otherwise an impotent speech. Beautiful, yes. Powerful, not really. Don’t believe me on the impotent thing? Just consider the market’s reaction. But the above comment shows that Obama (or one of his speechwriters) is willing to vocalize what makes this country great. Most of the country’s greatest accomplishments have come from the cult of the amateur. Charles Goodyear’s development of vulcanization was more a piece of home experimentation and tinkering than any deeply funded research.

There is a deep history of American Ingenuity spawning, not from the mega-corporations of their day, but from individuals toiling away in obscurity. Even the dot-com boom and the Web 2.0 rebirth have been driven, not by large companies, but by individuals who want to create something interesting. Sure, a lot of those companies have been swept up by your Googles and your Yahoo!s, but very few came from there.

And this idea of the doers, and makers, is exactly what being sustainable is about. Being able to re-purpose, reuse, and repair is a core principle of living well with the environment. Makers consume, sure, but we consume less. For the new President to so accurately recognize the importance of this maker’s spirit in front of such a large audience was significant to me; we’ll see how well that spirit is encouraged and fostered over the coming years.

Remixing is Okay...With a Great Dance Beat

A few weeks back, Lawrence Lessig was on The Colbert Report to talk about his new book, Remix. I still need to read the book, so I’m not really going to be addressing anything in it, but suffice it to say that I agree with Lessig’s overall point: the act of remixing is a legitimate creative act which needs to be encouraged, not criminalized.

The interview, above, was pretty much exactly why I have grown to hate Colbert’s style since he left The Daily Show. He’s not a very good interviewer; he chooses to stick to his line, ridiculous as he clearly knows it is, and gets combative with the guest. While I understand that Colbert is simply trying to parody political pundits (particularly “Papa Bear” Bill O’Reilly), I just don’t generally find it that funny. Though lately it seems that Colbert has been toning things down and creating a better show.

In short, in my opinion, Lessig wasn’t really allowed to talk about much of anything, or make any sort of point, and Colbert just came across as a huge asshole. The interview wasn’t that good, and Colbert (jokingly) refused to let anyone remix the interview. Lessig responded by releasing the interview under a CC-BY license, and the remixes flowed.

Then, this week, Colbert made the masterstroke.

The video Colbert made here is one of the funniest things he has ever done. I just want to take this opportunity to thank Colbert for providing such a visible venue for the issue of remixing and copyright reform, and for encouraging people to take part in this new form of creation and expression. While I still don’t find the character Colbert plays to be very funny, he’s definitely won back plenty of respect from me.

Should Universities Pay For Oracle?

Here at Washington State University, there has been a lot of discussion lately about switching to a new Enterprise Resource Planning (ERP) system. I say a new one because, in my opinion, we already have one; it just happens to be 30+ years old, homegrown, and dependent on an ancient database and hardware platform. We desperately need a new system. The number of COBOL and Natural/Adabas programmers certainly isn’t rising, not to mention the problems inherent in replacing our ancient IBM Mainframe if that ever became necessary.

A new ERP will allow us to host our data on commodity hardware, in a true relational database, with a more easily extendable interface (we’re trying to jump on the SOA bandwagon). Currently, there has been a lot of discussion of the Banner and PeopleSoft educational ERP products. Having used Banner before as a student, I wasn’t terribly excited about it. Now, as a software developer, I fear these products because they’re closed systems which will only run on Oracle.

Why do I consider that a problem? Well, Oracle’s licensing is complicated enough that it apparently requires a 61-page document describing the licensing terms. They also license based not only on the number of processors in a system, but also on the number of processor cores. You have to license not only the database servers, but also the management software. Oh, and don’t even think of putting your Oracle instance on a virtual machine like VMware: you’ve got to pay for licensing on the underlying physical system, not the virtual hardware. And this doesn’t even touch the expense of the licenses themselves, which can run upwards of thousands of dollars per processor per year for the Enterprise edition.

Certainly, Oracle is a good database. It wouldn’t have been around as long as it has been if it wasn’t. However, is it worth the cost? Personally, I’m not wholly convinced, but I’m not a DBA.

Taking a step back to the ERP problem, I’ve been looking into a product called Kuali. Kuali is an Open Source ERP specifically targeting education. It’s being worked on by a variety of universities across the nation, including the University of California and Cornell University, among many others. There are a few problems with it so far, however: the Student module isn’t released yet, and the financial module doesn’t yet support Accounts Receivable or Accounts Payable, among other things. At this time, it’s just not a fit replacement for our existing system.

However, the pace of development is fairly fast, we have developers on staff who could be tasked with extending Kuali, and the amount of money we could save by not licensing Banner or PeopleSoft (or something similar) is significant. If we wanted commercial support, there are companies that offer it. Is it ready to replace our systems today? No. But it could probably be brought online and start us down the path to correcting the issues we need to correct, and we could likely end up with a fairly stock system, potentially more so than with the alternatives, because this system is being designed from the ground up by educational institutions.

Back to the database issue: Kuali does support Oracle, but it also offers MySQL as an alternative. For people who think MySQL doesn’t scale, it backs many of the largest websites on the planet. It’s not my favorite Open Source database (that honor falls to PostgreSQL), but it’s well known, it’s fast, and it’s proven. And it doesn’t cost hundreds or thousands of dollars a year in licensing fees.

We’re in an interesting time right now. The economy is in shambles and universities nationwide are getting their budgets cut. Does it really make sense to spend tens of thousands of dollars (if not much more) per year on support contracts when cheaper alternatives are available? Does it really make sense to spend all that money on a product you’re going to have to customize anyway, instead of paying far less for a system which was likely designed with more customization in mind? Does it make sense to pay Oracle’s licensing fees when MySQL is already well proven in the database industry?

I don’t think so. I’m investigating Kuali, and I think I’ll likely start pushing it a bit. If nothing else, I’d really like to see my university get more behind the principles of Open Source Development, and given that we could likely save hundreds of thousands (if not millions) in the process, I think it behooves us to try.

Rebuilding Workflow On The Fly, The Real Answer

Last month, I wrote a post about rebuilding workflow instances on the fly when running workflows under WF 3.0. Unfortunately, the approach in that post doesn’t work. Okay, I posted it before I’d fully tested it, because it so obviously seemed like it would work. Bad on me, I know. But hopefully I’ll be making it up today. As for the delay, I had a few other projects crop up which delayed solving this problem.

It turns out that when you raise an event into a workflow using an instance of the WF ExternalDataEventArgs class (or rather, one of its children), the system will actually remove the workflow instance from persistence. Since you raise the events before you manually run the workflow, the logic I had before won’t work: an exception about the workflow instance being broken is thrown before I even reach the code written back then.

The best approach seemed to be one similar to the method used in RunWorkflow: I’d extend the mediator to take an action which would raise an event, like so:

void RaiseEvent(Action<ExternalDataEventArgs> raise, ExternalDataEventArgs args, Action<Guid> rebuilder)
{
    try {
        raise(args);
    } catch (Exception) {
        if (rebuilder != null)
        {
            // Rebuild the workflow instance, then retry raising the event.
            rebuilder(args.InstanceId);
            raise(args);
        }
        else
        {
            // No rebuilder was supplied; let the original failure propagate.
            throw;
        }
    }
}

There is one major refactoring this solution required. None of our raise-event methods took ExternalDataEventArgs; they all took subclasses of the ExternalDataEventArgs class. Since the signature on a delegate must always match, every single one of the raise-event methods had to be converted to take the more generic ExternalDataEventArgs, performing a cast internally. Basically, turning this:

// SpecialEventArgs is a child class of ExternalDataEventArgs
void RaiseSpecialEvent(SpecialEventArgs args)
{
    if (specialEvent != null)
        specialEvent(null, args);
}

into this:

// SpecialEventArgs is a child class of ExternalDataEventArgs
void RaiseSpecialEvent(ExternalDataEventArgs args)
{
    if (specialEvent != null)
        specialEvent(null, args as SpecialEventArgs);
}

Now, these two calls:

RaiseSpecialEvent(new SpecialEventArgs(val1, val2));
Mediator.RaiseEvent(RaiseSpecialEvent, new SpecialEventArgs(val1, val2), null);

Do exactly the same thing; the Mediator.RaiseEvent call just passes through an extra layer of redirection. However, if you replace that null argument with a function that can rebuild a workflow given a valid Guid, the system will instead rebuild the workflow and raise the event properly (this should almost always work). The only downside? The code is slightly less clear, and some checks that used to happen at compile time are deferred to runtime. These aren’t ideal, but for us, the ability to push out new versions of the application without having to take everything down is a bigger win.
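For completeness, here is roughly what such a rebuilder might look like under WF 3.0. This is only a sketch: the runtime field and the MyWorkflow type are placeholders, and the actual rebuild logic is whatever your application needs to recreate the instance.

// A rough sketch of a rebuilder that could be passed as the third argument.
// Assumes a WorkflowRuntime field named 'runtime' and a workflow type named
// 'MyWorkflow'; both names are hypothetical.
void RebuildWorkflow(Guid instanceId)
{
    // Recreate the workflow with the same instance id it had before it was
    // dropped from persistence, then start it so its event queues exist
    // again when the event is raised a second time.
    WorkflowInstance instance = runtime.CreateWorkflow(
        typeof(MyWorkflow),
        new Dictionary<string, object>(),
        instanceId);
    instance.Start();
}

With something like that in place, the call becomes Mediator.RaiseEvent(RaiseSpecialEvent, new SpecialEventArgs(val1, val2), RebuildWorkflow), and the failed event is retried against the rebuilt instance.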

Luckily, a lot of these changes probably won’t be necessary in WF 4.0, at least if the PDC videos on the subject are any indication. WF 4.0 certainly looks like it will finally deliver more of the promise that WF was supposed to fulfill. While I’m not terribly likely to suggest anyone pick up WF today, I suspect that will change in a year or so, when .NET 4.0 and WF 4.0 become available.

Tort Law and Necessary Reforms

After last Monday’s post, where I discussed how I disagree with some actions and feelings of one particular personal injury lawyer, Bill Marler, and the brief discussion I had with Marler regarding what was, frankly, a far more personal attack than it had any right to be, I’ve been thinking quite a bit again about the issue of Tort Reform. I’ll get to that in a moment, but I want to start by talking about my stance on risk.

First, I agree wholeheartedly with Bruce Schneier that we, as humans, are absolutely awful at estimating risk. Going back to the discussion from last week, based on Marler’s own statistics, since 1993 the CDC has had 667 reported cases of illness caused by E. coli O157:H7. The most recent (and largest) outbreak reported on Marler’s site was from Dole spinach in 2006, which accounted for 205 cases.

Even assuming that only 1% of cases were properly attributed and reported (which I feel is a low estimate), that 2006 instance would have resulted in roughly 20,000 food poisonings. I’m choosing to focus on this 2006 instance because it’s the largest, as well as the most recent. But what is 20,000 (or even 200,000) food poisonings in the grand scheme of things? As of the most recent US Census (2000), the population of this nation was 281.4 million people, and it is assumed that our population passed 300 million in 2006. But even with that 281.4 million number, and guessing that only 0.1% of cases were properly reported to the CDC (a figure I consider ridiculously pessimistic), the odds of contracting E. coli O157:H7 are in the neighborhood of 0.07%, roughly 1 in 1,400.

Note that those figures focus on a particular strain of E. coli, one which is considered to be particularly vicious. With a more reasonable error rate of maybe 1 in 10 cases being properly reported, the likelihood drops to about 1 in 140,000. Now I’ll admit, those figures are a little scary, since either way you’re significantly more likely to contract this particular strain of E. coli than to hit the grand prize in the Powerball. But, even at 1 in 1,400, most people will never contract this particular strain of the bacteria. At the more reasonable 1 in 140,000 figure, most people will never even meet a person who has been poisoned by this particular bug.

In reality, the figures are a little different. Outbreaks tend not to be spread evenly across the entire population. The 1993 outbreak which brought Marler to the front of the food safety issue was restricted mostly to Washington State, and more specifically to people who ate at Jack in the Box. However, since the meat-packing industry affects just about everyone, and according to the USDA 70% of all cattle were processed by only 4 companies by 1997 (a sharply increasing trend from 1980), it is clear that beef production in particular is an industry which can have widespread effects when something bad happens. I wonder what that figure is today, but I was unable to find any data.

A phone conversation I had with Mr. Marler on Monday on this and a few other topics included a comment to the effect that, while the odds are slim, if it happened to you or someone you love, how many of us would just say, “Oh, well, that’s the luck of the draw”? Not very many, and while I’d acknowledge the rarity of the case, I’d certainly still be upset. But I never tried to imply that people shouldn’t be culpable for their actions. If someone causes an outbreak through deliberate action (or failure to act in a reasonable manner), they should be held accountable. If someone fails to respond in a timely, appropriate manner when an outbreak is discovered, then they are open to legal action.

For instance, last week it became apparent that there was a national Salmonella outbreak being traced back to peanut butter under two particular brand names, King Nut and Parnell’s Pride. King Nut issued a recall on January 10th for their peanut butter, but on January 12th they issued a press release claiming that they absolutely couldn’t be responsible for the outbreak, because they only distributed peanut butter in 7 states, peanut butter they received from the Peanut Corporation of America, which happens to distribute Parnell’s Pride. It wasn’t until January 13th, three days after King Nut’s recall, that the PCA decided to recall their product. The potential link was first identified on January 8th, and confirmed on January 9th, making PCA’s tardiness even more problematic. Especially when you consider that their primary market is large-scale food service operations (like, say, school lunch rooms), and by their own admission several of the lots in question were produced as early as July 1, 2008. Even the CDC recognized the source of the outbreak the day before PCA’s recall.

Is the Peanut Corporation of America liable for these illnesses (and deaths)? Possibly. It’s perfectly reasonable to think that PCA knew King Nut peanut butter was really just their own peanut butter repackaged, and when King Nut recalled, they should have followed suit sooner than 48+ hours later. I can’t speak to the conditions of PCA’s factories where the peanut butter is made, so I can’t say if there was any negligence on the part of PCA, but certainly an investigation is warranted.

Returning from nuts to meat: I will not argue that the meat industry hasn’t generally shown a real lack of concern for quality and safety. In the case of E. coli in beef, the evidence is clear. However, I would argue that the issue is far deeper than simply the packing. The practices of the majority of the cattle industry have led to the situation we have today. We took a ruminant that ate primarily nutrient-poor grasses and began feeding it a diet rich in corn and grains, which has drastically changed the stomach chemistry of cattle. Even some Free-Range, Organic, or Grass-Fed beef end their lives in some sort of Concentrated Animal Feeding Operation (CAFO), where their diet is pretty much the same as the diets fed to ‘normal’ beef. High animal concentrations in small spaces also lead to immense disease problems; just look at the brucellosis problem in Yellowstone. Now, brucellosis in bison is not dangerous to humans (as far as we can tell), and may not be transmissible to cattle (but most ranchers don’t want to risk it), but when USDA estimates set the brucellosis rate at half the herd, clearly there is a population problem. If half of a town of 5,000 got infected with something, it would be considered a fairly serious outbreak.

So, in the realm of food safety, there is clearly a lot of room for improvement, both on the part of government (FDA and CDC, primarily) and on the part of the handful of companies we’ve ended up handing the majority of our food production to. But enough on food safety, risk, and accountability. This post is, ostensibly, about Tort Reform.

Tort law is the name given to a body of law that addresses, and provides remedies for, civil wrongs not arising out of contractual obligations. A person who suffers legal damages may be able to use tort law to receive compensation from someone who is legally responsible, or “liable,” for those injuries. Generally speaking, tort law defines what constitutes a legal injury and establishes the circumstances under which one person may be held liable for another’s injury. Torts cover intentional acts and accidents.

From the Wikipedia article on Tort

Now, part of the reason that Tort law has such a bad name these days is because of Personal Injury Lawyers who will take any suit, on the off-chance that it might work out in court. Things like that woman who spilled McDonald’s Coffee on her lap and sued because it burned her, or inmates suing the prison system because they only had access to the wrong kind of peanut butter (chunky v. smooth, not salmonella-laced v salmonella-free).

Now, for every asshole lawyer out there who takes on bullshit cases like these, there are several who really do try to only take cases they feel involve real fault. Mr. Marler assures me that his firm works hard to only take cases they feel are valid, and while I still disagree with that particular case of the raw milk dairy in Massachusetts (mentioned in the post on Food Safety and Accountability), I accept that the firm of Marler Clark had evidence which suggested to them that there was fault.

However, while many lawyers aren’t shifty, there are enough who are that it leaves a poor impression of the entire profession. The bar associations in most US states require membership to practice law, and have methods for revoking bar membership in cases where people step beyond appropriate lines. Traditionally the Bar has been very reluctant to do this policing, and that is understandable. But there are cases like Jack Thompson, who spent 11 years of his career on a tear against the video game and pornography industries, because of his strong feelings that such things undermined his Christian values. In short, he used the law not as a tool to find justice, but as a weapon to fight something he disliked on religious grounds. It wasn’t until last year that the Florida Bar finally disbarred him (a process that had begun in 2007) for professional misconduct.

Now, by their definition, his professional misconduct was related to defamation, lying (even under oath), and attempts at coercion to win his cases. However, I think that the Bar, as a professional organization, needs to take a more active role in policing its own members, even in jurisdictions where it lacks the legal power to disbar them. Clearly, the Bar would still need to avoid taking disbarment proceedings lightly, but in cases where people are clearly using the law as a weapon, and have a demonstrable history of doing so, the Bar should be more proactive. A lot of lawyers lament the reputation of their chosen profession, and I suggest taking things further and supporting better policing of the profession from inside the profession. Particularly for instances, like Thompson, where the research used to present their cases is routinely sourced from research institutions with an obvious agenda, research that routinely contradicts a larger body of evidence.

But there are more issues than simply this. In much of the US, there are no penalties for bringing a frivolous suit against a defendant. In Britain, the rule is that the loser of a case is required to pay the legal fees of both sides. I like that rule a lot, but there is a definite fear that such a system unfairly benefits the wealthy (which largely means large corporate entities), in that they can generally afford to pay the legal fees far better than the people seeking the damages. So, while the system discourages frivolous cases, it can also keep potentially legitimate cases, cases that certainly should be heard by the courts, from being brought forward at all.

Mr. Marler described to me a system in a Mid-Western state (I forget which one, and my Yahoo!-fu is not helping), where a panel of judges would decide on the ‘value’ of a case, and whoever lost the case by at least 10% was responsible for both sides’ legal fees (if no one won by the 10% margin, both sides pay their own). I know very little about this system, but it sounds intriguing, and like a really good compromise between the “British” and “American” systems of Tort Law.

As a final note, I’m not convinced that a jury is appropriate for civil cases. For one thing, the Sixth Amendment to the US Constitution only talks about a jury in criminal prosecutions, so there is no constitutional reason for a jury in a civil case. I’m willing to allow the decision to be made by a jury based on the rule of law as presented by the judge and both sides of the case, but a jury simply won’t be educated enough on the law, either existing case law or the written law (nor on researching it), for me to feel completely comfortable handing it a decision which is ultimately based on the interpretation of the facts and the law.

Then, there is the issue of unreasonable awards. The $2.9 million award for the woman who spilled McDonald’s coffee on her lap, or the $15.6 million settlement for the 9-year-old Seattle girl who was poisoned in the E. coli outbreak at Jack in the Box that made Bill Marler famous, seem on the surface to be patently absurd. There may well be underlying circumstances that make those judgments make much more sense, but on the surface, they feel ridiculous. This concern, however, may be somewhat unfounded. Certainly, there are cases where juries hand down judgments which are ridiculous, but Mr. Marler (and a few other litigators I’ve asked) have indicated that their experience is that most of the time, juries are not willing to make judgments for enormous sums of money. This is functionally hearsay, but I’d be interested in seeing an analysis of these sorts of judgments.

There is no perfect system. The system that we have is probably one of the best, particularly compared to China. However, in my opinion, a lot of the problems this country is facing come down to the ease of filing baseless lawsuits and the lack of repercussions for doing so. Medical malpractice insurance costs more than it historically ever has, in part because of the increased trend of people suing doctors. Some doctors deserve it, but many probably don’t.

People are emotional, particularly when things happen to them or their loved ones. Their desire may often tend toward retribution instead of justice, and the system needs to ensure that the line between retribution and justice, which can get blurry, is better considered. Some of that burden needs to fall on the state, through stronger penalties for baseless suits, but some of it should also rest with the bar associations, in the self-policing they do of their members. Certainly, some of this is already done, but I’m not convinced the self-policing of the profession is vigilant enough.

Above all, we all would do well to remember that sometimes bad things happen, and it truly is nobody’s fault.

Received My Dev Phone 1

On Monday, my FedEx package arrived with my Android Dev Phone 1. It’s essentially exactly the same as a T-Mobile G1, but it’s SIM-unlocked and doesn’t require signed firmware. I’ve been considering upgrading the firmware to Cupcake, what is expected to become Android 1.1, but I’ve not done it yet, since that firmware has been pretty unstable in the emulator, making me nervous about using it on what I’m trying to use as my only phone.

Opening the box was exciting, since I’d really been anticipating the device. I hurriedly unboxed it, plugged it in, and placed the SIM card from my old phone into it. Upon booting the device for the first time, I had trouble activating it with my Google account, because it wasn’t configured properly with the AT&T APN. If you’re using the AT&T network, the APN settings that worked for me were these:

APN: wap.cingular
Username: wap@cingularprs.com
Password: CINGULAR1

The only thing that isn’t working for me is 3G connectivity, but this is because the AT&T network uses 3G on the 850MHz and 1900MHz bands, while the Dev Phone 1 operates at 1700MHz and 2100MHz, so this is a hardware limitation that I was already aware of. It’s disappointing, but as I said, I knew it was going to be there. Aside from that, I am unable to connect to the wireless network at work, but that is because it requires either PEAP support (which the iPhone lacked until its 2.0 revision) or VPN support. So, while at work, I’m stuck at EDGE network speeds, which honestly is still generally fast enough for e-mail and other basic network uses.

Of course, I didn’t buy the phone just to get a phone. I am working on a few applications, and I’m looking to contribute to the open source project, so I do plan to use the phone as a development platform, but I do think I’m going to enjoy using it. The phone is significantly faster than the emulator, and there are lots of interesting apps in the Market already.

I do have a few complaints, however. I’m having trouble with the Alarm Clock being too quiet to wake me in the morning, but that might be related to the fact that it’s been lying on its speaker at night. The device will not import your contacts off the SIM card; I’m looking into whether it’s possible to correct that. The touch screen is taking a bit of getting used to, in that some elements (particularly on the web) can be hard to select, but I believe that’s mostly an issue of learning the input.

Is the device perfect? No. Is the OS perfect? No. But the potential there is amazing. I would say that it has, in many ways, already shown itself to be a technologically superior platform to the alternatives. However, time will tell on the developer support. I like the development model, but while there are plenty of interesting apps on the Market, there is a lot of garbage too. But then, there is a lot of trash on the iPhone App Store as well. Once the Market opens up to pay apps, I think we’ll see some more interesting trends.

A SQLiteOpenHelper Is Not A SQLiteTableHelper

I’ve been working on a few projects for the Android platform, which I intend to turn into applications. While I do plan to commercialize at least a few of these apps, I’m also looking for ways to improve the platform itself, and the body of knowledge surrounding it. One of those Body of Knowledge issues might seem fairly trivial, but it did catch me up for a few days.

As an aside, a large source of this error was the fact that I’ve been doing a lot of work with the LINQ-to-SQL ORM, which led me to make some assumptions about ORM design that in retrospect don’t make a lot of sense, and certainly didn’t work. But the reason they didn’t work was interesting, and worth sharing.

One of my applications is a microblogging application with clean support for multiple services. The first version supports Twitter and Laconica-based services, such as the TWiT Army. Currently, no such apps exist, and I’ve seen very few apps handle the multiple-accounts situation in a manner I consider to be reasonable. Because of this need to support multiple accounts, and a desire to normalize the database somewhat, I ended up with the following (basic) table structure.

+----------+ 1 -> n +---------+ n <- 1 +---------+
| accounts |------->| notices |------->| senders |
+----------+        +---------+        +---------+
     |                                      ^
     |            1 -> n                    |
     +--------------------------------------+

If SQLite supported foreign keys, I’d have set those up as well, but it doesn’t, and generally doesn’t need to. These three tables should be able to handle any microblogging situation I can think of off the top of my head. However, every example of writing a SQLiteOpenHelper I was able to find only discussed creating a single table. Between this and my thought that each table should have its own class, I created three SQLiteOpenHelper classes, one for each table, even though they all referenced only a single database file.

Here is where everything fell apart. Android maintains versions for databases based on the package the database is associated with, the name of the database, and the version number you provide. The package and name go into deciding what the path on the device will be, while the version is stored (somewhere) on the device so that Android knows when it needs to call an OpenHelper’s onUpgrade event handler. It turns out that if the SQLiteOpenHelper determines that the database already exists, it won’t call your onCreate or onUpgrade methods at all, even if the particular class making the call has never been used before.

This makes perfect sense, in retrospect. And, in fact, there are benefits to keeping all of those methods in a single class. For instance, it now makes sense to use slightly more complex SQL queries utilizing joins to get the data about a sender along with the notices, where before I would likely have loaded the notices, and then for each record in the notices table made a call into the senders table to get the data about the person who sent that message. That would have been far less efficient, particularly if notices were allowed to reach a non-trivial size.

Basically, I needed to think about the SQLiteOpenHelper more like the LINQ-to-SQL database context than like a LINQ-to-SQL table object. One interesting idea from a software engineering standpoint (which may not be worthwhile in Android) would be to define the functions on each table in their own interfaces, which would then be used to identify in which scope the OpenHelper is being used. I’m not intimately familiar with the sort of output Dalvik would generate in this instance, so it’s possible this redirection may not be worthwhile, and most Android apps should have a good enough split between their Activities and their data layer that it probably would save almost nothing, but it’s an interesting idea.
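Since the LINQ-to-SQL comparison is what made this click for me, here is a rough C# sketch of that mental model: one DataContext owns every table in the database, the same way one SQLiteOpenHelper should. The entity classes below are hypothetical and just mirror the schema diagram above.

using System.Data.Linq;
using System.Data.Linq.Mapping;

// Hypothetical entity classes mirroring the accounts/notices/senders schema.
[Table(Name = "accounts")]
public class Account
{
    [Column(IsPrimaryKey = true)] public int Id { get; set; }
    [Column] public string ServiceUrl { get; set; }
}

[Table(Name = "notices")]
public class Notice
{
    [Column(IsPrimaryKey = true)] public int Id { get; set; }
    [Column] public int AccountId { get; set; }
    [Column] public int SenderId { get; set; }
    [Column] public string Text { get; set; }
}

[Table(Name = "senders")]
public class Sender
{
    [Column(IsPrimaryKey = true)] public int Id { get; set; }
    [Column] public string ScreenName { get; set; }
}

// One context per database, not one per table -- the analogue of a single
// SQLiteOpenHelper owning every table in the Android database.
public class MicroblogDataContext : DataContext
{
    public MicroblogDataContext(string connection) : base(connection) { }

    public Table<Account> Accounts { get { return GetTable<Account>(); } }
    public Table<Notice> Notices { get { return GetTable<Notice>(); } }
    public Table<Sender> Senders { get { return GetTable<Sender>(); } }
}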

Hope this helps someone, I know I spent more time on this problem than it deserved.

Garden Planning Begins Now

Last summer, Catherine and I got a 400-square-foot plot down at the local community garden. Due to our late start, getting married, and the unfortunate weather last year, our yield was lower than we’d hoped for, but all in all, we were still pretty happy. This year, one of those distractions definitely won’t be in our lives, and another we’re working to avoid. Of course, we don’t receive as many seed catalogs as some people, but that’s just because last year we started all of our plants from other people’s starts.

This year, we saved quite a few seeds from various tomato and pepper plants. Saving seeds is generally pretty easy (and I can’t believe I didn’t write about this last year!). This summer and fall as we go about saving some of our seeds, I will certainly post about what we do. Saving seeds is a great money saving method of gardening, especially since you can often use grocery-store bought fruits and vegetables to get seeds.

Still, even ignoring preparation which would have begun last year, there is still a lot of planning to do this year. We’re still going to need other seeds. We’re still going to need to plan our plot, as we can’t simply grow all of our crops in the same places we did last year. Crop rotation is important, as you generally don’t want to grow members of the nightshade family (tomatoes and potatoes, for instance) in the same spot year after year. This is partially related to the potential for disease, but it’s also related to the pH requirements of those plants.

Due to this, we need to plan exactly where we want to plant everything. We need to decide what kind of row covering we want to build (we’re planning on building row covers out of PVC so we can plant earlier and have easily removable greenhouses). We need to decide how our rows are going to be situated. There are a lot of decisions to be made before we start our seeds, and certainly before we get the garden ready for planting this Spring.

This year, we have decided to do ‘flood’ gardening, where we will dig our beds down a couple of inches below our walkways, allowing us to ‘flood’ the beds when we go to water. This decision was driven in part because one of the most prolific gardeners at our little community garden uses this method, and it certainly does seem to work well.

So, between that and our PVC row covers, we’re definitely going to have our work cut out for us. But, it should lead to a solid yield, and some really good food both to eat right away or to can. I know it seems early to be thinking about gardening, but at least now it’s mostly just thinking, there will be plenty of time for work soon.

Git: Social Source Control

I’ve written about git before. Git was created by Linus Torvalds in response to the BitKeeper people revoking the license given to the Linux Kernel Project. Git was designed to support a massively distributed, large-scale project. Its salient features are that every user has the entire history of the project, and that git can be used completely offline, needing a connection only to push the local repository to a remote or to send e-mail patches to the project maintainer.

Okay, so the book-keeping is out of the way.

I’ve always thought that git was perfect for distributed projects, like the Kernel, or really most Open Source projects. And while I used to feel that git wasn’t appropriate for ‘traditional’ source control, I’ve since changed that opinion. What’s so beautiful about git is that it doesn’t force you into any particular workflow. There are web-centric workflows. Workflows meant to keep succinct, short revision histories. Workflows based on e-mail submissions. You can branch frequently or infrequently. You can have a central repo which many people can commit to, or force all patches through a single maintainer (or a very small group of them).

In short, git can be made to work the way you work. The only things keeping me from getting anywhere pushing git at my current position are that its Windows support is still wonky, and there aren’t any good Windows tools for managing git repos. My co-workers almost all have a distinct fear of the command line.

However, while git is pretty great on its own, the popular site GitHub has really changed the way I use git. Now, not everyone likes GitHub. Evan Prodromou of the Laconica project feels that using non-free software to create Free Software isn’t worth it. I understand where he’s coming from, and while I can’t say I completely disagree with him, the features GitHub provides make using git the way I want trivially easy.

Unfortunately, I’m hosted on a shared host, meaning I don’t have a ton of control over my server. Admittedly, my provider is amazingly responsive to concerns, and generally does a great job considering the pittance I’m paying him. I was going to request that git be installed (they recently installed SVN for users), but with GitHub, I’m not so sure I’ll need it.

GitHub allows me to fork projects with the click of a button, create new repositories easily, and share my code quickly. If I don’t want to fork a repo, I can simply follow one, so that I’m notified of commits to it. If I’ve forked a repo and want my changes to make it upstream, I can request a pull.

Ryan Tomayko was right. GitHub is MySpace for Hackers. And now, unlike when Tomayko was using it, anyone can sign up for a github account. No invites necessary. Currently, I’m following the YUI project, and John Resig’s Sizzle among others. If you’re a Ruby hacker, there is a ton of Ruby on github, including the core ruby code.

GitHub, with its ease of collaboration, has really taken git to the next level for a lot of people. You don’t have to worry about the way the e-mail system works (from what I understand, using git over e-mail requires a client like mutt). You don’t have to worry about finding a webhost with git support (or running your own public server). It’s git without any headaches over how you’ll share. I like that.

I’m thinking about looking into hacking together a git plugin for Visual Studio. While git has reached critical mass in the Linux community, until we get GUI tools for Windows, we’re going to have a hell of a time breaking out of the Unix world. Just look at how well SVN has done on Windows since TortoiseSVN was released. And the flexibility of a distributed system like git is fantastic.

If you do any Open Source development, check out github. Even if you don’t want to host your own projects there, odds are good that there will be a project you’re interested in watching, if not contributing to.

Static Vs Dynamic Typing

Between work and personal projects, I spend a lot of time jumping between statically typed languages (primarily C# and Java) and dynamically typed languages (Perl, Python, JavaScript, and VBScript). Even when I ignore the (minor) problems inherent in working with so many different languages (primarily centered around having to context-switch between the syntaxes), the biggest headache I continually run into switching between these languages is simply the type system.

Perhaps it’s because I cut my teeth on C, which despite appearances is really quite weakly typed, since type names do little more than tell the compiler how many bytes it should be addressing at that moment in time; or maybe it’s the fact that much of my programming time is spent on the web, where really, everything is a string. I’m not sure I buy that one, though; most of my programming in college was centered around binary data, and I used to hack proprietary file formats for fun.

Maybe I’m just getting lazier, but constantly having to manage types is really getting annoying. This is a little worse today than it’s been, because I’m finally getting back into Android programming, and I’d forgotten how much manual type coercion Java really forces you to do that C# handles behind the scenes for you. If it’s not already obvious, I’m definitely in favor of dynamic type systems. They just make for easier code to write, and easier code to read.

At the moment, I’m working on an application form in ASP.NET MVC using C#. Not terribly difficult, but I’m having to write significantly more code in C# than I am in the project’s JavaScript (I duplicate the validation in both places for a better user experience). This is mostly because of the need to explicitly convert strings to numeric types to test that certain fields (such as Social Security Numbers) are entered correctly. It’s just becoming a simple fact that dynamic languages allow us to write code faster. There is always the potential that we can write worse code faster, but with a little care it’s just as easy to write better code faster.
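For instance, the server-side check ends up looking something like this (a rough sketch; the method and parameter names are made up, not code from the actual form):

// A rough sketch of the kind of explicit conversion C# forces here; the
// equivalent JavaScript check gets by without declaring any types at all.
public static bool IsValidSsnPart(string value, int expectedDigits)
{
    int parsed;
    // The string must be explicitly parsed before it can be treated as a
    // number in any way.
    return value != null
        && value.Length == expectedDigits
        && int.TryParse(value, out parsed);
}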

Now, while I definitely prefer dynamic typing, there is the valid performance argument. Dynamic typing is slower. This is a fact. However, for most uses these days, it isn’t noticeably slower. Computers are fast enough today that the overhead involved is negligible for almost all applications, but there is one place where static typing is still generally preferred: mobile computing. Phones, primarily, call for static typing, not so much because of the speed of application execution, but because battery life would be negatively impacted by the constant work of determining (or modifying) types.

However, that doesn’t mean I can’t program as if the types were more dynamic. C#, particularly in later versions, has implemented a ton of compiler conveniences which make C# feel more dynamic: the ‘var’ keyword, object initializers, and collection initializers, plus quite a few other things, all of which make the code easier to write while keeping the language static. I’d be a lot happier with Google’s choice of Java for Android if Java would implement even a small number of these more ‘dynamic’ features. Perhaps I should just put together a C# compiler that spits out Dalvik bytecode (I wonder how modular the Mono compiler source is?).
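A quick illustration of the sort of thing I mean (nothing project-specific, just the compiler conveniences in play):

using System.Collections.Generic;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Example
{
    public static void Run()
    {
        // 'var' infers the static type from the right-hand side, and the
        // object initializer sets properties inline.
        var person = new Person { Name = "Ada", Age = 36 };

        // A collection initializer builds the dictionary inline; the types
        // are still checked at compile time.
        var ages = new Dictionary<string, int>
        {
            { "Ada", 36 },
            { "Grace", 45 }
        };

        ages[person.Name] = person.Age;
    }
}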

Used well, dynamic typing can lead to much cleaner code, much faster. I’m very glad to see the trend in software engineering toward dynamism, even if it is frankly coming along more slowly than it could.

Food Safety and Accountability

Over at the Ethicurean this weekend, there was a long post regarding Bill Marler, a prominent lawyer who has made his name (and fortune) suing food companies for selling food contaminated with E. coli and other pathogens. I don’t know much about Marler’s history, so I can’t say if his interest in food safety is anything other than a convenient professional venue, but he has been litigating these cases for fifteen years. The Ethicurean post was related to his December 30 post regarding what he considers the coming food safety challenges of this year.

To be fair, I do agree with some of the things Marler feels are important: depending less on global production of foodstuffs, the poor funding for food science, the environmental impact of food production, the lack of pet food standards, and so on. However, Marler also specifically targets farmer’s markets and CSA organizations.

Outbreaks linked to local food and/or farmer’s markets. Community Supported Agriculture (CSA) groups and food co-ops need to demonstrate knowledge and practice of food safety, and be inspected. In addition to produce and meats/fish, prepared items are currently unsupervised.

Okay, I don’t know about the markets Marler visits, but I know that at my co-op they take food safety seriously (and I’m pretty sure they receive regular health department inspections), and at the Farmer’s Market nearly all the vendors are professional caterers, and I’ve never seen anything dangerous. That may just be my experience, but frankly, I’ve seen far less sanitary conditions in normal restaurants.

My problem with Bill Marler is that he doesn’t exercise discretion. On Ethicurean he once commented that “if I have a legitimate client sickened by a food product, it frankly does not matter if the manufacturer is Con Agra or Mom and Pop. A child who suffers kidney failure or death certainly does not care who poisoned them.”

There is truth to that, and if I had a child suffer from Listeria poisoning and die, I would be distraught. I would be angry. I would be looking for fault. But the question, for every one of us (including lawyers like Mr. Marler), comes down to that definition of fault. Every single action we partake in, including eating, bears with it a certain element of risk. Contamination and outbreaks will happen. This is a fact of life (and one that I suspect Mr. Marler’s bank account depends on).

Just as relevant is the fact that when we find ourselves ill, we generally just wait until it passes. My wife and I both got fairly ill during the Christmas shopping season, the day after our final day of shopping. We both sincerely suspect it was the dinner we’d had at Qdoba (though proving that is difficult), and we were both fairly ill for a day or two. It passed, and we’re both fine, but had the illness been the precursor to something more serious, our decision to do nothing about it at the time may have proved fatal. Even if we had gone to a doctor, the doctor likely would have prescribed the same course of action we were already taking (though it’s possible a blood test would have been performed).

Let’s say, hypothetically, that we had gotten worse, and one or the other of us had died from food poisoning. Who would be at fault? The purveyors of the burritos? The distributor who sold them the food stuffs which poisoned us? The producer who sold said foodstuff to the distributor? Perhaps the doctor who failed to diagnose us in time?

Food Safety is important, I am in no way denying that. People preparing food should wash their hands. Kitchens should be cleaned regularly. Animal carcasses should be cleaned of their own filth before butchering begins. However, just because a bad thing happened, doesn’t mean that there is negligence.

Ethicurean writer Ali tells the story of a Massachusetts dairy farm which had a listeria outbreak that left three adults dead (and led to one miscarriage). Immensely unfortunate. Marler represented one of the affected families in the lawsuit which followed the outbreak, and now that dairy is no longer operating. The real shame here is that apparently the evidence was that the dairy was doing all they could to ensure a quality product. They genuinely cared about the health of the animals, fed them well, and once the outbreak occurred they worked hard to mitigate the impact.

And yet, they are now closed, unable to continue operations after the suits filed against them. This was an unfortunate happening, there is absolutely no doubt of that, but was it really worth putting that small dairy out of business? That was probably a dairy that you could go visit and see exactly how the animals lived before you bought any of their products, and I’m sure the people would have gladly shown potential customers around. Contrast that with larger operations, which operate in secrecy, and respond slowly even after their products are discovered to be tainted.

These small producers genuinely care about what they produce. The producers at our farmers market know many of their customers, if not by name at least by face. If I got sick, and my illness was traced back to one of those producers, they would likely respond personally. They would claim responsibility, and they would try to prevent it from happening again.

Every time I open my mouth, I’m taking a risk. Whether what I’m about to eat is from our friendly farmer’s market, our own garden, or some industrial producer whose source I’d have difficulty tracking down, that risk is everywhere. But what we should be looking for before we sue, and this applies not only to food safety but all aspects of life, is fault. Intent. Did they, whether out of ignorance or lack of concern, operate in a fashion that raised the risk of illness? Then perhaps there is fault to be examined. But if a producer genuinely tries to operate in a manner conducive to producing a healthy product, and lessens the contamination vectors by exercising reasonable methods (i.e., proper cleanliness, keeping un-composted feces away from fields, and so on), is the level of fault (and it’s not fair to say there is no fault) such that a lawsuit is even reasonable?

Ethicurean Ali ends her post (which, looking back, mine is awfully similar to) talking about how she generally likes Bill Marler, and that she feels he is a great hope in the fight for food safety. A part of me wants to agree with her, but I’m not sure I can. Bill Marler appears to me to be among the worst kind of personal injury attorneys, targeting anyone he thinks he can get away with targeting, and focusing on clients a jury is likely to get emotionally involved with (like children), which can lead to greater judgments. Admittedly, Marler has done work which has held a lot of negligent producers accountable. His stance that the meat-packing industry needs to hold to higher standards of sanitation, I completely agree with. But his statements on the Ethicurean and elsewhere, and parts of his own litigation record, suggest to me that food safety concerns would be far better served by an individual who cared more about reasonable standards than about suing anyone who has accidentally poisoned someone, regardless of their efforts to prevent food-borne illness.

Update: I have reread the above, and traded a few comments with Mr. Marler on Twitter. Considering how the above reads, he’s been very polite, inviting me to call and discuss the issue (as of this writing, I have not done so). I want to reiterate what I said above, but which was masked behind some of my other statements: Bill Marler has done a lot of good for food safety. He has done a lot to take on the meat packing industry, particularly with their attitudes that we should just cook the hell out of their contaminated meat. I agree wholeheartedly with this, and my knowledge of certain cases (such as the dairy case discussed above) is certainly less than Mr. Marler’s, though I still disagree with that particular action.

Food safety is a complicated issue, and I do not wish to imply that it is solely the realm of the consumer. Producers should be held accountable. It is on the question of what level of accountability is reasonable where I suspect I disagree with Mr. Marler.

There is another source of frustration that led to the tone of the above, which I am leaving as is: namely, my frustration with the current state of the US tort system. I believe that the system is too prone to manipulation by those who wish to manipulate it, and that all too often the damages awarded through it are far too punitive. It is fair to say that Mr. Marler has personally benefited from a system I feel is broken. It is unfair to say, or even imply, that his legal interest in the field of food safety is solely to pad his own pocketbook.

My words contained an unfair amount of venom toward Mr. Marler. I disagree with him on the issue of Raw Milk, and I suspect our definitions of what constitutes reasonable efforts to avoid contamination may differ. I still stand by my stance that what I’ve read makes me think Mr. Marler is not the best possible advocate for food safety that those of us who support local food production (and certainly Raw Milk) could ask for. However, to characterize him as an ambulance chaser more interested in padding his own pocketbook than in helping other people was grossly unfair.

Mr. Marler, I apologize for this mischaracterization. While we are going to disagree on several issues, the work you’ve done has had a positive impact, and it is unfair to make light of that. I think I will take up your offer to call, though not today. If nothing else, I need to spend some more time familiarizing myself with your professional life.

Resolution (sort of) to the ServerXMLHttpRequest Issue


Last week, I wrote about how the installation of KB954459 broke our usage of the ServerXMLHttpRequest objects. After putting together a test case, I e-mailed it to Microsoft’s XML Team, glad to have found the bug and hoping for a fix to come soon. While Microsoft was quite quickly able to confirm my bug, they don’t plan to fix it.

From: Samuel Zhang <*@microsoft.com>
To: “Craig, Jeff” <*@wsu.edu>, Xml Team Bloggers <xmlblogs@microsoft.com>
Date: Thu, 25 Dec 2008 22:49:12 +0800
Subject: RE: (Microsoft XML Team’s WebLog) : KB954459

Hi Jeff,

Thanks for the sample. We have verified that this is due to a security fix in KB954459, as described in bulletin MS08-069, especially in section MSXML Header Request Vulnerability - CVE-2008-4033. MSXML no longer supports modifying referer header, so we suggest you to apply KB954459 and change your design. Thank you for your understanding.

Samuel

Now, I’d been over that particular security bulletin backwards and forwards prior to e-mailing Samuel’s team, and I hadn’t seen anything regarding problems with the “referer” header, only the “transfer-encoding” header. My gut reaction, and this may be really unfair to the Microsoft XML Team, is that dropping the referer header was a knee-jerk fix, and an unnecessary one. The gist of the exploit is that the MSXML objects could have been used to make a remote request that violated the “Same Origin” policy used on the web, resulting in unintended information disclosure. And since Outlook uses the same rendering engine as IE for its HTML e-mail, this exploit would work equally well in a crafted HTML e-mail as in a crafted web page.
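For context, the call that broke for us boils down to something like the sketch below. This is only an illustration of the idea, written in Python against pywin32’s COM support; it is not a reproduction of our actual code, and the URL and Referer value are placeholders. The same late-bound calls exist in any COM-capable language.

```python
# Sketch only: the kind of request KB954459 broke, via pywin32's
# late-bound COM support (requires Windows and the pywin32 package).
# URL and Referer value are placeholders, not real ones.
import win32com.client

xhr = win32com.client.Dispatch("MSXML2.ServerXMLHTTP.6.0")
xhr.open("GET", "https://example.com/some-resource", False)  # synchronous
# Before the patch this was allowed; after KB954459, MSXML no longer
# supports modifying the Referer header on the request.
xhr.setRequestHeader("Referer", "https://example.com/originating-page")
xhr.send()
print(xhr.status, xhr.responseText[:200])
```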

At this time, I am unable to find any information about the research that went into uncovering this vulnerability. Microsoft says it was reported from outside their organization using “responsible disclosure”, and I really hope that now that the fix is available, the researchers will publish their work. But still, I am left thinking that the object was overly neutered in order to correct the issue. For one thing, the ServerXMLHttpRequest object should never have been available in the scope of Internet Explorer, and it doesn’t make sense to reduce its capabilities just because the scope of the XMLHttpRequest object needed to be (rightly) reduced.

Luckily, there is a solution. Microsoft also makes available the “WinHttp.WinHttpRequest.5.1” object, which exposes a nearly identical interface to the ServerXMLHTTPRequest object. I suspect that the response returned by WinHttp would take a few more steps to be usable with the MSXML library, so working with XML data as a data source through WinHttp may be more difficult, but that wasn’t really an issue for what we’re trying to do.
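To show how close the two interfaces are, here is the same placeholder request rewritten against WinHttp, again as a Python/pywin32 sketch rather than our actual code. The last two lines illustrate the extra step of handing the response text back to MSXML if you still want a DOM.

```python
# Sketch of the workaround: WinHttp.WinHttpRequest.5.1 exposes a
# near-identical set of calls and still allows setting Referer.
# URL and header value are placeholders.
import win32com.client

req = win32com.client.Dispatch("WinHttp.WinHttpRequest.5.1")
req.Open("GET", "https://example.com/some-resource", False)  # synchronous
req.SetRequestHeader("Referer", "https://example.com/originating-page")
req.Send()
print(req.Status, req.ResponseText[:200])

# Extra step if you still want an XML DOM: load the text into MSXML.
doc = win32com.client.Dispatch("MSXML2.DOMDocument.6.0")
doc.loadXML(req.ResponseText)
```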

The good news is that we can now apply KB954459 to our web servers and continue to function as normal. However, I still greatly disagree with the methods Microsoft used to implement this fix, as they reduced functionality in an object that shouldn’t have been affected by the attack vector described in the security bulletin. If I can get a better justification as to why the Server object was affected, I may change my story, but until then, I’m going to remain unhappy with this solution.

SSL Weaknesses Shown

This last week at the Chaos Communication Congress held in Berlin, a group of international security researchers presented research exposing some serious flaws in the Secure Sockets Layer (SSL) certificate system we use to validate that a connection is secure.

The attack was very specific, and was based on an attack against the MD5 digest algorithm first discussed in 2004. That earlier work showed that it is possible, on commodity hardware today, to compute two blocks of data which, though different, share the same MD5 hash. This is a huge problem in cryptography, because when a file is cryptographically signed, the private key doesn’t actually sign the file itself (that would amount to encrypting the whole file with the private key); instead, it signs a digest of the file. This means that if you have two plaintexts that share the same digest, signing one plaintext essentially signs both.
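To make that last point concrete, here is a minimal Python sketch. The sign() function is a toy stand-in for a real signature operation such as RSA, and the two byte strings are ordinary placeholders rather than actual MD5-colliding blocks; the only point is that the signature is computed over the digest, so identical digests yield identical signatures.

```python
# Toy illustration: signatures are computed over the hash, not the file.
# sign() is a placeholder for a real operation like RSA, and the inputs
# below are NOT real colliding blocks, just stand-ins.
import hashlib

def md5_digest(data: bytes) -> bytes:
    return hashlib.md5(data).digest()

def sign(digest: bytes, private_key: int) -> int:
    # Stand-in "signature": what matters is that the input is the
    # 16-byte digest, never the full document.
    return int.from_bytes(digest, "big") ^ private_key

legit_request = b"certificate request the CA intends to sign"
rogue_request = b"a different request crafted to collide with the first"

sig_legit = sign(md5_digest(legit_request), private_key=0x1234)
sig_rogue = sign(md5_digest(rogue_request), private_key=0x1234)

# With genuine MD5 collision blocks the digests (and therefore the
# signatures) would be identical; here they differ, so this prints False.
print(sig_legit == sig_rogue)
```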

Now, even though MD5 has been known for nearly four years to not be cryptographically secure for generating signatures, it’s still been in relatively high use, and the researchers found several companies that were issuing SSL certificates trusted by default in IE and Firefox, but signing them using MD5. Most certificate vendors have migrated toward SHA-1, and we’re beginning to see migration to SHA-2, but the holdouts saw little reason to update, until recently, I suppose.

The attack involves generating two certificate requests that produce the same MD5 hash, so that when the Certificate Authority signs the legitimate request, the signature is equally valid for the illegitimate one. There were a few problems with this: there were two parts of the signed certificate the researchers didn’t control. First, the serial number, and second, the time the certificate started being valid. The serial number turned out to be easy, as RapidSSL, the provider they were using, assigns serial numbers incrementally, and its certificates become valid exactly six seconds after submission. With that in mind, they were able to guess what the serial number and validity time would be, given that they checked the serial number counter a few days before the attack.

They then used 200 PlayStation 3 consoles to generate a colliding certificate that was itself an intermediate CA (meaning the ‘rogue’ certificate could sign valid-looking certificates of its own). The researchers did a lot of work to make the process of generating the collision faster, but they apparently don’t plan to release any details of that work. I understand their decision, but I’m not sure I agree with it. Perhaps they will in a year or so, when more people have migrated away from MD5.

So what does this mean? Well, paired with a DNS spoofing attack, it means that attackers could redirect traffic for your bank’s website from your bank’s servers to their own, all with a valid SSL certificate. It means that a man-in-the-middle attack could be performed over a secure connection while looking 100% valid the entire time.

Ultimately, it doesn’t change anything for most users. Most users click right through invalid security certificates, which I blame largely on the high cost of SSL certificates. Perhaps with the new EV certificates we’ll see prices on the less expensive certificates drop, but I doubt it. For those users who do pay attention, it means that they could be effectively tricked.

Luckily, this research has convinced those few MD5 hold-outs to switch to SHA-1, which will effectively render this attack impossible (at least until SHA-1 is broken). But the last problem the research revealed was with revoking SSL certificates. The browser depends on the certificate itself to tell it where to look to see whether the certificate has been revoked, and the rogue certificate doesn’t supply that information; the researchers had to overwrite it with random data. The workaround involves running your own revocation server, but that is just not reasonable for most users.

SSL is still a good thing, and something we should take care to verify we actually have, but it seems that it may require some fundamental changes, not only in how it’s used, but in the specification itself.