December 2007 Archives

RIAA Attacks Personal Use

Ira M. Schwartz of the RIAA, who I’ve written about before, has stepped up her attacks on the Howells of Scottsdale, AZ, claiming not only that placing ripped songs into the Howells’ KaZaA share directory falls outside fair use, but that simply ripping the CDs to their computer violated the tenets of fair use as well.

I can’t believe I defended that woman.

It’s not a new idea, by any means. Sony BMG’s chief of litigation has gone on record saying that copying a song is the same as stealing it. Back in the days of cassette tapes, the record industry prematurely lamented its own demise. VHS was going to destroy the movie industry. DVDs had to have CSS copy protection to prevent people from stealing movies. As the Washington Post said today, this is just an example of an old guard business trying its hardest not to adapt.

Some people do steal media. Some people will steal anything that they can manage. Most people don’t. Most people don’t have an interest in stealing, and simply want to be able to use their media where they want. I buy CDs because I like CDs. I like to have control over the format I rip them into. I don’t buy MP3s, because I dislike the patents surrounding MP3. I think that Ogg Vorbis is a superior format. And importantly it’s a Free format. I’m free to use it without having to worry about my data being covered under software patents.

I’m immensely pleased to see Amazon selling DRM-free MP3s and having the RIAA-backed record studios signing on. I’m pleased that Apple has dropped the price of DRM-free music to match the DRM-encumbered music on iTunes. I think they should stop offering the DRM option altogether, but at least it doesn’t cost more any longer.

When I make Ogg Vorbis versions of the songs I have purchased, I do so so that I can listen to them more conveniently, either on the go using my Neuros Audio Computer, or with all my music at my fingertips via Rhythmbox. I don’t rip my CDs and resell them. I don’t share my files with people outside my household.

I am not a criminal. I am a consumer interested in choice, and disgusted with the old guard. The record companies seem to have come around on this issue. The RIAA has not, and the record industry needs to cut ties and put an end to the litigation against consumers.

Go after the Howells for sharing their music over KaZaA; I agree that isn’t fair use. But going after them for ripping CDs for personal use? I hope a judge rips you apart, Ms. Schwartz.

Microsoft Opens its Security Window

Microsoft has decided to start a new blog from their security vulnerability research and assessment team. It’s new, so there isn’t much there yet, but this is a huge step for Microsoft on the road to full disclosure.

Full disclosure is vitally important to security. Last August, Bruce Schneier had the opportunity to hound the TSA about it, and Bruce would know. He’s a cryptologist and a security expert.

In cryptography, it’s commonly accepted that an encryption algorithm that isn’t available for public consumption and consideration isn’t worth using. If it weren’t for full disclosure in cryptographic algorithms, we wouldn’t be aware of a potential NSA backdoor in a current encryption algorithm. If not for full disclosure, we wouldn’t have reason to know that Blowfish and AES are as secure as they are, or that the people testing those algorithms have all the information necessary to test them.

Microsoft’s new blog isn’t quite full disclosure, but it gives us a glimpse into their thinking, how they work, and what they discover. For a company with a history of silently fixing holes, and not acknowledging their security shortcomings, this is an important step.

In addition, it gives us some insight into the tools Microsoft is using internally. For instance, in their post on insecure SMB2 signing, Microsoft reveals that they’re using Open Source Software in house. The images make it clear that Microsoft uses Wireshark (formerly Ethereal) for their protocol analysis.

And why not? Wireshark is an excellent protocol analyzer, particularly when run on stored data captures (live captures are potentially dangerous). Not only do the images reveal their use of Free Software, but the fact that the posted traffic dump was clearly generated by libpcap is another sign.

I’ve commented before about Microsoft becoming more open, and while I’d always like to see them go further, this is a great step. But guys, give a shout-out to the software you’re using when you demonstrate it like that. I’ve no doubt there will be more tools mentioned in passing over the life of your blog, and people really may be interested.

Encryption Passphrases may be Protected under the 5th Amendment

A Federal Judge in Vermont ruled on Monday that encryption passphrases are protected under the 5th Amendment right against self-incrimination. The case is interesting because officers know that the defendant’s laptop contains images of child pornography. A Customs agent saw them while the volume containing the images was still unencrypted at the time of the arrest, but the volume was automatically encrypted again when the laptop was shut down.

The authorities know this man has child pornography on his laptop, but cannot access the data again without a passphrase that Boucher (the defendant) has refused to supply. Setting aside the type of crime, which is detestable, this becomes a question of the nature of the passphrase. Is it the same as a physical key? Or is it the contents of a person’s own mind, such that divulging it would be testimony?

In the discussions of this ruling on Slashdot and Bruce Schneier’s blog, people seem awfully split, and everyone is still wary. After all, this is the kind of situation that will almost certainly need to be examined by the Supreme Court at some point, and I’m not well versed enough in the case law history of our Justices to even hazard a guess as to what that decision would be.

Professor Orin Kerr evidently feels that Judge Niedermier was wrong in this case to deny the subpoena that would have forced Boucher to enter the passphrase and give authorities access to his data. He seems to feel that part of the reason the decision is incorrect is that Boucher has already demonstrated that he knows the passphrase, and in so doing provided some amount of evidence to the authorities prosecuting him. Mr. Kerr’s post leaves me a little unclear on how he would stand on this issue if Boucher had never demonstrated knowledge of the passphrase necessary to decrypt the pornography on his computer.

The discussion is most interesting due to the lack of firm precedent. Niedermier’s decision cites a 2000 Supreme Court decision (United States v. Hubbell, 530 U.S. 27, 43 (2000)) which held that while turning over a key to a lockbox (something physical) is not protected as testimony, the combination to a safe is considered testimony, and a person cannot be forced to turn it over. Some of Kerr’s commenters (and I) feel that requiring Boucher only to enter the passphrase, rather than actually to ‘disclose’ it to police (as they have attempted to force), is not significantly different from forcing him to tell it to them directly. The man was foolish to decrypt the files for the border control agent, certainly, but doing so then does not compel him to do so now. The use of the passphrase is functionally identical to the disclosure of it.

Also of interest is an 1886 decision (Boyd v. United States, 116 U.S. 616 (1886)), referenced by one of Professor Kerr’s commenters, which held that under the 5th Amendment a person could not be compelled to provide private documents of an incriminatory nature. In my opinion, the files on a person’s computer are covered under the description of ‘private documents’; these days we simply have more means to protect that information.

The only truly compelling analogy in the comments was one comparing the passphrase to the combination to a safe which contains a physical key (a fair comparison). Can a person be compelled to provide physical evidence which is protected by something that exists only in his mind? It’s an interesting point, because in this case the prosecutors know that the only thing the ‘combination’ will provide direct access to is the ‘key’ (a number) which can be used to decrypt the documents they are interested in. Is the fact that the passphrase works like a combination enough to protect him? Is the fact that the ‘personal files’ the key would unlock are potentially incriminating enough to save him from forced disclosure? Or is the fact that the evidence being requested is analogous to a physical key (which would provide access to incriminating data) enough to justify forced disclosure?

At the end of the day, I disagree with Professor Kerr and agree with Judge Niedermier. I believe that the passphrase is analogous to the Combination in US v. Hubbell, and that turning it over would in effect be turning over incriminating documents as in Boyd v. US.

This may also come down to the fact that Boucher was foolish enough to provide the passphrase once. In doing so that first time, he may have waived his right to keep it secret. I’m not sure how this is going to play out in the courts, and I think Boucher deserves some time in prison for the child pornography witnessed on his computer. If he is compelled to unlock the data on his computer, I pray it’s solely because of the initial unlocking he did at the border.

*Disclaimer: I am not, nor have I ever claimed to be, a lawyer. I am a software engineer with a strong interest in information security. I have experience working in judicial processes (though not the US courts), which has given me some sense of how to think about these issues, though I lack formal training in either written or case law. I respect Professor Kerr’s knowledge of the law, and my disagreement with him is based on my interpretation of the decision, the news surrounding this issue, and the comments on his blog and elsewhere.*

And the runner-up for the 'Net's Worst Browser is....

People aren’t going to like this, but I’m going to have to say Safari. Safari is typically well regarded: it’s standards-compliant, and its Javascript and rendering engines are light and fast (which is why WebKit is gaining ground on mobile devices). But everything that’s great about Safari is really what’s right with WebKit; what Safari does on its own is kind of horrendous.

First and foremost, Apple released a version of Safari 3.0.4 for Windows a while back. This would have been great, except for two things. First, Safari looks terrible on Windows. Rather than integrating the browser with whatever platform it’s running on, Apple forces Safari (and iTunes, actually) to look exactly as they do on Mac OS X. One of the most common complaints I’ve heard about GTK+ on Windows is that the widgets don’t quite look right, which makes the application look out of place. iTunes and Safari look completely out of place on Windows, creating a disjointed experience for the user. Not that I think Apple cares about this; I’m pretty sure they’re just using these products to try to convert users.

The other problem with Safari on Windows is that it simply doesn’t work the same as Safari on the Mac. There are pages that load fine on the office MacBook that won’t do anything on my Windows installation of Safari. I’ve yet to track down this particular problem, but it simply doesn’t make sense, and it suggests that this beta is not nearly as ready as Apple would like users to believe.

The second major problem I encountered may actually be a problem with WebKit; I haven’t had a chance to run a Konqueror test yet. Safari appears to append an extra CR-LF pair to the end of uploaded files. This led to problems with our CSV decoder, as it saw that extra CR-LF as part of the file contents and made the false assumption that the file used DOS line endings. I’ve changed the behaviour of the decoder to be more resilient, but with some binary formats, that could have been a major problem. The bug has been reported to Apple, however.
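The fix amounted to treating a single trailing CR-LF as an upload artifact before sniffing the line endings. Roughly this idea, though the sketch below is illustrative rather than our actual decoder code:

function sniffLineEndings(upload) {
  // Ignore one trailing CR-LF that the browser may have appended to the upload.
  var body = upload;
  if (body.length >= 2 && body.charAt(body.length - 2) === '\r' && body.charAt(body.length - 1) === '\n') {
    body = body.substring(0, body.length - 2);
  }
  // Only call it a DOS-format file if CR-LF actually appears inside the content.
  return body.indexOf('\r\n') >= 0 ? 'dos' : 'unix';
}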

However, these flaws are not as bad as most of IE7’s. At least I was able to report the bug to Apple, something Microsoft makes nearly impossible, and Safari works well on its primary platform. I know IE8 is apparently going to improve its standards compliance, but I can’t judge it until it’s closer to release.

Web Development in the Modern World

In the old days of the Web, sites could choose to support a specific version or family of web browsers. These days, we don’t have that luxury. As Web Developers, we can’t know what our users will be using, so we have to support as much as we can. Of course, there are far, far too many choices. There are well over a thousand different web browser versions and brands out there, and it’s completely unreasonable to try to test every last one of those platforms.

Yahoo! talks about Graded Browser Support in the documentation for the Yahoo! User Interface Library (which, incidentally, has become my favorite Javascript framework). Of all the methods to determine support that I’ve seen, this seems to be the best. It gives the most freedom, while still serving as a stringent guideline. Whitelist all the A-Grade browsers you want to give full support to, but acknowledge that C-Grade browsers still exist.

There have been times when I’ve had to use lynx or w3m to browse the web, cell phones are becoming far more common as a means to access online data (our online schedule of classes gets an amazing amount of cell phone traffic), and I’m reasonably certain that web browsers designed for the blind or otherwise disabled would fall into this category. A lot of fancy web application stuff would be really hard to use for people who lack all the physical capabilities of the ‘normal’ user.

Much of this has gotten far better. The text- and mobile-based browsers have better CSS support, and are better about ignoring tags that they don’t recognize. It really isn’t necessary to do this anymore:
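The sort of thing I mean is the old comment-hiding hack (give or take the exact markup), wrapping every script body in an HTML comment so that browsers that didn’t understand script tags wouldn’t dump the code onto the page as text:

<script type="text/javascript">
<!-- hide the body from browsers that don't understand script tags
  document.write('Hello from a script-capable browser.');
// done hiding -->
</script>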

In my personal opinion, any browser (version) that doesn’t know how to handle or ignore script tags should be classified as X-Grade these days. Ten years ago, it made sense, but not anymore. We, as web developers, need to get over the idea of the web application being more important than the web page. We need to determine a minimal level of support, and design our applications beginning at that level.

There are, therefore, two approaches to supporting C-Grade browsers: Progressive Enhancement and Graceful Failure. Planning for Graceful Failure is a flawed methodology of design. If nothing else, reread that last sentence. Planning for failure will always succeed, but not usually in the way you want. When planning for graceful failure, you design your system to work with all these whiz-bang features, and then go in and try to make things work without them. This is hard, because you almost certainly made assumptions in your initial design that simply won’t hold true for the failure case.

Meanwhile, Progressive Enhancement allows you to design for the most limited environment, and then slowly add in those features. You still need to keep in mind the capabilities of the ideal environment when designing the system, but those considerations are back-end issues and almost certainly won’t be problems your users will face in the front end. You might argue that it will make your code less clean, having to squeeze those advanced features in later, but that isn’t completely true. Good design will be highly modular, and the new code can leverage the modularity of the old code in a manner that shouldn’t require much refactoring.
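As a contrived sketch of the idea, assume an ordinary search form (the element names here are invented) that posts back to the server; the script below, only when the browser can actually run it, upgrades that form to fetch results in place:

function enhanceSearchForm(form) {
  // C-Grade browsers never get past this check; the plain form postback keeps working.
  if (!document.getElementById || !window.XMLHttpRequest) {
    return;
  }
  form.onsubmit = function () {
    var query = encodeURIComponent(form.elements['q'].value);
    var xhr = new XMLHttpRequest();
    xhr.open('GET', form.action + '?q=' + query, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('results').innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);
    return false; // suppress the normal submit only on the enhanced path
  };
}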

In my opinion, we need to begin our development with no CSS or Javascript. Cookies are fine, as I can’t think of anything I’d consider a C-Grade browser that doesn’t support cookies. Also, a lot of applications depend on session state these days, and I believe that it’s best not to put the session key in the query string. It seems that a lot of people disagree. In a recent episode of FLOSS Weekly, Avi Bryant of Seaside, a Smalltalk-based web framework, pointed out that one of the benefits of Seaside was how well it saved a developer from having to deal with design issues, and that the application could be made to look pretty from the get-go. I really wonder how well Seaside degrades for C-Grade browsers.

In the end, I believe it’s important to maintain that C-Grade experience, and improve upon it for the A-Grade browsers (currently IE6+, Firefox 1.5+, Opera 9+ and Safari 2+). The best way to do that is to start with the most basic functionality you can afford (or are willing to support) and expand from there. I’ve been lucky in this respect: I’ve been handed several very large systems and challenged to make them something better. Most of my work has been maintenance, yes, but the point is that the product was designed for the Web five years (or more!) ago. Sure, some of the refactoring necessary to take the software to the next level isn’t always completely trivial, but in the end, the user doesn’t care what the code looks like, and I know the pages will be more accessible to all my users, not just the ones using A-Grade browsers.

Internet Explorer innerHTML Failure

I’ve been working, lately, at making AJAX work within the framework initially laid out at work, ten years ago. It’s not completely trivial, but luckily the framework was flexible enough that I’ve had a fair amount of success. As usual, however, the stumbling block has been integration with Internet Explorer.

On this project, I’m using the Yahoo User Interface (YUI) library to do most of the heavy lifting. Of all the major Javascript frameworks I’ve seen, I think it’s the most flexible. I’ve been happy with it. For one of my XMLHttpRequest calls, I’m just taking the response, which is a list of option elements, and setting it as the innerHTML of the appropriate select box.

This worked, at least, until I went to test it on Internet Explorer 7. The XMLHttpRequest call succeeds, but for some reason my select boxes just went empty. Using the IE Developer Toolbar, I could see that the DOM was malformed; the initial opening tag of the markup I’d inserted was being dropped.

So, I turned to Google and searched for “Internet Explorer Select innerHTML”, and what should the first result be but a Microsoft Knowledge Base article outlining the problem. And dated 2003. Why Microsoft hasn’t managed to fix this after four years, dozens of patches, and a major release, I’ll never know. But at least they posted a workaround.

Building the select element and doing an outerHTML set was going to be a pain, but I could deal with that. Until it didn’t work. That’s right: the suggested workaround from 2003 doesn’t work in IE7. Fantastic.

Now, I have no choice but to write something that dumps XML or JSON and parse it out at the client, which adds complexity and takes more time. This probably wouldn’t irritate me so much if the bug weren’t so old, if it didn’t seem so trivial to fix, and if the situation hadn’t worsened in the last four years.
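For what it’s worth, the replacement approach looks roughly like this (a sketch rather than the production code; the response format and element ids are invented): the server dumps JSON describing the options, and the client rebuilds the select through the DOM, which IE does allow.

function populateSelect(selectId, jsonText) {
  // Expecting something like: [{"value": "101", "label": "Intro Course"}, ...]
  var items = JSON.parse(jsonText); // via a JSON library shim in 2007-era browsers
  var select = document.getElementById(selectId);
  select.options.length = 0; // clear the old entries
  for (var i = 0; i < items.length; i++) {
    // Assigning new Option() into the options collection sidesteps the read-only innerHTML on SELECT in IE.
    select.options[i] = new Option(items[i].label, items[i].value);
  }
}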

This sort of thing is probably what irritates me the most about IE’s prevalence. I know of no web developers that do their initial testing in IE. Everyone I know uses Firefox or Safari (mostly Firefox, by far) to do their initial building, and then port it over to IE. It just goes to show how lousy IE really is, that it, being the most widely used browser, isn’t the primary development environment for web applications.

It would be as if Windows applications were all developed on Linux and then grudgingly ported over to Windows. Of course, I tend to do this anyway, since it’s far more pleasant for me to work in Linux, but I know that I’m very much the exception to that rule.

Unfortunately, we web developers have dug this hole, and we’re going to continue to drown in it. By catering to poor standards compliance throughout the browser wars and into today, we’ve told the browser companies that it’s okay not to follow the standards; we’ll deal with it on our end. Had we taken a stand when IE was the new, broken-ass kid on the block, and refused to bend over backwards for it, I bet it would have been a far better product. As it stands, Microsoft has no incentive to fix these sorts of bugs, because we’ve told Microsoft that we’ll take care of it.

I’d love to have the application flake out and not always work right on IE. I really would. But unfortunately, the majority of our users are on IE, and it’s not worth the calls we’d get if we went that route. Today I’m having to bite my lip and implement code far more complex and error-prone than I’d intended, but Microsoft has left me no choice.

I wonder if there is a relatively easy way to force IE users to use postbacks, while Safari/Firefox users can use the AJAX stuff… I wonder if it’s really worth implementing…

Movable Type Open Source

Last week, Six Apart finally released the Open Source version of the Movable Type platform. The impending release of MTOS was the main reason why I’d selected Movable Type when I was planning to migrate away from b2evolution several months ago.

With MTOS, the code is licensed under the pure GPL, so while Six Apart still intends to sell an enhanced version of Movable Type, the core of their software will always be free. I’m not sure what they intend to sell, whether it will just be support and a set of plugins or something more, but they’ve chosen to give the community a very mature and powerful Perl-based platform.

Quite a bit of my initial decision was based on wanting to use Perl as the base language for my new system: I wanted something new, certainly, and quite a bit of my more recent work has caused me to prefer Perl to PHP. Several other benefits came with the switch, however. My blog performs far better than it used to, and takes much less effort to serve off my servers. This is largely due to a major difference in the methods used by b2evo and MT.

In b2evo, as with almost all PHP apps, pages are ‘fried’, meaning that they are processed and prepared out of the database for every single visitor. This takes time and processing power. Living on a shared system, as I do, this is a problem. Actually, a while back, my b2evo site had become so inundated with spam that it was actually taken offline because the database was out of hand.

With Movable Type, pages are ‘baked’, meaning that each post is generated only when I demand it (i.e., the post is edited, a comment is added, or I republish everything). Then it’s stored on the filesystem as a static file and served up quickly and easily. The only fried pages are the ones that need to be, like search (which I’ve mostly offloaded to Google). In the end, I get far fewer database hits, which means that the database is safer, and the system performs better.
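The baking model is simple enough to sketch (purely illustrative; this is not how MT is actually implemented, and both helper functions here are imaginary): the expensive work happens once per change, and every subsequent visitor is just a static file read.

// Runs once per edit, comment, or republish, never per visitor.
function bakePost(post, renderTemplate, writeFile) {
  var html = renderTemplate(post);                      // hit the database and templates once
  writeFile('/archives/' + post.slug + '.html', html);  // the web server then serves this file directly
}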

I’ve been happy since I migrated to MT. The administrative interfaces are better, and it’s easier for me to customize the templates than it was with b2evo. The only thing that gives me pause about the MT Open Source release is the selection of the GPLv2, which doesn’t have any specific protection for web apps. Since web application code isn’t actually ‘delivered’ to the customer (the browser, in this case), you don’t really have to honor the GPL for web-based applications; you can keep your changes to yourself. Luckily, this doesn’t apply to Six Apart, since they are delivering to clients’ servers, so their code will still be held to the GPL.

Not that I care much about what Six Apart does right now. At least I’ve got a powerful blog tool that I can do whatever I please with.

Developing SQL

At the Registrar’s Office at Washington State University, the web programming that I do requires almost as much T-SQL as it does VBScript. In fact, I spend a fair amount of time every day deciding whether a task would be better accomplished on the database or on the web server. Often the choice is obvious. Sometimes it isn’t.

For instance, it’s grading time at WSU, and one of the reports we make available to the administration provides statistics on grading: what courses have been graded, what courses need to be graded, etc. The problem we ran into was that the mechanism used by that report to determine who was instructing a course was different from the mechanism being used in other parts of the system, like our schedule of classes.

In the mechanism that was being used, the instructor’s name was being looked up in a table received in a nightly download from IT. This download has two problems. First, it does not resolve crosslisted or conjoint course relationships (a course listed with more than one course prefix and number). Second, for courses taught to students on an individual basis (research seminars, mostly), the data we had from IT would just pick an instructor. The more robust mechanism resolves the crosslist/conjoint relationship and also makes a notation for individually taught courses. Most of the code was already written, and by doing a lot of that work on the database server, the queries run pretty quickly.

The problem is that the mechanism we’re using can’t resolve relationships beyond one level, so a child-of-a-child reports the wrong parent. I wrote up an attempt to do this lookup via a recursive SQL function. It was a disaster, raising the running time of our Sections view from about two seconds to almost thirty. This view is invoked thousands of times a day; clearly, that sort of slowdown was unacceptable. Interestingly enough, I can offload this lookup to the web server, sacrificing about a meg of memory so that the lookup takes almost no time at all. The problem is, this solution won’t work everywhere. In some places it would result in more calls into the database, which could end up slowing things down as well. Ideally, our data from IT would have all children point to their ultimate parent, but we can’t count on a solution from them arriving anytime soon.
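The web-server side of that trade-off amounts to loading the child-to-parent pairs once and chasing each chain up to its root in memory. A rough sketch of the idea (not our actual VBScript, and the names are invented):

// parentOf maps a course id to its immediate parent id; roots simply are not in the map.
// Assumes the data contains no cycles.
function resolveUltimateParent(courseId, parentOf, cache) {
  if (cache[courseId] !== undefined) {
    return cache[courseId]; // already resolved on an earlier call
  }
  var current = courseId;
  while (parentOf[current] !== undefined) {
    current = parentOf[current]; // walk up: child -> parent -> grandparent ...
  }
  cache[courseId] = current;
  return current;
}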

However, occasionally recursive SQL works really well. A more recent project is moving our Contact Us pages into the database. One thing I wanted was subcategories for a few listings. I needed to get the data out of the database in sorted order, because I knew SQL would be faster at sorting the data. However, resolving the children was proving difficult. Even using SQL Server 2005’s Common Table Expressions didn’t work, because they didn’t preserve the sort order I required. Luckily, recursion works very well in this case, probably because the data set is far smaller than the one backing our section data.

CREATE FUNCTION [dbo].[GetChildEntries]
(
        @ParentId int
)
RETURNS
@ReturnTable TABLE
(
        ParentSectionId int,
        SectionId int,
        SectionName char(64),
        ShowTitles bit,
        EntryName char(64),
        EntryTitle char(64),
        NetworkAddress char(128),
        Phone char(10)
)
AS
BEGIN
        -- Walk this section's immediate children in their display order.
        DECLARE ChildSectionsCursor CURSOR FOR
                SELECT Id FROM ContactUsTest.dbo.Sections
                WHERE ParentId = @ParentId ORDER BY SortOrder;
        DECLARE @CurrentId int;
        OPEN ChildSectionsCursor;
        FETCH NEXT FROM ChildSectionsCursor INTO @CurrentId;
        WHILE @@FETCH_STATUS = 0
        BEGIN
                -- First the entries that belong directly to this child section...
                INSERT INTO @ReturnTable
                        SELECT * FROM ContactUsTest.dbo.GetEntryList(@CurrentId);
                -- ...then, recursively, the entries of its own children.
                INSERT INTO @ReturnTable
                        SELECT * FROM ContactUsTest.dbo.GetChildEntries(@CurrentId);
                FETCH NEXT FROM ChildSectionsCursor INTO @CurrentId;
        END
        CLOSE ChildSectionsCursor;
        DEALLOCATE ChildSectionsCursor;
        RETURN
END

The final solution uses one stored procedure (for ease of calling) and two functions, of which this is the recursive one. The more development I do in SQL, the more I’ve grown to love writing stored procedures and functions. Calls to them are far easier to read in the website code, and best of all, they abstract away the implementation details, giving me the freedom to modify the underlying structure without having to rewrite a large amount of code.

However, Microsoft SQL Server (I haven’t done quite as much direct development in MySQL or PostgreSQL) has some caveats that can make writing code a bit of a pain. For instance, SQL Server tends to report strange line numbers for errors. I’m still a bit fuzzy on what the interpreter ignores in the preamble to a procedure when reporting error line numbers, which is a hassle because it makes the “go to line” option nearly useless in debugging the error. I also found that if an error occurred within a function called by another function, SQL Server would report the error at the line of the call to the inner function, not at the line inside it that actually failed. Needless to say, this isn’t as helpful as it would be if the actual source of the error were identified.

Still, all of that can be worked around once you’ve learned the behavior. Inconvenient, yes, but still workable. What I really have trouble with is SQL Server not handling recursive calls very well. As I said, there is overhead; in this case, the data set is small enough (and will remain small enough) that I’m not worried about it. But making modifications to recursive functions is a hassle. If I change the return type or arguments at all, I have to comment out all the recursive calls, recompile, and then uncomment them and recompile again; otherwise SQL Server complains because the recursive call no longer matches the new definition of the object being modified. This would be a fairly trivial change on Microsoft’s part, and it would make writing these tasks far easier. I would also like to see the overhead of calling functions reduced, but I cannot say yet whether that is a problem with Microsoft’s SQL Server or with SQL servers in general.

I know recursion is often viewed as harmful, due to the complexity it can add to code. In some circumstances, I’ve found that the overhead involved is harmful as well, but sometimes it really is the easiest and clearest way to implement an algorithm.

Copyright Updates Bonanza

A couple of the events that I’ve written about recently have had some updates, so I’m just going to do a quick bringing together of those stories.

First, last week I wrote on copyright and Canada’s DMCA, which would have been even more restrictive than the one here in the US. The legislation’s introduction to Canada’s parliament has been postponed, likely due to the enormous outcry from Canadian bloggers and traditional media outlets.

This is a huge success for Canadians, and shows that public outcry and action can still influence lawmakers to better consider the needs of the people they represent. The bill is still going to be introduced at some point, which is fine, as long as it affords reasonable protections for fair use. Michael Geist was the blogger who broke this story, and one of the leaders of the movement against the bill. The Canadians aren’t out of the woods yet, however. It’s possible the pulling of the bill is merely a delaying tactic, an attempt to squeeze it through when the people aren’t looking. Remain vigilant, but congratulations, Canucks.

In regards to yesterday’s post about RIAA lawsuits against file-sharers, it appears that at least one company has managed to convince the labels that maybe things aren’t all bad. As reported on Slashdot, imeem.com has signed deals with the big five record labels: Warner Brothers, Sony, BMG, EMI, and Universal, to allow their users to share MP3s with one another. The labels are receiving a cut of the advertising revenue generated by the site.

I don’t see the RIAA laying off file-sharing networks where the labels don’t stand to get a financial cut, but at least the premise that file-sharing can be legal and endorsed by the labels is being tried, and hopefully it will be proven. I’ve posted about not being a fan of Social Networking sites in the past, but I’m curious about imeem now, and might actually register for my first social networking site, at least to check it out.

It’s kind of strange to write about good news for a change, but it’s a pleasant one. Let’s hope that the good news continues to flow.

RIAA File-Sharing Lawsuits and MP3 Creation

In a move that hearkens back to the days of cassette tapes, the RIAA via Atlantic Recording has filed suit against Pamela and Jeffrey Howell in Arizona regarding their production and distribution of MP3 files on the KaZaA file-sharing network. Currently, there is a moderately violent debate at Slashdot, over whether the RIAA is claiming the crime is the creation of the MP3 files, or the distribution of those files over KaZaA’s network.

It appears that the Court has a similar question, as shown by the second question this brief references from an Oct 3 Order issued by the Court:

Does the Record in this case show that Defendant Howell possessed an “unlawful copy” of the Plaintiff’s copyrighted material, and that he actually disseminated the copy to the public?

Please note that there is no doubt that the Howells violated copyright by distributing digital copies of the media over the Internet, at least under the contemporary interpretation of copyright law. The quote above raises the possibility that the claim being made is that even the creation of the digital copy constitutes an ‘unlawful copy’.

As an aside, the US Copyright Act defines Publication as follows.

[T]he distribution of copies or phonorecords of a work to the public by sale or other transfer of ownership, or by rental, lease, or lending. The offering to distribute copies or phonorecords to a group of persons for purposes of further distribution, public performance, or public display, constitutes publication.

The RIAA has seized on the language which reads “The offering to distribute copies or phonorecords to a group of persons for purposes of further distribution” as the basis for their suits against file-sharers. With the way most file-sharing systems are set up, placing a file up for share does constitute distributing copies for the purposes of further distribution. Based on the wording of the law, though, would it be legal to participate in a file-sharing network where users didn’t put their downloads up for redistribution, but only offered files they had created themselves from source media? If it is, I know it wouldn’t remain so for very long before the law was amended. Incidentally, I disagree with the RIAA’s interpretation that merely offering to distribute copies is illegal, as that is not what the law says (and it would have made the mix tape culture of the 80s and 90s quite illegal); the law clearly indicates that the purpose or intent of the offer to distribute is important. The Copyright Office’s agreement with the RIAA’s interpretation does not change the law from what it is, as that office doesn’t have that power, though their support may well influence judicial opinion.

All that said, there are several instances of legal precedent holding that participating in a file-sharing network is a violation of copyright law; the brief has several more references than those above. I don’t agree with the precedent, but I also acknowledge that either a judge or a legislator needs to do something about reversing it. I am neither of those things, and I don’t have a lot of faith in those men to reverse the precedent either. After all, our legislators passed the DMCA, and our judges have issued amazing damages for these infractions. In this case, the RIAA is requesting $750 for each song shared on KaZaA. I’d love to see how they arrived at that figure for the damages done to them.

All this case law supporting the premise that using file-sharing networks to transmit copyrighted materials is infringement has provided a clear interpretation of these laws. Unless the law is changed, or the interpretation of that law is changed (something that would need to be done by the Supreme Court at this point), we are left with users being responsible for the files they make available on file-sharing networks. Ultimately, I am not entirely against this interpretation, but I do believe that a reality check is required on the damages being awarded in many of these cases. I am also disturbed by the equating of a person’s ability to do something with their intent or purpose to do it.

Indeed, Defendant's conduct in this case has subjected Plaintiffs' valuable sound recordings to ongoing "viral" infringement. See In re Aimster Copyright Litig., 334 F.3d 643, 646 (7th Cir. 2003) (observing that "the purchase of a single CD could be levered into the distribution within days or even hours of millions of identical, near-perfect … copies of the music recorded on the CD"). When digital works are distributed via the Internet, "[e]very recipient is capable not only of … perfectly copying plaintiffs' copyrighted [works,] … [t]hey likewise are capable of transmitting perfect copies of the [works]." Universal City Studios v. Reimerdes, 111 F. Supp. 2d 294, 331-32 (S.D.N.Y. 2000), aff'd, 273 F.3d 429 (2d Cir. 2001). The "process potentially is exponential rather than linear," which means of transmission "threatens to produce virtually unstoppable infringement of copyright." Id.

And this is where the idea that the RIAA may be trying to form precedent for the vilification of MP3 (or Ogg Vorbis, et al.) production comes into play. Certainly, they are quoting other decisions here, but the argument that if I distribute digital copies of works for which I have not been granted distribution rights, I am therefore guilty of distributing them for the purposes of further distribution, is a dangerous precedent. I agree that the people to whom I’m distributing have the ability to further distribute the work, but it seems to me that the law would require proof that I encouraged such behaviour. Certainly, on traditional file-sharing networks, further distribution is encouraged, as they will typically automatically share any downloaded media. This again raises the question: if a file-sharing network were created which expressly disallowed users from re-sharing media they had downloaded from the network, would it be considered illegal under this same justification?

Ultimately though, I haven’t answered the question that I began the post with. Aside from a few strange wordings:

Defendant intentionally uploaded digital music files to his computer and that those files were being distributed to other KaZaA users from Defendant's KaZaA shared folder without Plaintiffs' permission

Defendant possessed unauthorized copies of Plaintiff's copyrighted sound recordings on his computer and actually disseminated such unauthorized copies over the KaZaA peer-to-peer network.

Both of the above quotes say the same thing, but the way the sentences are structured seems to imply that not only the distribution, but even the act of creation, is questionable by RIAA standards. The rest of the language throughout the supplemental brief suggests otherwise, though, and unlike the person who originally sent the story to Slashdot, and some of the Slashdot posters, I don’t believe that the RIAA is trying to vilify ripping just yet. I understand why they’re going after file-sharers. I disagree with their interpretation of the law, and believe it reads too much into the intent behind the distribution, but I’m willing to live with that. What I really take issue with is the damages being requested by the media firms. $40,500 for sharing 54 songs seems unreasonable to me, and until I see where that figure is derived from, I will continue to question those damages.

I am not a supporter of the RIAA. I feel that they go too far in trying to limit our ability to use the media we purchase from them. And it’s not simply the RIAA, but all the major media houses, really. I think fair use means something, but in this instance, with the file sharing, I don’t see a good argument that the RIAA is fighting against fair use. Nor do I see any good evidence of a strong attempt to vilify MP3 creation, though they don’t appear to make a good attempt to answer the question from the Court that opened this post. It seems that the RIAA is differentiating between an MP3 residing on a file system and an MP3 residing in a folder which will be shared over the Internet. The distinction is a fine one, but very important. As long as the RIAA sticks to that distinction, I see nothing changing in their war on file-sharers.

Document Management Alternative

Last week, I wrote a post about Sharepoint outlining some of the reasons why I felt that developing a reliance on the technology was dangerous, and then lamenting the lack of anything comparable anywhere else.

I still haven’t found anything that brings all the unified community pieces (discussion forums, document shares, blogs, wikis, etc) together under a single application, but the piece of the puzzle that I knew of no alternative for, Online Document Management, apparently has an alternative.

Knowledge Tree is an open source Document Management System which offers all the document management features that Sharepoint has, in a cross-platform, portable package. In fact, anywhere you can run an AMP (Apache, MySQL, PHP) stack, you can easily set up document management with Knowledge Tree.

Once it’s configured, you have the same ability to upload files, read files, check files in and out, search files, and view differences between versions that Sharepoint offers. And like Sharepoint, you get WebDAV access into the repository. Unfortunately, as of right now, that access is read-only, but I’ve noticed writing over WebDAV with Sharepoint is a little wonky anyway, unless it’s a brand new file.

Knowledge Tree comes with a full-stack installer containing Apache, MySQL, and PHP, making it a cinch to drop onto a new server. That’s good, because it still depends on pre-PHP5 versions, which will make it hard to run alongside other PHP apps. The team is hard at work trying to fix this, but for now, running a virtual server for Knowledge Tree may be the easiest answer.

Despite these inconveniences, based on my test installation on my Mac Mini, Knowledge Tree shines as a document management system, because that is all it was designed to do. Documents are arranged in folders to which users can be assigned permissions; there is no fussing around with creating ‘sub-sites’ like there is in Sharepoint, and creating a new folder is as simple as creating one on your desktop.

Unfortunately, a lot of the more advanced features, like Office integration and WebDAV extensions, are not available in the open source project. Still, as a pure DMS, I think Knowledge Tree Open Source is a good project, and unlike Sharepoint, it doesn’t suffer from trying to do too much.

If any of my readers are aware of any other Document Management systems available today that work well across platforms, I’d love to hear about it.

Roy vs Jeff : A battle over OOXML

Alright, so the Linux.com-sponsored live podcast between Jeff Waugh of the GNOME Foundation and Roy Schestowitz of Boycott Novell wasn’t actually as confrontational as the title of this post might imply. Still, it made Mr. Schestowitz look only slightly less foolish than I’d already believed him to be.

I’m just going to get the reasons that I think that Roy is a lunatic out of the way first.

  1. Though there are two editors on Boycott Novell, I’ve never seen anything posted by Mr. Coyle.
  2. The site is called Boycott Novell, a company that has done far more good for the Linux community than harm. To be fair to Mr. Schestowitz, he claims that the name was Shane Coyle’s decision, not his own.
  3. He posts at least a half dozen ‘stories’ a day. These stories are almost always filled with enormous amounts of paranoia.

To his credit, dubious as this may seem, it became clear over the course of the interview that Mr. Schestowitz is a true Free Software fanatic, outclassed by only a very small handful of people more devoted to free software than he is. And, in regards to the OOXML issue that was the topic of the podcast, his main beefs with the format are similar to my own: it’s a lousy format, not suitable for use internationally, and the ECMA standard is not what Microsoft released in Office 2007. Mr. Schestowitz is also incredibly concerned with the software patents that encumber OOXML and the .NET platform. Currently, Microsoft has deals with Novell saying it won’t enforce those patents (on Moonlight and Mono, at least), but software patents are still a cause for worry, especially where a company with Microsoft’s poor track record is concerned.

In the end, Jeff Waugh sounded far more articulate, and he demonstrated that at the core of their beliefs, both Waugh and Schestowitz stand for the same things. Support for OOXML is merely a means to an end for the GNOME Foundation. Whether we like it or not, most of the developed world uses Microsoft products, from Windows to Office. The best way to coax Windows users into switching to Linux is to show them how painless it can be. OOXML support is vital for that. As Waugh points out, the figures of how many OOXML and ODF exist in the wild aren’t very useful, as the vast majority of created documents are never published to the wild.

I particularly liked Mr. Waugh’s reason for why the GNOME Foundation had become active in the OOXML ISO standardization process. As I’ve blogged about before, OOXML is not very far removed from the binary formats that Microsoft has used in past versions of Office. By getting as much information as we can from Microsoft about OOXML, we’ll be in a better position to edit and save documents in those older Microsoft formats as well. Also, OpenOffice and Gnumeric are only planning to open, not save, OOXML documents.

Ultimately, this interview showed the battle was one of pragmatism vs. fundamentalism. I completely believe in the fundamentals of free software. Still, sometimes, we need to bother ourselves with less clean technologies in order to accomplish our goals. Even Richard Stallman recognizes this, as he did when he spent all those years in the early days of GNU reimplementing UNIX tools, to make sure his versions were free of the copyright and licensing issues that plagued even the BSD folks. We need to convince people to use Free Software, by making it better than the commercial alternatives, not by fighting a holy war about software wanting or needing to be free.

I am a believer in the tenets of Free Software, but we need to be realistic, and we need to coexist in the heart of the software world in order to win this war. We lack the resources to battle them head on; we must win our converts where we can by being better and faster and more reliable. People like Stallman and Schestowitz, with their powerful beliefs, hurt our cause far more than they realize every time they speak out against the evils they perceive, whatever technical acumen they might possess. A subversive war is won by working with the enemy closely enough to debase them, but not so closely that you become them. It’s our best chance to win, and to realize the free software dream.

The Danger of Sharepoint

Here at WSU, everyone is getting really big into Sharepoint. Our office is even starting to use it as the primary method of transferring secure documents, because it’s password protected, encrypted, and has a decent ACL model. Of course, this doesn’t really make up for the glaring problems inherent in Sharepoint.

Even though Sharepoint appears to be an ASP.NET application, there are a surprisingly large number of P/Invoke calls into Windows-only DLLs. Of course, this means that Sharepoint will never run on Mono, which was undoubtedly Microsoft’s plan all along. Even as a web application, Sharepoint is severely lacking. It’s almost entirely dependent on ActiveX controls, which limits its usefulness almost entirely to Internet Explorer. Sorry, Firefox/Opera/Safari users, you’ll have to make sure to use IE for this one application. Even on Windows it’s a hassle, because you’re going to be prompted to install almost a dozen ActiveX controls.

Using Sharepoint is interesting. Every user is expected to create their own personal page, called a “MySite”, which they can populate with anything they want, provided a “WebPart” already exists for it. If it doesn’t, WebParts can be created in any .NET language. I cannot yet speak to the ease or difficulty of creating WebParts, though from what I understand it is fairly easy. With Sharepoint 2007, Microsoft has made available a large number of WebParts, from RSS readers, to blog software, to calendars, to document lists. All modifications to a site are handled via the web; adding content is a matter of dropping a new WebPart on the page and setting some configuration options. It’s actually pretty slick.

Unfortunately, it’s not as easy as it could be. If you’ve got a list of WebParts on a page and you want to reorganize things, you’ve got to edit a set of obscurely named settings (obscure to the average user, at least). Plus, there doesn’t appear to be an easy way to move a WebPart from one column to another. Google’s iGoogle Gadgets should have been the model for how to handle moving elements around the page.

Still, Sharepoint is gaining in popularity because it is fairly complete groupware, and if you’re already a Windows house, its integrated support for Microsoft Office files, including revisioning, is excellent, as is its usefulness as a collaboration tool for files, particularly with the WebDAV support it offers for mapping Sharepoint sites as if they were drives. It’s a decent, if cumbersome, community-oriented tool.

However, unless Microsoft drops all the ActiveX controls, something they’re unlikely to do, it will be incredibly hard to use for anyone who doesn’t use Internet Explorer. Do yourself, and your company a favor. Avoid vendor lock-in. Avoid Sharepoint.

Tin Man

Last night saw the conclusion of the SciFi Channel’s most recent miniseries, Tin Man, a reimagining of sorts of the classic Wizard of Oz story. The story opens with DG (a not-so-hidden play on Dorothy Gale), a girl growing up in a small rural town, working a dead-end job, knowing she is destined for something more than this small-town existence. It probably doesn’t help that her father goes on and on about his childhood in Milltown, a small town where he felt everything was perfect. All in all, DG’s life was boring, except for the nightmares she’d have nightly about a strange world far removed from her own.

That strange world exists of course, as the Outer Zone (the OZ, get it?), and DG’s really from the OZ, but has been living on the “other side” to protect her as she grew up. When the Dark Sorceress, Askadelia, sends troops after DG, her parents throw her into the tornado the shock troops created for their own travel, and DG ends up in the OZ, alone.

The rest of the story has many parallels with the original Wonderful Wizard of Oz: first DG meets the Munchkins (who are tiny savages), then her first companion is a man who’s lost his marbles, then the Tin Man, then the Cowardly Lion. The Scarecrow, or Glitch, as he’s known, had his brain removed because of what he knew, though it’s a punishment usually reserved for criminals. The Tin Man was a sheriff working for the resistance, who got locked in a tin suit and forced to watch his family get taken from him over and over again for ten years. The Cowardly Lion is from a race of psychic savages who can read minds. All small twists on the original theme. Catherine described it as the difference between what really happened and how it got written down. The changes are small and subtle, but they do add a level of ‘realism’ to the world.

Overall, the miniseries was entertaining, though the writing was subpar, and the actress playing DG was terrible. The final act felt rushed, as if they either needed to do a four-part miniseries or simply structure the show differently, and the ending was predictable. What I did really, really like about the miniseries is that it did a great job of focusing on the heroes finding what they’d lost as they travel. The Tin Man eventually finds his heart, allowing him to take pity on a man he’d dreamed of killing for years, even if he still seeks justice. The Cowardly Lion finds the strength to be brave. DG learns how to make right a grave mistake from her childhood which caused all the problems the OZ was facing, and Glitch learns how to function without his marbles (they do find his brain, and presumably put it back after the movie ends).

In all, it was entertaining, but I was disappointed at how much more it could have been. It was a great mix of science fiction, old west, mythological, and fairy tale elements, but the subpar writing, poor casting in key roles, and awkward pacing make it far from a classic.

3 out of 5

Unsurprising Insecurities: Wireless Keyboards

A team at Dreamlab has uncovered a serious vulnerability in Microsoft’s (and quite possibly Logitech’s) wireless keyboards that operate with 27 MHz technology. There has already been some discussion on this topic at Slashdot and on Bruce Schneier’s blog.

This is yet another example of a company developing a proprietary encryption method that hasn’t been heavily scrutinized by the cryptographic community. While I doubt that Microsoft considers a 1-byte XOR to be anything but obfuscation, it becomes necessary to ask what the point is. The data is still transferred over the radio, and the rest of the unencrypted protocol makes it terribly easy to filter out who is typing what. The whitepaper published by the team has a good description of the basic protocol (though they’ve left out the details), and they were typically able to figure out the XOR key within 20-50 characters typed by the target. My old Palm Pilots could easily crack this code in very little time.
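To see just how weak a single-byte XOR is, consider a toy sketch (this is not Dreamlab’s attack, and it pretends the keyboard sends plain ASCII rather than scancodes): try all 256 possible keys and keep the one whose output looks most like typed text.

function crackSingleByteXor(cipherBytes) {
  var bestKey = 0, bestScore = -1;
  for (var key = 0; key < 256; key++) {
    var score = 0;
    for (var i = 0; i < cipherBytes.length; i++) {
      var c = cipherBytes[i] ^ key;
      // Keystrokes are overwhelmingly letters, digits, and spaces.
      if ((c >= 65 && c <= 90) || (c >= 97 && c <= 122) || (c >= 48 && c <= 57) || c === 32) {
        score++;
      }
    }
    if (score > bestScore) {
      bestScore = score;
      bestKey = key;
    }
  }
  return bestKey; // with a few dozen characters, the right key wins easily
}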

I imagine it would be fairly trivial to build a small 27 MHz receiver which could be attached to a Palm Pilot, dropped in a pocket, and walked through an office where these sorts of wireless keyboards were in use, gleaning an amazing amount of information, possibly including login credentials.

The real problem I see is that this isn’t exactly an unknown possibility. Sega’s Phantasy Star Online Episodes I & II for Nintendo’s GameCube included a warning in the manual against entering credit card information for online play using a WaveBird controller, because of the possibility that it could be sniffed by someone else who had a GameCube, the game, and a WaveBird receiver tuned to the correct channel. Even Neal Stephenson, in Cryptonomicon, talks about van Eck phreaking, which is specific to video displays, but the same principles apply.

It is simply irresponsible of Microsoft not to acknowledge to their users that their wireless keyboards are insecure and could be sniffed for private information. All electronics emit some RF signals that can be picked up with the correct equipment, including my wired keyboard. The difference is that an attacker would have to get their device within a few inches of my wired keyboard to sniff it, while a 27 MHz keyboard will be sniffable within several dozen feet, possibly more if the antenna has good gain and there is a good noise reducer in the system.

Admittedly, wireless peripherals are convenient. I even use a wireless mouse from Microsoft at home (which no doubt operates at 27 MHz), but I believe that scanning mouse traffic is far less useful than keyboard traffic. So, what’s the answer? As of yet, Bluetooth is still the best one. It has yet to be broken, and its encryption is reasonable. That certainly doesn’t mean that Bluetooth is perfect, or that it doesn’t have its share of problems. It’s still a much better answer to the wireless device question at this time, however.

So, before you drop money on that brand new wireless keyboard, ask yourself how important it is for you to keep your keystrokes private.

eBooks, DRM, and Amazon's Kindle


There are days I wonder if I would be as vehemently against DRM as I am, if I didn’t use Unix as my primary operating system. I suspect I would be, as typically, I don’t really miss the things that avoiding DRM technologies causes me to be unable to use. So, when Amazon announced their Kindle last month, I didn’t really care. I love eBooks, and I think they’re a great way for people who would otherwise be unable to publish to get their ideas out there. But, I’m really careful about who I buy from.

Steve Jackson Games’ e23 store has made a point of only offering their downloadable eBooks as PDFs, without DRM (actually, I have downloaded several files from there that had ‘anti-copy’ restrictions to prevent copying and pasting, which were easily circumvented). Admittedly, there are a large number of writers that won’t publish through e23 because of the lack of DRM, but that’s fine, because I don’t need to buy their material. Because the files specifically lack DRM, I don’t have to worry about whether or not a file will open on my computer today, or in the future. Furthermore, they acknowledge that sometimes we lose files, or maybe we need to free up some space, so I can re-download any file I’ve ever bought from my account until I either lose my credentials or they go belly up. It’s not hard to prevent users from sharing logins, because it’s pretty obvious when one account is logging in from very geographically disparate locations. People who are sharing files aren’t using shared logins anymore; it’s just too easy to transfer the files using other means.

In a blog entry on the future of reading, blogger Mark Pilgrim discusses how eBooks, as they are being implemented today, stand to kill the way we can use books. Several of the points that he makes (and that are made in the comments) are quite frightening. He specifically targets the Kindle, but any DRM-laden technology will suffer from similar problems. DRM takes away the ability to resell something. Don’t like a book or CD enough to keep it around? Too bad, it’s yours and yours alone. Plus, DRM technologies almost always ‘phone home’, allowing the vendor to track an amazing amount of information about what you are up to. It’s one thing for a company to make advertising decisions based on what I’ve bought, but to track what I’m reading and listening to in order to keep refining a profile of me personally? I don’t think so.

It’s part of the problem with this interconnected world we live in, and part of the reason why internet advertising is so scary to so many people. Advertisers stand to link a browsing history to each and every IP address (better identification is also possible) in order to form a picture of a person’s browsing habits. Admittedly, I do have Google’s ads on this site, and the ads I’ve seen are typically related to the content of the page. This doesn’t mean that Google isn’t using their unique position to form a picture of users’ interests and browsing habits, but I haven’t seen any good evidence of such tracking. Google is typically against DRM, but the point is that, like DRM-based content providers, they are in a position to harvest a frightening amount of information about users. Just imagine what Microsoft is capable of in that same regard.

Typically, I’ve been against DRM because of the limitations it places on when, where, and how I can use my media. But the comments made regarding the Kindle terms of service raise other frightening issues: the tracking, but also the clause that allows Amazon to functionally stop your Kindle from working if they feel you’ve used their product outside of the terms and conditions they’ve placed upon it. It’s a kill-switch and a time-bomb, and as a user you’re going to be pissed when it’s triggered. Of course, Apple has done the same thing with the iPhone, and somehow those restrictions haven’t stopped the iPhone from being one of the most demanded devices this year.

As it stands today, DRM has won. Users don’t value their freedom enough to fight against it. Only by educating users about why DRM is ultimately going to hurt them (which also means instilling in them a sense of the value of their privacy) will this get any better. Speak out, certainly. But do it clearly, and be able to support your position. We need to convince the common man, not the content providers, that it’s time to do the right thing.

Failure of the Software Machine

As with so many things, the theory of Software Engineering and the practice tend to differ greatly. Jeff Atwood has his Recommended Reading for Developers list on his blog, and I do agree with several of the books he posts. I’ve nearly finished reading Code Complete, and it includes many good guidelines on what constitutes good software and how to make it happen within a programming team. Pete Goodliffe’s Code Craft contains many similar suggestions, but with more of a “from the trenches” kind of feel. Fred Brooks’ The Mythical Man Month identified a great number of problems that still plague software to this day, more than three decades after it was written.

With all these books available describing “Software Engineering Best Practice”, and with their tendency to agree on 90% of their key points, why is it that we still have such an enormous amount of lousy software? This isn’t even a Microsoft vs. Unix problem any longer; even though most best practice in this field tends to follow Unix’s Seventeen Rules, there is still plenty of bloated, ugly software for Unix. Even rhythmbox, which I’ve contributed small amounts of code to, failed in part to meet the goal of those rules by placing its podcast code into the mainline program instead of abstracting it out as a plugin (to be fair to rhythmbox, there is a bug open to take care of this problem). Programs should be as small as absolutely possible, and clean plug-in architectures (sketched below) are the best way to add functionality beyond the ‘core’ functionality the software must serve.
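
To make the plug-in idea concrete, here’s a minimal sketch of a plugin architecture in Python. The names here (Application, PodcastPlugin, register, and so on) are my own invention for illustration, not rhythmbox’s actual plugin API, which is considerably more involved.

```python
# A toy plugin framework: the core knows only about a Plugin interface,
# and optional features register themselves without touching core code.
from abc import ABC, abstractmethod

class Plugin(ABC):
    name: str

    @abstractmethod
    def activate(self, app: "Application") -> None:
        """Hook the plugin's functionality into the running application."""

class Application:
    def __init__(self) -> None:
        self._plugins: list[Plugin] = []

    def register(self, plugin: Plugin) -> None:
        self._plugins.append(plugin)

    def start(self) -> None:
        # Core functionality would go here; plugins are activated afterwards.
        for plugin in self._plugins:
            plugin.activate(self)
            print(f"activated plugin: {plugin.name}")

# A hypothetical podcast feature lives entirely outside the core.
class PodcastPlugin(Plugin):
    name = "podcasts"

    def activate(self, app: Application) -> None:
        # In a real player this would add menu items, feed polling, etc.
        pass

if __name__ == "__main__":
    app = Application()
    app.register(PodcastPlugin())
    app.start()
```

The point isn’t the specific classes; it’s that the core never has to change when a feature like podcasts is added or removed.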

Ben Collins-Sussman recently made an exaggerated point about an 80/20 split among developers, basically saying that 80% of developers program for a living, while 20% live to program. Ignoring the ensuing debate about the perceived elitism between his post and Atwood’s, it’s an interesting point. Some people are just more apt to grab hold of the newest technology, try it out, and make it shine, while the job developers aren’t inclined to seek out new tools. There is nothing wrong with either approach, but there is a lot of elitism on the part of the ‘20%’.

Collins-Sussman was posting primarily about how Distributed Version Control Systems like git are not accessible to the ‘80%’, and therefore to most corporate developers. Now, I love git; I think it’s a great tool. But I would never use git for corporate development, for several of the reasons Collins-Sussman blogs about. It makes more sense for there to be a central repository for corporate source, both from a security and an integrity perspective. I would argue that the reason DVCS doesn’t make sense for corporate software is that it is trying to solve a different problem.

DVCS is particularly well suited for open-source development, because you’ve got a lot of developers who are geographically diverse, who may not be able to provide reliable hosting for a source repository, and who typically want reasonably complete features before they share their patches. This is where git, in particular, shines. It allows a developer to take full advantage of source control without fidgeting with changes on a server that may or may not be available (a quick sketch of that workflow follows below). It is entirely possible for a large project to be successfully managed via git without a central server set up at all. There may be a lot of commands to learn, and it is quite a paradigm shift, so some new vocabulary will have to be picked up along the way.
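
To illustrate the entirely-local workflow, here’s a small sketch that drives git through Python’s subprocess module. It assumes git is installed and on the PATH; the repository location, file names, and commit messages are made up for the example, and a real workflow would obviously just type these commands at a shell.

```python
# Sketch of a purely local git workflow: no server, no network, full history.
import os
import subprocess
import tempfile

def git(*args: str, cwd: str) -> None:
    """Run a git command in the given working directory, raising on failure."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

repo = tempfile.mkdtemp(prefix="local-repo-")

git("init", cwd=repo)                          # a repository, no server required
git("config", "user.name", "Example Dev", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)

with open(os.path.join(repo, "notes.txt"), "w") as f:
    f.write("first draft of an idea\n")
git("add", "notes.txt", cwd=repo)
git("commit", "-m", "Start sketching an idea", cwd=repo)

git("checkout", "-b", "experiment", cwd=repo)  # branch freely, still offline
with open(os.path.join(repo, "notes.txt"), "a") as f:
    f.write("a half-finished experiment\n")
git("commit", "-am", "Try something risky", cwd=repo)

# Only when a feature is ready does anything leave the machine, e.g. via
# `git format-patch` or a push to whichever repository you choose to share.
git("log", "--oneline", "--all", cwd=repo)
```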

I would argue that Subversion, while a good answer for teams of software developers (though I don’t think its Windows integration via TortoiseSVN is really all that good), lacks features whose absence hinders it as an SCM for open source projects, namely the ability to work on something locally, with the full benefits of source control, without a network connection or the need to share unfinished ideas or code. Apparently Subversion is working on features that allow for more and better offline editing, but right now they aren’t there, and that makes me wonder whether Subversion is really appropriate for open source development.

Collins-Sussman seems to indicate in his post that both the DVCSes on the market and the more traditional VCSes should be trying to ultimately serve the goals of all users. My only question is why. The two approaches to managing source solve different problems for different arenas, and if they try to become one-size-fits-all applications, we will end up with hopelessly bloated and insecure systems. Certainly, implementing features inspired by one system or the other is okay, provided they make sense within the model your software is encouraging.

Bruce Schneier posted a conversation between himself and Marcus Ranum about security in ten years. The foregone conclusion: things are going to get bad. Very bad. We’ve been locked in this ‘patch-and-pray’ development model for decades, and people have become so accustomed to poorly designed and insecure software that they don’t see any other way things can be. Schneier even goes on to talk about how this situation will be used to push an enforced code-signing system, and the day that happens will be the death of software as we know it. When all the doors are locked, software will stagnate and die, and innovation will go out the door with it. Imagine what a company like Microsoft could do if they could keep a developer’s software from running on their platform simply by refusing to sign it.

It’s time software developers began to actually practice best practice: to design with security in mind, to build small, simple packages that can be combined to solve fantastic problems, and to resist feature creep. Features are the enemy. Commercial developers try to lock consumers into an endless cycle of upgrades that continue to add 99 features they’ll never use for every one they might. It’s not fair to single out commercial software, though; some open source projects fall prey to the same ‘one-more-feature’ mania. The software industry has dug itself into an awfully deep hole by focusing on jack-of-all-trades software, and that’s bad for software and for business in general.

Now we’re talking about Software-as-a-Service and locked platforms. Rather than solving the fundamental problems in software today, we’re just offering a false sense of security and greater vendor lock-in. The future is going to be frightening.