Mad, Beautiful Ideas
Code Review Practices

A while back, I saw an ad on Stack Overflow for a free book from Smart Bear Software entitled "Best Kept Secrets of Peer Code Review". While Smart Bear does sell their own peer code review tool, CodeCollaborator, the majority of the book doesn't stress how great their software is; instead, it lays out a series of arguments for why you should consider doing peer code review in the first place.

It's generally well recognized that to become a better programmer, we all need to be reading code as much as possible. Jeff Atwood, over at Coding Horror, talks about this as developing your nose for "Code Smells", or learning to identify those pieces of code which simply need to be refactored or replaced. A lot of what makes code "good" can only come from experience. This isn't to say that newer programmers necessarily write bad code, but that as we read more code, our ability to distinguish the good from the bad generally improves.

Peer code review, which has been practiced informally in the Open Source world forever, also ties into Eric Raymond's axiom from "The Cathedral and the Bazaar": "Given enough eyeballs, all bugs are shallow." Basically, with enough people looking at the code, someone is bound to be able to fix any problems that might exist there. In practice, it's not that simple. Because the review in use is so informal, code sometimes isn't reviewed as rigorously as it could be; for one thing, a maintainer's code is rarely reviewed as exactingly as the code being added by a previously unknown contributor.

I'm not really advocating formal peer code review in the open source world. But many organizations lack even the informal review that open source projects practice, and due to their nature, they might benefit more readily from a formal code review process. Part of this comes down to the economics of commercial software. It is well understood that the earlier a defect is detected, the less expensive it is to correct. By catching a fault before it gets to the customer, a company can often save thousands of dollars per defect (taking into account the support personnel's time, developers tracing an error report back to the fault, and so on). Catching an error before the company's QA team finds it can still result in impressive cost savings, since the developer doesn't have to task-switch to solve the problem.
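
To make those economics concrete, here's a minimal sketch of a defect-cost model. Every phase name and dollar figure below is hypothetical, purely to illustrate the escalation pattern:

    # Hypothetical cost-of-defect model, purely for illustration.
    # The escalation pattern (cheap in review, expensive in the field)
    # is the point; the specific dollar figures are made up.
    COST_BY_PHASE = {
        "peer review": 50,    # a reviewer spots it while reading the patch
        "internal QA": 500,   # QA reproduces it, the developer task-switches
        "customer": 5000,     # support time, tracing the report, patch release
    }

    def savings(defects: int, caught_in: str, instead_of: str) -> int:
        """Dollars saved by catching defects in an earlier phase."""
        return defects * (COST_BY_PHASE[instead_of] - COST_BY_PHASE[caught_in])

    # 20 defects caught in peer review rather than by customers:
    print(savings(20, "peer review", "customer"))  # 99000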

Peer code review is simply one tool for catching defects sooner rather than later. The case studies in this book do an excellent job of starting at the beginning of code review theory, namely the idea that code review was best conducted in regular meetings, highly formalized, and run with strict rules and guidelines. These methods worked well for many companies, particularly those building software for mission- or life-critical industries (you don't want a bug in your X-ray machine software that causes the system to inadvertently dump fatal levels of radiation into a patient 0.01% of the time). However, in many situations it's hard to convince people of the need, and such formal meetings don't allow developers to schedule their time in the manner most productive for themselves.

Enter the tools. Beginning with e-mail, we had the ability to perform review (formally or informally) while letting reviewers choose when they would review, picking the times when they could review most effectively. While it may not seem as convenient as having all the reviewers in a room together for an hour a day, it has the potential to be much more effective, since developers aren't forced to interrupt productive development time for a meeting; they can review when they need a break from writing code.

However, as the book stresses, e-mail doesn't provide any means of collecting metrics. You don't know how long a person has spent performing a review, how many defects they're finding per hour, or how many per thousand lines of code (kLOC). No one likes having an enormous amount of pomp and circumstance around what should be a fairly simple process, particularly when the payoff, in the form of statistics about the process, isn't obviously beneficial to the person who's supposed to be collecting them. A fair portion of the book is devoted to telling you how great CodeCollaborator is at collecting these sorts of metrics for you.
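
For what it's worth, the metrics themselves are simple ratios that any tool (or even a spreadsheet) could compute once the raw numbers are captured. A minimal sketch, with hypothetical figures:

    # Minimal sketch of the two review metrics mentioned above.
    # The example figures are hypothetical, purely for illustration.

    def defects_per_hour(defects_found: int, review_hours: float) -> float:
        """Inspection rate: defects found per hour spent reviewing."""
        return defects_found / review_hours

    def defects_per_kloc(defects_found: int, lines_reviewed: int) -> float:
        """Defect density: defects found per thousand lines of code."""
        return defects_found / (lines_reviewed / 1000)

    # Say a reviewer found 7 defects over 2.5 hours reading a 1,400-line patch:
    print(defects_per_hour(7, 2.5))   # 2.8 defects/hour
    print(defects_per_kloc(7, 1400))  # 5.0 defects/kLOC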

I'm not planning on pushing code review at work just yet, though I'm giving a lot of thought to doing so. And while there was little in this book that I hadn't heard before (having studied software engineering at a university not too long ago), it is probably the best overview of the problem of peer code review that I've been able to find. If, by the end of the book, you're not at least thinking about how you could use peer code review, then I posit you'll never be convinced. With that said, the latter part of the book is certainly trying to sell you on Smart Bear's specific product.

It certainly looks like a good product, but lately I've begun looking at an open source alternative: Review Board. Review Board was originally developed at VMware, but has since been adopted by some teams at Yahoo!, among many others, which certainly lends it some "if it's good enough for them..." clout. It's also built on Django, which makes it fairly easy to deploy on most any operating system you might have available.

It's a similar project to CodeCollaborator. It integrates with your source control and allows online browsing of, and commenting on, a patch. Unlike CodeCollaborator, Review Board doesn't offer IM-style communication, doesn't collect the same kinds of metrics, and doesn't seem to allow line-level comments. However, it is free software, which lets you extend it freely and doesn't require a large financial outlay up front. As a developer, and given that this is a tool for developers, I certainly prefer that. Others will disagree. Plus, since Review Board is MIT licensed, you're not required to share your changes (though often it will likely be better to do so).

I like what I've seen of Review Board, and I'm not convinced that the additional features CodeCollaborator offers (today) are worth whatever Smart Bear wants for them (they don't list licensing rates on their website). If you're not sure whether you're interested in code review, order this book. If you're trying to convince management to adopt code review, order this book. If you're looking for a code review product... I'd try Review Board first, myself.