April 2010 Archives

Notes on the Terry Childs Case

This week, Terry Childs, the [San Francisco sys admin who refused to turn over passwords to the city network to his superiors because he felt they were incapable of properly managing it](http://www.cio.com.au/article/255165/sortingfactsterrychildscase/?fp=&fpid=&pf=1), was found guilty of felony denial of computer services. First off, I think Mr. Childs is absolutely guilty of his crime. I've never worked in a place where I was the only one with key access to core systems. Hell, I usually insisted on a password sheet (or a USB key with password data) stored in the company safety deposit box. Of course, I still wanted to know what the definitions in the case were, so I had to go to California's Penal Code, whose website is pretty awful, so I'm just going to quote here:

502. (a) It is the intent of the Legislature in enacting this section to expand the degree of protection afforded to individuals, businesses, and governmental agencies from tampering, interference, damage, and unauthorized access to lawfully created computer data and computer systems. The Legislature finds and declares that the proliferation of computer technology has resulted in a concomitant proliferation of computer crime and other forms of unauthorized access to computers, computer systems, and computer data.

The Legislature further finds and declares that protection of the integrity of all types and forms of lawfully created computers, computer systems, and computer data is vital to the protection of the privacy of individuals as well as to the well-being of financial institutions, business concerns, governmental agencies, and others within this state that lawfully utilize those computers, computer systems, and data.

(c) Except as provided in subdivision (h), any person who commits any of the following acts is guilty of a public offense:

(5) Knowingly and without permission disrupts or causes the disruption of computer services or denies or causes the denial of computer services to an authorized user of a computer, computer system, or computer network.

(d) (1) Any person who violates any of the provisions of paragraph (1), (2), (4), or (5) of subdivision (c) is punishable by a fine not exceeding ten thousand dollars ($10,000), or by imprisonment in the state prison for 16 months, or two or three years, or by both that fine and imprisonment, or by a fine not exceeding five thousand dollars ($5,000), or by imprisonment in a county jail not exceeding one year, or by both that fine and imprisonment.

Okay, whew. Section 502 of the California Penal Code is kind of interesting, though I hate that computer crime has a separate set of statutes from other crimes. Is 'denial of computer access' really any different than, say, boarding up all the doors and windows of a person's home (or place of business, so there is a more obvious financial burden), denying them access to that resource? I don't think so, and I don't think computer crime should be treated differently. Mr. Childs absolutely denied access to his superiors, who absolutely (though the jury did debate this) were authorized to use the system.

My issues with this case, aside from the double standard of computer versus physical crime, are with how it was handled. Mr. Childs was held on $5 million in bail. Five million dollars. Incidentally, child molesters, arsonists, and kidnappers in San Francisco had their bail set at a mere one million dollars, at least in 2008, the time at which Mr. Childs' bail was set. Of course, since he's been in jail for almost two years, his sentencing should offer credit for time served, and his additional time in jail should be very short, if any at all. And given that he's been imprisoned since 2008, I would be surprised if, under Section 502, he received an additional fine.

But we’ll see. I have no idea what the legal basis for $5,000,000 in bail was, though this is certainly not something that was on the bail schedule. I just fear that the over-reaction in setting his bail is going to translate into an over-reaction in his sentencing.

Sys-Admins have a tendency to feel like gods of their own little domains, and Childs’ actions are highly indicative of that. I do have a small amount of respect for the sheer level of conviction displayed by the man, but it was misplaced. And now, he’s lost nearly two years of his life to that misplaced conviction. Does he really need to lose any more?

Ubuntu 10.04 "Lucid Lynx" Released

Ubuntu 10.04 becomes final today, and this weekend, many an installfest is going to be started around it. If you're in the Seattle, WA area, the Washington LoCo Team will be having a gathering this evening. Regrettably, in my own area, there are no events that I'm aware of, though I've been running with the Lynx for months now. It's a good release, despite all the drama about themes and other decisions. Of course, themes can be replaced, buttons can be moved, and if you dislike some of the new defaults (and I do take issue with some of them), you can always change them back (though, admittedly, moving the window buttons is harder than it should be). For me, the moved buttons are a problem, if for no other reason than the fact that I have to sit in front of a Windows box eight hours a day.

What I appreciate most is that Canonical and the rest of the Ubuntu community are still focusing hard on supporting newer or non-technical users. As part of this, the Ubuntu Manual team has put together a new manual, Getting Started with Ubuntu 10.04. If the manual has any weakness, it's being too thorough. I have no idea how many users who would try to install Ubuntu couldn't download and create an install disc, or wouldn't be trying Ubuntu without the encouragement of a friend who does know how. But that's neither here nor there, I suppose. It's a good book, with a ton of useful information for someone who is new to Linux, or to computing in general.

Now, there are a few things they haven't addressed that are covered fairly well in this video about Linux things that still suck, some of which Ubuntu may well address by 10.10. I'm hoping audio gets worked out. I bought Shadowgrounds: Survivor a short while ago, but for some reason, the audio plays around 30 seconds behind the action on screen. Which is…not ideal. I've got a support request in to LGP, and I hope they're making some progress, because I'm stumped. Anyway, that's a general complaint, as audio issues are unfortunately common throughout the Linux world today.

One of my favorite features in the new release is the Ubuntu One Music Store, which integrates directly into Rhythmbox (a Banshee plugin is underway) and syncs directly to your Ubuntu One account, allowing your music to follow you and not be lost in the case of catastrophic computer failure. It's almost perfect; almost, because I can't use the payment method I'd prefer, but that's a fairly small thing. This is likely to replace my use of Amazon's MP3 store, if for no other reason than that, even if I didn't use U1 to store my files, I can re-download a purchase from the provider three times, versus Amazon's one.

With the official release complete, I might be able to convince my wife to let me upgrade her laptop now…

Apple, REACT, Gizmodo, and Search and Seizure

As has been all over the web, Gizmodo paid an unnamed source $5,000 to get their hands on a prototype iPhone that an engineer had left at a bar six weeks or so ago. The source had apparently tried to contact Apple several times to return the phone, and Apple had remotely killed the phone almost immediately after it had gone missing. Since then, a lot of people have come down hard on Gizmodo, and their parent company, Gawker Media, for "Checkbook Journalism".

First off, I don’t think that Gawker did anything wrong, per se. They saw an opportunity to get a story, and they did what it took to get that story. Unfortunately for Gawker, the incident happened in California.

Why’s that a problem? Well, in the State of California, taking possession of lost property, is the same thing as stealing it. Fucking ridiculous.

Admittedly, in the State of Washington, you could be accused of theft if you didn't exercise due diligence in returning the goods. In California, you are legally required to turn the goods over to the police for a period of time; in Washington I don't see any such requirement, but I'm possibly missing something.

One thing is clear: Gizmodo hasn't broken trade secrets law. The moment the new iPhone left Apple's property, even if it was disguised, trade secret protection vanished. However, Gizmodo did quite clearly, under my understanding of California statutes regarding theft, take possession of stolen property, and given the ridiculous sum they paid for it, they knew what they were doing. No way Gawker would have cut a $5,000 check if they didn't think it was a legitimate scoop. For that reason alone, Gawker could be in pretty bad legal trouble if Apple chooses to press charges.

Which they have. On Friday, California's computer crimes task force, REACT, searched Giz editor Jason Chen's house, taking possession of every computer and storage device they could find. They kicked in his front door to serve the warrant, since he and his wife were out to dinner at the time.

Now, it's entirely possible this was an illegal search and seizure, because of Chen's status as a journalist and the fact that his home was his office. Of course, as the story above points out, that may not be a valid argument, but it would be interesting if all the evidence that has been collected turned out to be inadmissible.

This is a story to watch. Surely, Gizmodo's behaviour was pure douchebaggery, but Apple builds such a mystique with their secrecy that it's little surprise that Gawker (or someone else) would go through this sort of trouble. Frankly, I don't think things are going to go well for Gawker Media on this one. Apple's pissed, and California law seems to be on their side. However, I do think Gawker and Gizmodo will be able to use this situation to further bloody Apple's nose with bad PR. And that's where the fallout from this is going to be most interesting.

2010 Garden

Catherine and I spent most of Sunday working on our garden at the Pullman Community Gardens. It's our third year at the garden, though last year's was…not precisely successful, due to our priorities shifting away from the garden (unfortunately). This year, aside from spending two weeks in the Midwest during the hottest part of the summer, we're committed to making sure we have an effective harvest. We've both stuck with a major lifestyle change (daily gym trips) for over six weeks now. Admittedly, this is a habit that isn't fully formed and risks breaking (the real test will be after returning from those 14 days in the Midwest), but the commitment shown there, especially on my part, makes me hopeful that focusing on the garden won't be an ordeal.

Our garden will not fulfill all our vegetable needs by any stretch; however, it should give us a fairly substantial bounty, especially since a lot of our seeds were either saved from last year (or two years ago), or are from commercial seed packs we already had from last year. Catherine started many of our plants from seed several weeks ago, including tomatoes and peppers that aren't going out into the garden just yet. For the seed starts, we just used plastic seed trays (which we intend to reuse). We used plastic instead of peat for two reasons. First, we dug up a lot of peat pots this year that probably had a negative impact on our plants last year. Second, the environmental impact of peat is a lot higher than we'd known. In the future, we're considering making pots out of newspaper. We also invested in a $30 seed mat, which gently warms our seedlings and has caused them to really take off. We're considering waiting until garden stuff goes on sale at the end of this season and picking up a second.

We aren't completely done planting, obviously, but we did get a lot of stuff in yesterday. Here's a rough sketch of our garden as it stands:

Garden - 2010.04.25

Our paths are raised above the beds, and this year we'll be putting down newspaper and straw to try to keep the weeds down. I had to dig out some BIG weeds, and we're still fighting some grass and other rhizomatous weeds. By having the paths above the beds, it's easier for us to water, and with the high clay content in our soil, the ground tends to stay wet pretty well. It's a later start than we wanted, but at least it's all in before Mother's Day, so we should be alright.

C# Style Question: Exception Handling in Callbacks

Recently, a coworker expressed frustration over the way he was having to handle errors in WCF Asynchronous callbacks. For Synchronous WCF calls, fault handling is simple:

var webService = new WebServiceWrapper();

try {
    var answer = webService.callMethodSynchronous(arguments);

    // Handle answer 
} catch (FaultException) {
    // Handle Specific Exception
} catch (Exception) {
    // Handle General Exception
} finally {
    // Clean-up
}

Pretty basic and normal exception handling code; textbook, really.

However, in WCF, when using an async callback the suggested code looks more like this:

var webService = new WebServiceWrapper();

webService.methodAsyncCompleted += (sender, response) => {
    if (response.Error == null) {
        // Handle Success Case
    } else {
        Type exceptionType = response.Error.GetType();
        if (exceptionType == typeof(FaultException)) {
            // Handle Specific Exceptions
        } else { 
            // Handle General Exception
        }
    }
    // General Clean-up
};
webService.callMethodASync(arguments);

The if-statements can be more concisely written as follows:

var webService = new WebServiceWrapper();

webService.methodAsyncCompleted += (sender, response) => {
    if (response.Error == null) {
        // Handle Success Case
    } else {
        if (response.Error is FaultException) {
            // Handle Specific Exceptions
        } else { 
            // Handle General Exception
        }
    }
    // General Clean-up
};
webService.callMethodASync(arguments);

I jokingly suggested rethrowing the Exception:

var webService = new WebServiceWrapper();

webService.methodAsyncCompleted += (sender, response) => {
    if (response.Error == null) {
        // Handle Success Case
    } else {
        try {
            throw response.Error;
        } catch (FaultException) {
            // Handle Specific Exceptions
        } catch (Exception) {
            // Handle General Exception
        }
    }
    // General Clean-up
};
webService.callMethodASync(arguments);

When we wrote this up really quickly, our reactions were similar. It really feels like something we shouldn't do, but it also seems a bit cleaner. On one hand, a catch block is a great way to determine the exact type of any class deriving from System.Exception (.NET doesn't allow you to throw non-Exceptions; other platforms may not have that limitation). But is it worth the potential style faux pas? And are there any other downsides to this? To see, I checked the IL of a simple application.

For the if-else typeof case:

IL_0000:  ldarg.0 
IL_0001:  callvirt instance class [mscorlib]System.Type class [mscorlib]System.Exception::GetType()
IL_0006:  stloc.0 

IL_002b:  ldloc.0 
IL_002c:  ldtoken [mscorlib]System.ArgumentException
IL_0031:  call class [mscorlib]System.Type class [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
IL_0036:  bne.un IL_004f

For those not fluent in IL, I'll give a line-by-line breakdown:

  1. Load the first argument onto the evaluation stack, preparing to use it.
  2. Call System.Exception::GetType(), which returns an instance of System.Type.
  3. Store the return value of the previous operation into location 0.
  4. Load the value in location 0 onto the evaluation stack.
  5. Load the token representing System.ArgumentException onto the evaluation stack.
  6. Call GetTypeFromHandle to get a Type reference to the token loaded previously; this uses the top of the evaluation stack, and puts the return value on it.
  7. If the two values are not equal (bne), jump to IL_004f, which happens to be the instruction where our else block begins.

For the if-else is case:

IL_0000:  ldarg.0 
IL_0001:  isinst [mscorlib]System.ApplicationException
IL_0006:  brfalse IL_001f
  1. Load argument 0 onto execution stack
  2. Take the top of the execution stack, check if it’s an instance of System.ApplicationException, put a boolean on the execution stack
  3. If false is on top of the execution stack, go to IL_001f

And now for the try-catch version:

.try { // 0
 IL_0000:  ldarg.0 
 IL_0001:  throw 
 IL_0002:  leave IL_0046

} // end .try 0
catch class [mscorlib]System.ApplicationException { // 0
 IL_0007:  pop 
 // Do whatever
 IL_0017:  leave IL_0046

} // end handler 0

Breakdown:

  1. Load the value of the Exception onto the evaluation stack.
  2. Throw it.
  3. This will never be hit in this example. It would jump to the finally block.
  4. Clear the value off the top of the stack.
  5. Go to the finally block.

It certainly is more IL, but from a pure IL perspective, it doesn’t seem…bad, except for the fact that it does create a LOT more blocks, and leaving those blocks is a more expensive operation, since the Try and Catch blocks are considered ‘protected’ regions of code. Setting up and entering these might be more expensive, I’m not really sure, and I’d love someone more familiar with .NET Runtime internals to post their take on it.
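If anyone wants to poke at the cost question directly, a crude way to do it is to simply time the three approaches. The following is only a sketch (no warm-up, no statistical rigor, and the exception type and iteration count are arbitrary picks of mine), so treat whatever numbers it prints as ballpark at best:

using System;
using System.Diagnostics;

// Rough micro-benchmark sketch: times the three dispatch styles discussed above.
// No warm-up or statistical rigor; the exception type and iteration count are
// arbitrary, so the output is only a rough indication.
class ExceptionDispatchTiming {
    const int Iterations = 100000;

    static void Main() {
        Exception error = new ApplicationException("boom");
        bool handled = false;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) {
            handled = error.GetType() == typeof(ApplicationException);
        }
        sw.Stop();
        Console.WriteLine("GetType()/typeof: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) {
            handled = error is ApplicationException;
        }
        sw.Stop();
        Console.WriteLine("is operator:      {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) {
            try {
                throw error;
            } catch (ApplicationException) {
                handled = true;   // specific handler
            } catch (Exception) {
                handled = false;  // general handler
            }
        }
        sw.Stop();
        Console.WriteLine("throw/catch:      {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(handled);
    }
}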

After looking over both the IL and the C# code, I’m left thinking that using the ‘is’ operator is the best option in this case. It generates the least amount of code. The simplest code. The only downside is that it has you testing exceptions using code that doesn’t resemble exception handling anywhere else. For that reason alone, the Try-Catch mechanism may really be best, though I am leaning toward simply using ‘is’.
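One semantic wrinkle worth calling out, and this is my own observation rather than something from our original discussion: FaultException&lt;TDetail&gt; derives from FaultException, so both the 'is' operator and a catch block will treat it as a FaultException, while the exact GetType() == typeof(FaultException) comparison will not, sending it to the general handler instead. A tiny sketch (it assumes a reference to System.ServiceModel):

using System;
using System.ServiceModel;

// Sketch of the matching difference between an exact-type comparison and the
// 'is' operator / catch blocks when a derived exception type shows up.
class TypeCheckSemantics {
    static void Main() {
        Exception error = new FaultException<string>("detail payload");

        // False: the exact runtime type is FaultException<string>.
        Console.WriteLine(error.GetType() == typeof(FaultException));

        // True: 'is' matches the base FaultException type.
        Console.WriteLine(error is FaultException);

        try {
            throw error;
        } catch (FaultException) {
            // A catch block matches derived types too, just like 'is'.
            Console.WriteLine("Caught as FaultException");
        }
    }
}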

What are your thoughts?

Nerd Test Meme

It’s time for one of my rare Wednesday Meme posts! Yeah…I know.

On Planet Ubuntu the last few days, people have been taking the Nerd Test and posting their results. I was curious, so I gave it a shot.

I am nerdier than 95% of all people. (Nerd Test badge)

Now, at the time I did this, everyone was only posting scores in the 60s and 70s, so I felt I had to verify on the newer Nerd Test 2.

NerdTests.com says I'm an Uber Cool Nerd God. (Nerd Test 2 badge)

I knew I was a nerd, but I didn’t think the number would be quite this high…

Network Neutrality

Earlier this month, as you've likely already heard, a panel of federal appeals court judges issued a decision in favor of Comcast, ruling against Net Neutrality. For now, Comcast (of which I am not a subscriber, though my parents and sisters are) has promised not to modify their existing network policies, which are meant to throttle only 'heavy' users (I don't think they've provided a solid definition of 'heavy'). While the FCC still wants to regulate broadband, and is in favor of Net Neutrality, this decision stands to set a legal precedent crippling the FCC's ability to do that.

Why is this important? Because currently in the US, broadband infrastructure is largely controlled by classic media companies, usually cable providers, who still view the cable TV business as their primary source of income. However, media consumption patterns are changing dramatically. I watch virtually no television anymore that isn't time-shifted: I DVR much of what I do watch, or stream it on Hulu, if possible. (There are two reasons my wife and I still have cable TV at this point: a few shows either aren't available for online streaming (MythBusters, mainly) or aren't available in a timely fashion (South Park), and my wife likes to turn on MTV or VH1 for reality-TV background noise while relaxing or doing other things.) For a very small amount of content (primarily BBC shows that don't come to the US for months, and then tend to be cut up to 15 minutes shorter; I'd pay for iPlayer access, if you'd let me, BBC) I will acquire copies through other means; however, I prefer to stream, since it provides at least some advertising revenue for the content producers. That has another set of problems, but I've addressed them before.

And that's ignoring the made-for-the-Internet content I watch, like Revision3, or The Guild, or my beloved Dr. Horrible. This is content that was made with the Internet as its distribution mechanism. Much of it is content that would never make it to more traditional media platforms, which is a shame, because much of it is pretty fantastic. With Net Neutrality rules in place, we have a huge potential for a renaissance of media. Without them…well, it's going to be difficult to move forward.

Generally speaking, I believe that government intervention in business is a bad thing. My problem with the recent health care reform bill (ignoring the fact that it displays a complete lack of understanding of the reality of the high cost of health care in this country) was that I don't believe the government can possibly be as efficient as private industry. As a state employee, I see countless signs of governmental inefficiency on a daily basis. However, there are places (and admittedly, health care could be one of these places) where regulation is warranted. There is a huge amount of regulation over utilities (electrical, water, etc.) and other scarce resources (RF spectrum, i.e., why the FCC exists in the first place). And that's important.

I'm of the opinion that we need to begin treating bandwidth as a governmentally regulated utility, akin to electrical and phone service in the US. This has its weaknesses: a lot of electrical companies are struggling because electrical rates on the open market have climbed faster than their ability to raise prices to compensate (incidentally, an event caused by loosening government regulation). However, it does help ensure that important services remain available and affordable. Not that Internet access is more important than electricity (though my father, who works as a bill collector for a power utility, has had people pay their cable bills before their power bills on occasion), but lack of affordable bandwidth is going to become an issue.

Perhaps we need formal legislation to empower the FCC to enforce network neutrality. Perhaps the states could take the lead and begin enforcing this within their own borders first (which my anti-Federalist leanings would love to see). At the end of the day, we’re struggling. Our largest bandwidth providers are also content providers, and their interest is going to fall in line to protect that business, even as the Internet evolves around them.

Poor Practice: E-Mailing Passwords

A couple of weeks ago, I sent out the following Tweet:

Tweet about Emailing Passwords

This prompted a short conversation with Marc Hoffman about how this practice tries to balance security and convenience, the convenience factor being that if a user has their password in their e-mail, it can limit the need for password reset requests, and that sort of thing.

In my opinion, there is no reasonable argument for convenience. Most users utilize a very small number of passwords anyway. Those who don't usually take advantage of a password-safe application to keep things straight. Which is fine. I don't consider it the real answer to the password problem (that would be OAuth), but it allows you to securely store passwords (though whether this is better security than an index card in your wallet is debatable) and manage the complexity.

E-mailing a copy of the user’s password, when you’ve already required them to enter their password into your site twice, does not help either of these use cases. Safe-users will have already saved their database. Password-repeaters already know their password.

Plus, it always gives me a nagging feeling that my password is going into their database the same way it came out of that e-mail: plain text. That may not be true (and I always pray it isn't), but if they're already expressing (what I consider) a lackadaisical attitude about handling my password, it doesn't give me a whole lot of hope.
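For what it's worth, storing something derived from the password rather than the password itself isn't much code. Below is a minimal sketch using PBKDF2 via Rfc2898DeriveBytes with a random salt; the class name, iteration count, salt size, and storage format are all arbitrary choices for illustration, not a vetted policy:

using System;
using System.Security.Cryptography;

// Minimal password-storage sketch: keep a salted PBKDF2 hash rather than the
// plain-text password. All sizes and counts here are illustrative.
static class PasswordStorage {
    const int SaltSize = 16;        // bytes of random salt per password
    const int HashSize = 32;        // bytes of derived key to store
    const int Iterations = 10000;   // arbitrary; tune for your hardware

    public static string Hash(string password) {
        var salt = new byte[SaltSize];
        new RNGCryptoServiceProvider().GetBytes(salt);

        var kdf = new Rfc2898DeriveBytes(password, salt, Iterations);
        byte[] hash = kdf.GetBytes(HashSize);

        // Store salt and hash together; never the password itself.
        return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
    }

    public static bool Verify(string password, string stored) {
        string[] parts = stored.Split(':');
        byte[] salt = Convert.FromBase64String(parts[0]);
        string expected = parts[1];

        var kdf = new Rfc2898DeriveBytes(password, salt, Iterations);
        string actual = Convert.ToBase64String(kdf.GetBytes(HashSize));

        // Not a constant-time comparison; fine for a sketch, not for production.
        return actual == expected;
    }
}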

Marc feels it's an act of balancing convenience and security, and certainly all security advice is a trade-off. However, I don't see any benefit here. The odds of a user typing their password wrong twice and needing to be reminded of it in an immediate e-mail, or of a user choosing an unfamiliar password and not storing it somewhere secure, seem very long, and the practice seems to send a strong message that the password isn't something of any value. For many sites, it isn't, but given the tendency of users to reuse passwords…

If anyone can provide me a strong use case for e-mailing a user the password they just entered into your site to create an account, I’d love to hear it.

The Problems with Responsible Disclosure

For a long time the security community has been talking about responsible disclosure. The Wikipedia entry describes the process as one that depends on stakeholders agreeing to a certain period of time before a vulnerability is released, but this can still lead to months of a vulnerability going unpatched, and with 'responsible' disclosure, users usually have no idea, even if the vulnerability is being exploited in the wild.

Now, in some cases, it makes sense. Dan Kaminsky’s DNS Cache Poisoning discovery had not been seen in the wild at all, and the community was able to use the opportunity to upgrade almost all DNS servers in one fell swoop. The vendors were very willing in this case.

I'm not advocating that a security researcher immediately post their findings publicly (aka full disclosure), though I'm not opposed to that. I think that sometimes there is value to the pain some researchers express at doing the responsible thing. The vendor should absolutely be the first one notified, but in my opinion, public disclosure needs to happen faster than it tends to. If vulnerabilities aren't posted to sites like The Open Source Vulnerability Database, then a lot of work is duplicated, and furthermore, customers aren't even able to properly protect themselves.

Some developers will simply not attempt to fix security issues, or will propose workarounds that aren’t acceptable to all users, as in the rant linked above.

The real reason I don't think responsible disclosure works is that many times vulnerabilities that are already in the wild aren't being publicized properly. With full disclosure, customers can help prioritize fixes. Customers can institute workarounds that might provide them the temporary security they need. Intrusion Detection Systems can be outfitted with signatures that can help prevent live compromises. A lot of things can happen that are likely to make us safer.

Then, there is the other side of the coin: disclosure doesn't make vendors any better at software development. It doesn't make them any less prone to the same old mistakes. At the Pwn2Own 2010 competition at CanSecWest this year, the researcher who exploited Safari to root a Mac did so for the third year in a row. In minutes. Using the exact same class of exploit that he's been using all along. The same mistake just keeps happening.

This year, he chose not to reveal the specific exploit, instead running a session on how he scans for the vulnerabilities, with the hope that vendors will start doing this themselves, since it's mostly done via automated fuzzing. While I'm not arguing for no disclosure, which is essentially what was done in this case, at least Mr. Miller presented his techniques, so that Apple and others can finally get their act together on this all-too-common class of errors.
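As I understand it, the kind of scanning he described is mostly 'dumb' mutation fuzzing: take a known-good input, corrupt it at random, feed it to the target, and watch for abnormal exits. Here's a crude sketch of that idea; the seed file and target executable names are placeholders, and a real harness would attach a debugger and triage crashes rather than just watching exit codes:

using System;
using System.Diagnostics;
using System.IO;

// Crude mutation-fuzzing sketch: randomly corrupt a known-good input file and
// hand it to the target application, flagging runs that exit abnormally.
class DumbFuzzer {
    static void Main() {
        var rng = new Random();
        byte[] seed = File.ReadAllBytes("seed-input.pdf");       // placeholder seed file

        for (int run = 0; run < 1000; run++) {
            byte[] mutated = (byte[])seed.Clone();
            int flips = rng.Next(1, 16);
            for (int i = 0; i < flips; i++) {
                mutated[rng.Next(mutated.Length)] = (byte)rng.Next(256);
            }

            string caseFile = "case-" + run + ".pdf";
            File.WriteAllBytes(caseFile, mutated);

            using (Process target = Process.Start("target-viewer.exe", caseFile)) {  // placeholder target
                if (!target.WaitForExit(5000)) {
                    target.Kill();                                // hung; not interesting for this sketch
                } else if (target.ExitCode != 0) {
                    Console.WriteLine("Possible crash on " + caseFile);
                    continue;                                     // keep the test case around for triage
                }
            }
            File.Delete(caseFile);                                // boring case; clean it up
        }
    }
}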

Visual Studio 2010 Gotchas


A quick update about trying to upgrade to VS2010:

  1. VS2010 does not have a version of BIDS (Business Intelligence Development Studio). This means you cannot modify certain classes of database projects in VS2010, like any SSIS tasks. I was lucky: all my SSIS tasks are in separate solutions from the rest of the projects for my code. If yours aren't, you'll need to extract them. VS2010 will happily update an SLN file and simply disable any DTPROJ files in the solution.
  2. VS2010 cannot cross-compile .NET 3.5 test projects. If you're doing any unit testing, your test projects will be automatically updated to .NET 4.0, even if you're trying to stay on .NET 3.5 or lower for a little while. You'll need to either disable your test projects, or be prepared to upgrade everything to .NET 4, even if you don't want to (or are unable to).
  3. Not a VS2010 issue, but Workflow Foundation 4 doesn’t support State Machine Workflows (yet), so if you’re using State Machine Workflows currently, you can’t update to .NET 4 at all.

Actually, #3 is a huge issue, since there is no timeline on state machine workflow support for .NET 4 (other than a statement that it is being worked on). Basically, we're down to the choice of forking our solution files, so that we have some VS2010 solutions running .NET 4 and some VS2008 solutions running .NET 3.5, or simply not upgrading to VS2010/.NET 4 at all, since we have interdependencies that complicate the issue (we strive to adhere to the DRY principle). I was really looking forward to VS2010, but the reality of the situation has tempered my enthusiasm drastically; between pointless incompatibilities (unit testing) and feature regression (state machine workflows), it might be a while before we're able to upgrade.

I’m disappointed. Really disappointed.

Closing the Doors on McDonaldland?

The new Senate Appropriations Bill contains language requiring the formation of a working group to investigate how food is marketed to children (defined as anyone under 18). This working group only has until July 15 to report on its findings, but some think that this is the beginning of the end for child-centric food advertising. Some people are comparing this to the late-'90s ending of the "Old Joe" Camel advertising regime, but I don't think that's really a fair comparison. It's not yet illegal to sell hamburgers to children, after all.

I’m not going to spend any time defending Ronald McDonald. I grew up during a very active time in the McDonaldland Saga, and I suspect my general impression that it’s not that bad anymore stems more from the fact that I don’t follow things related to that target demographic than any actual reality.

The argument for retiring Ronald McDonald is pretty straightforward. The ad campaign is designed to cause children to nag their parents to visit McDonald's, but also to make it seem that, by saying no, the parent is, in some way, showing that they don't love their child. The figures in the report suggest that 79% of respondents to their poll want Ronald to be retired, many of them strongly favoring the idea.

I am not a parent, and don't foresee myself becoming one for several more years. I do have plenty of experience around small children, from cousins in the past to, now, the children of friends and family. It's not the same, admittedly. If I say no to a child, I usually don't have to spend the rest of the day, week, month, or year around them. That affords me the opportunity to be the jerk without having to deal with the long-term consequences.

However, while I understand that saying ‘no’ against the force of the marketing behemoth being aimed toward children is exhausting, and certainly that conceding from time to time isn’t the end of the world, I’m not sure I’m convinced that government intervention here is necessary, or even desirable. Do I think the ‘free speech’ defense of such child-focused advertising is appropriate? Not really, but I don’t necessarily see it as being invalid.

Advertising is only going to become more insidious. Product placement is becoming more and more common. Hell, have you watched an episode of Chuck lately? Between all the trips to Subway, and the Windows logo prominently on practically every piece of computer hardware on the show…

And I know for the ‘tweens’ things are just as bad. How many young girls want some new piece of clothing because Hannah Montana is wearing it? How long until we start to see her and her friends meeting regularly at McDonald’s to chat over a cheeseburger and fries?

At the end of the day, retiring Ronald McDonald and ad icons like him is only going to drive those advertising dollars to different places, probably places where it'll be even harder to do anything about them. Saying no to a child isn't going to get any easier, and certainly there is a need to use saying 'no' as a learning opportunity, or to occasionally acquiesce. What's the other option? Lock your kids away from all media that you haven't reviewed first? Not likely.

iPhone OS 4.0 Announcement

Yesterday, Apple held a press conference announcing some new features for the iPhone OS which will launch this summer. By and large, I watched the live blogs and kept thinking that I’d had that in Android for longer, or that it’s done better on that platform in many ways.

Multitasking is great to have, but the demos made it appear to be more or less a direct clone of Android Activity Lifecycle with the pause and resume functionality. And support for services seems to be limited, though it’s unclear how at this time. I did really like that the lock screen could have widgets associated with it, like in the Pandora demo, however. That was a really nice feature.

However, a lot of what they announced were obvious features that they should have had a long time ago, folders and customizable backgrounds being the big two there. There are, no doubt, some nice features here, but only two things stand to be really interesting: the new social gaming network they're building, and iAd.

The social gaming network makes sense; a lot of people feel the iPhone is a great gaming platform, and Apple boasted that the iPhone has about 10 times as many games as the Nintendo DS. Of course, they're very different kinds of games, many of them aren't very good (though there are apparently some gems), and it's highly unlikely they've generated anywhere near the same amount of revenue, but this was a marketing speech, so whatever. Actually, I'd rather Apple had made a deal to integrate with Xbox Live (which wouldn't happen with Windows Phone 7 on the horizon), or the PlayStation Network. We have way too many social networks these days.

iAd, on the other hand, is an interesting advertising platform, and they showed some really interesting demo ads. They were rich, and could do things like drive app sales directly as well as interface with maps and everything else. They were all HTML5, but the look and feel appeared to be very much that of a native app. I'd dispute the claim that clicking on an ad no longer takes you out of the app you're in, since the ads appeared to take full control of the display (which is not unreasonable), but they were interesting. I'm not convinced they'll drive more clicks than a solution like AdMob, but we'll see.

Plus, you’re not required to use either of these services. If you’d rather use a third-party advertiser, you can. I suspect most people will just use iAd, but at least the choice is still there.

The biggest problem with the new platform wasn't mentioned at the presentation, which isn't a surprise. Luckily, John Gruber was willing to publish about it. Namely, the iPhone Developer Agreement now mandates what language you develop in. And it does so in a way that neuters not only Flash CS5, but also MonoTouch and Unity3D, both of which are in use in major iPhone applications today.

Apple's restriction to using only published, non-private APIs was understandable. I disagreed with it, but I understood it. However, mandating the language used to create an otherwise compliant piece of code is ridiculous. So far, I've only seen one such product express concern over its own future on the iPhone platform. Most are, understandably, remaining quiet until such time as Apple clarifies.

My suspicion is that Apple will lift this restriction within the next few weeks; however, the very fact that they attempted this should cause concern among developers for this platform. The lack of transparency and Apple's history of removing programs from the App Store after the fact are bad enough, but now Apple is showing that they're willing to dictate tools even if you comply with all other parts of their requirements, and even when it would have no impact on users. I can't see any impact that allowing people to use these alternate tools would have on the ecosystem.

Then, there was Steve’s response when asked about ‘unsigned applications’, which was really code for ‘applications not from the iTunes App Store.’

Q: Are there any plans for you to run unsigned applications, like on Android?

A: There is a porn store for Android to go to. You can download them, your kids can download them. That's a place we don't want to go. We're not going to go there.

First off, why in the hell did he have to jump straight to pornography? That's a bullshit argument, and I really hope everyone in the room at the time took it as such. Second, this is honestly part of what I like about Android. There is room for apps of all kinds, even if Google restricts them from their own Market. This single-store mentality pervasive in the iWhatever ecosystem right now is unhealthy, and it mandates some third party's idea of what is appropriate on a piece of hardware that you've purchased, even when it has zero effect on them.

The Android Market is the easiest way to get applications on an Android phone, and you do need to specially allow non-Market apps, but at least the opportunity is still there. The real answer is that Apple doesn’t want to provide a side-channel for apps, and they’re using a bullshit argument about Porn to justify that position.

There were a few things in this announcement that were interesting. The expansions to the Mail app (merged inbox and threaded view) are really nice, and I'd like to see them on other smartphones. However, there was nothing announced that made the iPhone any more compelling to me than what's going on in Android today (even if I'm still stuck on Android 1.6 on my own device). In many ways, I still think the device is less capable than other smartphone platforms. It was, by and large, a good step forward technically, but the arrogance of the policy shift should be enough to make developers think twice about hitching their wagon to a platform run by a company that makes such insane demands.

StyleCop in Hudson, revisited

As noted in the comments to my post on Tuesday, the original solution I had to this ended up being a lot less flexible than I thought. For one, StyleCop will apparently only parse Settings.StyleCop files above the folder from which you began executing StyleCop. So, any Settings.StyleCop files below the directory containing my controlling MSBuild file won't actually be processed. That also means that the suggestion by Jason Allor on that post, to set the StyleCopEnabled property in the CSProj file, didn't have any effect; StyleCop doesn't parse CSProj files.

It turns out that the suggestion I'd based my solution on was a little too simple. It didn't allow me to prevent StyleCop from running on a given project, or on a given set of files. It wasn't properly letting me modify rulesets for different projects. In short, it just flat didn't work for anything but the simplest of circumstances. On the one hand, I can't blame Craig Berntson, who gave that Boise Code Camp presentation, for oversimplifying the situation, but on the other hand, it didn't work well.

Of course, Mr. Allor from Microsoft has already posted on his blog about the real solution to this problem. Basically, put the import in every single CSProj file. This is a little shitty, since I had about 30 projects to modify, and Visual Studio does not make it trivial to modify the core templates that are used to create new projects. In the next month, I'm going to put together some tooling to make it easy to add custom code to every template matching a certain pattern in VS2010 (we're going to be switching as soon as we can). This might be an external tool, but I'm looking at whether I can take advantage of VS2010's extensibility model. I'm only in the earliest stages of contemplating this.

For now, I'm hand-managing the CSProj files to insert the import from the link above, but we did have one special requirement: at least one member of our team didn't want StyleCop running on every build. My experience with StyleCop's caching is that it's not really an issue, but MSBuild makes this pretty easy to handle. We have a three-target build process: Debug, Test, and Release. When run from our controlling build script, Test will push to our Test server, and Release to our production server. Generally speaking, we always run Debug from Visual Studio, Test from CI, and Release only when we're pushing stable builds, so we put the following block in all our CSProj files:

    <Import Project="$(MSBuildExtensionsPath)\Microsoft\StyleCop\v4.3\Microsoft.StyleCop.targets" />
    <PropertyGroup>
        <StyleCopEnabled>false</StyleCopEnabled>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'Test'">
        <StyleCopEnabled>true</StyleCopEnabled>
    </PropertyGroup>

This ensures that StyleCop only runs when doing a "Test" build. For projects that I don't want to run StyleCop against, I simply exclude the second PropertyGroup (and usually the Import as well; why bother importing what you're not using?).

I’m not saying that you should exclude StyleCop from developer builds. But it is nice to know MSBuild makes it pretty easy to do.

Integrating StyleCop into Hudson Build


Note: The following doesn’t quite work correctly. Check out tomorrow’s post for more information

After Craig Berntson’s presentation at Boise Code Camp this year on expanding Continuous Integration usage, I decided to take the plunge and implement StyleCop on most of our projects on the CI server.

The problem with StyleCop is that its default set of rules is pulled directly from Microsoft's Design Guidelines for Developing Class Libraries. While many of these guidelines lead to excellent code, they don't all make sense for every developer. For instance, in our shop, we prefer an indenting style based on the K&R rules, but StyleCop uses rules more akin to the Allman style. For now, we've had to disable several of the styling rules, but I haven't been able to find (or write) custom rules to fill that need.

That said, it did take me a couple of days of pretty solid work clearing out StyleCop violations before I was comfortable putting it into our rules. Save for disabling all the documentation header rules (we're likely to apply them, but it will be on a project-by-project basis), I only ended up disabling another fourteen rules in our global StyleCop file. Incidentally, one of my favorite default StyleCop features is that it will work its way up your directory structure looking for Settings.StyleCop files, which it parses for configuration. To facilitate this, I put a StyleCop configuration file at the root of our Perforce depot, which constitutes our global configuration while still allowing per-project configuration if we need to customize anything.

After a few days of work, I had our code down to about 1,300 StyleCop errors, which was actually a pretty huge code change. The goal with any code analysis tool is that it will make our programs more maintainable, which is important to me, because I don't plan to retire from my current employer, and I want to ensure that the codebase I hand off is as healthy as it can be.

Hudson already has a plugin for parsing StyleCop output, among other tools, so it was trivial to configure the addition to our build server. Plus, StyleCop offers an MSBuild task, which is also really convenient. The following code in our core MSBuild file was all it took:

  <UsingTask AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\StyleCop\v4.3\Microsoft.StyleCop.dll"
        TaskName="StyleCopTask" />

    <Target Name="StyleCop">
        <CreateItem Include="..\**\*.cs">
            <Output TaskParameter="Include" ItemName="StyleCopFiles" />
        </CreateItem>
        <StyleCopTask
            ProjectFullPath="$(MSBuildProjectFile)"
            SourceFiles="@(StyleCopFiles)"
            ForceFullAnalysis="true"
            TreatErrorsAsWarnings="true"
            OutputFile=".\stylecop_results.xml" />
    </Target>

StyleCop only works on C# code, but that's fine for us. VB.NET programmers are currently SOL, but if Microsoft really believes that VB is a first-class citizen in the .NET world, I expect we'll see StyleCop for VB someday. The only other thing to do was make the StyleCop target a dependency of my full build project, and now StyleCop runs on every build in Hudson automatically. I configured the Violations plugin for each of our projects in Hudson to open '**/stylecop_results.xml' as the StyleCop path, and now we'll have a chart (which will hopefully start trending downward) on our Hudson dashboard telling us how healthy our projects are from a StyleCop perspective. Right now…some are healthier than others.

On a side note, we do have a few open source projects, like the Silverlight Toolkit, which are part of our build process. Since we didn't write this code, but we still build and link against it, we really didn't want it to taint our StyleCop score. Excluding a project is as simple as dropping a Settings.StyleCop file in the root of the project that disables all StyleCop rules. Unfortunately, I think you have to name every rule individually, but I am on the lookout for an 'Exclude this subfolder' flag for StyleCop.