Mad, Beautiful Ideas

Recently in Bad Design Category

Windows Communication Foundation on IIS Deployment

At work we’ve been putting together a new schedule proofing experience for our campus (and possibly the rest of the University system) which would allow the schedule proofers to do all their work via a web-based interface. As we’re a primarily Microsoft-based house, the entire system is being built upon newer Microsoft platforms, for better or worse (usually the latter). We’ve been building the system using Silverlight 2.0 for the front end, Windows Workflow for the middle-layer, and Windows Communication Foundation for the communication between the two.

WCF is an interesting technology, because it makes producing web-based or application-based services pretty simple, and the framework modifies its behavior based on how you deploy it. Need XML-based output? It'll do that. Binary output more your style? Feel free. If you want it, WCF can do JSON as well. The technology is handy because it lets you focus on the implementation details of your web service, rather than worrying about the intricacies of SOAP or JSON data exchange. The technology is compelling enough that the Mono project started the Olive project to bring WCF to Mono.
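Just to illustrate the point (the contract and names below are hypothetical, not our actual service), a WCF service is basically an attributed interface plus an implementation; nothing in the code picks XML, JSON, or binary, because that decision lives entirely in the endpoint's binding configuration:

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical contract for illustration only. The same implementation can be
// exposed as SOAP/XML (basicHttpBinding), JSON (webHttpBinding), or a binary
// stream (netTcpBinding) just by changing the endpoint configuration.
[ServiceContract]
public interface IScheduleProofingService
{
    [OperationContract]
    // The WebGet attribute only comes into play on a webHttpBinding endpoint,
    // where this operation would answer GET requests and return JSON.
    [WebGet(UriTemplate = "sections/{term}", ResponseFormat = WebMessageFormat.Json)]
    string[] GetSections(string term);
}

public class ScheduleProofingService : IScheduleProofingService
{
    public string[] GetSections(string term)
    {
        // Stub data; a real implementation would query the scheduling system.
        return new[] { "CS 101-01", "MATH 220-03" };
    }
}
```

Point an .svc file (or a ServiceHost) at that class and the wire format is decided entirely by which binding the endpoint uses.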

The technology has a lot of really cool potential, but it suffers from some inherent design flaws that are hideously unfortunate. First, it seems frighteningly difficult to put more than one WCF web service in an IIS instance. This has had all sorts of implications for people doing ASP.NET in shared web-hosting environments, requiring unnecessary complexity to work around a bad behavior. While our problem is a little different, and I'll get into it in a moment, I'm a bit confused as to how the solution linked above, which is to define a custom ServiceHostFactory object, even works, since the Factory attribute doesn't seem to be valid, at least according to my VS 2008 instance. I'm not going to pursue that direction, however, as our issue stems from a slightly different, but I would argue far more common, position.

We currently have three web services designed. All three have their own SVC files in our web project, and all three are properly defined WCF services. They all work properly on the local test server, and they were created as separate services because they each deal with different sets of data. Two are used to query into certain data systems, and in the interests of proper code separation, as well as the potential for reuse, we wanted to keep them separate. Even the old ASP.NET Web Services would have allowed this. Not WCF, though.
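For context, each of those SVC files is nothing more than a one-line hosting directive telling IIS which service type to spin up, something along these lines (names hypothetical):

```
<%@ ServiceHost Language="C#" Debug="true"
    Service="Campus.Scheduling.ProofingService"
    CodeBehind="ProofingService.svc.cs" %>
```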

[Screenshot: WCF_Error.png, the exception thrown when a second WCF service is added to the IIS application]

That’s right, try to host more than one WCF service in a single IIS application, and an exception is thrown. Great work, Microsoft. I probably wouldn’t be so annoyed if Microsoft wasn’t trying to defend their position on this.

Wenlong Dong posted in the thread linked above: "Unfortunately the behavior is by design. WCF does not support multiple IIS bindings for the same protocol (here it is HTTP) for the same web site. This is for simplicity, especially it did not seem to be an important scenario. Is this a very important scenario for you? Can't you host different services in different web sites (with different ports of course)? If this does block you, we may think about revisiting this issue again."

SIMPLICITY?!? DIFFERENT WEB SITES?!? WITH DIFFERENT PORTS?!?

You have got to be fucking kidding me.

Damn it, Microsoft: as far as I can tell, if I'm hosting in IIS, I have to reference the service by its full URL anyway (typically ending in .svc), so the service already has a uniquely identifying endpoint. That's really all the framework should care about: that a URL can identify the location of the service. If I want to host them all on the same port, but with different URLs, why on earth should that matter to your framework? The URL already tells you which code to run; how can you possibly have any confusion over this?

Luckily, Dan Meineck, a .NET Web Developer from the UK, has come up with a solution. Is his workaround complicated? Yes. Is it unreasonable? Yes. Does it work? Apparently.

The solution boils down to two steps (a sketch of both follows below):

1. Using .NET's partial classes, put all of your web services into a single class, split across separate files, with each file implementing only the WCF contract interface that file is responsible for.
2. Modify your Web.config (or App.config) file: in the system.serviceModel section, define an endpoint block for each of your services under the single service element. Each endpoint can specify a specific contract interface, so that only the methods you want are available on that endpoint.
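Here's roughly what that looks like in practice. All of the type and namespace names below are hypothetical, and this is a paraphrase of Mr. Meineck's approach rather than his actual code, but it shows the shape of the thing: one service class assembled from partial definitions, and one service element with an endpoint per contract.

```csharp
using System.ServiceModel;

namespace Campus.Web
{
    // One contract interface per logical service (these could live in separate files).
    [ServiceContract]
    public interface IScheduleQueryService
    {
        [OperationContract]
        string[] GetSections(string term);
    }

    [ServiceContract]
    public interface IRoomQueryService
    {
        [OperationContract]
        string[] GetRooms(string building);
    }

    // ScheduleQueryService.cs -- this partial implements only IScheduleQueryService.
    public partial class CampusServices : IScheduleQueryService
    {
        public string[] GetSections(string term)
        {
            return new string[0]; // stub; real code queries the scheduling system
        }
    }

    // RoomQueryService.cs -- this partial implements only IRoomQueryService.
    public partial class CampusServices : IRoomQueryService
    {
        public string[] GetRooms(string building)
        {
            return new string[0]; // stub; real code queries the facilities system
        }
    }
}
```

```xml
<!-- Web.config: one <service>, one .svc file, but a separate endpoint per contract -->
<system.serviceModel>
  <services>
    <service name="Campus.Web.CampusServices">
      <endpoint address="schedule" binding="basicHttpBinding"
                contract="Campus.Web.IScheduleQueryService" />
      <endpoint address="rooms" binding="basicHttpBinding"
                contract="Campus.Web.IRoomQueryService" />
    </service>
  </services>
</system.serviceModel>
```

With a single CampusServices.svc pointing at that class, the schedule operations should show up under the "schedule" endpoint and the room operations under "rooms", which is more or less the behavior we wanted out of three separate .svc files in the first place.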

Ultimately, this provides the exact behavior we want, but it's really not very clean, and forcing users into this particular model is confusing and pointless. I understand that Microsoft feels the interface specifies the behavior, and is therefore the important part of the definition, but this decision will make it far more difficult for me to integrate a web service from one project into a second project. Frankly, if this was simpler for anyone (even for Microsoft, from an implementation standpoint), it suggests to me that there are deeper design issues in the way WCF works.

I'm refactoring the code today to match Mr. Meineck's suggestions, but it just seems so unnecessary and pointless. Please fix this, Microsoft.

Bad Design: Microsoft Virtual Server

At work, we've been moving quickly in the direction of Server Virtualization. There have always been security benefits to reducing the attack exposure of a server by minimizing the number of services it runs, but dedicating a physical machine to each service has traditionally been immensely impractical: servers take up space and use quite a bit of electricity, and most of them sit around mostly idle, most of the time. So, given that many servers usually aren't working very hard, why not combine many servers into one? That is exactly what Server Virtualization is all about.

This space has traditionally been dominated by VMWare, though several Open Source options, like Xen, have become available in recent years. On the desktop, I've been using VirtualBox OSE, which does a decent job, at least until I can justify the purchase of VMWare Workstation (which, frankly, isn't too expensive at ~$200). Between these Open Source tools, and Microsoft's Virtual Server and Virtual PC both being free to licensed Windows users, it will be interesting to see how long VMWare can keep their prices where they are.

To be fair, VMWare is still the superior product, but the cost savings, and our SysAdmin's tendency to prefer all things Microsoft, led us to use the Microsoft solution. And it's worked out pretty well for us. Virtual Servers are easy to create, back up, and redeploy. The only problem we've had is that these operations take a long time, due to the fact that we're not using a NAS device, but the software has done pretty well for us despite this. 10+ hours to transfer a 150 GB image over Gigabit Ethernet pretty much sucked, but it did work.

So, why is this software being pilloried for Bad Design? Simple. Deploying a Virtual Server image out of the backup deletes the backup, without any option to do otherwise. Our Sys Admin has been in the process of rebuilding all of our servers with Windows Server 2008, and last weekend was his opportunity to rebuild the Virtual Server hosts (I'm not sure why we're not using Hyper-V; don't ask). The rebuild went fine, aside from the re-deployment of some of the Virtual Servers being slow, but again, a NAS will fix this, and it's an in-progress purchase.

Due to the staggering amount of time it took to do the redeployments, immediate backups were not performed. It was assumed that waiting until this weekend would be fine. Anyone who has done Systems Administration knows what happened next.

The Virtual Server host lost the largest virtual server: the only one that wasn't part of any standard backup scheme, because we had been told it was for temporary storage of image data as it was being cataloged. It was being used for more. Much more. Attempts were made to recover the images, including running several undelete tools on the server in question. The only thing not done was immediately taking the server offline and imaging the drive for analysis and possible recovery. The Sys Admin felt it wasn't necessary, and I lost a much-sought opportunity at forensic analysis. :(

Sure, we should have had that backup. However, if best practice dictates that you should immediately back up a virtual server deployed from the vault, why does the software delete the version it's deploying? Who in the hell thought that was a good idea? Drives fail. Software fails. We back up to protect ourselves from that. It was entirely possible that the failure that lost the virtual machine could have happened in the window between the image being deployed and the backup being completed, and in that case, whose fault would the failure have been?

One of the first rules of writing software is that it must be resilient. Due to the decision to move an image while deploying from the library rather than copying it, that server was lost. This is not resilient programming. This is not resilient design. This is not resilient software.

For the most part, our experience with Microsoft's virtualization technology has been fine. I had a bit of trouble booting Ubuntu 8.04 inside of it (tip: add the 'noreplace-paravirt' boot option), but it's mostly worked pretty well. Still, this particular bug is so egregious that I can't imagine what the person who decided on that behavior was thinking.

Don't be completely afraid of Microsoft's tech. But do be sure to do your backups religiously.