Mad, Beautiful Ideas
On the Internet, Not All Arguments are Strings

The Internet is not strongly typed. It never has been. On our forms, we take in an enormous variety of data: strings, dates, times, numbers, even files. However, due to the nature of HTTP, that data transfers from client to server as ASCII text, which the server will most often happily convert into strings. Regrettably, this leads to any number of problems when dealing with that data back on the server.
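To make that concrete, here’s a minimal sketch (in ASP.NET MVC style, since that’s what I’ve been working in lately) of what the server actually sees. No matter what a form field was meant to hold, it arrives as text:

```csharp
using System.Web.Mvc;

public class OrderController : Controller
{
    public ActionResult Submit()
    {
        // A POST body like "quantity=3&price=19.99" is just text on the wire,
        // so every value we pull out of it is a string until we convert it.
        string quantity = Request.Form["quantity"]; // "3", not an int
        string price = Request.Form["price"];       // "19.99", not a decimal
        int parsed = int.Parse(quantity);           // the conversion is our job
        return View();
    }
}
```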

I’ve long held the view that any data that comes from the user should be treated as suspect. HTTP is a stateless protocol, so we have to entrust some data to the user, but we must strive to minimize what the user is able to manage, with proper access controls. The session objects supported by most modern web platforms help with this, as they allow us to keep the real data on the server, under our control, while still having a mechanism to identify who the users are. Changing session variables is typically so inconvenient that most users wouldn’t even dream of doing so. However, I have encountered web applications where the cookies being used to maintain session values were easily identifiable, and easily guessed. I know of one e-commerce site where a cookie with the shopping cart id is sent to the user, and the user can change that cookie to view any historical or current shopping cart. This same site, until fairly recently, controlled application access to the administrative interface based on cookies that were sent to the user. Therefore, any user who had logged in could choose to change their access level. Actually, if you knew the names of the keys being set, you could bypass the entire login process.
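As a hypothetical sketch of that shopping-cart flaw (the controller and repository names here are my own invention, not the site’s actual code), compare trusting a user-editable cookie with keeping the authoritative cart id in server-side session state:

```csharp
using System.Web.Mvc;

public class CartController : Controller
{
    // The flaw described above: the cart id comes straight from a cookie,
    // so a user can edit the cookie and browse anyone's cart.
    public ActionResult ShowInsecure()
    {
        int cartId = int.Parse(Request.Cookies["cart_id"].Value);
        return View(CartRepository.Load(cartId));
    }

    // Safer: the authoritative cart id lives in server-side session state,
    // and the user only ever holds an opaque session identifier.
    public ActionResult Show()
    {
        int? cartId = (int?)Session["CartId"];
        if (cartId == null)
            return RedirectToAction("Index", "Home");
        return View(CartRepository.Load(cartId.Value));
    }
}

// Hypothetical stand-in for whatever actually fetches carts from the database.
public static class CartRepository
{
    public static object Load(int id) { return null; }
}
```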

Unfortunately, it seems that many web developers implicitly trust the data coming across the wire. Shopping sites that take the price-per-unit from a hidden field on the page are a classic example. The trick to web development is that you should only provide the user with the minimal amount of data that you need to identify their intent on postbacks. Anything else needs to come from an authoritative source, which never includes the user. I suspect the reason for some of these design decisions is to reduce memory usage (keeping sessions small) and database accesses (which take time). Unfortunately, more input from the user’s machine means more potential attack vectors, more data over the wire, and more opportunities for errors to occur. Plus, it requires more work to validate input from the user than it does to pull that data from a reliable source. The few milliseconds that might be saved by serving up input to the user and pulling it back in simply don’t justify the security holes.
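A minimal sketch of the safer shape, again in ASP.NET MVC terms (the ProductCatalog lookup is a hypothetical stand-in for the real database): the form tells us which product and how many, and everything else comes from the server.

```csharp
using System.Web.Mvc;

public class CheckoutController : Controller
{
    // Only the user's intent (which product, how many) comes from the form.
    // The price is looked up from an authoritative source on the server;
    // a hidden "price" field on the page is never trusted.
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult AddToCart(int productId, int quantity)
    {
        decimal unitPrice = ProductCatalog.GetPrice(productId);
        decimal lineTotal = unitPrice * quantity;
        ViewData["Total"] = lineTotal;
        return View();
    }
}

// Hypothetical stand-in for the real product database.
public static class ProductCatalog
{
    public static decimal GetPrice(int productId) { return 19.99m; }
}
```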

Stepping away from the admittedly horrible software that I’ve been using as examples, the fact that all data is transferred between client and server as ASCII has some interesting trade-offs. Since everything is a string in HTTP, we are required to do a lot of parsing on the server side, which leads to its own challenges. If HTML forms were strongly typed (something that is closer to being possible in HTML 5), then the web frameworks that we work in today could be extended to save us from many of the issues that we face. I wouldn’t have to worry about dates, for instance, if the web browser were required to convert them to ISO format before submitting them; anything that failed to parse could be rejected by the framework, and it would no longer be my problem.
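Here’s roughly what that would buy us on the server, assuming the browser normalized every date to ISO 8601 (yyyy-MM-dd) before submitting: parsing becomes a single unambiguous step instead of a guessing game over locale-specific formats.

```csharp
using System;
using System.Globalization;

class IsoDateExample
{
    static void Main()
    {
        // If the browser guaranteed ISO 8601, one exact format would suffice;
        // no guessing whether "03/04/2009" means March 4th or April 3rd.
        DateTime shipDate;
        bool ok = DateTime.TryParseExact("2009-03-14", "yyyy-MM-dd",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out shipDate);
        Console.WriteLine(ok ? shipDate.ToString("D") : "rejected");
    }
}
```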

Regrettably, we’re so far gone at this point that developers are going to be forced to deal with these sorts of issues for as long as the Web as it exists today persists. Many users simply don’t see the need to keep their browsers up to date, and most web developers will still support IE6. It’s ironic that many like to tout the Web as the best mechanism for providing a homogeneous experience to users when there is so little commonality between the major browsers. CSS and JavaScript both have to be tweaked based on the browser the user happens to be running, creating an unacceptable burden for developers. Flash and Silverlight mitigate this by running in sandboxes, where they are able to provide far more control over the execution space.

Since the Web is not strongly typed, should we use strongly-typed languages to process it? Of that, I’m not so certain. Most of my web experience has been with dynamic languages like PHP and Perl, and frankly, I’ve always liked that experience. I don’t have to worry about catching exceptions or other error conditions during program setup, but I do have to do a lot more work on the back-end to verify the data. Lately, I’ve begun working in the ASP.NET MVC Framework, writing C# code. The impression I’ve gotten from the work I’ve done so far is that, rather than describing the expected form data, the framework simply places it in a dictionary of strings, dropping all the benefits of type-checking up front and requiring me to do a lot of type-casting that feels strange.
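To illustrate the two styles side by side (the action and field names here are just for illustration): working against the raw string dictionary, every conversion is my problem; declaring typed parameters, the framework’s model binder converts the values before my code ever runs.

```csharp
using System.Web.Mvc;

public class AccountController : Controller
{
    // Style 1: everything comes out of the dictionary as a string, and
    // every conversion (and every conversion failure) is mine to handle.
    public ActionResult RegisterRaw()
    {
        string rawAge = Request.Form["age"];
        int age = int.Parse(rawAge); // throws FormatException on "abc"
        ViewData["Age"] = age;
        return View("Register");
    }

    // Style 2: declare typed parameters and let the framework's model
    // binder do the conversion before the action method runs.
    public ActionResult Register(string name, int age)
    {
        ViewData["Age"] = age;
        return View();
    }
}
```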

What we need is a strongly-typed language for the web. In this system, on our form callbacks, we would define the properties (GET and/or POST) that we were accepting from the user. The user could send more data, but we would simply ignore it and not make it accessible to the rest of the application. This gives us strong typing up front. The challenge is how we deal with invalid form information. For instance, if we’re expecting a numeric value and receive an alpha string, what should the runtime do? I’m not sure what the answer is for this. I would consider registering callbacks alongside the arguments to define the failure conditions. These callbacks could make further attempts to parse a value, or simply hijack the control flow so that you can do error reporting. Incidentally, a standard MVC framework works really well for this sort of circumstance, as the error handler could simply pass control off to an “Error View” in a manner that is seamless to the user.
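A purely hypothetical sketch of that idea follows; none of these types exist in any framework, and the names are my own. Each expected field is read with its type and a failure callback, which can recover with a default or throw to hijack the control flow into error handling.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical: a form reader that pairs each typed read with a
// caller-supplied failure callback.
public class TypedForm
{
    private readonly Dictionary<string, string> raw;

    public TypedForm(Dictionary<string, string> rawValues)
    {
        raw = rawValues;
    }

    // Parse the named field as T, or hand the raw text to the failure callback.
    public T Get<T>(string name, Func<string, T> onFailure)
    {
        string text;
        raw.TryGetValue(name, out text);
        try
        {
            return (T)Convert.ChangeType(text, typeof(T));
        }
        catch (Exception)
        {
            return onFailure(text); // recover with a default, or throw here
                                    // to divert control to an error view
        }
    }
}

// Usage: quantity falls back to 0 when the user sends "abc" instead of a number.
//   var form = new TypedForm(new Dictionary<string, string> { { "quantity", "abc" } });
//   int quantity = form.Get<int>("quantity", text => 0);
```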

It would not be particularly difficult to implement such a system; the only difference would be that before any arguments were parsed, something would need to register the types and names first. This would likely be easy to put into the ASP.NET MVC framework (you could do this in Catalyst as well, but Perl isn’t strongly typed, so why bother?), as Microsoft has done a very good job so far of making their framework extensible. However, until we have good standards on the client side for standard data representations, I think the entire discussion is largely academic.
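For what it’s worth, ASP.NET MVC already exposes a hook that comes close: a custom model binder. This is only a sketch (it assumes the IValueProvider-style API rather than the original MVC 1.0 dictionary), but it shows where the type-and-name registration could live.

```csharp
using System;
using System.Globalization;
using System.Web.Mvc;

// Sketch of a custom binder that only accepts ISO-formatted dates.
public class IsoDateBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        ValueProviderResult result =
            bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (result == null)
            return null;

        DateTime parsed;
        if (DateTime.TryParseExact(result.AttemptedValue, "yyyy-MM-dd",
                CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed))
            return parsed;

        // Record the failure so the controller can route to an error view.
        bindingContext.ModelState.AddModelError(bindingContext.ModelName,
            "Dates must be submitted in ISO format (yyyy-MM-dd).");
        return null;
    }
}

// Registered once at startup, e.g. in Global.asax:
//   ModelBinders.Binders.Add(typeof(DateTime), new IsoDateBinder());
```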

The web wasn’t designed for applications. And while HTML 5 is trying to correct some of these limitations, it will be hard to support, as developers won’t necessarily be able to depend on their users running a compliant browser, and the attitude on the web is one of inclusion. And it should be: denying users is always a risky proposition, as people who have negative experiences are far more vocal than those who have positive ones. Choosing not to support a browser any more is dangerous, even if that browser is five years old. Some people just don’t like to change. Hopefully some day we’ll be ready to move forward with the web.