April 2009 Archives

.NET Strangeness: Generics and Casting

Ran into an interesting problem with .NET and its Generics earlier today that really, really surprised me. The problem that I was trying to solve was fairly straightforward. I’m working on porting a large collection of Classic ASP Web Applications to a collection of ASP.NET MVC Web Applications, in the process upgrading the applications to more advanced web technology (and I’m not just talking about .NET).

One particular task that we deal with regularly is authorizing users for subsets of data; however, a user’s permissions often differ between the various applications. As such, I wanted to create a simple data type to help me display these authorizations. Now, I’m simply going to call the app App1. It has a database table called App1Authorizations which manages the special authorizations for this app.

This led to an interesting compile-time type-casting error where .NET refused to upcast the App1Authorization class to the IAuthorization interface (a perfectly legal move). I fought this for about twenty minutes, trying to use LINQ to force the cast, before I finally found documentation describing why I was encountering the error:

The C# compiler only lets you implicitly cast generic type parameters to Object, or to constraint-specified types, as shown in Code block 5. Such implicit casting is type safe because any incompatibility is discovered at compile-time.

Essentially, this means that inside generic code you can only implicitly cast a type parameter to Object (the most basic class), or to a type which you’ve explicitly told the compiler about ahead of time via a constraint, because (the article claims) that’s the only way to provide type-safe compile-time checking.

I don’t buy this line. The compiler should already know the relationship between the interface I’ve declared and its child, and it should be able to handle that at compile time. Now, I’m not trying to claim to be smarter than Anders Hejlsberg. It’s highly likely I’m missing something, but it seems to me that this should be a solvable problem.

Okay, so I don’t like that what appears to be a perfectly reasonable use of generics doesn’t work. But what does it take to make it work? Well, we have to go back to Generics, and add generics where I really didn’t want to.
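Since I’m not reproducing the actual App1 code here, below is a minimal sketch of the shape of the workaround; the type names are simplified stand-ins rather than the real code, but the idea is the same: push a generic type parameter, with a constraint, into a helper so the compiler will accept the conversion.

using System.Collections.Generic;

public interface IAuthorization
{
    int UserId { get; }
}

public class App1Authorization : IAuthorization
{
    public int UserId { get; set; }
}

public static class AuthorizationHelper
{
    // Without the "where T : IAuthorization" constraint, the compiler rejects
    // result.Add(item) with "Cannot implicitly convert type 'T' to 'IAuthorization'".
    // With the constraint in place, the conversion is legal at compile time.
    public static IList<IAuthorization> AsAuthorizations<T>(IEnumerable<T> items)
        where T : IAuthorization
    {
        var result = new List<IAuthorization>();
        foreach (T item in items)
        {
            result.Add(item); // allowed only because of the constraint
        }
        return result;
    }
}

The LINQ results get routed through a helper like this, rather than simply being assigned to an interface-typed list.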

This works, but in my opinion it’s harder to read and slightly confusing. The only other alternative would be to implement a custom List class which would allow the IAuthorization cast, and that’s basically unreasonable. In short, I think this smells. It was non-obvious what was wrong, and then non-obvious what the solution was.

It’s unclear to me whether this is an issue with the .NET virtual machine (as my problems with Java’s Generics are), or with the C# compiler, but my hope is that this is a correctable problem (maybe I should crack open the Mono C# compiler source).

Now, because I’m sure it will come up, there is one reason supporting this design that I can think of. As I said above, I am writing an MVC Application. The LINQ query lives in my Model, which is called by my Controller, and the results are then passed via the ViewData to a View for display. By forcing me to do this extra bit of casting, I can ensure (with compile-time assurance) that the right kind of IAuthorization object makes it into the View.
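To make that flow concrete, here is a hypothetical sketch (the controller, model class, and method names are invented for illustration, not the real App1 code) of handing the converted list to the view via ViewData:

using System.Collections.Generic;
using System.Web.Mvc;

public static class App1Model
{
    // Stand-in for the real model; imagine the LINQ query against the
    // App1Authorizations table living here.
    public static IEnumerable<App1Authorization> GetAuthorizations()
    {
        return new List<App1Authorization> { new App1Authorization { UserId = 42 } };
    }
}

public class App1Controller : Controller
{
    public ActionResult Index()
    {
        // The generic helper from the earlier sketch guarantees, at compile
        // time, that the view only ever receives IAuthorization objects.
        IList<IAuthorization> auths =
            AuthorizationHelper.AsAuthorizations(App1Model.GetAuthorizations());

        ViewData["Authorizations"] = auths;
        return View();
    }
}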

As I said, there may be a perfectly good reason for this design decision that I’m not aware of. If there is, I’d love to hear it. Now that I’ve seen this, I know to look out for it. Do I wish that it could be different? Absolutely. But hopefully this write-up can help someone else with this problem.

DRM and eBooks

Cory Doctorow (sci-fi author, blogger, and technologist) is a well-known Anti-DRM speaker. He even released a book of his writings on the subject, which is available via a CC-BY-NC-SA download. Now, this book, Content, is one of Doctorow’s that I haven’t bought yet, largely because I haven’t found it in a store yet (hey, I like to buy books in actual bookstores). But even though I’ve already read it, I still plan to buy the book to share, as well as to support Mr. Doctorow.

We finally won the DRM war in relation to the Music industry. I can’t think of a single major player in the downloadable music game (except for game-related audio, like Rock Band Downloadable Content) who still uses DRM on any of their downloads. Now, the prices are still high, in my opinion, and the signs are not good in that respect, but the removal of the DRM is a huge win.

But that victory is only one of many that are needed. In the Video industry, you have Hulu trying to stop aggregators like Boxee from replaying their content (which would still display Hulu ads), and you’ve got the BBC using DRM on their iPlayer. But I don’t think that’s the next battle we’re going to win. Video is great, but the people who control the content are, in my opinion, far more stubborn than the Music industry ever was.

Like Doctorow, I believe the next front in this war that we’re likely to win is in the Publishing industry. eBooks are starting to become a really big deal. I think a large part of this is due to the fact that for a long time, eBooks simply weren’t convenient. They weren’t portable, the displays for reading them were relatively low contrast compared to the printed page, and eyestrain was common. That’s changed in recent years with the advent of the e-ink display which powers (among other things) the Amazon Kindle.

The Kindle is a great piece of technology. Great high-contrast display. It’s light. Its battery lasts for weeks. And it’s got a built-in cellular modem for over-the-air purchases and updates (all without a monthly fee). However, all these great features are, in my opinion, outshone by one rather ugly feature: its DRM. Recently, Amazon disabled the accounts of several users it felt were returning books too often. Not only did this deny those users the ability to buy new Kindle books, it also denied them access to purchased materials that were already on the device. It’s unclear exactly what happened, but it seems likely that these users were downloading content, reading a few pages to decide whether they wanted it (browsing, essentially), and returning some of it because they decided they weren’t interested at the time.

Now, the returns issue is an interesting one for the digital media world. On the iPhone, it’s almost impossible for a user to ‘return’ an app that they decided they don’t like, while on Android, if I buy an app, I can “return” it to Google within 24 hours. I actually really like how comic publisher iVerse has handled this on Android. Each comic runs about 7-8 MiB, which is a fair amount of the App storage on an Android phone (unfortunately). Due to customer complaints about not being able to have very many comics on their phones (since Apps can’t be stored on the SD Card), iVerse put together an application that allows you to ‘save’ your comics to the SD Card, and read them back using a separate, tiny app.

Of course, with Android’s 24-hour return policy, there was the fear that users would simply download the 99-cent comic, save it to their SD card, and ‘return’ the app while keeping the content they’d saved out. As a compromise, the iVerse comics disable the save functionality for the first day after you purchase them, allowing you to read them to your heart’s content, but not letting you save them until you’ve passed your return window. Does this prevent you from reading and returning? No. But it at least prevents ongoing access to the media if you do choose to return it. Now, I don’t know how iVerse’s solution to this problem stands up to piracy, and I’ll be investigating that soon, but it shows a reasonable compromise on the part of iVerse.

Back to eBooks: Cory Doctorow recently presented at O’Reilly’s “Tools of Change for Publishing” conference about why DRM is a bad idea for eBooks. Below is the embedded video of this presentation.

I think the takeaway message from Cory’s talk is what he calls Doctorow’s Law:

Anytime someone puts a lock on something you own, against your wishes, and doesn't give you the key, they're not doing it for your benefit.

Ultimately, the DRM companies, who push the idea of piracy, are trying really damn hard to lock you, and your customers, into their product. Up until recently, this was my problem with the iPod as a music platform (note: it is still a problem with the iPod as a video platform). And more and more, the vendors are using this to promote lock-in. I don’t believe Steve Jobs when he says that he never wanted DRM on the iTunes Music Store. iTunes became big because it was easy and provided strong integration with the players, but iTunes was able to stay big because the DRM locked the users into iTunes. And Amazon is today trying to do the same thing with the Kindle. And Audible (an Amazon subsidiary) is doing the same thing with audio books. Don’t fall into this trap.

Doctorow ends his talk reasonably, beseeching the listeners to make sure the choice to use DRM on their content is their choice, and not that of the vendor they’re working with. In the end, I think that DRM will always be the wrong decision long term, and the decision to use DRM will always negatively impact my decision to do business with a company. I may still end up doing business with them, but if I can find the same (or at least similar enough) media from a non-DRM provider, I will always go with the non-DRM option.

Gardening Season Begins

Saturday marked the beginning of our Gardening Season with the first public work day at the Pullman Community Gardens. Catherine and I went for a couple of hours, helped clear weeds from the main path, and went through the new gardener’s orientation we didn’t get last year.

We also discovered to our chagrin that our plot was not where we thought it was: it turned out our neighbors from last year (who were good neighbors) didn’t have a 20’x20’ plot, but were actually using a 20’x30’ one (not to mention the roughly five-square-foot melon mound in our plot), so we had to remeasure. Luckily, we are getting ground that was worked last year, so aside from the weeds, it’s proved fairly easy to work.

Our plan this year is to dig our beds lower than our paths, so that we can practice flood irrigation on our beds. Our hope is that this will not only make the watering easier, but also require less water. However, this is a fairly large change from last year, so it is requiring a fair amount of soil moving. I expect many more days of sore muscles before we’re done.

In other Garden News, we’ve started our first set of seeds in our apartment. Plenty of salad greens and chard, some tomatoes, peppers, a bit of corn, and a few other things. We’re hoping to have the salad greens in the dirt before the end of the week, but I’m not sure when we’ll get everything out, since frosts can hit the garden until late March. Even if we can’t plant everything right away, we should have a really good start this year, and once we get the beds built, I think our workload should be fairly light, save for waterings.

Crappy Customer Service

Okay, this may be stretching the definition of sustainability a bit, but it is probably the single most important consideration for a business’ continued success. Below are two stories of particularly heinous customer service experiences I’ve had recently. I’m going to try to make this more than simply a ranting post, but you’ve been warned about the content to come, and I won’t begrudge anyone for giving up on this post. The two organizations I’m planning to drag through the mud today are the restaurant Dupus Boomer’s, and the magazine Organic Gardening (and their publisher, Rodale).

First, Dupus Boomer’s. Dupus is a fairly local chain restaurant that opened up a location in the Compton Union Building at WSU. It’s the only non-fast-food option in the CUB, and while it’s a bit more expensive, I would occasionally like to go up there for lunch. However, since their first two weeks of being open, the service has been consistently degrading. It’s long been the case that to get faster service, you’d want to grab a seat in the bar area, where you’d generally get some member of the waitstaff to take an order fairly quickly. But over the last few months, every time I’ve been in, the service has gotten worse.

This came to a head last week. I sat down just after noon, knowing that I had a meeting to be at by 1pm. This is normally plenty of time. The waitress approached me immediately after I sat down, and I asked for water and a few minutes to read through the menu. Fifteen minutes later, the waitress finally returned to take my order, but not to deliver my water. That happened to be the last time I saw my waitress.

Now, it was reasonably busy in the restaurant that day. Almost every table was full, but due to the relatively slow kitchen, the turnaround was not such that the waitstaff should have had a lot of trouble keeping up with the crowd. Perhaps it’s because I used to work the lunch rush at a bar about the same size as Dupus, but I really have no sympathy for waitstaff feeling rushed during lunch. Most people only have an hour for lunch, that’s the way it works, and those people need to get in and out of the restaurant as quickly as possible.

My food finally arrived about ten minutes before one. I ate quicker than I really wanted to, and tried to get the attention of a member of the waitstaff so I could get my bill. Oh, and that water, which still hadn’t arrived. A few minutes passed, I was unsuccessful, and I was forced to leave without paying. This is the first time I’ve ever walked out of a restaurant without paying; even at my least happy with the food and/or service, I’ve never before had to do it. Some might argue that I shouldn’t have gone if I had a deadline, but I’ve always been of the opinion that a restaurant needs to be able to get a customer in and out within about thirty minutes. If you can’t manage that, you shouldn’t be open for lunch. Period.

This was, for me, the last straw. I will not be returning to the restaurant, and I’ve been vocal about being fed up with their lousy service. I already know others who were growing tired of Dupus Boomer’s, and I hope they figure out a way to improve, because a full-service restaurant in the heart of campus should be a much better idea than it’s turned out to be.

I almost feel bad going after this second business, Rodale Publishing. They publish a lot of health-related magazines and books, and honestly there seemed to be a lot to like about the Organic Gardening magazine that I’d ordered a free trial issue of for my wife. Please note that this was a Free Trial Issue, which in my experience has always meant that if I decided I didn’t want the subscription, they’d stop harassing me about it. Sure, I’d probably get a few statements in the mail, but they’d give up within two or three months. Since I wasn’t entirely sure when I placed the order that we would subscribe, I just put in for the free trial.

This didn’t happen with Rodale and Organic Gardening. Now, mind you, we did fully intend to subscribe to the magazine. It probably didn’t help that it was the middle of winter, but we just didn’t get back to Rodale right away. We continued to get statements about once a month, and I’d continue to remind my wife that she should send in a check if she wanted to subscribe (which she did). Then, we started getting bills marked with things like ‘final notice’, culminating in an actual letter from what appeared to be a collections agency (it didn’t quite pass the smell test, so we’re unsure if it was a real collections letter).

A Collections Letter because we hadn’t paid for a subscription on the back of our free trial issue? Are you serious?

Needless to say, what had originally been a subscription we just hadn’t gotten around to paying for yet has now become a magazine, and a publisher, that we intend not to do business with. Unfortunately, this means that Catherine is currently in the market for a gardening magazine that actively discourages the use of chemical fertilizers and pesticides. If anyone has any suggestions, please suggest away.

So, here are two companies that have lost my business. And I don’t doubt their reputations won’t be particularly tarnished by my telling of these experiences. In the case of Dupus Boomer’s, it was due to continued poor service on the part of the waitstaff. Having worked the back of house at a restaurant before, I sympathize with the problems a bad front of house can cause, but ultimately utter failure on both sides of that equation can cause major problems for a restaurant. As for Rodale… threatening me as a customer is the fastest way to cause me to drop my service with you. I’m not talking about cutting off my service because I haven’t paid; I earned that. But it’s like stores that have a no-backpack policy: I understand the reason for the policy, but I’ll still leave the store without buying anything if I’m asked to turn over my bag. Respect me as the customer, and if you don’t think you can for whatever reason, be prepared to lose me.

YUI Now on TaskSpeed Benchmark


These last two weeks involved some work by a few YUI community members and the YUI team to bring YUI to the TaskSpeed JavaScript benchmark. TaskSpeed is based on SlickSpeed; both are designed to test the CSS selector engines used by popular JavaScript libraries, but TaskSpeed adds basic DOM manipulation tasks in an attempt to determine the speed of each JavaScript framework for common operations.

I was asked to lend a hand with the TaskSpeed tests for YUI, and while I did contribute some code, the bulk of the work was done by Eric Ferraiuolo, with some performance cleanups offered by Luke Smith of Yahoo!. The YUI 2.7.0 version is already up on the TaskSpeed site, and we’re waiting on some fixes to land in the YUI 3 branch on GitHub, namely a few weaknesses surrounding YUI 3’s CSS Selectors that need to be addressed. The small amount of work I did was around the YUI 3 tests.

What I found greatly impressive was how well YUI 2 holds up compared to, say, jQuery. YUI has a reputation among a lot of web developers for being really bloated. At least with YUI 2, this seems to have little basis. The YUI-270.js file used in TaskSpeed weighs in at only 44K, easily the most compact of the frameworks (jQuery 1.3.2 comes in second at 56K). Not only that, but in several cases it significantly outperforms the other frameworks (to be fair, in a few tests it’s beaten out by jQuery handily).

This is probably one of the most significant comparisons of framework performance that I’ve seen, since it’s one of the only benchmarks I’m familiar with that specifically targets the frameworks, and not the underlying browser engine. Because of this, it serves as a good benchmark for deciding what kind of DOM operations are most important to your app, and which library will likely do best under those circumstances. On the other hand, the tests are somewhat contrived, as they usually involve a significant number of repetitions of a task, which is somewhat less realistic, but still interesting.

Ultimately, should you choose your JavaScript library based on this test? Probably not. All of the libraries featured in TaskSpeed perform fast enough that under normal circumstances you’d probably not notice major differences in application performance. More important is whether the library enables you to write what you consider to be ‘good’ code. Plus, these tests aren’t always consistent. I ran TaskSpeed on my workstation on Windows Server 2008 in workstation mode on Firefox 3.0.8, and on an Ubuntu 8.10 VMware VM on the same workstation on Firefox 3.0.3. The removeclass test for the qooxdoo 0.8.2 library took 20 ms in the VM, and 181 ms on my workstation. And yes, normally the workstation was 25-40% faster than the VM. I’m on a quad-core machine which isn’t doing a whole hell of a lot, so all I can figure is that the browser got swapped away from at an inopportune time or something.

It will be interesting to see how YUI 3 fares as some of the bugs in the framework (which is still alpha) get worked out, and the speed improves. YUI 3 is definitely a syntactic improvement over YUI 2, but I hope that the YUI team, and the community surrounding them (particularly now that code can be more easily contributed via GitHub), will work to make YUI 3 a faster and better product than YUI 2 is today. While I may be downplaying the significance of benchmarks in general, I do feel this one is significant because it shows that YUI really does hold its own among other frameworks.

Software and Copyright Law

One of the sessions I attended at Boise Code Camp this year was Brad Frazer’s talk on Copyright Law as it applies to software. This was an interesting session, at least in part because it was presented not by a software guy, but by a lawyer, which for many seemed to be a different take on the issue than they were used to. Having followed the Open Source world for so long, and having a mind which finds law somewhat interesting, I got the impression I was more prepared than many, but even then, it was interesting to hear Mr. Frazer’s take on these issues.

The discussion began with defining Copyright, beginning of course with the fact that Copyright is not a verb. You don’t “copyright” something. You can “create copyright” on something, like I am as I write these words right now. You can “register” copyrights, like I would if I sent the contents of my Blog to the Federal Copyright Office. But Copyright is not a verb. And Copyright can apply in interesting ways. These words are copyrighted because I am writing them in a tangible form. However, if I were simply delivering a lecture on these issues, and not actually writing it down, it wouldn’t be, because air is not a tangible medium.

So, anytime you write code and commit it to a tangible medium (i.e., your hard disk), you’re creating copyright around that material. However, who owns that copyright? This issue is a lot less clear. Generally speaking, when I create copyright, I’m the sole holder of that copyright. Even if I create code for a client, in a for-pay arrangement, that copyright is absolutely mine (unless, of course, I’ve assigned the copyright to them). However, if I write the code for my employer, the copyright belongs to them. So, the copyright to any code I write for my current employer is automatically held by Washington State University, and I have zero claim to that copyright.

This can be tricky, because if I were to get a job at, say, the University of Washington, doing the same things that I’ve done here at WSU, I could get my new employer in trouble by implementing code too similarly to how I’d implemented it at my old employer, because of the copyright that my previous employer holds on a particular method of implementing an idea. Which brings up another good point: copyright does not cover ideas. It only covers particular representations of ideas. So, if I develop a new algorithm, anyone can implement that algorithm without running afoul of copyright law, as long as they don’t implement it the same way I did. If I want to protect the idea, I’d have to patent it. Please, I’m not trying to start an argument about software patents (I happen to dislike their current state); I’m just making a point.

What was more interesting was Frazer’s claim that if you don’t register your copyrights, you’re basically completely unable to defend them. I wasn’t sure I bought that, so I said as much to Frazer on Twitter. He directed me to the US Code, specifically Title 17, Chapter 4, Section 411, which states that if you don’t register your copyright, you’re basically unable to sue to defend it. Huh.

What’s even more strange to me is how copyright applies to Open Source projects. Some Open Source projects, generally those which are corporately backed, do require Contributor License Agreements (CLAs), which generally contain a clause assigning your copyright to the project when you contribute to it. How many of those projects register these copyrights? I have no idea, though I’d be curious to see. However, I know that most projects never bother. When Frazer first talked about this registration thing, I got the impression that he felt that FLOSS was not defensible in court, a claim which I know to be untrue.

Frazer did also address this issue on Twitter, referencing the Federal Appeals Court case Jacobsen v. Katzer, which addresses the defensibility and protection of Open Source software rather well. In short, Open Source software is defensible in court, and don’t let anyone tell you otherwise.

Copyright is a serious issue, and one which has become increasingly confusing over the last several decades. If you really think you may need to (or hell, even want to) defend a copyright in court, make sure you register it. It’s likely to be the only way you’ll ever successfully defend it.

Beef Alternatives

A few weeks ago, there was a ton of discussion in the blogosphere about a recent study that suggested that grass-fed beef produced 50% more carbon emissions than grain-finished beef. With all the discussion regarding the environmental damage of the feedlot system, this news really surprised a lot of people. The reason, of course, was simple: grass-fed cows live longer, and therefore fart more. It doesn’t help that cows are particularly awful at the task of converting feed into meat.

Of course, in my opinion, looking at the CO2 and methane emissions of a single cow over its lifetime woefully oversimplifies the environmental picture of the food system. Consider the vast lagoons of animal waste so toxic that no one would dream of spreading them on food crops, the relatively high-acid rumens of grain-fed cattle which encourage the growth of acid-tolerant strains of bacteria (E. coli in particular), and the high levels of antibiotics present in feedlot beef. Grass-finished cattle, at least, tend to be raised in smaller herds and can therefore live in something closer to a natural ecosystem.

However, when compared to other animals, perhaps the real problem is simply stated: there are too many cows. Go to your local supermarket, and you’ll generally see three main kinds of land-based meat prominently displayed: beef, pork, and chicken. And it’s not uncommon for the beef section to be larger than the pork and chicken sections combined. Now, I love beef, and we eat a fair amount of it. Compared to chicken and pork (and the wide variety of other animals we could choose to eat), we eat more beef than any other meat. However, we are looking at converting our beef to locally raised heritage cattle (Belted Galloways), which will support a healthier polyculture in the beef world, as well as hopefully having less environmental impact than the meat-factory breeds favored by most cattlemen today.

Still, we need to be looking at other sources of meat as well. I’ve yet to find a local source of pork, but the same person who sells the Belted Galloways also sells chickens at reasonable prices. When we have our own house, my wife and I certainly plan to raise chickens (mostly for eggs), and we have also discussed raising rabbits for meat. The rabbit idea is particularly interesting, not because I’ve been told I would be the one responsible for the actual slaughter, but rather because it wasn’t terribly long ago that rabbit was a reasonably common meat. My father remembers raising rabbits for food when he was young, but my only experience with rabbit was in the form of a cassoulet I had at a Pullman restaurant over a year ago. Luckily for me (I think), my University offers a helpful document on raising rabbits in Washington State, which actually mentions raising rabbits to supplement your family’s meat supply. Not surprisingly, the document is originally dated 1914.

Like I said, most people these days have never really thought about these animals as food, I guess.

I understand a lot of people would have a mental block about eating rabbit meat; I would have some trouble being comfortable with eating dog or cat. For me, the greater point is that we need to diversify our meat production. Personally, I think it’s worth doing some of this raising on our own, but then I think that most people are simply too far removed from their food.

Above all, just look at different alternatives: rabbit, lamb, pork, game, and so much more. There are so many tastes out there that we’re potentially missing out on, and frankly, the more diverse our food system is, the healthier both we, and quite possibly our planet, are likely to be.

Daemon by Daniel Suarez

A few months back, I heard a few people going on about a book by a new novelist who just happened to be a software developer by trade. Suarez’s first novel is set in our world, but with a computer program, written by a dead genius, putting into action a plan he’d laid out before his death.

Daemon opens with Detective Sergeant Peter Sebeck, who works in a sleepy Southern California town, responding to a suspicious death at a mostly vacant lot. A programmer at CyberStorm Entertainment was going on his normal motorcycle ride through a field in Thousand Oaks, California, when suddenly he caught his neck, under his helmet, on a steel cable, cutting his carotid artery. As Sebeck investigates, another CyberStorm programmer is killed by an electrified door frame as he leaves CyberStorm’s server room. Something is clearly going on with CyberStorm, whose CEO Matthew Sobol has recently died of cancer.

It is soon revealed that Sobol wrote a Daemon, a piece of software which sits waiting for certain things to occur before taking particular actions, and this one is carrying out some unknown will of his. What’s most impressive is that Suarez writes each individual piece of the Daemon such that its actions seem possible given today’s technology. Could one person write a large-scale distributed process like this that infiltrated systems all over the world without anyone noticing? Probably not. But then, like I said, each individual piece is wholly plausible.

What’s really amazing is that, as technologically advanced as the Daemon is, using really advanced text-to-speech to communicate with people most of the time, its limitations are really apparent as well. The Daemon generally offers choices, but it can only accept Yes or No responses, and constantly through the story, people get confused and try to communicate with the Daemon as if it were another person, and it constantly has to remind them it can only accept Yes or No answers. Astonishingly, this idea, which is repeated often, never really gets annoying, because it’s hard to fault the characters for the error.

The only thing in the book that really required me to suspend my disbelief was Sobol’s amazing ability to have seemingly planned for a nearly infinite number of possibilities. While this didn’t ruin my ability to enjoy the book, at times it is a fairly painful source of disbelief. As the story goes on, the Daemon’s resources grow and the Daemon’s Operatives get access to some very, very cool technology, some of which I don’t believe is available today, and if any of it is, it’s clearly unreasonably expensive.

If I have one problem with this book, it’s that it’s clearly a setup for future novels. In fact, Suarez has another book, Freedom, coming out next year. Because of this, the ending sort of leaves you hanging. It’s basically a huge To Be Continued at the end of the novel. I’m fine with more books coming. Hell, I’m glad there are. But I really wish that the ending had been just a little more complete and satisfying for this novel. As it stands, Daemon doesn’t quite stand on its own, and I’d have been more satisfied had I felt better about the ending. Maybe if the last chapter had been an Afterword instead of a chapter, that small change would have fixed it for me…

Daemon is a great book, and a great first novel for a new author. It’s really nice having a novel that deals with technology in an immensely realistic way. Even the things that I’m not sure exist today had reasonable explanations based on modern scientific research. If you’re interested in a good modern thriller, I honestly don’t think you could do much better than Daemon. It’s well researched and well written, and it’s a fun read. Pick it up; it’s worth it.

Confusing Code And Data Is Dangerous

At Code Camp last weekend, I attended a session on the Spark View Engine for ASP.NET MVC (which was just released under the MS-PL!). Actually, there were a lot of things I liked about the look of Spark, and I might consider using it on future projects. I plan to talk a little more about Spark later, but this post is more about something I heard time and again from the presenter, which, while I understand why he was saying it, I consider to be a mistake, in that it promotes dangerous thinking.

The thing that bothered me came up when he started talking about handling JSON data. The example he presented involved a simple table of names that could be updated dynamically via JavaScript; essentially, he’d send JSON data which would be used to build new table rows. The JSON data resembled the following:

[
    { "firstName": "Bill", "lastName": "Paxton" },
    { "firstName": "John", "lastName": "Doe" },
    { "firstName": "Malcolm", "lastName": "Reynolds" }
]

The problem I had was that he kept referring to the JSON data as “JavaScript”.

Okay, I know that JSON is short for JavaScript Object Notation. I know that JSON can be eval’d inside of a JavaScript engine to produce the data object that it represents. But JSON is not JavaScript, and thinking that it is, is incredibly dangerous.

Why is JSON not JavaScript? For one, the JSON specification does not allow code; it is a pure data format. For another, while JSON may have originally been born out of the JavaScript world, it has grown into a common data representation used for data transfer between any number of languages. But most importantly, executing JSON without verifying that it follows the specification is amazingly dangerous. There is a reason why all the major JavaScript frameworks I’m aware of offer special JSON parsers. JSON is data, and treating it as code opens you up to any number of attack vectors.
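To underline the point that JSON is a language-neutral data format, here’s a hedged sketch of consuming the presenter’s kind of payload from C# using .NET 3.5’s built-in JavaScriptSerializer (the Person class here is my own illustration, not anything from the talk):

using System;
using System.Collections.Generic;
using System.Web.Script.Serialization; // System.Web.Extensions assembly

// The property names deliberately mirror the JSON keys from the example above.
class Person
{
    public string firstName { get; set; }
    public string lastName { get; set; }
}

class JsonIsDataDemo
{
    static void Main()
    {
        string json =
            "[ { \"firstName\": \"Bill\", \"lastName\": \"Paxton\" }," +
            "  { \"firstName\": \"John\", \"lastName\": \"Doe\" } ]";

        // A parser treats the payload purely as data: nothing in the string is
        // ever executed, it either parses into objects or it throws.
        var serializer = new JavaScriptSerializer();
        List<Person> people = serializer.Deserialize<List<Person>>(json);

        foreach (Person person in people)
        {
            Console.WriteLine("{0} {1}", person.firstName, person.lastName);
        }
    }
}

The same payload could just as easily be consumed from Python or Java; nothing about it is tied to a JavaScript engine.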

JavaScript is a Programming Language. JSON is merely a data format. Yes, JSON is a subset of JavaScript, but it is far more strict than JavaScript is. Use a library to parse your JSON, instead of just eval’ing it, and remember to keep a strong mental distinction between your code and your data. Just because they might resemble each other, doesn’t mean you should treat them the same.