October 2008 Archives

Short Hiatus

I know I’ve now missed two days’ worth of posts. At work we’re launching our Schedule Proofing Application next Monday, so we’re putting in long hours to get it done. I may still get some posts up later this week, but there are no guarantees at this point.

I want to thank my readers for bearing with me, and I guarantee I’m gathering material that should make for some interesting posts.

Language: Evolving or Devolving?

Over at Ten Pound Hammer, those guys seem really excited about Microblogging, so much so that they seem to be arguing in favor of the 140-character limit inherent in existing Microblogging formats (which, it needs to be pointed out, exists because the Microblogging idea was tied to Cell Phone Text Messaging). Now, don’t get me wrong, I enjoy Microblogging too; if I didn’t, I wouldn’t have my TWiT Army updates in the sidebar of this here blog. But I disagree with the idea that the 140-character limit is a ‘good’ thing.

The limit of 140 characters has forced normal people, bloggers, and everyone else to become more concise and innovate the way they communicate. Parents and teachers have been sounding the alarm about the decline of their children and students ability to write. I believe their ability is improving, evolving, and becoming more efficient. Why spend time writing out common phrases, when the point can be communicated and read far more quickly. The old mantra used to be a minimum amount of words or a minimum page size, I believe the twenty first century will be about efficiency and speed, not about arbitrary sizes and formality.

The above is straight from Ten Pound Hammer, and is the part I, in particular, take issue with. The idea is a simple one: as communication mediums evolve, so must the language used to communicate over them. The Text Message, and the Microblogs that rose out of it, the argument goes, have led to a period of rapid evolution of the English language, which the author of Ten Pound Hammer believes to be a good thing.

I will begin my argument by holding that the change in language used in Text Messaging has far less to do with the 140-character limit, and more to do with the poor text input available on mobile phones. Even with technologies like T9, which is common on most cellphones today, typing complete words is non-trivial, while T9 can be quickly trained to handle the shorthand that has become common in Text Messaging.

And shorthand isn’t a bad thing. Secretaries all used to know how to take dictation in shorthand, and Court Stenographers still use shorthand in taking down court cases. But that shorthand exists to fill a certain need, namely to transcribe the spoken word quickly into a written form, something at which standard English is admittedly weak. The shorthand developed for Text Messaging (itself derived from the shorthand used in Internet Chat Rooms for decades prior to the Text Message) developed for two reasons: first, to deal with the poor input, and second, the 140-character limit, which survives today merely as a strangling limitation from a day gone by.

But, to address the Hammer’s point more directly, is the 140-character limit making us better communicators? Absolutely not. The problem with Slang and Abbreviations, both of which excel in modern shorthands (but have been common in language forever), is that they are only directly accessible to people from a social background similar to your own. Sure, other people can understand, but it requires a slight context switch, making the language less effective. For a good example of the Slang issue, go watch Episode 1 of Series 11 of the BBC’s Top Gear, particularly the segment on “Building a Better Police Car”. Each of the three presenters (Clarkson, Hammond, and May) refers to the Police by a term unique among them, but common in the part of England they’re from.

Admittedly, you can usually figure out what the slang means, but it takes some time, may require inquiry, and altogether slows down the communication process. Even if you are familiar with the slang, the pattern-recognition parts of your brain still need to map the slang to the shared meaning. All the meaningful evolutions of language in the last century have generally been built around shortening the distance to that shared meaning. And technology has aided that: it is now easier to do the research necessary to figure out what is meant. Slang can travel farther and more effectively than ever before. People can communicate today with people they never would have dreamed of communicating with fifteen years ago.

Technology has gone a long way toward evolving language, making us all more capable communicators. But it has done this by enabling communication, not limiting it. Being concise isn’t about saving characters; it’s about communicating in a way that reduces the cognitive load on your audience. It’s about not repeating yourself, something a minimum page requirement actively works against. Writing is hard. Writing effectively is harder, and effective writing is about using enough space to say what needs to be said. Imposing arbitrary character limits on this is ineffective and counterproductive. Mostly, though, being concise is about expressing your idea fully, while making the fewest assumptions possible about the knowledge of your audience, and still expressing yourself well.

The 140-character limit imposed by Text Messaging and Microblogging is making us worse communicators, because it makes us think harder both to encode what we’re saying and to decode what we’re reading. It doesn’t allow effective communication, because we are forced to assume too much. Admittedly, this is fine for casual conversation, which is what Text Messaging and Microblogging were designed for. However, the way we communicate in formal contexts requires the formality that the Hammer seems to be against. Writing isn’t as expressive as Speaking, and it never will be, but it is far less ephemeral, and if we’re unable to communicate our ideas effectively in the written word, our ideas are far less likely to outlive us.

Minifying JavaScript and CSS with GNU Make

Yahoo!’s Exceptional Performance team has posted a series of guidelines for scalable, performant websites. Unfortunately, Yahoo! is the only major web company I’ve encountered to date that has codified these suggestions in this fashion, but if you look over the companies whose primary business is the Web, you’ll see the majority of these practices in use. If you want to test your own sites, Yahoo! has made available the excellent YSlow, which plugs directly into Firebug.

One of the key suggestions is to minify all your JavaScript and CSS. This serves the primary purpose of making the files as small as possible for sending down the wire. It might also improve the speed of interpretation of the JavaScript and CSS, but that difference will be negligible and hardly worth noting. Luckily, Yahoo! has also released a tool for this.

The YUI Compressor is a Java application built on top of Rhino. The Compressor processes JavaScript and CSS files and produces a far smaller version for distribution over the network. Most of the time, I’ve seen between a 20% and 50% reduction in file size, and that’s before applying gzip. Not bad at all.

The biggest problem I’ve had is that the YUI Compressor was not designed to batch together a bunch of files. I use a syntax-highlighter script for code samples on this site, but I found that it was larger than I needed. Needing to compress the JavaScript, I chose to use my familiar GNU Make to automate the process.

When I say familiar, I’m being a bit facetious. I’ve written Makefiles before, but GNU Make knows how to build certain types of files on its own, and I’ve never had to tell it how. Luckily, it’s fairly simple:
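The rule itself didn’t survive the archive, so here is a minimal sketch of what it likely looked like, assuming JSMIN holds the path to the YUI Compressor jar and MINSUFFIX is the suffix described below (the example paths are mine, not from the original post):

```make
JSMIN     = tools/yuicompressor.jar   # path to the YUI Compressor jar (example)
MINSUFFIX = -min
SRCS      = $(wildcard *.js)
MINS      = $(SRCS:.js=$(MINSUFFIX).js)

# Pattern rule: build foo-min.js from foo.js by running the compressor.
%$(MINSUFFIX).js: %.js
	java -jar $(JSMIN) -o $@ $<
```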

This runs the YUI Compressor (from the path saved in JSMIN) with a JavaScript file as its argument, writing the result to a file with a MINSUFFIX appended (I use -min, like YUI does). Now, when I build a rule like this:


minify: ${MINS}

All the JavaScript files get minified, but only if they’ve changed. I pair this up with a rule that concatenates the files after minification, but that’s optional. The rule for CSS is almost identical; just replace .js with .css.
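The optional concatenation step can be another small rule. This is a sketch, assuming the MINS variable from the minify rule above; the bundle name is my invention:

```make
# Concatenate every minified file into one bundle, for a single HTTP request.
all$(MINSUFFIX).js: $(MINS)
	cat $^ > $@
```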

It’s important in building applications to keep in mind the need for an automated build process. GNU Make works great for this, if you’re on Unix. Even though JavaScript is an interpreted language, a simple compilation step, like YUI Compressor, goes a long way to improving the performance of the website overall. Automating this process makes deployment much easier, and is therefore worth pursuing. The above Make rule should help, not only with ‘compiling’ JavaScript, but with using Make for other languages that it doesn’t support natively.

Custom ASP.NET Authentication: Claims-Based Authorization


This is part four of my articles on writing Custom ASP.NET Authentication. The first article served merely as introduction, but the second delved into writing a Membership Provider and what that entails. The third described building Role-Based Authentication.

Role-Based Authentication is great for many people’s goals in handling authentication, but often, particularly as an application grows, you find that you also need to control access to specific resources and not just activities. This is where the need for Claims-based Authentication rears its head. What’s important to understand is that you’re often going to end up using a mix of the two, with a leaning toward one or the other.

With .NET 3.0 and Windows Communication Foundation (WCF), Microsoft finally released a Claims-based authentication mechanism. To be honest, I hate it, but it’s there. As with many Microsoft technologies, I feel it’s unnecessarily complex, so while I’ll be talking about Microsoft’s Claims-based authentication, I’ll be talking about the one we’re using as well. I’ll hold my judgement on code-name Zermatt, but what’s currently available in System.IdentityModel is too much for me.

Essentially, you begin by declaring a Claim in your code. The claim is a data-type which represents a resource you want to control access to. The object is simple:

public class Claim {
  public string ClaimType;
  public object Resource;
  public string Right;
}

The ClaimType is a URI, which tells you what kind of Claim is being made. The Resource property identifies the resource being requested, and the Right is either custom defined or from the System.IdentityModel.Claims.Rights class to determine what the user wants to do, but we’ll get there in a minute. So far, fairly simple.
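As a concrete sketch, constructing a claim looks something like this; the ClaimType URI and resource string are invented examples, while the constructor and the Rights class are real parts of System.IdentityModel.Claims:

```csharp
using System.IdentityModel.Claims;

// A claim saying the subject wants PossessProperty rights over a course resource.
// The claim-type URI and the "CS-101" resource are illustrative, not from the post.
Claim courseClaim = new Claim(
    "http://example.com/claims/course",  // ClaimType: a URI naming the kind of claim
    "CS-101",                            // Resource: the thing being requested
    Rights.PossessProperty);             // Right: from System.IdentityModel.Claims.Rights
```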

Next, we move into ClaimSets (this is where I feel things fall apart):

public abstract class ClaimSet : IEnumerable<Claim>, IEnumerable {
  public abstract ClaimSet Issuer { get; }
  public abstract Claim this[int index] { get; }
  public static ClaimSet System { get; }
  public static ClaimSet Windows { get; }
}

Okay, the System and Windows ClaimSets are provided by Microsoft: Windows represents Windows Security, and System represents OS-level Claims. Depending on your application, these sets may or may not be important, but it’s the Issuer that’s most likely to matter. The Issuer can be used to determine the source of a claim; for instance, a Web Application and a Web Service could be different Issuers.

Since the above class is abstract, you either have to derive a new class from it or create a default ClaimSet to test against. Ultimately, this is going to require a fairly complex custom structure to ensure that your claim sets are accurate to whatever you’re trying to accomplish in your application.

In order to test your claims, you call FindClaims against your ClaimSet with the claim type and right you want to test for, and you can use that information to handle authorization. The complexity, and it’s painful complexity, is in building the ClaimSet. I’m not wholly convinced it’s worthwhile, which is why I went a slightly different route to solve this problem. If you are interested in Microsoft’s Claims-based Authentication, MSDN Magazine has an article on it this month.
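A sketch of what that check looks like; FindClaims is the real ClaimSet API, but the claim-type URI, the resource value, and the claimSet variable are assumptions for illustration:

```csharp
using System.Linq;
using System.IdentityModel.Claims;

// Assuming 'claimSet' is a ClaimSet you have already built for the current user:
bool canAccessCourse = claimSet
    .FindClaims("http://example.com/claims/course", Rights.PossessProperty)
    .Any(c => (string)c.Resource == "CS-101");
```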

For our purposes, I built Claims-based Authentication into our Role-Based code by adding methods to our custom Principal. For us, we need to control access to Courses, Prefixes, and Terms, as well as determine if the user is authorized for a given action on a given Course, Prefix, or Term.

I’m not going to bother with code for this, as it was implemented purely by adding new methods to our implementation of IPrincipal. It works great, allowing us to verify our permissions, as well as check those permissions so we can send the permission set to our client. The only downside is that we need to explicitly cast our Principal object in order to access our custom methods. Still, it provides the ability to test claims without diving into the complexity of the ClaimSet implementation Microsoft is currently providing.
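To illustrate the casting downside just mentioned, here is a hypothetical sketch; the CoursePrincipal type and CanEditCourse method are invented names standing in for our custom IPrincipal implementation:

```csharp
using System.Web;

// The custom principal must be cast explicitly to reach the claim-checking
// methods, since IPrincipal knows nothing about them.
// CoursePrincipal and CanEditCourse are illustrative names, not real APIs.
var principal = HttpContext.Current.User as CoursePrincipal;
if (principal != null && principal.CanEditCourse("CS-101"))
{
    // Render the course-editing UI.
}
```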

Authentication and Authorization are incredibly important steps in any web application, and Microsoft has done a pretty good job of providing a framework that is very customizable, while being fairly hard to break in an incredibly dangerous manner. However, I had been unable to find very many resources for doing this work, which I suspect many people will find themselves requiring. I hope that this series helps someone, as I know it would likely have helped me.

Sustainable Living: Co-ops

Cooperatives have been around for centuries as a means for people to work together democratically in a business context to serve some common need of their members. They’ve been common in agricultural areas as a way for farmers to combine their resources to get better prices at market, and are still quite common in that role.

This is not the kind of Co-op I’m talking about today, but it’s a similar idea. Many areas have begun forming Grocery Co-ops for people who are interested in buying local, organic, and sustainable food. For us, in the Pullman-Moscow area, we’ve got the Moscow Food Co-Op, which is now in its 35th year of operation. In Bozeman, we had a co-op that opened around 2002, but I like the Moscow one far more.

Co-ops fill an interesting place in the community. Yes, they’re specialty grocers who carry a lot of organic foodstuffs, but more than that, they often serve as gateways into the community. Moscow’s Food Co-Op offers live music, gourmet food, community-driven cooking lessons, and access to local producers.

Admittedly, I don’t buy into the “Organic” keyword the way many people do. I prefer food produced in a more natural way, with less chemistry: non-hormone-pumped beef, raw milk, fresh fruits and vegetables, etc. I love it, but unfortunately, the laws regarding ‘certified organic’ have changed in recent years such that many growers who grow in what most people would consider an ‘organic’ method can’t get certified, because the paperwork, fees, and other expenses are too high. The ‘organic’ label has been taken over by large farms that can more easily afford the extra certification, which has also driven up the price of ‘organic’ food. Talk to your growers, read food labels, and don’t just believe the hype surrounding the words. Organic has become a marketing term, and organic food isn’t necessarily good for you. Hell, tobacco can be grown organically, without pesticides and often without chemical fertilizers, but it’s still not good for you.

But I digress. Co-ops are great, and definitely worth checking out. We buy most of our bread at our Co-op, and some of our snack-type food, since it tends not to have high-fructose corn syrup, MSG, and various other chemicals we’d like to avoid. We’re planning to start getting more involved in the community activities, as we’ve been members of our co-op since June (it was $10, and we get every 11th loaf of bread free; it’s worth it). Plus, it’s the only place in the area we can get non-homogenized whole milk. No raw milk, unfortunately; while it’s not technically illegal in Idaho, the state hasn’t licensed any Raw Milk producers in a very long time, and the Washington producers are closer to Spokane. Actually, I haven’t done a Milk post yet, so that’ll be coming.

Anyway, even if you’re not into the whole ‘Organic’ thing, there is a very, very good chance that your co-op offers something you can’t get at your normal grocery, or stuff that may well be higher quality. Plus, the community focus of most co-ops is important to living a sustainable lifestyle. Feel yours out; it may well be worth it for you.

Identica Statuses Widget


A while back, I wrote about Laconica, the microblogging software used to power Identica and the TWiT Army, among other sites. If you want a breakdown of Laconica, please see my previous post.

When the TWiT Army first launched, Laconica was at version 0.5, which was missing an undocumented feature of the Twitter API, namely the ability to specify a callback function for JSON-returning API calls. I filed the bug and submitted a quick, ugly, hacky patch, which was turned into a far better patch and committed into the 0.6 codebase. I’m not upset; my PHP foo has slipped a bit in the last year, I didn’t have a Laconica instance to test against, and their solution was better.

Anyway, as readers of my blog are no doubt aware, once I got bitten by the Twitter bug, I put my Twitter statuses in my sidebar. Well, once I discovered Laconica, which is like Twitter, but more free, I moved almost entirely to the TWiT Army. Not entirely, mind you, since there are still people I’m interested in following on Twitter, but this idea of many microblogs that can all communicate is really intriguing to me, and I am pleased to be able to follow it.

So, since I’ve converted to the Army, I wanted to be able to display my Army posts, what we call μs, on my blog. I used the Twitter model as a starting point, but I’d like to think I’ve improved upon it somewhat. I built a JavaScript object, Foxxtrot.Widget.Identica.

I named it Identica for two reasons: first, Identica is the most popular Laconica instance; second, Laconica is going to be renamed Identica, similar to how Wordpress can refer to a piece of software, a company, and a service. It’s kind of confusing, but ultimately it’s easier than trying to explain the difference between the two.

The object also depends on YUI 3, partially to familiarize myself with that library, and partially to spread YUI. Including YUI is simple; just include the following in your page before my object:

<script src="http://yui.yahooapis.com/3.0.0pr1/build/yui/yui-min.js" type="text/javascript"></script>

I would ask that you don’t leech this script from my site, as I don’t really have the extra bandwidth, but I will be making available a minified version of it at some point in the future.

Using the script is easy. After loading YUI and Foxxtrot.Widget.Identica, do something like I do on this very page:

   { count: 3, user: 'foxxtrot', service_url: 'http://army.twit.tv' });

The first argument is either a DOM reference or a CSS3-style selector that YUI can use. It should be a reference to a list element; I identify mine by DOM id (armyupdatelist), and I suggest you do too. The second argument is wholly optional, but it provides configuration details. Currently, this includes the following:

  • count: Number of status updates to retrieve (default: 5)
  • user: User name of the user to pull statuses for. If this is not specified, we’ll pull back the Public Timeline
  • service_url: The base URL of the service you’re pulling from. Defaults to Identi.ca
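To make those options concrete, here is a standalone sketch (not the widget’s actual source; the helper name is mine) of how they map onto a Laconica/Twitter-style statuses URL, following the /api/ convention the widget assumes:

```javascript
// Build the statuses API URL from the widget options (illustrative helper,
// not part of Foxxtrot.Widget.Identica itself).
function buildStatusUrl(opts) {
  var service = opts.service_url || 'http://identi.ca'; // default service
  var count = opts.count || 5;                          // default count
  var path = opts.user
      ? 'user_timeline/' + opts.user + '.json'          // a specific user
      : 'public_timeline.json';                         // the Public Timeline
  return service + '/api/statuses/' + path + '?count=' + count;
}
```

With the configuration shown on this page, `buildStatusUrl({count: 3, user: 'foxxtrot', service_url: 'http://army.twit.tv'})` would point at the TWiT Army’s user timeline endpoint.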

I do assume that the API is hosted under /api/ under the service_url. If this is not the case, let me know, and I’ll add an option for that behavior.

So, once you’ve done that, the code will automatically build the Status Update List, and fill the list you’ve provided a reference to. It works well, but it’s not perfect yet.

Known Issues:

  • Doesn’t link directly to users you’ve directed a comment at (@)
  • Doesn’t convert URLs to Links
  • Doesn’t print out the full date for very, very old updates (not sure this is necessary)

Features I want:

  • I plan to put the TWiT Army logo in the spot where the μs go, and then fade it back using YUI before putting the text over it. The image under the updates will be set in CSS.
  • I’m considering doing periodic updates if the user stays on the page. This would be optional, and off by default

That’s all I’ve got for now, but if anyone has any suggestions, please post them in the comments, and let me know who is using it where. The code, like Laconica, is licensed under the GNU Affero General Public License.

Neal Stephenson's Anathem

I’ve been a fan of Neal Stephenson’s since I first picked up Snow Crash. His books have all been excellent sci-fi adventure stories, where he creates a compelling world to set the action in and fills it with interesting characters. Unfortunately, many authors miss one or the other of those points, so I feel it’s important to stress. Even The Big U, which was probably the weakest of his novels, did a good job with both sides of this equation. So, when Anathem was announced, and the trailer was released, I was excited.

Now, I had almost expected that this novel would follow in a similar theme to Cryptonomicon and the Baroque Cycle, in that it would be historical fiction centered around the Shaftoes and the Waterhouses. While my understanding is that Stephenson intends to revisit that timeline, I was pleasantly surprised to find that Anathem had nothing to do with it; the book is fresh and new.

So what is Anathem? As the trailer says, it takes place on a World that is not Earth, in a Time that is not Now (okay, so that’s a thinly veiled rehash, luckily, I don’t think Stephenson wrote it). It takes place on a world called Arbre, which is similar to ours, or rather, it appears it was similar to ours at one point. At some time in the distant past on Arbre, the average joe became too afraid of Science, and what was happening in the world, so they took all the smarter-than-average people (and orphans) and crammed them into huge citadels, known as Concents, where these people, who became known as the Avout, were free to pursue their scholarly desires, outside of the control of the outside (Saecular) world.

Further, the Avout had separated themselves, based on dedication as well as taint by the Saecular world, into four sects. One sect can leave the Concent every year, and is similar to a college in that wealthier, educated people tend to come for a while to study and then return to normal life. Another leaves every decade, another every century, and one every millennium. Further, these groups can only intermingle during a ten-day yearly celebration (and only if their group would be able to leave the grounds in the first place). The story follows one young avout, Erasmas, and begins the day before the new year in 3890. Erasmas, being a Decenarian, will be able to leave the Concent for the first time in ten years.

The world inside the concent (Intramuros) seems very monastic. Everyone wears robes, they study and discuss, they sleep in cells (rarely in the same cell from night to night), they grow their own food, and take turns at the chores necessary for the running of the concent.

The world outside the concent (Extramuros) seems awfully similar to our world of today, with bright screens, cell phones everywhere, and huge interest in movies and television. The only difference is that many of the people who would do science are hidden away, where the public feels they can do no harm. At one point, Erasmas comments on how very little progress has been made Extramuros in the nearly 4000 years since the Avout were locked away from society.

However, the world needs its educated people from time to time, and as this would be an awfully boring novel otherwise, the book is centered around one of those times. The Saecular world becomes aware of an alien spaceship orbiting Arbre and calls for a Convox, a gathering of Avout from across the globe, to determine what to do about it. Erasmas and his friends are called into this Convox.

The book, like Stephenson’s others, takes a fascinating direction in that it occasionally digresses into mathematical proofs and the nature of the universe. Unlike in Stephenson’s other novels, some of the longer proofs, which are generally less important, are put into the appendix; however, I would suggest reading them. It’s a peculiarity of Stephenson’s: whereas Tolkien would take a dozen pages to describe a landscape and some bizarre historical tidbit with little relevance to the story at hand, Stephenson will occasionally do that with mathematical proofs. While some of the exposition can seem tedious, it’s absolutely worth following it through for the story.

Anathem is not Stephenson’s best work; for that, I’d probably have to say The Baroque Cycle. But the Baroque Cycle was three novels, each the length of Anathem, and that may well have something to do with it. However, while it’s not his best, it ranks easily on the high end of my scale of Stephenson’s novels. It’s a great read, an interesting world, and the philosophical questions it asks about the nature of the universe, and how we can describe it, are interesting.

The climax of the story has a rather frustrating element surrounding a Millenarian by the name of Jad, but while it feels as if there were loose ends around that character, it was clearly done purposefully. Stephenson loves to introduce characters who have a seemingly mystical understanding of the Universe, and he never goes as deeply into such a character as you might wish. In Cryptonomicon and the Baroque Cycle, it’s Enoch Root; here it’s Jad. Many will likely interpret these characters as having traces of the Divine; it’s certainly easy to do so.

I’m not sure that’s the only interpretation. It seems to me that Stephenson is arguing for the ability of Man, if Man could completely understand the universe, to possess a level of control over the Universe that exists outside of technology. As if, by possessing a certain level of understanding, Man could be as God. To go Biblical with the idea: Adam and Eve, in most translations I’m familiar with, ate but a single bite each of the fruit of the Tree of Knowledge, giving them some understanding of the world. The serpent had told them that to eat of the Tree of Knowledge would make them as powerful as God. The little bit of knowledge they’d accepted had scared them, and they stopped, but if they had kept going…

Even if you’re not interested in the Math, Science, and Philosophy presented in the novel, the story is still a fun story, with interesting characters. But the meat of the story is definitely in the questions it asks, or perhaps more so in the questions it chooses not to ask directly.

Silverlight 2.0 RTW

Silverlight 2, Microsoft’s attempt at a Flash killer, was officially Released to Web today, almost two full weeks ahead of when I expected to see it. This is great, in that all that RC0 code we’ve been working on can go public. It is horrible in that the API is now locked in, broken as it is. Literally dozens of bugs have been reported against RC0, many of which are strange, broken behaviors. For instance, if you have a DataGrid and want a control in the DataGrid to be directly editable (like a checkbox), you have to wrap it in a special content control; otherwise, the user has to click on the checkbox twice to get the expected behavior.

This is just the tip of the iceberg really. And, there is a good chance many of these design decisions won’t be corrected.

This is the last time Silverlight will not be backwards incompatible - from now on when we update, your code should continue to work. (at least, that’s the plan)

That’s from the official release guide. And nowhere in that guide, nor in ScottGu’s announcement, is any mention given to any fixes between RC0 and RTW. Fantastic.

Silverlight has some nice things as a platform, particularly if you’re already a .NET Developer, but I’m not convinced the platform was ready for release. There are too many issues, too many obscure workarounds. Hell, even if they’d taken until PDC as I’d expected them to, I can’t be sure that I’d consider the product complete.

I won’t call Silverlight DOA. Flash has always had some issues from a developer standpoint, and the ability to use .NET languages and the Dynamic Language Runtime is attractive. But I’m not convinced that Silverlight will ever live up to its promise. It lacks many controls that I don’t think should have been sacrificed in the name of a small download size, and it’s got a frightening number of eccentricities to work around. For many, I’m not sure there is enough here to really give Flash much to fear, particularly because people are far more likely to trust Adobe (perhaps mistakenly) than Microsoft.

There is some interesting work being done in Silverlight, and will continue to be, but I wonder if Silverlight will ever be much more than a niche platform in a niche market. So much better to use JavaScript, DOM, and the like when you can.

Sustainable Living: Beets


As we wind down Gardening (and thus Canning) season, there is still a bit more to discuss on these issues. This week, I’m going to be talking about Beets. Personally, I’ve never been a big fan of beets, but Catherine loves them, and so we planted a bunch at the garden this year. And boy, did we get some monsters. Below is a photograph of the biggest one we got, sitting next to my 8” Chef’s Knife.


I’ve talked about Beets before, briefly, in the post regarding Kvass, but with this batch of beets we opted for a different approach. Beets get their deep red/purple color from betalain pigments, unlike most red/purple plants, which get theirs from anthocyanins. And beets vary: we planted two varieties, one a deep purple, the other more red with some white striping. Incidentally, the deep purple was far, far easier to peel, but we’ll get there in a few moments.

Just as important as the beets are the greens coming out of their tops. If you want to harvest these (and you really should), you’ll need to cut them off as soon as possible. They can be cooked and served like spinach, or served raw in a salad. The danger here is that, while the roots can remain at room temperature for a fair while (particularly if that room is a root cellar), the greens should be washed and refrigerated quickly. Unfortunately, we lost a lot of greens because we were busy last week with the tomatoes.

For preserving all the beets, we opted for pickling, specifically canned pickles. To start, we washed the beets as thoroughly as we could (root vegetables get dirty, go figure). Then I chopped them into smaller pieces, roughly equal in size to the smallest beets we had (and we had some small ones; thin out your beets when you plant from seed, people!). I put them in simmering water for about thirty minutes. This was tricky, as even the biggest pot I had could barely handle all those beets and water. After about thirty minutes, I dumped out the water, poured ice into the pot, and refilled it with cool water.

While I should have begun peeling immediately, Catherine got home with food for dinner, so we sat and ate for a bit before getting back to work. Once we finished, Catherine started peeling the beets, while I began chopping them into roughly half-inch pieces before packing them into jars. We got 5 quarts and 7 pints packed. Quite a lot of beets. With that done, I began warming the jars and making the brine. The brine was a simple one:

  • 3.5 cups Vinegar (I used Apple Cider)
  • 2.5 cups Water
  • 2 cups Sugar
  • 1 tbsp Allspice
  • 2 cinnamon sticks

Bring all that to a rolling boil, then drop the temperature and simmer for 15 minutes. This was enough for about 3 quarts, so I did have to scale the recipe up a bit, but scaling is fairly easy. Once the jars were warm and the brine was ready, I pulled the jars off the heat and filled them with the brine. Beets and brine should leave about 1/4” of head-space in the jars. Once all the jars are filled, put hot lids on them, screw them down, and drop the jars into boiling water to heat-process for about 30 minutes.

Two days later, the contents of the jars are a deep, deep purple, and we’ll be leaving them to age for about a month before we start to open them, but this should result in a slightly sweet pickle that will be good in salads or just as a snack. Plus, the pickling should fortify the vitamin C content, and beets have about nine times as much protein as fat, if you concern yourself with such things, making them an all-around healthy food. In Australia and New Zealand, they use pickled beets on burgers the way we use tomato or onion; that may well be worth trying if, like me, you’ve never been the biggest fan of this red root.

Cory Doctorow's ©ontent

Cory Doctorow is an interesting author. He’s just one of many people trying to make a living writing Science Fiction, and while he’s not my favorite author of all time, I have enjoyed his books. What makes Doctorow interesting is that he’s one of the very few people seeking to make a living in content production who seem to ‘get it’. Every book Doctorow has ever had published has also been released for free on his website. Not only that, but they’re all licensed under a CC-BY-NC-SA-US license.

What’s this mean? It means that we’re free to share, remix, and create derivative works of any of Doctorow’s books, as long as we attribute the original work as being his, don’t use our creation to make money, and license our contribution under the same license. If you’re not familiar with what the Creative Commons is, and why it’s important, the FAQ is a good place to start.

©ontent is a collection of essays, editorials, and op-eds that Doctorow has written for various publications in recent years, in which he addresses the issue of Content Production in the Internet Age. The point he makes is simple: the way the major producers are going about this is all wrong. I happen to agree. As I said, Doctorow has released free, remixable eBooks of all his books online. Those books have had something like a 30:1 download-to-sale ratio. On the surface, that looks kind of bleak. In reality, Doctorow argues, every last one of his books has outperformed the sales expectations of his publisher, so sales are better than he had any ‘right’ to expect.

The most relevant question, and one there is no way to answer, is whether this would have been true without the free downloads. I agree with Doctorow’s conclusion that it most likely wouldn’t be. Book actuaries have made careers out of estimating how many copies of a book to print based on statistical models of how well it is likely to sell. It’s their job; it’s what they do. I’m sure these guys absolutely love being wrong from time to time, but for them to have been consistently wrong…it’s statistically unlikely, unless there is some positive correlation between the free eBooks and the paper book sales.

The point that rises time and again throughout ©ontent is simple: Content Producers need to stop counting content ‘piracy’ as lost sales. People who pirate media, more often than not, never would have paid for it anyway. As for the 30:1 ratio, Doctorow views it more like browsing: if 30 people picked up his book in the bookstore and one of them actually bought it, the publisher would be ridiculously ecstatic. And they are. Music is a similar story. In the early days of Napster, an enormous amount of music was traded, but what doesn’t usually come up is that music sales were very strong during that period. Sure, not every download was matched by a CD sale, and there may have been more downloads than sales, but not everyone who looks at a CD in the store is going to buy it either.

I’ve had this happen before. I began downloading Firefly back in 2004, and after watching one or two episodes, I ran out and bought the DVDs, at least partially because they’re reasonably priced. A lot of other shows, I’ve downloaded to watch, but I haven’t bought because I believe the price is too high for the physical media, and if the digital version is available at a reasonable price (rare), it’s got DRM. I’m a Linux user, I don’t have access to a legal means to view that media, so I don’t buy it.

What Doctorow argues is that DRM damages the industry as a whole. DRM is not hard to bypass for users who wish to, and users who want to do the right thing and pay for media resent the seemingly arbitrary limitations put on their freedom to use it where they wish. I see this argument regularly:

“But DRM doesn’t have to be proof against smart attackers, only average individuals! It’s like a speedbump!”

While average individuals may not be able to break DRM on their own, they are generally smart enough to search Google for tools to do the cracking for them. More to the point, the ‘average individual’ is an honest individual, not interested in ripping anyone off. They’re perfectly fine with paying for media, and will do so provided the media is good enough and the restrictions on its use are reasonable. Unfortunately, those two caveats rarely apply.

I’m going to have more posts over the next few weeks on the issues of DRM and Trusted Computing, but I’d strongly suggest reading Doctorow’s book. Not sure you want to buy it? It’s freely available. If you like it, buy it, or do what I’m doing and donate a copy to your local library. It’s a good collection of essays, and I’d suggest that anyone interested in the logistics of Content Publishing and DRM read through it.

Custom ASP.NET Authentication: Role-Based Authorization

This is part three of my articles on writing Custom ASP.NET Authentication. The first article served merely as introduction, but the second delved into writing a Membership Provider and what that entails.

Now that you have a Membership Provider written, you have a method to authenticate and manage users. However, being able to authenticate the user is only half the problem. If you’ve had to customize the login process, odds are you’re also going to need to customize the authorization process.

The most common means currently to do this is to separate the concerns out into “Roles”. For instance, some users are Administrators, some users are Customers, some users are Salespeople, etc. Users can fit into multiple roles, but ultimately what they’re allowed to access is based on the Roles they’ve been assigned. This works well for authorizing users to perform certain activities, but it doesn’t really define which resources they have access to. We’ll get to that later; besides, for many cases, basic Role-based Authorization will work fine.

In ASP.NET this process is driven primarily by two Interfaces: IPrincipal and IIdentity.

The Identity is the interior class, so we’ll start there. The Interface is simple, comprising merely three Read-Only Properties: a string Name, a string AuthenticationType, and a boolean IsAuthenticated. For us, this was not enough, as the Name field (which ASP.NET generally uses for the user name) is not what we use internally to identify the user. Luckily, we can freely add new Properties, since the rest of the API only requires the object to be an IIdentity.

So, what do these values mean? IsAuthenticated is generally set to true any time a user has given a valid username and password. The property appears to act as a sort of gatekeeper into the rest of the class. In our Application, before I cast the IPrincipal into our application-specific type, I test IsAuthenticated, because I know that if it’s false, I haven’t set the user up with the correct Principal.

As for the AuthenticationType, I typically leave it as whatever the base authentication mechanism was. For our purposes we’re using Forms-based authentication, so we end up saving this as “Forms”. I never use this variable for anything, and I’m unclear if the framework does, but I’m fairly sure it’s important to leave this value as what it’s defined as in the Web.config. If nothing else, consistency is king, particularly in a system as dense and opaque as ASP.NET.

The name, I set to the Username for the user, but only because that better matches the name (and datatype) of the property. Internally, I still use the numeric ID which we use to uniquely identify our users, and that is what the Identity takes in the constructor. The name is merely calculated on each access of the property.
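Putting the above together, here’s a minimal sketch of what a CustomUserIdentity (the class used in the Global.asax snippet later in this article) might look like. The LookupUserName helper is hypothetical, standing in for however your application maps the numeric ID back to a username:

```csharp
using System.Security.Principal;

// Minimal sketch of a custom IIdentity. The constructor takes the numeric
// user ID; Name is calculated on each access of the property.
public class CustomUserIdentity : IIdentity
{
    private readonly int _userId;
    private readonly bool _isAuthenticated;
    private readonly string _authenticationType;

    public CustomUserIdentity(int userId, bool isAuthenticated, string authenticationType)
    {
        _userId = userId;
        _isAuthenticated = isAuthenticated;
        _authenticationType = authenticationType;
    }

    // Extra property beyond what IIdentity requires; this is what the
    // rest of the application uses to identify the user.
    public int UserId { get { return _userId; } }

    public bool IsAuthenticated { get { return _isAuthenticated; } }
    public string AuthenticationType { get { return _authenticationType; } }

    // Calculated on each access. LookupUserName is a hypothetical helper
    // standing in for a query against your user store.
    public string Name { get { return LookupUserName(_userId); } }

    private static string LookupUserName(int userId)
    {
        return "user" + userId; // stand-in for the real lookup
    }
}
```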

The second interface, and arguably the more important, is the Principal. It’s even simpler: it contains only the Identity (again, Read-Only), as defined above, and an “IsInRole” method, which takes a string and returns a boolean. This makes using the class very easy. Need to know if the user is an admin?

if (User.IsInRole("administrators")) {
    // Do stuff for Admins
}
That simple. How you determine whether a user is in a role is up to you. Active Directory Authentication checks AD groups for membership; for our purposes, we look in a database to determine whether a user fits a certain role. You can define as many roles as you’d like.
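For comparison, here’s a sketch of what a matching CustomPrincipal might look like. The in-memory role set is an assumption standing in for the database lookup just described; the constructors here are illustrative, not the exact ones from our application:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Principal;

// Sketch of a custom IPrincipal: wraps an Identity and answers IsInRole.
public class CustomPrincipal : IPrincipal
{
    private readonly IIdentity _identity;
    // Assumption: roles preloaded into a set. A real implementation would
    // query the roles table in the database instead.
    private readonly HashSet<string> _roles;

    public CustomPrincipal(IIdentity identity)
        : this(identity, new string[0]) { }

    public CustomPrincipal(IIdentity identity, IEnumerable<string> roles)
    {
        _identity = identity;
        _roles = new HashSet<string>(roles, StringComparer.OrdinalIgnoreCase);
    }

    // Read-Only, as the interface requires.
    public IIdentity Identity { get { return _identity; } }

    public bool IsInRole(string role)
    {
        return _roles.Contains(role);
    }
}
```

Note that because IsInRole is part of IPrincipal itself, calling it never requires a cast to the custom type; only custom members do.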

This interface can be extended as well (obviously), but that discussion will wait until next time, when we talk about Claims-based authentication. Right now, there is one very, very important caveat regarding using your custom Principal and custom Identity. Namely, ASP.NET will not remember them between page loads. The ASP.NET login cookie doesn’t contain information about the user’s Principal, so the next time the user hits your page, the system will view them as nothing more than a simple forms-authenticated user with a basic Principal and Identity that fails to give you access to the methods you’ve defined.

This is simple enough to work around in your Global.asax file.

protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
{
    if (User.Identity.IsAuthenticated && User.Identity.AuthenticationType == "Forms")
    {
        var ident = (FormsIdentity)User.Identity;
        var userInfo = Membership.GetUser(ident.Name);
        var principal = new CustomPrincipal(
            new CustomUserIdentity(int.Parse((string)userInfo.ProviderUserKey),
                User.Identity.IsAuthenticated, User.Identity.AuthenticationType));
        HttpContext.Current.User = principal;
        Thread.CurrentPrincipal = principal;
    }
}

Since ASP.NET so helpfully recasts your Identity into one of the base types, you’re basically stuck doing this on every page load. First we take the current Identity, which has been helpfully made a FormsIdentity; then we get the UserId from the Membership (remember, we made that the ProviderUserKey); then we declare our new Principal, giving it a new Identity as an argument. Since we did use Forms authentication, we use the values defined in the current Identity object to fill in parts of our new Identity object. If I wanted to allow non-Forms-based authentication, that would be possible as well: just don’t check the authentication type, leave the Identity an IIdentity, and you’ll be fine. For my purposes, non-Forms-based authentication could be a problem, so I simply disallow it.

Once you’ve declared the new Principal, replace the default Principal with your custom one and you’re done. The only potential caveat is that any attempt to use your custom methods will require a type-cast into the custom type, since the interface doesn’t define your custom members; but if all you want is the IsInRole method, no cast is needed at all.

This system isn’t horrid; my only real complaint is that ASP.NET fails to remember what I’ve done between page loads. I know the web is stateless, but with all the other work Microsoft put into making ASP.NET seem stateful, this seems like quite the oversight.

Microsoft, You're Killing Me

While I want to say that Microsoft really had a good thing going with the Gates-Seinfeld ads, which I was kind of sorry to see die, Microsoft keeps putting out promotional videos that make me want to cry.

Some of you may remember the Bruce Springsteen inspired music video of a few months ago, and all I’m left with is the same reaction I had then. WTF Microsoft?

Sustainable Living: Green Tomatoes

A combination of unfortunate weather and late planting conspired together this year to leave a lot of unripened tomatoes on our various vines. All told, when we went out on Saturday to pick the last of the fruit from the vine, we picked over 30 pounds of tomatoes, the vast majority of which were ‘green’.

So, what do we do with over 20 pounds of green tomatoes? They’re unsuitable for any normal application of tomatoes, since they’re more tart than sweet (raw they taste sort of like apples). While one of the ladies we ran into at the garden suggested Green Tomato Pie, we weren’t quite daring enough to go that route. Instead, Catherine spent some time coming up with a few recipes: Green Tomato Ketchup, Sweet Green Tomato Pickles, and Sour Green Tomato Pickles. Between those, we’re going to be rolling in Green Tomato for quite a while.

For the Green Tomato Ketchup: chop tomatoes and some white onions, layering them in a pot and adding salt to each layer. Let this soak overnight. The next morning, pour off the liquid and rinse everything well; it’s incredibly salty at this point, and you don’t want all that salt in the final product. Add vinegar and sugar to the mix and boil; the onion and tomato will break down. Then just can and go. Green peppers, onion seed, coriander, and garlic can all make welcome additions, but as with most things, I’d suggest starting basic before you experiment too much.

For the pickles, Catherine chose to use our green Yellow Pear tomatoes, at least partially because my Mom is such a fan. Actually, the recipe for these is almost identical to the ketchup’s, though allspice apparently makes a welcome addition to the herbs added after the brine. Also, you’ll want to cut the tomatoes larger; with our Yellow Pears and Gold Nuggets, we just halved them and left it at that.

Gardening and Sustainability force you to reconsider a lot of things. While I’d never even consider buying green tomatoes from the grocery store, with the Garden it’s more likely we’re going to end up with produce that we can still use, that we wouldn’t have normally bought. People have been doing this for years, and it’s only responsible that we keep this knowledge alive. I shudder to think about how many unripened tomatoes are left to rot in the big commercial farms every year, as I can think of very few commercial foods that use green tomatoes, even though they can certainly have a good place in your kitchen.

New Releases in Web Technologies

This week saw two major releases in Web Technologies. First, YUI released 2.6.0, which brought eight tools out of Beta and hundreds of improvements to the YUI libraries. One of my favorite features is that the YUI Loader, which I tend to use on all my pages, can now automatically combine files into a single download. Fewer HTTP requests is a huge win, which should outweigh the cache-miss performance hit. In addition, a lot of accessibility improvements were added to the system, which, given that I have a state job, is an important consideration.

Unfortunately, it doesn’t appear that any of the issues that I’ve submitted code for have been addressed, but several of them came up after the 2.6.0 pre-releases were made available (i.e., post-code freeze), and I suspect my issues are somewhat fringe cases. I’d still like to see some movement on them from Yahoo!, if nothing else to show that they’ve thought about the issue and made a decision on it.

But, in all I’ve been pleased with YUI 2.6. On the WSU Catalog upgrading was as simple as importing the newer version of YUI Loader. That simple. Of course, I’m only using a few different libraries, like the Menu and Connection Manager, so none of my modules were heavily affected, but I was pleased with the ease. I am most looking forward to YUI3 though, which will make YUI much easier to use and integrate.

Moving from the JavaScript side to the Plugin side, Microsoft finally released the first Release Candidate of Silverlight 2. Unfortunately, it’s developer only with no go-live license, but it implies that the final is really close. I’m suspecting the end of the month at Microsoft’s PDC. The conference release is consistent with how Microsoft has handled all Silverlight releases to date.

The release is a good one. It broke many things, but it’s far faster, the default style looks better, and they finally added some controls we pretty badly needed, among them a dropdown box. Still, many things are lacking: we’re having to use an external Rich Text Display, and since the dropdown box doesn’t fit all our needs, we’re using a different one in a few places.

Mostly though, I’m not convinced the plugin route is wholly necessary. Sure, we’re using one in our current project, but we could do most of that in DHTML and JavaScript, and the browser requirements wouldn’t be much different. There has been so much work over the last six months on next-generation JavaScript interpreters, which further makes me question whether a proprietary plugin is necessary. The only reason Silverlight may catch on is with developers who simply don’t want to learn JavaScript. It’s an unnecessary technology today, and it will only become more so.

History Hacker

Last Friday, the History Channel ran the pilot of a new show from Bre Pettis, formerly of Make Magazine and a founding member of NYC Resistor. History Hacker is a potential new series in which Bre, a well-known name in the DIY crowd, picks some technology from history and deconstructs it, working through a series of projects the viewer can try at home.

It’s an entertaining show. In the premiere, they broke down AC electricity, focusing on the rivalry between Nikola Tesla and Thomas Edison, including how DC once powered much of New York. While the history is interesting, and I really liked the art style used to tell it, it was the projects that were really cool.

Bre Pettis and a few others go through the process of building a generator that attaches to his bike, and he shows how to use it to power a GPS unit. He visits a glass-blower to make his own neon tube lights, then shows how they can be lit through the air using a Tesla Coil.

I remember watching Bre Pettis on the Make Weekend Projects podcast back when he was still doing that, and Pettis is just as into the DIY stuff as ever, and his energy is somewhat contagious. Overall it makes for an entertaining show, and I for one would love to see more. No word yet on if History Hacker will become a full-time show, but I do crave more good DIY programming on TV.