January 2011 Archives

Book Review: Cooking for Geeks

Jeff Potter’s book, Cooking for Geeks, was published last year and received quite a bit of praise around the web. As a Geek who is into food, I was interested in picking this one up and reading through it.

My taste in cookbooks is a bit specialized. I don’t tend to care much for books that are simply collections of recipes. My favorite cookbooks have been Michael Ruhlman’s Ratio and Mark Bittman’s How to Cook Everything. But those books, which do both contain recipes, are more about the process of food, about what differentiates classes of recipes from one another.

Cooking for Geeks falls solidly into this class of cookbooks. Yes, there are plenty of recipes, as there are in those others, but it’s much more about imparting a deep understanding of the science of food. And it does so through interviews with well-known chefs, scientists, and geeks, including Tim O’Reilly and Adam Savage.

Potter effectively shares the language used in the food world. In the chapter on wine, there is a significant discussion with a sommelier about the reality of wine pairings and the descriptive vocabulary used inside the industry. He goes into detail on ‘molecular gastronomy’, both in the sense that it’s used today to mean ‘using chemicals in cooking’ (by which metric, the Twinkie is probably the pinnacle of food), and in the sense of the hard science being done on food to better understand the reactions that take place inside it.

I kid a bit about molecular gastronomists, and while I respect the skill of chefs like Wylie Dufresne, there have been people practicing molecular gastronomy for far longer who aren’t chefs, and who approach the problems from a purely research perspective. However, the techniques of both are discussed, and presented in ways that left me interested in them and thinking critically about what I’m doing while cooking.

That is where Cooking for Geeks really excels. If you’re into geek, and by that I mean you eat, drink, and breathe geek, and you’re also into food (or want to be), then I think you’re likely to enjoy this. It goes into enough detail to sate your desire for information, while driving home time and again that, just like whatever else you’re doing, it’s okay to experiment, and even to fail. After all, as Potter points out, even if you’ve created an inedible atrocity (and believe me, I have), pizza is only a phone call away.

Sharing YUIDoc for Gallery Modules

The YUI Gallery has nearly 250 modules as of this writing. With some notable exceptions, it appears that very few of those modules have much in the way of documentation. This is unfortunate, because generating even basic API documentation with YUIDoc is dead simple.

One of the most convenient features of GitHub is Pages. Creating a branch named gh-pages inside of any repo pushed to GitHub will cause GitHub to create a subdomain for you at username.github.com and serve that branch’s contents from a subdirectory named after the repo. For myself, my yui3-gallery documentation appears at http://foxxtrot.github.com/yui3-gallery/. This can also be set up by enabling the “GitHub Pages” feature in the administrative settings of your repo.
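If you don’t have a gh-pages branch yet, creating an empty one from the command line looks roughly like this (a sketch; the --orphan flag needs git 1.7.2 or newer, and the placeholder index.html is only there so the first commit isn’t empty):

git checkout --orphan gh-pages
git rm -rf .                           # start the branch without the files from master
echo "Docs coming soon" > index.html   # placeholder page
git add index.html
git commit -m "Initial gh-pages commit"
git push origin gh-pages
git checkout master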

Writing YUIDoc comments is beyond the scope of this post. But you should be documenting your code, and you might as well do it in a way that declares usable metadata along the way.
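To give a flavor of it, a YUIDoc comment block looks roughly like this (a sketch; the module, class, and method names are made up for illustration):

/**
 * Extra helpers for working with NodeLists.
 *
 * @module gallery-example-extras
 * @class ExampleExtras
 */

/**
 * Returns the first node for which the test function returns true.
 *
 * @method find
 * @param {Function} fn Test function, called once per node.
 * @return {Node|null} The first matching node, or null if none match.
 */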

The easiest way to build API documentation is with Dav Glass’ git-yui tool, which, if you’re working with YUI’s source, you should probably have anyway. git-yui provides tools for pretty much anything you might need to do: grabbing other users’ changes, syncing your repo with theirs, building modules and documentation, issuing pull requests, running JSLint, and so on. It does a lot, but it makes almost all of it trivial.

To start, you’ll need to clone the following git repositories from YUI:

1. yui3-gallery
2. builder
3. yuidoc

These should all be cloned to the same folder.
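For example (assuming the upstream repositories live under the yui account on GitHub; clone your own fork of yui3-gallery instead if you have one):

cd ~/project/web/yui
git clone git://github.com/yui/yui3-gallery.git
git clone git://github.com/yui/builder.git
git clone git://github.com/yui/yuidoc.git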

Finally, just enter the yui3-gallery folder and run git yui doc <modules>; for me, in my current push, that is git yui doc gallery-nodelist-extras gallery-node-extras. Once this runs, there will be a yui_docs folder in the directory above your gallery clone. Check out the gh-pages branch in your gallery module, copy yui_docs/html into the root, and check in all the files.

For me, this process looks like this:

craig@majikthise:~/project/web/yui/yui3-gallery/$ git checkout master
craig@majikthise:~/project/web/yui/yui3-gallery/$ git yui doc clean
craig@majikthise:~/project/web/yui/yui3-gallery/$ git yui doc gallery-nodelist-extras gallery-node-extras
craig@majikthise:~/project/web/yui/yui3-gallery/$ git checkout gh-pages
craig@majikthise:~/project/web/yui/yui3-gallery/$ cp -R ../yui_docs/html/* .
craig@majikthise:~/project/web/yui/yui3-gallery/$ git commit -a -m "YUIDoc Update"
craig@majikthise:~/project/web/yui/yui3-gallery/$ git push

I am looking to extend Dav’s git-yui script to allow you to run git yui doc my, which would query the YUI Library site for a list of the gallery modules you own and build docs for those, but I’ll need to do some testing to ensure that works, since I know I’ve only properly documented 2 of my 5 modules.

The other side of this is including examples. In that case, you should probably put the YUIDoc-generated files into the /api/ folder of your gh-pages branch (see the sketch below), and then build more detailed documentation in the root. The point is that even without high-quality, prose-like documentation, API docs are easy to generate, and they should make your modules easier to use and maintain.
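That variant of the process might look something like this (a sketch, reusing the same yui_docs output as above):

git checkout gh-pages
mkdir -p api
cp -R ../yui_docs/html/* api/     # generated API docs live under /api/
git add api
git commit -m "Move YUIDoc output under /api/"
git push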

My Thoughts on Google's WebM Decision

Yesterday, Google announced that they were going to drop H.264 support from their Chrome browser. It’s gotten a lot of reaction online, and for good reason: it’s a big decision. It makes Google the third of the big five browser makers to take a stand for open standards and refuse to distribute the H.264 decoder, the other two being Opera and Mozilla, neither of whom has ever supported H.264 in their browsers. Of course, the two that still support H.264 are Safari and Internet Explorer, and on our sites that’s at least 45-55% of our traffic, and we have an abnormal skew towards Firefox, I suspect.

I believe strongly in open standards, which is why I support Google’s move here. While the MPEG-LA won’t charge me directly for a decoder, or for content I post online, I’m still paying for it at some level. H.264 in Firefox would limit Linux distributions’ ability to ship Firefox, and it could potentially lead Mozilla to violate the GPL licensing on Firefox. Plus, it would cost Mozilla millions of dollars. Theora, based on VP3, has been around for nearly a decade, and WebM, based on VP8, has Google’s own backing. Incidentally, both are already supported by Firefox, Chrome, and Opera. And both are available royalty-free for any usage.

While I support Google in this move, I am not naïve enough to believe that they are doing it for the common good. Dropping H.264 removes the licensing they currently have to do for Chrome and Android, and potentially YouTube. Not supporting H.264 can save Google a lot of money, especially if they ship a simple video editor embedded in YouTube and available on Android (both of which I guarantee are coming). Plus, as hardware decoders for WebM become available, as I suspect they will in the next several months, Google can start moving away from H.264 on mobile, and start making more and more of their YouTube content unavailable to H.264-only devices.

And Mobile is where this battle is really focused. Even if this WebM issue doesn’t break the back of their competition in Mobile (WebM decoders can be written and integrated into future software updates), it will give them a head-start for a while, and that may well help them dramatically.

Most content online is already H.264 as well, since most video distributed via Flash (by far the most common transport) is H.264 encoded. Content providers are unlikely to want to re-encode their media to support this new format, and that (understandable) reluctance stands to entrench Flash even more. Which is sad. This may not end up being the year we finally get away from Flash for video.

Ultimately, this whole kerfuffle is, in my opinion, the fault of the W3C. They didn’t have the browsers interface with the OS to use the OS’s decoding frameworks, most likely at the request of the content publishers, who didn’t want to have to guess at OS support for their media. But they also didn’t specify a minimum set of codecs that the video and audio tags must support to be complete. There was a lot of fighting at the time, with Apple in particular being hard behind H.264 (because of the hardware decoders in iOS devices), while Mozilla and others were strongly opposed. I understand why the W3C refused to take a stand; they couldn’t and still move forward with standardizing HTML5. But it is a failure that led directly to the situation we have today.

Links:

* WebM
* W3C
* HTML5 Video

Using VS2010 Web.Config Transforms Without VS2010 Web Deployment

I have built a fairly involved set of MSBuild scripts to build and deploy our web applications to our servers. At the time I was building all of this, I found that the Web Deployment Projects available for Visual Studio 2005 and 2008 were not sufficient to meet our needs, in part because we use Hudson for CI instead of Team Foundation Server, but that’s another issue entirely. To support making changes to our Web.configs during builds for either our Test environment or our Production environment, I’d written a few MSBuild tasks to do simple XML transformations.

However, as work on my build scripts is ancillary to my primary job function, my solution was not as polished as I would have liked. It required an XML file for each replacement, which meant that each build configuration had several files that needed to be applied, and it required a fair amount of familiarity with XPath. It works, and it works well, but it just isn’t as user-friendly as I’d like.

So when I saw the new Web.config Transformation Syntax, I was really pleased. A single file per configuration, and I don’t need to use XPath, though I can if I want? Sign me up!
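For example, a Web.Release.config transform to adjust settings for a production build looks something like this (a sketch; the connection string name and server are made up for illustration):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <connectionStrings>
        <!-- Replace the attributes of the entry whose name matches "MainDB" -->
        <add name="MainDB"
             connectionString="Data Source=prod-sql;Initial Catalog=MainDB;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
    </connectionStrings>
    <system.web>
        <!-- Drop the debug attribute for non-development builds -->
        <compilation xdt:Transform="RemoveAttributes(debug)" />
    </system.web>
</configuration>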

However, by default, those files only appear to be applied when you use the “Build Deployment Package” option in the context menu of a Web Application project. That really didn’t fit into our current build system, and it made a lot more sense to adapt the transformation task to my build scripts than to overhaul the build system itself.

In your common build file, you’ll want to add the following PropertyGroup:

<PropertyGroup>
    <OutputWebConfig>$(TempBuildDir)\web.config</OutputWebConfig>
    <OriginalWebConfig>$(ProjectDir)\web.config</OriginalWebConfig>
    <TransformWebConfig>$(ProjectDir)\web.$(Configuration).config</TransformWebConfig>
</PropertyGroup>

The TempBuildDir property is a temporary folder where I copy deployment files so I don’t modify anything in the project directory before pushing it out to the server. ProjectDir is the path to the Web Application Project and is defined in the project-specific MSBuild file. All three properties are needed for the task to succeed. With the properties defined, you can then add the task:

<UsingTask AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" TaskName="TransformXml" />
<Target Name="Core_TransformWebConfig" Condition="Exists('$(TransformWebConfig)')">
    <TransformXml Destination="$(OutputWebConfig)" Source="$(OriginalWebConfig)" Transform="$(TransformWebConfig)" />
</Target>

This target will need to be added to the dependencies of one of your other targets, so that it runs before the code is pushed to the server.
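For instance, a hypothetical deploy target could take a dependency on it like this (the Core_Deploy name, the $(DeployDir) property, and the Copy step are placeholders for whatever your push logic already does):

<!-- Hypothetical deploy target; $(DeployDir) stands in for your real destination -->
<Target Name="Core_Deploy" DependsOnTargets="Core_TransformWebConfig">
    <Copy SourceFiles="$(OutputWebConfig)" DestinationFiles="$(DeployDir)\web.config" />
</Target>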

We are in the process of moving our Web.config transformations to this system, as it’s a cleaner syntax. It is fairly easy to integrate without needing to make drastic changes to your build system, and with the Exists conditional, I’m actually using both systems in tandem while I wait to convert some of my older projects.