February 2010 Archives

Blocking "Domain Smacks" in Apache

Recently, some intrepid person decided to buy uwrejects.com and have it show Washington State University’s website. Frankly, I thought it was pretty amusing, but it raised a fair amount of concern among certain executives at the institution. It also reminded me of the concept of the “Domain Smack.”

Alright, sorry for the ad (I’m not even getting paid for that), but it explains what the hell a domain smack is.

So, what do you do when some jokester decides to domain smack you, and you (or your boss) are really concerned about it? Just a few rules with mod_rewrite and you’ll be fine, on Apache at least. IIS and other servers can do this as well, and IIS can even import Apache mod_rewrite rules easily.

RewriteEngine on
RewriteCond %{HTTP_REFERER} smackingdomain\.com [NC]
# Return 403 Forbidden
RewriteRule .+ - [F]
# Or, since a RewriteCond only applies to the rule that follows it, use this pair instead to redirect back on the joker:
# RewriteCond %{HTTP_REFERER} smackingdomain\.com [NC]
# RewriteRule .+ http://jokestersite.com/ [R,L]

The syntax is fairly straightforward. Turn the RewriteEngine on (if you haven’t already done so in the .htaccess file), check the Referer header (which is misspelled in the specification) for the smacking domain, then rewrite ANY URL (the .+) to either return 403 Forbidden or redirect back at whoever has decided to make fun of you. Note that a RewriteCond only applies to the single RewriteRule that follows it, so use one rule or the other rather than stacking both.
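As a rough sketch of where this could go (the domain names here are made up), you can chain several smack domains together with [OR], match case-insensitively with [NC], and stop rule processing with [L]:

RewriteEngine on
# Match any of several smack domains in the Referer header, case-insensitively
RewriteCond %{HTTP_REFERER} smackingdomain\.com [NC,OR]
RewriteCond %{HTTP_REFERER} anothersmack\.example [NC]
# Bounce the request back where it came from and stop processing further rules
RewriteRule .+ http://jokestersite.com/ [R=301,L]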

At the end of the day, WSU is seriously considering doing nothing. Which is good, because it’s better to have a sense of humour about such things. Still, a good mastery of conditional URL rewriting rules is worth having, and I’ll probably do some more advanced stuff with it later, since the documentation lacks a lot of examples that would be really nice to play with.

Genetically Engineered Pain Insensitivity Misses the Point

As reported by the New York Times and Change.Org’s Sustainable Food Blog, researchers at the University of Toronto and Washington University have devised a means to make mammals insensitive to pain. So far, they’ve only worked on mice, but the protein they’ve genetically engineered out of the mice is common to pretty much all mammals.

The writers reporting on this are discussing the development in terms of how it could impact commercial animal production in the US, which is rife with animal cruelty abuses, like raising veal calves in boxes, or docking pigs’ tails to keep them from biting each other’s tails. The argument is that by making the animals insensitive to pain, they are no longer as affected by the unpleasant conditions in which they live. Of course, it could cause issues with the animals not moving away from potentially dangerous situations because they are simply not bothered by pain. For instance, part of the reason pigs’ tails are docked is so that they’ll fight back if their tail gets bitten. Without sensitivity to pain, they’re not likely to fight back, which raises the threat of infection to the animal.

However, the biggest problem is that this research, while interesting, wouldn’t actually solve the problem. From an animal rights perspective, it probably encourages even more egregious abuses, since poor handlers will likely be rougher with the animals than before, simply because the stimulus they were providing is no longer effective. Plus, how does the lack of perception of pain make the actions any less offensive? But ignoring the issue of animal rights and cruelty, this solution does nothing to solve the problems that modern commercial animal production causes elsewhere.

The environmental impact of CAFOs? It could get worse, since the animals’ insensitivity to pain encourages even higher densities. Which encourages greater centralization. Which increases the food safety risk. The removal of this protein is unlikely to have any negative health effects on its own, and animal breeding is easier to control than plant breeding, so there isn’t much risk of some of the dangers of genetically modified food that are often raised. Even so, the most likely end results of this technology are highly negative.

The research is interesting, and the knowledge of the mammalian pain experience could be used to generate some new pain treatments. However, as a technology with reasonable application in modern commercial animal production… I don’t see it. And I see it making things worse.

Hacking MSTest out of Visual Studio

We’ve been using Hudson as our Continuous Integration server for a while now, but we had a problem: the unit tests we had written were all in MSTest, and MSTest doesn’t like to be installed without all of Visual Studio. Luckily, Mark Kharitonov at Shunra has figured it out and written up a post about it. I’ll only note two things I had to do to make his solution work:

  1. I had to add the VS2008Stub\Common7\IDE folder to the global PATH so that mstest.exe could be found (and restart Hudson so it would pick up the updated PATH)
  2. Hack MSTest to allow me to use the /testmetadata and /testlist options.

I had to do #2 because the MSTest plugin for Hudson seems to only want a single TRX file, which would have required me to write a bunch of MSBuild stuff I didn’t feel up to writing so that MSTest would run once for the entire build, instead of once per test DLL. By using the /testmetadata option, I can set up a .vsmdi file and tell it which tests to run, which also gives me the added benefit of disabling certain tests that I might not want to run for one reason or another. In my case, I had a few tests that went against the database and really need to be rebuilt using a mocking framework. They can’t run on the CI server (the CI server doesn’t have access to the database), but they have some mild use currently on my local machine, so I don’t want to simply exclude them.
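For what it’s worth, the resulting invocation on the CI box looks roughly like this; the .vsmdi file name and test list name below are placeholders for whatever you configure in the Test List Editor:

REM Run only the tests in the named test list and emit a single TRX file for the Hudson plugin
mstest.exe /testmetadata:OurSolution.vsmdi /testlist:ContinuousIntegration /resultsfile:Results.trx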

Anyway, MSTest, it turns out, actually checks your Visual Studio license to determine how it should function. Basically, it offers a few more options if you have a Visual Studio license, and a few more still if you have a Team System license. We don’t have Team System (or we’d probably be using Team Build anyway), but we do have Visual Studio licenses. Of course, they don’t tell you that the binary does different things based on a registry key; that’s what my good friend .NET Reflector is for.

Because Reflector displays decompiled code, code I didn’t write and certainly don’t have permission to redistribute, I’m going to gloss over exactly how I figured out what to do in the next paragraph. I’ll likely put together a new post in the future about hacking .NET using Reflector, but it will be on non-encumbered code.

It turns out there are five special codes that MSTest looks for to enable or disable features, namely the Test List Editor (what I was interested in), Team Developer Tools, TFS Integration, Remote Execution, and Authoring Non Core Tests. I’m not entirely sure what those last four are (though I plan to investigate), since as I said, I only really need the first one. Interestingly enough, the way they’ve implemented this security, it’s a pretty simple hack to enable the ones that you aren’t licensed to use. Which wouldn’t be the right thing to do, which is why I’m not providing those codes, or even the actual location of the codes you shouldn’t have, to you, my reader. I’m covering my own ass.

Open up Regedit on your development box, point it to HKLM\SOFTWARE\Microsoft\VisualStudio\9.0\Licenses, and you’ll see a list of keys (probably only one, but there could be more). Export the keys you see under this hive, then import them into your CI server, and it should automagically unlock the features you were missing on CI; check the /help output to be sure. Note: on a 64-bit system, the hive is HKLM\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\9.0\Licenses. Just make sure you export from and import to the correct location if there is a difference in bit-width between your two boxes.
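If you’d rather script it than click through Regedit, something along these lines should do the trick (the .reg file name is arbitrary, and adjust the key path for Wow6432Node where it applies):

REM On the development box
reg export "HKLM\SOFTWARE\Microsoft\VisualStudio\9.0\Licenses" vslicenses.reg

REM On the CI box
reg import vslicenses.reg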

Now, you should have all the MSTest features you had on your development box on your CI box, and you didn’t have to do a full install of Visual Studio in CI.

Copying Files out of the Windows GAC

Sometimes you need to get a file out of the GAC on Windows, either to look at in something like .NET Reflector, or maybe to copy a DLL (licensed, of course) to a server when you don’t need (or want) all the other cruft that the installer might drop on the box. I’m not going to judge.

GAC View in Windows Explorer

The GAC can be viewed on a Windows box by pointing Explorer to %windir%\assembly, but Explorer abstracts that folder away so that you can only do limited things with items in the GAC. They’ve even gone so far as to make it impossible, using any of the GUI filesystem tools in Windows, to navigate into the subfolder hierarchy. So, when faced with a GUI that just won’t cooperate, I turn to my trusty friend, the command line.
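Something as simple as the following, from a Command Prompt, shows the real structure that Explorer hides:

cd /d %windir%\assembly
dir
cd GAC_MSIL
dir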

GAC Directory Structure from Command Prompt

Now I can see the structure of the files I’m looking for, and if I look back at the Explorer view, it even provides clues on where to look. For instance, the fifth column, Processor Architecture, tells me which GAC_ folder I need to look in; for me, it’s almost always GAC_MSIL. Once in that folder, there is a folder for each unique entry in the Assembly Name column for the given architecture, each containing a group of folders following the naming scheme {Version}__{PublicKeyToken} (that’s two underscores between the version and the public key token). Inside this last folder is my DLL, which I can copy out to another location.

For instance, System.Core, the core DLL for .NET that everyone has anyway, can be found at: %windir%\assembly\GAC_MSIL\System.Core\3.5.0.0__b77a5c561934e089\System.Core.dll

GAC Directory Drilldown from Command Prompt

Now that you know how to find the files, it’s trivial to copy them to wherever you need to, for whatever you’re looking to do. Of course, if you use this to break a license or anything else shitty, it’s not my fault.
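As a concrete example, pulling the System.Core DLL mentioned above out to a scratch folder is a one-liner (the destination is just wherever you want the copy):

copy "%windir%\assembly\GAC_MSIL\System.Core\3.5.0.0__b77a5c561934e089\System.Core.dll" C:\Temp\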

BackTrack4 SD Card with Asus Eee PC


I’ve been watching Hak5 since it hit Revision3 last year, and I’ve generally enjoyed the show. Recently, host Darren Kitchen talked about creating a persistent, bootable BackTrack flash device. Since I’d be using this with my EeePC (or another laptop), and the idea of running an OS off a thumb drive for any period of time seemed scary, I decided to go the SD card route.

Unfortunately, Revision3 doesn’t provide good show notes for how this was done, Hak5.org is down, and BackTrack’s own page on a persistent USB drive is completely empty. The content below is taken almost verbatim from Darren’s presentation in the linked video above, and I’ll embed the video after the instructions; I just felt that a write-up would be convenient.

  1. Download the Backtrack4 ISO
  2. Set up bootable media with Backtrack, either burn a CD, or a thumb drive using unetbootin
  3. Boot BT4, put your SD card in your computer, and find out what device it mounted as. Enter dmesg | grep hd.\|sd. at the command prompt; the bottom entries will likely be the correct ones. On my system it was /dev/sdc, so that’s what I use below.
  4. Run parted /dev/sdc (I vary from Darren on this)
  5. Type print at the command prompt, odds are you’ll have 1 partition. Delete all the numbered partitions with the rm command.
  6. Create the first filesystem with mkpartfs primary fat32 0 2.5GB. This will create a two-and-a-half-gigabyte FAT32 partition as your main data store.
  7. Make Partition 1 bootable with set 1 boot on
  8. Create Partition 2 with mkpart primary ext3 2.5GB 100%. This will fill the rest of the device with an empty partition. I used an 8GB card, but the 100% will fill the rest of the drive regardless of its size.
  9. Exit parted with quit
  10. Run mkfs.ext3 -b 4096 -L casper-rw /dev/sdc2 to create the persistent area on the second partition.
  11. Run mkdir /mnt/sdc1
  12. Run mount /dev/sdc1 /mnt/sdc1 to mount the first partition you created.
  13. Run rsync -r /media/cdrom0/ /mnt/sdc1 to copy all the files from the boot media to the boot partition. This will take some time.
  14. Run grub-install --no-floppy --root-directory=/mnt/sdc1 /dev/sdc to install GRUB on your SD card.
  15. Edit /mnt/sdc1/boot/grub/menu.lst in your favorite editor
  16. Change the line ‘default 0’ to ‘default 4’ to load in persistent mode by default.
  17. To the end of the kernel line for the Persistent Live CD option, add ‘vga=0x317’
  18. Shutdown and reboot.
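For quick reference, and assuming your card shows up as /dev/sdc like mine did, the whole sequence boils down to roughly this:

parted /dev/sdc   # interactive: print, rm each existing partition, mkpartfs primary fat32 0 2.5GB,
                  #   set 1 boot on, mkpart primary ext3 2.5GB 100%, quit
mkfs.ext3 -b 4096 -L casper-rw /dev/sdc2    # persistent changes live on the second partition
mkdir /mnt/sdc1
mount /dev/sdc1 /mnt/sdc1
rsync -r /media/cdrom0/ /mnt/sdc1           # copy the live CD contents to the boot partition
grub-install --no-floppy --root-directory=/mnt/sdc1 /dev/sdc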

EeePC Notes:

  1. My EeePC is an original 8G, meaning that it’s running a Celeron M, not an Atom, and it has the smaller (6”) screen.
  2. To select your boot device from the Eee PC Menu, hit the ESC key when the system starts to boot (these things boot fast, so hit it quick) and choose the ‘USB2.0CardReader’ option to boot from SD. If you’re booting from a thumbdrive and your thumbdrive has a U3 partition, odds are you’ll want the first one on the list. If it refuses to boot it, reboot and try the other.
  3. The system is currently booting into a text console for me, not the GUI. If I want the GUI, I can just type ‘startx’ and it comes right up. I’m trying to solve this issue, and when I do, I’ll update this post.

Software Testing Club Magazine

The Software Testing Club, an organization I had never heard of before Ara Pulido (who works on Canonical’s QA team) mentioned it on her blog, recently released the first issue of its own Community Magazine. Having downloaded and read it over the last few days, I was pretty pleased with the content, and will likely continue to read, especially since my testing chops could use some improvement; I’m expected to be both a developer and a tester in my current position.

And I would argue that even developers not in dedicated test roles should read this magazine, since it provides good feedback, especially on bug reporting and the mechanisms (and importance) of testing. Unfortunately, the magazine doesn’t seem to have made much effort to bridge the gap between developers and testers. To be fair, it was written by people focused on testing, and developers have often been harsh toward testers, so I’m not necessarily blaming them for the fun they poke at developers. I merely found it interesting how prevalent media around this conflict is.

The only real complaint I have with the magazine is that it does have some editing issues: sentences getting cut off at page breaks, obvious typos, and so on. I hope the STC can improve its processes and put out a more polished community magazine in the future. Still, for a first effort it’s a good read, and it has plenty of information I’m still mulling over.

Michael Pollan Speech at WSU

Several weeks back, on January 13, Michael Pollan spoke at Washington State University as part of this year’s Common Reading program. Having read both The Omnivore’s Dilemma and In Defense of Food: An Eater’s Manifesto, I was excited to have the opportunity to listen to the man speak (though, like an idiot, I forgot to bring my physical copy of the Manifesto for signing). I’ve failed to write about this sooner, primarily because I haven’t taken the time, but I did post to my Twitter during the event.

At its core, there weren’t a whole lot of surprises in his talk for people who’ve read his work. He’s been beating the same drum for quite a while: modern food production is simply unsustainable.

However, it was really interesting to hear him speak at a research institution with a rich history in agriculture. He focused a lot on the role an organization like WSU could play in reinventing agriculture, moving away from modern industrial practices toward methods that are at once more traditional and based on new, as-yet-undone research into what makes the most effective post-organic farming.

In the agri-system Pollan envisions, the farmer becomes an intimately involved steward of the land, ensuring balance between plant and livestock raising. For instance, one Argentinian farm he described had found that growing several years of nitrogen-fixing cover crops, and raising grazing stock on those fields, allowed several years of nitrogen-stripping crops (wheat and others) to be planted in a field without requiring any additional chemical inputs.

He spoke of an urban farm in Detroit that employs (with good wages) over a half-dozen people and feeds many more, run mostly in greenhouses that use vermicomposting for heat. They are even able to raise fish and watercress in a symbiotic system that, according to Pollan, produces nearly zero waste (I’d have to see it to believe it, but it’s an interesting thought). That particular farm is also covered in the most recent Urban Farm magazine, which looks to be a promising publication.

Pollan spoke often about creating a ‘post-industrial’ form of agriculture based on this research, but I think that he might be downplaying the fundamental understanding of land management that almost all farmers had before the agri-revolution post-World War II. Still, codifying that understanding through the scientific process will be necessary to prove the viability of these methods.

Pollan did discuss this briefly, but I think it needs greater focus: the overall structure of the Western diet has to change. We need more farmers. We need to spend more on our food. And we need to eat less meat; meat production is always going to be more resource-intensive than growing vegetables. Catherine and I have tried to make at least two meals a week vegetarian. It’s been working well, though I’m not terribly well versed in cooking without meat.

What Pollan didn’t focus on as much as I thought he should was the message that our expectations about food are not reasonable. We can’t eat meat every day of the week. We can’t expect to get any produce at any time. It’s about expectation management, and I don’t think Pollan expressed that enough. He did talk about it a bit, but fundamentally, it’s the biggest problem, and the one that most needs to be addressed.