September 2008 Archives

The Current Economic Crisis

It’s been all over the news for months: the ‘Subprime Mortgage Crisis’. People are going into foreclosure all over the country, and many now owe more on their houses than they are worth in today’s market. It’s all very unfortunate, but even more unfortunate is that nothing can be done to stop it.

For many, this issue is traceable back to the 1999 Gramm-Leach-Bliley Act, which repealed parts of the 1933 Glass-Steagall Act, most notably the part which prohibited a bank from being both an Investment Bank and a Savings Bank. The idea seems to have been that Investment Banks were generally going to be engaged in higher-risk activities, and it would be best that the banks most people kept their money in were insulated from higher-risk investments. That same 1933 legislation also created the FDIC.

The idea isn’t bad, per se. By restricting banks in this manner, the theory goes, people can have more faith in the institution. However, from a Bank’s perspective, it is incredibly trying, because historically either Investments are doing well, or Deposits are doing well. In essence, in good times people invest, and in hard times people tend to save. By being restricted to one type of financial institution, the banks were, in essence, forced to ride this rough cycle from high periods to low periods. The banks argued that if they were allowed to do both sides of the financial services coin, they’d be more stable in the long term.

Which also, is a great sounding idea. In theory.

But the Gramm-Leach-Bliley Act gave the banks this opportunity, though some banks had already begun playing both sides of this field. Not only that, but those banks had failed to learn anything from watching the Savings and Loan crisis in the 80s, namely the danger of imprudent lending to take advantage of high interest rates.

Right now, Interest Rates are still low. But it was the same sort of greed that led to this issue today. The only difference this time is that the lenders, the people selling the loans to the consumer, found a way to sell the loans to someone else, so they were never really saddled very long with bad debt. I’d explain this process, but there are already people who’ve done a much better job than me. Like this guy.

The whole problem, and as a person who took loan applications from Mortgage Brokers on behalf of Wells Fargo Home Mortgage back in the summer of 2002, I can absolutely verify this is true, is that the people selling the loans, who make the commission off the sale of the loan, have no interest whatsoever in whether you keep the Mortgage or are able to make the payments. Their job is simple: get the bank to loan you the money, so that they get theirs. You fail to pay? That’s the bank’s problem.

Back in 2002 and earlier, this wasn’t that big a deal. Banks weren’t going to loan money to people who obviously couldn’t afford to buy a house. The Banks weren’t going to loan money to someone whose credit was terrible, and whose employment couldn’t be verified. There were checks there, because ultimately, the Banks did care if you could pay.

Then, something changed. The Global Pool of Money, which just begs to be invested all the time, wanted somewhere to put its funds, but the Federal Reserve Bank was only offering a ‘paltry’ 1.5% interest rate. Suddenly, there were Billions of dollars on the open market, looking for something to invest in. Housing prices were strong nationwide, so mortgages seemed like a natural fit.

The problem was, the pool of good people to lend to ran out. But the demand for Mortgage-Backed Securities was still strong, so over time, the requirements for getting a loan got lower and lower. Eventually, they practically disappeared. This American Life, on National Public Radio, had a really good piece on this. I’d suggest listening.

And the money wasn’t really in Fixed Rate loans, so a lot of people were shilling Adjustable Rate loans to people, because the commission was higher, and good people who dreamed of owning their own home got lulled into this dangerous trap.

I feel bad for those people, I really do. But bad decisions, even poorly informed bad decisions, carry consequences with them. A lot of people are going to lose their homes, or they’ll have to renegotiate with their banks (which I guarantee most banks are interested in doing at this point). And that’s unfortunate. But the truth of the matter is, nothing can be done at this point to fix it. Nothing.

But it will get better. This is kind of like a College Freshman, away from their parents for the first time, who loses their head with their newfound freedom and proceeds to waste all their money, and fail all their classes only to get kicked out of school for being academically deficient. It sucks, and odds are that irresponsible kid caused a fair amount of damage along the way, and hurt other people, sometimes without even realizing it.

But it’s not the end of the world. People grow up. So do businesses.

There are those who would argue that this all comes down to deregulation, and that deregulation makes it easy for large companies to fuck over individuals. That is true, to a certain extent. But what you’ll notice is that these problems typically arise in the short term, immediately after regulation is removed. The businesses run forward with their newfound freedom, and when they hurt themselves, they usually hurt others as well. It’s unfortunate, and there should be consequences for that, but just like the parents of that College Freshman shouldn’t step in and coddle their kid for his irresponsible behavior, the Government bears no responsibility for propping up failing financial institutions.

We are all in for a rough time economically. But anytime an industry is as out of control as the Mortgage industry was, anytime prices on a scarce resource climb at the rate that housing prices did, a correction is inevitable. When the market is being artificially inflated by greed, and the blindness that greed can cause, the correction is going to be horrible.

At this point, it’s just too late. The economy is going to take some terrible hits, particularly in Banking. But I don’t think it’s going to be as bad as some people expect. Sure, there are going to be fewer banks in six months than there are today, and it might take a little while to recover from the almost 7% drop we saw in the Dow Jones Industrial Average today, but we will recover. And we’ll be stronger for it.

That is, if the Government just lets nature take its course.

Sustainable Living: Soap, Revisited

When I started writing the Whole Food Adventures series, Catherine and I had embarked on this path in our lives for two reasons. First, health. I’ve often talked about the Nutritional value of various foods, and I’ve tried to provide resources that point to a need to rethink who we listen to in this country for nutritional information.

The second reason was to be more self-sustainable. Partially, this was to save money, as we have with our garden, but much of it came down to the question of whether the current socio-economic model makes sense, and whether it is strictly necessary. Do we need to buy the kinds of things that we do, and create the waste (both seen and unseen) that buying into that process really requires? I’m a Libertarian, I want the government to stay out of my, and for that matter everyone’s, business, but the last six months have really made me change a lot of my views on consumerism.

I’m renaming this column, because while there will always be a food component to this writing, that simply won’t be the entirety of my writing. So welcome to Sustainable Living, formerly Whole Food Adventures. And without further ado, onto the topic at hand.

Last time I wrote about Soap, it was to extol the virtues of shaving with real shaving soap as well as talk about my excitement about finding a good supply of hand-crafted soaps for a reasonable price. While I still use that soap daily for my shaving and showering needs, it was a little pricey for me to want to use it on my hands, and I really prefer liquid soap for hand washing.

About a week ago, I got this gem from Pearls of Country Wisdom by Debora S. Tukua via TipNut. Sorry, you’ll have to follow the link for the recipe; it’s not licensed such that I can reprint it (as far as I can tell).

I learned a few things implementing this recipe the first time. First, not all soaps are created equal. I have no idea what soap they were suggesting be used, but we bought a four-pack of Ivory soap at the grocery store for $1.99, and while each bar of soap is only 4.5 oz (~127.6g), I ended up not with 5 or 6 cups of soap, but closer to 2.5 quarts (2.4L).

Actually, I’m lying; I ended up with almost 5 quarts (4.7L), because I decided to make one and a half batches given the quantity of stuff I had. Needless to say, this was nearly unworkable.

I began by running the soap through the grater plate on my food processor, before swapping out the grater plate for the normal blade. While I boiled the water, I used the blade to break up the soap even further. Pouring in the boiling water, and then the room-temperature water, I soon found that the bowl of my food processor was full (and was, in fact, overflowing a bit), but I decided to trust the recipe and carry on, adding the honey and the glycerin.

I love honey, which is probably why I bought 5 pounds (2.3 kg) of it last year. That was right around $20US. Best sweetness bang for your buck, I promise. Actually, I still have most of the honey, but it never spoils, so you should never feel bad about stocking up.

As for glycerin, you’ll probably have to go to your local craft store. I was able to get 2 lbs (0.9 kg) for $10US, and that should easily last through 50 or so batches of this recipe. Clearly, I’m likely to save a fair amount of money making my own liquid hand soap.

Unfortunately, when I let the solution cool, it became clear that it really was taking up that much volume. Like an idiot, I decided to soldier ahead, and made quite the mess I had to clean up later. This also included trying to fill up an empty milk carton, which I did, before I realized that the solution was nowhere near incorporated. There was a mostly water portion, as well as a soap layer about the consistency of marshmallow fluff. The majority of the solution was fluff.

Having already washed all the food processor stuff once, I proceeded to put it back together and split the solution in half. At this point I was able to add enough water to form an actual soap-like solution, which was nice because I was actually able to fill up the milk jug in a reasonable amount of time.

On the one hand, this soap is going to last me for months. On the other hand, that is going to make it a lot harder to master in a timely fashion. I know that my next batch is likely to include Ivory soap, as I still have bars I don’t intend to use anywhere else, but I may end up finding myself a different brand of soap after that. While I don’t necessarily dislike the soap that I’ve got, I’m not convinced that I can’t do a bit better.

That’s a large part of what this is all really about, after all, the thrill of doing it yourself. There are those who are claiming that this latest economic meltdown may well lead to a DIY Boom across the nation. I welcome that.

Nineteen Eighty Four

In 1949, George Orwell, a British national who had lived through two world wars and a hideous economic depression, lacked any real faith in the future of Humanity. Or at least, it would seem that was the case, considering that the author completed his career with Animal Farm and Nineteen Eighty-Four. Both paint humanity, and its future, in a highly negative light. In Animal Farm, the allegory regards the willingness of the few to take everything from the many, and the willingness of the many to simply hand it over.

Nineteen Eighty-Four takes the allegory further, painting a dystopian world where a small group of people (actually, three small groups) controls almost every aspect of a larger, but still relatively small group that does all the dirty work of the Party. Below them, making up the majority of the population, are the unwashed masses, the factory workers, the garbage men, and so on. The numbers are not so different from global society today, as far as hierarchies go, but the level of control exercised by the Inner Party (less than 2% of the population) is frightening.

There are many who feel that the story of 1984 is one that we are still moving toward. Many of Orwell’s contemporaries joined him in writing these Anti-Utopias, which painted a frighteningly depressing view of humanity’s future. After the events of the early part of the 20th century, who could blame them?

Frankly, the last seven years have shown more movement toward this world of complete surveillance than any other period of time. And technology has progressed to the point where even Orwell’s vision wasn’t far-reaching enough. Unfortunately, people have short memories, and where fear is involved, logic is often in short supply. In many ways, that was the most frightening part of the world Orwell created. The Inner Party did everything they could to keep the masses afraid, so that the masses would continue to let them do what they were doing, because it seemed that they were trying to help.

Fear is a powerful motivator, but the problem with motivation by fear is that it can be easily manipulated, and turned into something far more self-destructive.

I find some irony in the fact that the English Socialism (Ingsoc) of the world of 1984 centers on England, both because of England’s political bent toward socialism, and because England, far more even than this country, has taken steps toward universal surveillance of the populace. Just like Animal Farm before it, 1984 is a cry against the evils of Socialism, but it also paints a compelling image of why Socialism may be unavoidable.

The problem with Socialism is that it sounds good. It always has. But often, the people most vehemently speaking out for Socialist goals have the least intention of living by them.

Because of Canada’s Socialized Health Care, the only Canadians who can get timely, good health care are those who can afford to come to the US.

Al Gore told the world that they needed to cut down their energy consumption, while he and his family were using 20 times that of the average family. He buys ‘carbon credits’ from a company he owns himself. I’ll spare you the continuing rant about that scam.

Socialism doesn’t work, not on a large scale. It never has. It never will.

For me, the worst part of the story was the way Orwell masterfully told of the protagonist Winston’s increasing understanding of the flaws of the system. His unhappiness at his understanding of the situation, and his desire to set things right, to break down the Party. The first two-thirds of the story do an amazing job of making us feel like there is hope. Like the world, which was allowed to get this way, can still be saved.

Perhaps that simply wouldn’t have fit Orwell’s needs. Perhaps he really believed what he wrote. In many ways, I think it was the latter. For Orwell, the society he wrote was beyond saving. In the end, they break Winston down so completely that there is absolutely nothing left of the man who hated the Party. The man who wanted the world to change. And that’s the danger. Using fear, misinformation, guilt, hatred…we can be manipulated. We can be changed.

We aren’t there yet. Society, though it has slipped in recent years, has not yet slipped that far. In some ways, I wonder if we are savable. Just two weeks ago, I was having a conversation with my parents, where my mother actually uttered the phrase, “If you aren’t doing anything wrong, what do you have to worry about?”

My own Mother. An educated, professional woman. Such is the battle we face. If freedom is to be preserved, we must all do our part to show people why they should desire it, before they lose it completely.

I’ve spoken about Cory Doctorow’s Little Brother here before. If you’re in Australia, copyright has run out on Orwell’s works, allowing them to be distributed freely. I’ve already had my mother working on Little Brother.

I try to have hope that the world will be different, better, tomorrow. Some days that hope wanes, and all I can see is the face of Big Brother. Do some reading on your own. Remind yourself why freedom is important. Maybe then you’ll understand.


Android Becomes Official

On Tuesday, T-Mobile had their official press conference announcing their new G1, which is the exact same phone as what HTC has called the Dream, just rebranded by T-Mob. Of course, it appears to have been renamed on HTC’s website, so all I can figure is that they signed a hell of a deal with T-Mobile. Don’t feel bad, Europeans, apparently Deutsche Telekom will also be offering the G1, though that’s not much of a surprise, since T-Mobile is owned by Deutsche Telekom.

This is really exciting because we’re finally about to see honest-to-God Android hardware on the market, in a very short period of time. Unfortunately, many people seem to have mixed feelings about the device. People are upset it doesn’t have Multitouch, though that’s apparently due to patent issues relating to Apple and/or Microsoft. It doesn’t have a headphone jack. Many people don’t like the aesthetics. But almost all the complaints I’ve seen come down to people not liking the phone itself, while mostly liking Android.

I’m not going to lie, I want one. I want one bad. But, I’m not going to switch to T-Mobile, so the $179 price tag for people willing to sign two years with T-Mobile doesn’t entice me. Luckily, Engadget spoke to the CTO of T-Mobile, and T-Mobile won’t stop unlockers, and they’ll be selling an out-of-contract phone for a mere $399. Not bad at all. Of course, they claim they won’t help you unlock the phone for 90 days, but I’m pretty sure that’s bullshit and they can’t enforce that.

Regardless, I like the device. The pictures are all kind of iffy, but I’ve at least read that it’s well constructed, and for me the only real turn-offs are that the Music Player isn’t that fantastic (which an Android Developer could capitalize on, unlike on the iPhone), and the lack of a 3.5mm headphone jack. I don’t really want to buy an ExtUSB set of headphones, but I may be forced to. I wonder if the G1 will let me route output to my Bluetooth Headset…

In some ways the bigger announcement than the phone was the availability of the 1.0 release of the Android SDK. It’s great to have seen two SDK releases in as many months, and finally an ABI-stable build of the SDK. I’m definitely going to be playing around with development again, if nothing else to give me another reason to convince the wife to let me spend $400 on a phone.

The fact is that Android is in quite a position here. Google and the other members of the Open Handset Alliance have said that they won’t restrict what users install on their own phones, and the behavior exhibited so far lends credence to that. In fact, Google encourages developers to replace functionality in the phone, if they can do it better.

Android has great potential to be a healthy ecosystem, and while the target platform isn’t as uniform as the iPhone’s, there is a reasonable baseline that can be expected. Location Services won’t always be available, but you can be sure that Internet access always will be. You’re not even guaranteed that Media playback will be available, but I suspect most phones will implement that.

The iPhone has the benefit of every phone being the same. But that same benefit is an enormous weakness. In a few years, even the cheap freebie phones could be running Android, and have access to high-quality apps and the like. I firmly believe that the future of computing is not the cloud, it’s the mobile device. In my opinion, Android is best positioned to be that future. Now, if Google would just release the source, as they’ve promised to…

In a World Without Walls, Who Needs Windows?

A few weeks ago, Microsoft debuted their new Ad Campaign about, well, nothing. But it was okay. The ads starred Bill Gates and Jerry Seinfeld as themselves, having Seinfeldian conversations filled with non sequiturs and bizarre situations. They were funny. Really, really weird. But funny.

It seems that a company of Microsoft’s size, while completely able to make stupid videos, should avoid sharing them with the world. After two weeks, and four and a half minutes, the company appears to have pulled the plug on the Gates-Seinfeld commercials.

Actually, that last point is still arguable; it’s entirely possible that Microsoft is merely spacing the videos out. Regardless, they’ve shifted gears to their new “I’m a PC”/”Windows, not Walls” campaign, which is a very direct (and deep) dig at Apple’s widely known ad campaign. And it looks like Microsoft may have completely eliminated Apple’s ability to keep using those ads.

It’s no secret that people liked the PC character, played by John Hodgman, more than the Mac guy. He was more sincere, more endearing. The Mac guy was exactly the kind of douchebag hipster that the majority of the computer-using world hates Apple for. Yet Apple never portrayed the Mac guy that way with anything resembling self-deprecating humor.

With the “I’m a PC” ad, shown below, Microsoft has managed to take a huge swipe at the ad monolith built by Apple over the last few years. It starts with a Hodgman look-alike acting out the PC role, saying that he’s been made into a stereotype, and then proceeds to go through a laundry list of MS personnel, and regular folks as well, each of them proudly claiming to be a PC. It’s a good ad, and I really liked it. And for all those people who make the crack about modern Macs essentially being PCs, while that may be true, the majority of the population, if they bought a Mac, wouldn’t likely know that they could install Windows, nor would they be likely to in the first place. Besides, Apple has sought to form that distinction, so the argument is wholly irrelevant.

As a long-time Linux user, I do find it stupid that PC has become synonymous with Windows. PC is a really, really generic term. Hell, Macs, before they had Intel processors and could run Windows, were still Personal Computers. But I digress.

What I find the really silly part of the entire ad campaign is the new slogan “Windows, not Walls.” It’s clear what Microsoft is getting at here. With Mac OS X you can only run the OS on Apple’s hardware. They ship many products with intrusive DRM (but then, so does Microsoft), and… well to be honest I’m not sure where else the Microsofties are trying to take this argument. Plus, the slogan is silly, because without Walls, who needs Windows?

As much as I detest Apple’s restriction that Mac OS X be run only on Apple hardware, and as much as I hope that the recently filed Psystar lawsuit takes Apple to task for killing the clones (I know it won’t; Psystar will settle out of court), in many ways, Mac OS X is far more open than any version of Windows. And I’m not just talking in the sense of the source code being available, though certainly much of it is.

No, I’m talking about the fact that Mac OS X lacks the same level of intrusive copy protection (how many times have I had to install the Windows Genuine Advantage tool?). While Apple software often has DRM built in, it’s not a core part of the OS. Apple bundles developer tools with every copy of Mac OS X, including Python, so you don’t have to download the tools, and the same tools that professionals use are freely available to the hobbyist.

Apple is by no means without guilt. I’ve written about this before. But when it comes to living in an Open World, Microsoft comes out ahead on only a single issue, that of the hardware restriction Apple has. On every other front, on the issue of ‘walls’ raised against users, I have to give the award for Openness to Apple when looking at this two-horse race.

But it doesn’t have to be a two-horse race. As I said, I’m a long-time Linux user. Lately, I’ve been starting to do some work to help make Linux more attractive to the mainstream, but if you want to talk about Openness… if the Mac is the Wall, and Windows is the… Window, then Linux is the space outside. A Window is almost worse than a wall when you put bars across it. Giving that tantalizing taste of freedom is crueler than giving none at all.

If you truly want a world without walls, don’t look to Windows. Find something truly free. If you’re not ready to head outside yet, that’s fine. Sometimes life is hard outside, sometimes war breaks out between the tribes. We’re working on it. That land without barriers is getting better every day, however, and if you’re not ready to step outside just yet, wait, I’m sure we’ll coax you out soon.

Whole Food Adventures: Tomato

Tomatoes have long been a popular fruit for those of us in the Western World to cultivate. A member of the Nightshade family, tomatoes are perennials when grown in warmer climates, but as far north as we are, we’re pretty much stuck with raising them as annuals. Luckily, the seeds aren’t terribly hard to keep.

Unlike a lot of other plants’ seeds, Tomato seeds do require a bit more work. The seeds won’t simply sprout with just a bit of dirt and water; they need to be fermented first. The process is simple: scoop out the seeds, along with the ‘goop’ they rest in, from a halved tomato, put it all in a cup with some water, and cover it with plastic wrap. Poke a hole in the plastic wrap so that air transfer can still happen, and let it sit for a while. Eventually, the seeds will be ready for planting next season. Plus, if you find some tomatoes you like, you can probably just take the seeds out of one, and you’ll be able to plant them next year. The only unfortunate caveat here is that some lines of tomatoes are hybrids, which don’t produce viable seeds. If you have any questions, ask the grower at your local market; they likely know.

One of the nice things about the Tomato is that it’s one of the few fruits we grow today that still exists in a large variety of cultivars. Sure, you usually only see three, maybe four, types of tomatoes at your local grocery store, but in our garden we had no fewer than seven types of tomatoes this year, from large heirloom varieties, to tiny ‘gold nuggets’, a delicious small yellow tomato, to a purple-fleshed tomato. This is fantastic, at least partially because the more different types of plants you grow, the less likely they all are to fail due to disease and pests.

If you really want to experience everything that tomatoes have to offer, you’ll have to grow them yourself. Now, I’m not the gardener in our house. I cook the plants, my wife grows them. But I will share what little I know. My wife has found that growing them in large plastic buckets full of good soil tends to work really well; the belief is that the buckets help keep their roots warm, which the tomatoes quite like. However, they do really well in the ground as well, so if you have a good patch of garden, be sure to plant there. Our tomato beds are a little below the walkway level, in order to make it really easy to water, since we can just put the hose on that part of the plot and fill it up with water. No muss, no fuss.

One thing to note is that tomatoes are kind of rough on soil; they tend to drain a lot of nitrogen from it, so you’ll want to be careful to rotate which part of your garden holds tomatoes from year to year. On the question of cages, my wife feels that the plants do better when you use stakes that you tie the plants to. This has the added benefit of making it easier to pick the fruit, since the plant isn’t surrounded by as much metal and is able to spread out a little bit more.

As I said, in our part of the country, tomatoes are effectively annuals. When the weather turns cold, and the ground begins to freeze, the plants soon die. If the year is anything like this one, and you planted at all like we did, you’ll likely have a large number of under-ripe tomatoes still on the vine. We do this year, but we’ve left all of them on the vine for the next week or so. This is partially in hopes that they might still ripen, but also because we have a lot of ripe tomatoes to process. But we fully intend to pick those tomatoes soon, and we’ll can them as we are doing with our ripe ones.

Next week, I’ll talk about what we’ve done with our tomatoes, including the green ones, and provide some recipes.

Blog Downtime

My webhost, whom I’ve always been really happy with in the past, made one hell of an error on my account over the weekend.

I run a Wiki for a forum I’m a member of, and we had an issue where a few of the tables had crashed beyond recovery. I’m unsure what happened, but all attempts to recover it failed. My host said that there was a backup from Thursday morning of last week, and I asked him to apply it. I thought that meant just the database in question. He restored everything, causing me to lose the last two days’ worth of posts on this blog.

Worse, I also lost every last one of my databases. They were absolutely trashed by the rebuild. Unfortunately, I had failed to back everything up recently, having depended on my hosting provider’s backup strategy. Luckily for my Wiki users, I had a month-old backup of that database. For everything else, I had removed my backups not long ago. Completely my failure.

I rebuilt the Wiki, but it’s going to take a while to rebuild this site with all the old content. I have it all, though, and it’s all coming back. Save for the posts from Thursday and Friday of last week. If anyone has those, I’d really appreciate them.

Note: I’m not planning on updating the rest of this week. I’ve got 11 months worth of posts and comments I need to reintegrate into the database, as well as getting my Personal site back into Movable Type. To my usual readers, I apologize, but I’ll be back online as soon as I can be.

Another Note: I’ve currently got all the archives except January through May 2008 up. I expect the archives will be done tomorrow (Wednesday).

Custom ASP.NET Authentication Part 2 - Membership

The first step in authorizing a user is authenticating them. There are any number of ways this could be accomplished, from basic Username/Password authentication against a database, to LDAP authentication, to biometrics, to the lowly Yubikey (which desperately needs more love). However, Microsoft only provides a small handful of authentication mechanisms, primarily based around Active Directory. For the RONet, we use a hybrid approach. We authenticate our users against Active Directory, but then verify that the AD user is authorized for our system. Admittedly, we could work this into the authorization step (discussed next week), but we don’t want to let unauthorized users do anything, so we simply deny them at authentication.

Unfortunately, this means we needed to write a custom MembershipProvider, which can be found in the System.Web.Security namespace. .NET does do quite a bit of the work for Membership for us, but the interface demanded by MembershipProvider is pretty heavy. For our purposes this was a fairly easy step; we manage very little of the user’s directory information and we can’t do anything with their password, so for us this class is pretty lightweight. Unfortunately, this does mean that we have a lot of methods that do nothing.

First, there are a series of required Properties (a sketch of how these look in our provider follows the list):

1. ApplicationName - The name of the Application you’re authenticating for. We don’t use this, but it’s set in the Web.config.
2. EnablePasswordReset - Notes if the user can reset their password through our system. They can’t.
3. EnablePasswordRetrieval - Notes if the user can retrieve their password through our system. If this is ever anything other than false, you’re doing something wrong. Passwords should always be hashed, and preferably salted before hashing. Never, ever store plain-text passwords.
4. MaxInvalidPasswordAttempts - The number of times a user can mess up their password before their account is locked out. We don’t use this.
5. MinRequiredNonAlphanumericCharacters - Pretty straightforward. Only used when accepting new passwords. Not useful for us.
6. MinRequiredPasswordLength - Same as above.
7. PasswordAttemptWindow - How long, in minutes, to lock out a user’s account when they mess up their password more than MaxInvalidPasswordAttempts times. We don’t actually lock out user accounts, so we don’t really use this. We do use logic in the login page to prevent brute-forcing through our login system.
8. PasswordFormat - Returns a MembershipPasswordFormat value noting how the password is stored. Again, never use the value Clear.
9. PasswordStrengthRegularExpression - A regular expression (stored as a string) to match a password against to make sure it’s acceptably strong. In my opinion, any password fitness check should probably be stronger than a simple RE, but it’s better than nothing.
10. RequiresQuestionAndAnswer - Requires that you pose a question and get an answer from the user before doing things like resetting their password.
11. RequiresUniqueEmail - Tells the provider to ensure that no duplicate e-mail addresses exist for clients.
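Since our provider hard-codes all of these, the overrides end up trivial. Here’s a minimal sketch of a few of them, assuming a provider class named CustomMembershipProvider (matching the Web.config at the end of this post); the specific values here are illustrative, not necessarily what we use:

    using System.Web.Security;

    public class CustomMembershipProvider : MembershipProvider
    {
        // We authenticate against AD, so password management is entirely disabled.
        public override bool EnablePasswordReset { get { return false; } }
        public override bool EnablePasswordRetrieval { get { return false; } }

        // Passwords live in Active Directory; we never see them, let alone store them.
        public override MembershipPasswordFormat PasswordFormat
        {
            get { return MembershipPasswordFormat.Hashed; }
        }

        // Lockout is handled by login-page logic rather than the provider,
        // so these values are effectively unused.
        public override int MaxInvalidPasswordAttempts { get { return 5; } }
        public override int PasswordAttemptWindow { get { return 10; } }

        // (The remaining properties and methods are stubbed out as described below.)
    }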

I’m not going to go through all the functions required in this class (go here for that) because, based on the Properties, the functionality of the class is pretty straightforward. This class provides functions to do a fair amount of user management, from locking/unlocking accounts, to making new accounts or removing old ones. It allows Passwords to be recovered, or reset, and provides information necessary to facilitate the kind of security questions that we see all over the web. Most importantly, this class provides the mechanism for you to define how the data store is checked for users’ information.

For our purposes, we don’t care about User Management at this time (eventually, I will likely implement the functions to place user records in our local data store, but for the time being, that data is managed by a Classic ASP app). We will likely never care about Password Management, as that is handled by Central IT and the University Active Directory. Because of this, the functions which handle that functionality either return false (where appropriate), or throw exceptions (where appropriate). Sometimes the exception is a ProviderException; other times, it’s an InvalidOperationException, though I’m on the fence about whether or not the IOEs should be replaced by ProviderExceptions.

It is important to note that the only NotImplementedExceptions I’ve left in the code are in methods that I fully intend to implement. Like the Mono project, I believe that a NotImplementedException is the same as a TODO item. That’s just kind of an aside, but it makes the most sense to me.
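For concreteness, here’s roughly what those stubs look like. This is a sketch, not our verbatim code; the division between returning false, throwing a ProviderException, and throwing a NotImplementedException follows the reasoning above:

    // ProviderException lives in System.Configuration.Provider.
    // Password management belongs to Central IT and the University Active
    // Directory, so these operations are simply not supported here.
    public override string ResetPassword(string username, string answer)
    {
        throw new ProviderException("Passwords are managed in Active Directory.");
    }

    public override bool ChangePassword(string username, string oldPassword, string newPassword)
    {
        // Returning false is appropriate here: the operation 'fails' by design.
        return false;
    }

    // User creation is still handled by the Classic ASP app, so this is a
    // genuine TODO, and gets a NotImplementedException.
    public override MembershipUser CreateUser(string username, string password,
        string email, string passwordQuestion, string passwordAnswer, bool isApproved,
        object providerUserKey, out MembershipCreateStatus status)
    {
        throw new NotImplementedException();
    }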

Moving on, we’ve got methods to get an individual user’s MembershipUser record. These are fairly straightforward, containing the MembershipProvider Name, the user’s username, unique key, email, Password restore question, and description, whether their account is locked out, creation/logon dates, etc. Lots of information, but nothing too sensitive. You can look up a MembershipUser based on username or unique key. Given an e-mail address, you can either get back the matching username, or a MembershipUserCollection of every matching user (which only makes sense if RequiresUniqueEmail is false, obviously).

And the MembershipUserCollection comes up often. Internally, it appears to be a List which can only contain MembershipUsers, though it wasn’t implemented with Generics. These methods are harder to implement, as they require the idea of ‘paging’, where a request is made for a certain page of records at a certain pageSize. You return to the caller the MembershipUserCollection based on the search for an e-mail or username (or a complete list of users), plus an out parameter which contains the total number of records in the entire data set.
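Here’s a sketch of what one of the paged methods can look like against a LINQ data store. The ToMembershipUser helper (mapping one of our Users rows onto a MembershipUser) and the Username column are hypothetical stand-ins for whatever your schema actually has:

    public override MembershipUserCollection GetAllUsers(int pageIndex, int pageSize,
        out int totalRecords)
    {
        var ronet = new RONetDataContext();

        // The out parameter reports the size of the whole data set, not the page.
        totalRecords = ronet.Users.Count();

        var page = new MembershipUserCollection();
        foreach (var user in ronet.Users
                                  .OrderBy(u => u.Username)
                                  .Skip(pageIndex * pageSize)
                                  .Take(pageSize))
        {
            // Hypothetical helper that builds a MembershipUser from our row.
            page.Add(ToMembershipUser(user));
        }
        return page;
    }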

One really nice thing about this MembershipProvider system is that you can allow certain options to be set by the User, if that makes sense. There is an Initialize() method on the provider class which takes a NameValueCollection of settings as an argument. With that, the user can specify the values they want for every property you provide. Now, for us, we’re the only ones who will be using this class, and the options aren’t really negotiable, so I’ve hard-coded them. But, this is a level of flexibility that is nice to have if you aren’t implementing such a special-purpose (albeit reusable within our unique problem space) MembershipProvider.
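If you do want to honor those settings, the pattern looks something like this; the values read here come from the attributes on the <add /> element in Web.config (the applicationName attribute below is a conventional example, not something our provider actually reads):

    // NameValueCollection lives in System.Collections.Specialized.
    private string _applicationName;

    public override void Initialize(string name, NameValueCollection config)
    {
        if (config == null)
            throw new ArgumentNullException("config");

        if (string.IsNullOrEmpty(name))
            name = "CustomMembershipProvider";

        base.Initialize(name, config);

        // Fall back to a default when the attribute isn't present in Web.config.
        _applicationName = config["applicationName"] ?? "RONet";
    }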

There is one thing I’d like to mention before I wrap this up, and that is some caveats I ran across in my authentication step. The method which validates a user’s credentials is the ValidateUser method, which takes in a Username and Password and returns a Boolean value to indicate if the user should be allowed in to the system. My first draft of the method looked like this:

    public override bool ValidateUser(string username, string password)
    {
        DirectoryEntry de;
        try
        {
            // Bind to Active Directory as the user; 'ad\' is the NETBIOS domain.
            de = new DirectoryEntry("LDAP://ad.example.com/cn=" + username + ",OU=Users,DC=ad,DC=example,DC=com",
                "ad\\" + username, password);
        }
        catch (Exception)
        {
            return false;
        }

        // Check our own database that the AD user is enabled for the RONet.
        var ronet = new RONetDataContext();
        return (from users in ronet.Users
                where users.EmployeeId == (int)de.Properties["employeeID"].Value
                select (users.Enabled && !users.Cancel)).SingleOrDefault();
    }

The code is fairly straightforward. We try to log in to the Active Directory server as the user, and recover that user’s specific DirectoryEntry. We then check our local database (using LINQ) by finding a user with the given EmployeeID (I’d rather do a cleaner integer comparison, but the code wasn’t originally written to do that, and the refactoring project hasn’t begun), and then make sure that the user is Enabled, and not Cancelled. The call to SingleOrDefault will cause the statement to return false if no matching records are found.

Of course, I did say this was my first draft, so the fact that it didn’t work is likely apparent. I already knew that LINQ was a lazy-loading API. For instance, if I were to drop the call to SingleOrDefault and save the query into a variable, I would get an IQueryable object, which I could choose to apply more constraints to. No call to SQL would be made until SingleOrDefault (or any method that returns an actual value, or set of values) is called. It’s fairly simple. It turns out, however, that DirectoryEntries are also lazy-loaders. If the username or password was invalid, this particular method would throw an exception at the point I tried to access the Properties, not at the initialization of the DirectoryEntry object.

The revised code I settled on simply moved the LINQ into the try-block, since an exception from the Database Query should also result in a false return value. While I’m not doing any logging in the catch block, I’m certainly not precluded from doing so.
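For reference, the revised method ends up looking roughly like this (the same code as above, with the query moved inside the try-block):

    public override bool ValidateUser(string username, string password)
    {
        try
        {
            var de = new DirectoryEntry("LDAP://ad.example.com/cn=" + username + ",OU=Users,DC=ad,DC=example,DC=com",
                "ad\\" + username, password);

            // Accessing Properties forces the actual bind, so bad
            // credentials throw here rather than at construction.
            var employeeId = (int)de.Properties["employeeID"].Value;

            var ronet = new RONetDataContext();
            return (from users in ronet.Users
                    where users.EmployeeId == employeeId
                    select (users.Enabled && !users.Cancel)).SingleOrDefault();
        }
        catch (Exception)
        {
            // Bad credentials, a missing directory entry, and database
            // failures all mean the same thing here: not authenticated.
            return false;
        }
    }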

That leaves only one final step to configuring a custom MembershipProvider, and that is defining that provider in the Web.config file for your site.

    <system.web>
      <membership defaultProvider="CustomMembershipProvider" userIsOnlineTimeWindow="20">
        <providers>
          <add
            name="CustomMembershipProvider"
            type="CustomMembershipProvider" />
        </providers>
      </membership>
    </system.web>

And that’s it. Just refer to the Membership object in the HttpContext, and you’ll now be using your custom MembershipProvider. The userIsOnlineTimeWindow sets how many minutes after their last activity a user is still considered ‘online’ (and yes, ASP.NET uses separate Session and Authentication cookies), but the point is that this provides seamless integration of existing ASP.NET controls with your custom User store. Logging a user in via FormsAuthentication looks exactly the same as if you were using any other authentication mechanism, allowing you to easily swap out MembershipProviders with little to no impact on the rest of your application, which is probably my favorite part.
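To make that last point concrete, here’s roughly what a Forms Authentication login handler looks like against the custom provider; the control names here are illustrative:

    protected void LoginButton_Click(object sender, EventArgs e)
    {
        // Membership routes this call through whichever provider Web.config
        // names, so this code doesn't know (or care) that AD sits behind it.
        if (Membership.ValidateUser(UserName.Text, Password.Text))
        {
            // Issue the authentication cookie and bounce back to the page the
            // user originally asked for. 'false' means no persistent cookie.
            FormsAuthentication.RedirectFromLoginPage(UserName.Text, false);
        }
        else
        {
            ErrorLabel.Text = "Invalid username or password.";
        }
    }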

Next time I’ll talk about the next step of validating the user: Authorization, including ways to get more fine-grained control of a user’s access than simply checking their broad role definitions.

Whole Food Adventures: Farmer's Markets

I’ve mentioned the Moscow Farmer’s Market here before, but while I’ve mentioned it, I’ve never really gone into detail about the value of the Farmer’s Market.

Most communities have some form of Farmer’s Market that runs for some period during the year. Here in the Moscow/Pullman area we’re lucky enough to have one that runs from May through October. Millwood Presbyterian Church in the Spokane Valley has their own market that runs a similar time frame. And from what I understand, that is merely one of several markets available in the Spokane area.

Farmer’s Markets serve several important roles. First, they provide a place for people who engage in Craft business to peddle their wares, like the Soap I’ve written about, but beyond that they provide a place for you to buy fruits and vegetables directly from growers. Often this food is certified organic, which frankly, I care little about, and even the stuff that isn’t certified often fits the definition of organic; the growers just don’t want to be bothered with the paperwork.

I’d suggest going to your local market. We go almost every week, picking up a certain selection of standard sundry fruits and vegetables, and occasionally getting special goods (like the 25 pounds of peaches we bought a few weeks back). What’s more, we’ve established who our favorite vendors are, so we know who to get to first, as their quality and prices are the best.

The Farmer’s Market allowed us to get our garden up and running at fairly low cost, and allows us to find high-quality produce at prices I wouldn’t dream of seeing at any grocery store. Plus, you’re buying local, which I don’t think anyone would argue is a bad thing.

Custom ASP.NET Authentication

Microsoft's ASP.NET was an attempt to bring the Stateful Application Model to the Stateless Web. For a certain (admittedly common) type of Application it works pretty well. Provided, of course, that you don't want to do something outside of the 'normal' ASP.NET model. This is, of course, part of why I dislike ASP.NET so, but it is the technology our office has chosen to pursue, and it has led to some interesting challenges as we try to bend the technology to fit our needs.

The primary suite of Web Applications that we maintain, is what we call the RONet. The architecture on the RONet was done right around ten years ago, and the entire architecture was written in classic ASP/VBScript. The architecture, which this post will not go into, is pretty clean, but it was clearly designed at a time when the Internet, and a Web Application, was not what it is today. The existing Architecture, while sufficient for what has already been written, is not sufficient for moving forward, so we have been working on putting in to place a new Architecture to replace the old one in a piecemeal fashion.

Regrettably, Microsoft did not provide any good mechanism to integrate Classic ASP with ASP.NET. This makes the task of a piecemeal replacement of Classic ASP with ASP.NET far more difficult than it needs to be. There are a few methods to work around this. In Classic ASP, Authentication is typically verified using a Session variable. One solution is to copy the correct Session variables from Classic ASP into ASP.NET. There are known methods (read: ugly hacks) to accomplish this.

The problem with reimplementing this sort of Session-based authentication in ASP.NET is that it's not terribly forward-thinking. ASP.NET implements its own Authentication model that exists beyond the session (that's not entirely fair, as the Authentication is tied to the Session Cookie, but bear with me), and by not taking advantage of the ASP.NET model, you lose access to things like the built-in Role- or Claims-based Security.

This post represents the beginning of my adventures in customizing the ASP.NET Authentication mechanism. There will be several more posts over the next few weeks regarding how this is accomplished. We will be using ASP.NET Forms Authentication, since Forms authentication is best suited for the work that I've been doing. The series will cover:

  1. Custom MembershipProvider to customize the Authentication Step
  2. Custom User Principal to customize Role-based Authentication
  3. Custom Claims-based Authentication

The next post in the series will be posted next week, where I'll be going over writing a custom MembershipProvider, and particularly focusing on some of the challenges that I ran in implementing our particular system.

Working Towards Open Science

We live in an interesting time in Science. In the past, conducting science required expensive equipment and immense amounts of time, which put scientific inquiry wholly out of reach of anyone outside of Academia. Several things have changed in society over the last twenty-five years as technology has grown, and they have led us to the precipice of a fundamental change in the way that scientific inquiry is conducted.

First, people have access to far more information than we’ve ever imagined before. This is due to the Internet, and Internet-based movements like Open Courseware, which make learning far easier, not to mention cheaper. Recently, the California University System sought to use Open Courseware to reduce the cost of Education, citing the high costs of textbooks. As someone who has, in some way or another, been tied to a University for the last seven years, I can certainly agree with that sentiment, but more importantly, it makes the information used in gaining a college education available to everyone. Is it a replacement for a College Education? Not likely, since much of the benefit of a college education is inherent in working with peers and with professors, but for some people it’s enough to help.

This has also been met with the recent movement towards opening up Scientific Journals. However, on this point, we still have a long way to go. A journal that my wife’s advisor has been published in on several occasions, Molecular Biology and Evolution, costs around US$141 per year to subscribe to as an individual (US$678 as an Institution), less if you’re a student or a postdoctoral researcher. Frankly, this is cheap. Very cheap. Another Journal where he has been published, the Journal of Morphology, is only available to Institutions, and costs a blistering US$5533 per year to subscribe.

Still, there has been movement here. Both the journals mentioned above allow for articles to be made downloadable via the Internet for free, and it seems that the MBE journal’s subscription is more to cover membership dues in the organization which publishes it. Things are changing: for many grants based on public funds (i.e., grants from the National Science Foundation), the papers resulting from the grant must be made freely available to the public. This is fantastic. In my opinion, Science needs to be conducted freely, and out in the open. First, because then it benefits the most people, and second, openly conducted science is the best mechanism to further drive scientific development. And while many scientists do interesting trade in publication credit and such, ultimately, I believe that most scientists agree. However, if you’re not lucky enough to be affiliated with a university, there is going to be a large percentage of journals which you simply can’t read, because historically, the cost of membership in a journal has been too high. And the scientists do not even see any of the money from the publication of their materials.

We have more access to the data, and to the results of science, than ever before, but there is more to the development of an open scientific infrastructure than simply the papers that result from scientific inquiry. The next step is Open Data. Luckily we’ve come a very long way in this respect as well, at least when it comes to research on Genetics. The National Center for Biotechnology Information offers a convenient place for scientists to upload genetic sequences they’ve made, in order to allow others to carry on work with that sequence information. GenBank, NCBI’s sequence database, contains tens of millions of sequence records from species ranging from humans to hagfish, porpoises to platypuses.
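To get a sense of how low the barrier to this data has gotten: NCBI exposes GenBank through simple HTTP queries via its E-utilities service. A minimal sketch in C# (the accession number is just an example record):

    using System;
    using System.Net;

    class FetchSequence
    {
        static void Main()
        {
            // NCBI's efetch service returns sequence records over plain HTTP;
            // rettype=fasta asks for the FASTA-formatted sequence.
            const string url = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
                + "?db=nucleotide&id=NM_000546&rettype=fasta&retmode=text";

            using (var client = new WebClient())
            {
                Console.WriteLine(client.DownloadString(url));
            }
        }
    }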

Why do projects like GenBank exist? Well, first, some scientists receive grants to sequence an animal’s genome. The methods they use for this are generally imprecise, and there is a lot of what’s known as “Shotgunning” involved, meaning that they throw enzymes at DNA and see what sticks. Some genes are easier to extract than others, and depending on the perceived value of the gene, the desire of a scientist to extract it changes. For instance, in Catherine’s lab, they feel that the 18S and the 28S ribosomal genes are particularly valuable for deep historical phylogenetics, and they’ve got some data to back that up. For that reason, the lab has developed protocols to extract those two genes in their entirety, something that many others do not feel is worthwhile. It probably doesn’t help that the protocols need to be modified depending on the species being extracted, and there are a limited number of researchers using these genes. Genetic research is still very much changing, and I lack the knowledge of biochemistry to speak any more to the difficulties inherent in the practice. The point is that the people doing the sequencing may not get everything, and they make their data available so that others can do the analysis.

So, with the data being made available, the need for complex lab equipment to perform certain types of analysis on genetic material has been greatly lessened. Certainly if you can do your own sequencing you have an advantage, but it’s no longer necessary. I suspect that many other fields have similar open data initiatives, but genetics happens to be a field today where the sheer quantity of data being produced, and needing to be produced is mind-boggling, so it makes a particularly good example.

This takes us to the third level of what’s required to conduct open science. First, we had the Open Knowledge. Then, we had Open Data. All that’s left now is Open Tools. It just so happens that we already have mechanisms in place to supply this, in the work done by the Free Software Foundation and the Open Source Initiative. In the field of Statistical Biology, there are a large number of software tools that are used by most researchers in the field, and save for one or two notable exceptions, this software is all Open Source, much of it copyleft.

With this, we now have all we need to do real science. We have the ability to learn. We have the data we need. We have the exact same tools used by the academics themselves. Science is doable by the layperson, in a way that it has not been before.

This is not to say that Academia is without merit. Most scientific inquiry will still be done in Academia. The scientific inquiry done in privately-held corporations will rarely be released to enrich all society. The funding necessary for certain types of inquiry will always be easier to get in academia. But those people who are interested, who have an itch to scratch, can now do their own research. On their own time. And they have the power to discover amazing things. Academia will always be the heart and soul of science. I never see that changing, but laypeople need to be able to benefit from, and contribute to, science. There are still battles to be fought, regarding truly free access to research and such, but we’ve come far, and I don’t see the movement toward Open Science slowing down any time soon.

We can still hasten its coming, though. Make sure that research done using public funds remains available to the public, not only the papers published as a result, but the data generated for the research. We should do what we can to make it not only easier, but also more valuable, to participate in this Open Scientific Community. It’s for all our benefit.

First Impressions: Google's Chrome

Today, as has been all over the news, including the NY Times, Google announced, and released the first Beta of, their new Open Source Browser, Chrome. Not only that, but they released a 36-page comic book about what they’re trying to do with Chrome, which frankly I thought was worth reading. In fact, I read the entire thing before I downloaded the new browser.

What Google is trying to do here is admirable, they’re trying to reinvent the way we interface with the web, and they’re doing it in the Open, with a license that can benefit proprietary browsers, and they’ve done it with that undeniable Open Source demand to “Show Us the Code!” Admittedly, Google is only distributing packages for Windows at this moment, but the browser will build on Linux and Mac OS X (it may not work 100% yet, though), and work is underway to fully support those platforms.

But, having installed Chrome on my Vista box at work, I’m impressed. My primary desire was to make sure that the WSU Catalog which I published last week still works. And it does, and my god is it fast. I mean, it was pretty fast before, but the data feels like it’s rendered the instant it comes off the wire, and the DOM manipulation that I do is nearly instantaneous; in fact, I might refactor a few things now that I’ve actually used a browser with a fast JavaScript engine. And really, it’s largely V8 that’s so impressive. I’ve never used a JavaScript engine this fast, and luckily its BSD license means that we finally have a solid JavaScript engine which could potentially become the JavaScript engine used by all the major browsers.

Not to say that the work on SpiderMonkey and the work WebKit has done isn’t good, but V8 appears to be a clear winner in the JavaScript performance game. Plus, the security model, where every tab is its own process, has clear benefits. I’ve got a problem on my Vista box where Firefox crashes any time it tries to initialize the latest Beta of Silverlight. This is particularly problematic as I’m working on a project with a Silverlight UI right now, but luckily it works in IE. No idea what’s wrong with Firefox. The great part is that even if such a problem existed with Chrome, only the tab which faulted would crash, not the whole damn browser. Fantastic.

Is Chrome going to replace my Firefox usage? Probably not. I’m a heavy extensions user, and I’m not sure I could survive without my Firebug, Foxmarks, and Greasemonkey. And even my wife uses Greasemonkey for a few sites that are considerably harder to use without the bit of JavaScript code I use Greasemonkey to inject. So, what I want is a version of Firefox that uses V8, and has process-level isolation between tabs.

Oh, and the ability to drag a tab between windows. That feature is awesome.

Chrome is impressive, and I’d suggest checking it out. For many users, it’s likely the better browser. And I really hope that the Firefox people take notice and integrate the good parts of Chrome in with their extension framework. Frankly, I don’t even care about the whole WebKit vs Gecko debate, they’re practically interchangeable in my mind. But there is no competing with V8.

Whole Food Adventures: Zucchini

In our Garden this year, we decided to plant a single Zucchini plant, which has become one of the most prolific producers on our little patch. Catherine’s ecstatic. I’m not a big fan of Zucchini, but when my wife brought this beast home, well, I had to do something with it.

But why is Zucchini worth growing? Well, it’s low calorie, and high in vitamin A and folate, as well as minerals like potassium and manganese. It’s easy to prepare in a variety of ways, and it contains a lot of water, which can be useful in a variety of preparations. I plan to share two preparations that I actually enjoy.

First is the old standby Zucchini bread, which is slightly sweet and makes for a tasty breakfast. You can get two loaves of bread from about two cups of shredded zucchini, and the loaves make great breakfasts or snacks. The recipe is simple:

2 cups shredded raw zucchini
3 eggs
1 3/4 cups sugar
1 cup vegetable oil
2 cups flour (I use Spelt)
1/4 tsp baking powder
2 tsp baking soda
2 tsp cinnamon
1 tsp salt
2 tsp vanilla
1 cup chopped walnuts (optional)

Strain and press the excess water out of the zucchini, and set it aside. Mix your eggs, sugar, and oil together in a mixing bowl, then stir in the dry goods until everything has just combined. Fold in the zucchini, and finish mixing, then place in two greased and floured 8 1/2 x 4 1/2-inch loaf pans.

Bake at 350 degrees Fahrenheit for about an hour, until a toothpick inserted in the center comes out almost completely clean.

The next preparation is meant more for snacking or as an appetizer. Fried Zucchini is healthy and tasty, and takes only a few minutes to prepare. First, dredge slices of zucchini through a couple of beaten eggs, then through breading (bread crumbs and Parmesan is very tasty), and set them aside. Bring a couple of inches of olive oil to temperature over a medium-high burner, slip the breaded zucchini in, and let it fry until golden brown.

Zucchini, like many squashes, is a healthy and versatile plant. Taking a little time to find a preparation that works for you is good in the long run. Plus, they are very, very easy plants to grow, and a great way to get a start in your own garden.