April 2008 Archives

Linux on the Desktop Requires Funding

The Linux Foundation was criticized pretty heavily earlier this month, after this year’s Collaboration Summit, for not supporting Linux on the Desktop. Admittedly, the core supporters of the Foundation (IBM, Red Hat, even HP and Dell) have more interest in Linux on the server than on the desktop; after all, the core of these companies’ business is on the server.

However, Novell is a major contributor, and they have put an enormous amount of money into GNOME on the desktop. The Foundation also hosts a desktop standards workgroup which has been working hard to standardize the behavior of the Linux desktop, improving application interoperability even between KDE and GNOME. These days, I can run a handful of KDE apps that lack good GNOME equivalents (like Scribus) and still have copy and paste and session support. The work of the Foundation has helped Desktop Linux immensely over the years.

And now they’ve turned their eyes towards device drivers. Historically, this has been a problem area for Linux, because devices can take a lot of work to get working, and many companies simply aren’t willing to take the time to support their devices on Linux, given a driver interface that is always potentially changing and the potential legal issues involved in linking binary code against the GPL’d kernel.

So, the Foundation is suggesting that companies release their drivers as Open Source, which will enable the drivers to continue to work as the kernel evolves, as well as potentially opening the door to improvements to those drivers. Traditionally, most companies have avoided this, because they were attempting to protect the intellectual property wrapped up in their designs, and by releasing the drivers, they could be giving up control of that IP. Eric Raymond wrote years ago about the fallaciousness of this argument, but it wasn’t until recently that companies like AMD started releasing documentation for their ATI cards, with Intel doing the same for their GPUs.

Frankly, I’m glad about that. Ever since we bought Catherine’s Dell Inspiron 1420N, which Dell would only sell us with an Intel-based chipset, I’ve been hoping to see improvements to those drivers. Compiz doesn’t work, and any games depending on OpenGL are basically unplayable. Admittedly, the laptop is primarily a work machine, but Catherine has been disappointed with that particular failing of the drivers. I’m sure it will improve, and if I could find a place to donate some funds to the project, I likely would.

Ultimately, this is where Linux on the Desktop is going to continue to struggle. Most Linux users simply aren’t willing to pay for software or drivers. Admittedly, I’m always looking for the FLOSS alternative, and yet I’ve very rarely offered financial support for a project that I’ve been using. There has only been one area where I’ve routinely paid for Linux software, and that is games. If nothing else, if we can get more games on Linux, I’m sure it will become easier to attract casual users to Linux, which will place further pressure upon the device manufacturers to support the platform. In addition, with more users, we’re more likely to see other software on the platform as well.

However, a lot of companies aren’t willing to risk releasing their products on what they feel is an untested platform. Smaller companies, like Introversion and Basilisk Games, are typically more willing, as they need to reach as broad an audience as possible.

Now, Runesoft, a game company with several Linux releases under its belt, uses an interesting mechanism to fund their ports. They require that a certain number of preorders exist for a game before they do the port. The current game is Jack Keane, which they’ve already committed to porting to the Mac. Considering Ryan Gordon’s comments in his FLOSS Weekly interview claiming that, for UT2004 at least, there were more Linux than Mac clients, this is an interesting decision, but the difficulty of judging the Linux install-base (due to the lack of a ‘sales’ model) makes the decision understandable.

This is a great model. Runesoft can guarantee that they’ll have their time paid for before they commit to the port, and we can help finance this and future projects. If we want Linux to survive on the Desktop, we as users need to be willing to financially support the projects. Jeff Atwood pushes this same position. It just makes sense. Software takes time to create. Yes, most Open Source software is created to scratch an itch of the developer, but many successful projects reach a point where the developer begins adding features for others, and not just themselves. At that point, either the project suffers because the developer can’t afford to spend the time necessary to add the features that everyone wants, or the users have to find a way to fund that time.

Support Software. Support your Platform. I pay for commercial Linux software that I plan to use. I’m planning to budget a bit of money to donate to open source projects each month, so that they can continue to thrive. Without money behind the platform, Linux can’t advance. And if we want Linux to advance on the desktop, we, as desktop users, need to be willing to put forward some cash.

Secure Applications Programming

Neosmart Technologies recently posted a diatribe about Windows Vista’s UAC subsystem, and how it basically forced them to rewrite their iReboot application. iReboot is an interesting Windows application which allows you to set the bootloader to load whichever OS you want on the next reboot, rather than waiting for the system to reboot and quickly interrupting the bootloader to override its default selection. Thinking back to the days when I still dual-booted, and considering the increasing popularity of Intel-based Macintoshes, I can definitely see why this application has begun to catch on.

I recently read an editorial by the author of iReboot regarding the challenges he faced making the application work correctly on Vista. In short, he had to split the application into two parts: a system-level service which does the modification of the boot loader, and a user-mode application which the user uses to send commands to the service. That seems pretty reasonable to me; Unix systems have done this for decades, and the Windows kernel has always used a similar communication mechanism as well.
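
To make the split concrete, here is a minimal sketch of the pattern in Python. This is not how iReboot is actually implemented; the port number, the command names, and the set_default_boot_entry helper are all hypothetical, and a real Windows service would use the service APIs and an authenticated named pipe rather than a bare local TCP socket.

import socket

ALLOWED_TARGETS = {"windows", "linux"}

def set_default_boot_entry(target):
    # Placeholder for the one privileged operation the service performs.
    print("would set the default boot entry to", target)

def run_service(port=9123):
    # Privileged half: runs as a system service and exposes one narrow command.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        request = conn.recv(256).decode("utf-8", "replace").strip()
        verb, _, target = request.partition(" ")
        if verb == "set-default" and target in ALLOWED_TARGETS:
            set_default_boot_entry(target)
            conn.sendall(b"OK\n")
        else:
            conn.sendall(b"ERROR: unknown command\n")
        conn.close()

def request_reboot_target(target, port=9123):
    # Unprivileged half: the user-facing app just sends a command and reads the reply.
    conn = socket.create_connection(("127.0.0.1", port))
    conn.sendall(("set-default %s\n" % target).encode("utf-8"))
    reply = conn.recv(256).decode("utf-8", "replace").strip()
    conn.close()
    return reply

The point of the exercise is that the elevated code only ever sees a two-word command that it validates itself, which is exactly the kind of narrow, testable interface discussed below.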

His basic argument is that the application must be installed as an Administrator, and therefore it should be able to start as an Administrator without any problems. However, Vista’s UAC prompts the user every time the app starts up to verify that the application is allowed to do what it’s trying to do. Now, ignoring the annoyance factor of UAC, which Microsoft evidently designed in on purpose, it is perfectly reasonable for Vista to do this. What Mr. Al-Qudsi seems to completely fail to understand about application security is that the operating system grants permissions at the user level, not the application level. It is up to the user to, if necessary, elevate the permissions of an application to accomplish a system-level administration task.

If iReboot were allowed to elevate its privileges automatically, then any application run by that user could elevate its privileges as well. In the comments, someone mentioned that perhaps a “RunElevated” registry key could be added, but that seems to me to be terribly prone to abuse. Some applications need to be installed with Admin privileges but shouldn’t require those privileges to run, and with this registry key, ISVs could easily bypass the built-in security model in Vista.

Ultimately, it comes down to a security trade-off. By separating out the portion of the application which performs privileged operations, it becomes easier to ensure that the application does what it claims to do, and it is easier to test those code paths and ensure proper behavior. It ensures that the inputs to the elevated code are better defined through an exposed interface, and encourages stronger design from the outset. It is a bit harder for the programmer, as their code now depends on inter-process communication libraries and faces potential threading issues. But at the end of the day, it reduces risk: it reduces the access of an unprivileged user to privileged code, it reduces the necessity for a user to have unnecessary access, and it better encapsulates the roles each part of the software plays.

As a former administrator of a Windows 2003 and XP-based network, I was constantly frustrated by the number of tasks that my users struggled to complete because they simply had user permissions, and by how many pieces of software would fail to run correctly under the reduced permission set of most users. In many ways, ISVs like Neosmart are responsible for the traditionally poor state of security on the Windows platform, due to their refusal to implement their software correctly, and their complaints regarding what they need to do for Vista compatibility.

Even in my current office, our helpdesk people struggle daily with legacy software that doesn’t work right (or at all) on Vista because the developers depended on bad behavior in XP and older OSes, and they’ve refused to correct their software in a timely manner. Vista isn’t going away. Windows 7 may be coming sooner than anyone expected, but it will build upon the Vista core, rather than stepping back to XP. There has been a paradigm shift at Microsoft, and this time it was definitely for the better. ISVs need to get on board, or prepare to be made irrelevant.

Beginning Adventures in Whole Foods

This weekend, Catherine and I began trying to revisit the way we view and think about food. Now, I’m a big guy; I weigh at least 50 pounds more than I’d like, but I’ve always attributed that to my mostly sedentary lifestyle. Admittedly, I have been known to over-indulge from time to time, but I’ve been getting far better at fighting those cravings and at least trying to eat healthy. The sedentary lifestyle hasn’t changed much, but we’re working on that now.

Catherine’s mother had purchased a book, Nourishing Traditions: The Cookbook That Challenges Politically Correct Nutrition and the Diet Dictocrats by Sally Fallon and Mary G. Enig, a journalist and a PhD biochemist, who argue (like most nutritionists) that the health problems and obesity in this country can be traced back to our food. However, they argue that the things we’re told are bad for us by the “dictocrats” (I hate that word) typically aren’t the things that are actually bad for us.

The argument makes a lot of sense. We have more cases of heart disease, Alzheimer’s, depression, cancer, and a host of other medical problems. Yes, there are more people, and we have better technology than we’ve had in the past to detect such conditions, but the curve seems far too accelerated, as if we’re approaching a point where we all die of heart attacks by the time we’re forty-five.

It doesn’t have to be this way, though. Dr. Gregory House makes a point of fully investigating the environment in every patient’s case, as 99% of the time a strange disease is somehow related to the environment in which the person has spent their time. So, what has changed in our collective environment? What we eat has changed immensely, even over the last sixty years. Our food is engineered to produce a more voluminous, consistent product.

Despite all of this, little effort has been spent on making food healthier. We keep moving toward more and more processed food. High fructose corn syrup is in everything; margarine, which most people my age grew up with, has been linked to a multitude of health issues; and the highly-processed corn oils used for frying at most restaurants are increasingly being linked to health problems like high cholesterol as well.

The book argues that the way our grandparents or great-grandparents grew up eating is in fact the healthiest way for us to eat. What does this mean? Processed foods, like margarine, canola oil, and corn syrup, should be eliminated. Natural foods, like milk, butter, and grass-fed beef, are in. In fact, lots of vegetables can’t be properly digested without butter, because they contain fat-soluble vitamins that our bodies can’t absorb any other way. As a long-time believer in butter, I was pretty glad to read that.

We did discover that the majority of the recipes in the book tend to build off of a few basic techniques, primarily requiring fresh whey, the protein-rich dairy byproduct created when making cheese. In an effort to make some whey of our own, we’ve begun the process of making some homemade yogurt, which was a simple matter of mixing whole milk with a bit of fresh plain yogurt which contains live cultures. Come tomorrow, we should have some pretty excellent yogurt. The rest of the plain yogurt we purchased is being wrapped in a tea-towel and left in a strainer, so that it drips whey and turns into a fresh cream cheese. Next week, we’ll do the cream-cheese with our homemade yogurt.

In addition, we’ve bought a buttermilk culture and a kombucha culture from an Etsy store, in an effort to replace some of what we drink with healthier alternatives.

The book makes a lot of sense to me, and I’d suggest that anyone interested in the ‘whole foods’ concept pick this book up first. It’s full of good hints, and plenty of discussion about why modern foods are bad for us, including references to real medical and scientific journals. Part of eating right may well involve letting go of the abundance culture we live in, where anything we want is almost always available, but I think that not only will the quality of our food increase, but so will our overall health. And that alone is enough reason to change.

KSplice: Runtime Kernel Patching

Recently, Jeffrey Brian Arnold of MIT completed a thesis on a software project entitled Ksplice. Ksplice seeks to solve the problem of systems administrators who will sometimes wait weeks to install security patches because they don’t want the downtime of a reboot. Downtime is inconvenient, but there were 50 security-only kernel patches released over the last 32 months, and the majority of those were potential privilege-escalation bugs. These bugs should be the first to be patched, but for many systems people, security has traditionally taken a backseat to guaranteeing uptime.

There have been other projects like Ksplice in the past, but they’ve all tried to solve the very general problem of kernel patching. Ksplice focuses on the problem of patches released purely for security reasons, ignoring the case of general patches from one version to the next. This is a powerful difference, as the majority of security patches are small, and rarely do they touch major data structures. Since Ksplice ignores the case of long-lived data structures, it can’t be used on all patches, but it did work for 42 of the 50 x86 non-DoS security-related patches released between 2005 and 2007.

The theory behind Ksplice is pretty simple: determine which functions have changed in the patch, load the new versions of the changed functions, and make sure that the new versions execute instead of the old. It does this by inserting a ‘trampoline’, or a call to the new version of the function, at the beginning of the old function. This does lead to some additional memory overhead, as both versions of the function will be in memory, but the difference is minimal, and the benefit of having a patched kernel while you wait for a convenient time to reboot is pretty impressive.
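
As a loose analogy only (Ksplice does this by overwriting machine code inside the running kernel, not by rebinding names), here is a toy sketch in Python of the “keep the old function around, but route new calls to the replacement” idea:

import sys

def old_checksum(data):
    # The buggy original; it stays loaded, just like the old kernel function.
    return sum(data) % 255

def new_checksum(data):
    # The fixed replacement from the 'patch'.
    return sum(data) % 256

def install_trampoline(module, name, replacement):
    # Rebind the name so future lookups land in the replacement, and hand
    # back the original so it could be restored or inspected later.
    original = getattr(module, name)
    setattr(module, name, replacement)
    return original

this_module = sys.modules[__name__]
saved = install_trampoline(this_module, "old_checksum", new_checksum)
print(old_checksum([250, 250]))   # 244: the call was routed to new_checksum

The analogy breaks down for callers that grabbed a direct reference to the old function earlier, which is exactly why Ksplice patches the old function’s first instructions in place rather than trying to fix up every call site.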

The project is also much simpler (and thus more easily portable) than many other projects. It doesn’t try to parse the C code, instead using the GNU Binary File Descriptor (BFD) library to examine the binary structure of the compiled object code, using this compiled version to examine each function in memory. As a result, Ksplice knows what to replace without having to make any assumptions about what the compiler might do when building the software for real.

Regrettably, the research may be for naught, as it appears that Microsoft may have patented the technique. Now, this patent would probably be pretty easy to overturn, as there is no doubt an immense amount of prior art. Hell, the kernel module support in the Linux kernel looks like it could be covered by this patent, and that’s been around since well before 2002. Okay, maybe not exactly, but patenting trampoline functions and binary analysis is pretty ridiculous.

So as not to go into a rant about software patents at this time, I’ll leave it at this: Microsoft’s patent means that no company will touch Ksplice at this time. The patent appears to be one which could be successfully overturned in court, but that takes time and money. And no one has more time and money than Microsoft.

KSplice is a good technology. It solves a good problem for systems administrators, and it appears to do so in a pretty convenient manner. We’ll see how the patent situation plays out, though I don’t expect it’ll turn out well.

From a security standpoint, there is a potential risk: if an attacker had already taken advantage of a privilege-escalation flaw, they could inject almost any code they wanted into the running kernel. However, they can already do this via kernel module loading, which is how Ksplice operates, so virtually no security is lost using this technology, and it does make it much easier to keep your kernel patched without worrying about reboots. Speaking of which, I installed a new kernel a few days ago, and I really ought to go reboot…

New CAPTCHA Technology?


The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is supposed to be a method of determining whether an entity connecting to a service (typically via the web) is a human or a computer. Traditionally, this has been accomplished by generating ‘words’ and rendering them as either an image or an audio file, such that a person could easily recreate the intended text, but a program would struggle.
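
To make that concrete, here’s a minimal sketch of the image half of the idea in Python, using the Pillow imaging library; the word list, image size, and output path are just placeholders:

import random
from PIL import Image, ImageDraw, ImageFont

def make_captcha(word, path="captcha.png"):
    # Render the challenge word onto a plain white image. With no distortion
    # or noise, this is the 'naive' style of CAPTCHA that off-the-shelf OCR
    # has little trouble reading.
    img = Image.new("RGB", (200, 70), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()   # a real site would use a larger TTF font
    draw.text((40, 25), word, fill="black", font=font)
    img.save(path)
    return word                        # the answer to check the user's response against

challenge = make_captcha(random.choice(["parsnip", "quixotic", "lighthouse"]))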

CAPTCHA began simply, with nothing to obscure the data, and was quickly beaten by existing OCR technologies. Some people, like Coding Horror, still use these “naive” CAPTCHAs, since what they are protecting simply isn’t important. As time went on, and CAPTCHA began to be used on more and more sites, including financial and e-commerce sites, different techniques for adding ‘noise’ to the image were developed. However, as with all things, money talked, and people figured out how to heuristically break almost any CAPTCHA. TicketMaster works hard trying to keep people from buying too many tickets (in an effort to curtail scalping), but there is so much money to be made reselling concert and show tickets that TicketMaster’s CAPTCHA only stops the most amateurish scalpers.

Even Google, whose CAPTCHA was considered one of the strongest, was broken recently, causing hordes of fake Gmail accounts to be created, and leading some people to blacklist mail originating from Gmail as spam. The amount of money to be made by breaking CAPTCHA seems to far exceed the amount to be saved by doing it right in the first place. All anyone seems to know now is that CAPTCHA, as it exists today, is virtually pointless.

For a while, I was trying to set up reCAPTCHA support on this blog, to try to cut down on spam comments (any unauthenticated commenters need to be approved by me right now, and the filters let a handful of spam comments a day through into this holding pen). I didn’t do this because I thought reCAPTCHA was unbreakable; I know that it isn’t. The idea was that by presenting a slightly less open target, people would hopefully stay away. Unfortunately, I had trouble with the Movable Type plugin that was available, and never really took the time to correct the errors. Still, reCAPTCHA was a cool idea, using the humans who solve its CAPTCHAs to solve a difficult OCR problem (which, incidentally, would also be a great method to break other CAPTCHAs). It was simple, but it did enough for my basic purposes.

Ultimately, any site that depends on its CAPTCHAs for real security needs something far better than what currently exists, and there are several inviting technologies in this space. One of my student loan providers uses a login CAPTCHA where I had to select an image from several dozen images and enter a word that I could remember in relation to that image. Then, they show me four images on login and ask me to select my image and enter the chosen word. It’s not safe from shoulder-surfing, but nothing really is, and it’s far less likely to be guessed via remote attack or bot.

Most interesting in the world of CAPTCHA research is the work being done at Penn State University by Professor James Wang on a system called IMAGINATION. It’s a two-stage CAPTCHA focused on images. The first challenge requires you to click on the center of any photo in a collage of photos that may overlap, followed by identifying a fairly complex image by choosing a word from a list.

These tasks are pretty trivial for a human, but there is enough noise on the images, with really messed up colors, to make automated identification harder. Is it perfect? No, it will someday be broken, but it’s a strong step forward. Interestingly enough, many attempts to crack CAPTCHA implement some fairly advanced machine learning algorithms, and that will become far more important as CAPTCHA evolves. Perhaps the same element of society which profits from breaking CAPTCHA may become the leader in AI research in the future?

Vulnerability Analysis: Flash and the ActionScript VM

Mark Dowd, a researcher with IBM’s Internet Security Systems group, recently discovered an interesting exploit in Adobe’s Flash plugin. The exploit is an interesting modification of a standard null-pointer attack, and it reveals some pointed problems with the software’s design, mistakes that, frankly, anyone could have made.

The basis of the exploit is simple. With a specially crafted SWF file, an attacker can execute arbitrary code and take control of a system. Due to the way the plugin is compiled, the same exploit works on both IE and Firefox, and even Windows Vista, which supports a compilation flag which would protect against this error, is susceptible to this bug.

The nature of the bug is the ability to rewrite a specially selected patch of memory, due to a failure of proper error checking. Unfortunately, this particular error-checking failure is an incredibly common one in C programming. Mr. Dowd’s analysis of the exploit includes plenty of assembly dumps and code interpretation to explain exactly what is going on in the ActionScript VM to allow this exploit to occur.

The exploit comes from setting the SceneCount variable in the DefineSceneAndFrameLabelData structure to a negative value. In truth, the structure definition (taken from the file format specification) does define this field as a 32-bit unsigned integer, exactly as it should be. However, the analysis shows that the function which begins processing this structure makes a ‘jump if greater than zero’ comparison, which treats the argument as a signed integer, so any value of 0x80000000 or greater slips past the check, since interpreted as a signed integer it is less than zero, even though such a value should be rejected as invalid.
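
To illustrate just the signed/unsigned confusion (the value here is made up, and this is obviously not the VM’s actual code), here is what happens when a large unsigned 32-bit SceneCount is reinterpreted as a signed integer, which is effectively what a signed comparison does to it:

import struct

scene_count = 0x80000001                  # attacker-supplied 'SceneCount'

# Pack the value as an unsigned 32-bit integer, then unpack the same bytes
# as a signed 32-bit integer.
raw = struct.pack("<I", scene_count)
as_signed, = struct.unpack("<i", raw)

print(hex(scene_count))   # 0x80000001
print(as_signed)          # -2147483647: negative, so a signed 'greater than'
                          # sanity check never flags it as too large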

Not too much later, a call to malloc is made, which fails due to the invalid data, but there is never a check to make sure that the pointer returned from malloc is valid. What this means is that the attacker is now free to overwrite memory above 0x80000000. Due to the nature of the VM, there were some restrictions on the data that could be overwritten. The target memory area holds a huge number of functions that could cause the plugin (and thus the browser) to crash. In addition, the overwrite is based on a specific address formula, making the selection of the location to overwrite that much more difficult.

Despite all this, it was still possible to exploit the flaw and take control of the system. Luckily, Adobe has already patched this hole, but if you haven’t updated your Flash player in the last week, I suggest you do so post-haste.

Book: C# 3.0 Pocket Reference, Second Edition

At Boise Code Camp last month, I was able to get a free copy of O’Reilly’s C# 3.0 Pocket Reference, Second Edition, written by Joseph Albahari and Ben Albahari. Though O’Reilly publishes the book in its pocket reference series, at 230 pages it hardly counts as pocket-sized. Still, the book has been a staple in my briefcase ever since.

The book doesn’t only focus on C# 3.0 features; it serves more as a comprehensive guide to the C# language, from standard topics like properties to the fairly large number of preprocessor directives available. The 3.0 features might receive a bit more coverage than the older ones, but even if you’d never touched C# before today, this book would serve as an excellent desktop companion to the language.

Unfortunately, since it is a general overview, it may not contain all the information you require to solve a problem. For instance, its section on Language Integrated Query (LINQ) is pretty short, which pretty much begs you, as the reader, to find a more in-depth reference if you’re trying to get all you can out of LINQ (and if you’re a web developer, you really should be). Incidentally, the authors of this reference wrote another pocket guide on just LINQ, which weighs in at 161 pages, so I suppose I should be glad LINQ got the coverage it did in this more general guide.

I’ve been a fan of C# since I first started looking at the language (which wasn’t until the Mono project was well under way). It makes some excellent compromises between C/C++ and Java, and is fairly powerful. These days I’m tending more and more toward dynamic languages like Perl and Python, but C# is still generally a pleasure to program in, except for the ever-expanding and increasingly obtuse .NET Framework that Microsoft keeps churning out extensions to. To be fair to Microsoft, the .NET programming I’ve been doing lately has been against SQL Server and SharePoint, and those APIs could have been much better designed than they are.

If you do anything in C#, this reference would be an excellent one to keep on your desk. It’s clear and concise, with clean code examples. The authors do an excellent job of organizing the information and presenting it in a manner that lets any experienced programmer easily follow what’s going on, and depending on your level of programming experience, this book can even serve to teach you the language.

.NET is a solid technology, and C# is the flagship language for the technology. If you’re doing any work in .NET, or simply want to learn the language for your own enrichment, the C# 3.0 Pocket Reference will be excellent to keep around.

WTF Microsoft?

You know you have way too much money when you can produce a music video as an internal joke.

Seriously, watch the Video. It’s clearly all new video. Bruce Springsteen, I’m so fucking sorry on your behalf. Word is Microsoft made the video on purpose, as a way to help people in the company not take themselves so seriously.

While I still believe that Vista and the rest of Microsoft’s ecosystem kind of suck, I will say that I want to thank Microsoft for some good laughs on this one. This video is so embarrassingly funny, and I shudder to think of the amount of money Microsoft probably spent producing it.

This Week In Security Theater

Autocomplete. It’s a convenient evil, common and popular among today’s browsers. The ability of Autocomplete to store certain bits of form data for return visits can be greatly convenient when filling out the variety of forms on the Internet today. However, in order for Autocomplete to work, the data in question must be saved somewhere, and that somewhere needs to be accessible to the browser. The claim many people make is that on Mac OS X, Safari stores its autocomplete data in the Keychain, which is saved to disk using strong encryption (though the Keychain password is always in memory, and susceptible to certain types of DMA attacks). Internet Explorer and Firefox are not designed to integrate tightly with such software, so while they may obfuscate their representations of saved data, the data is still easy to recover.

And this is the inherent evil in Autocomplete. The feature can and will save immense amounts of personally identifiable information, from usernames and passwords, to addresses, credit card numbers, etc. Credit Cards are, of course, the issue where most people get up in arms about Autocomplete.

At some point, the browser developers decided that it was the responsibility of web developers to tell them which fields were not supposed to use the Autocomplete feature. Some people argue that it is important for the web developer to do this, for the case where the user is inputting ‘protected’ information into a public terminal, since such terminals, unfortunately, rarely have autocomplete disabled. Frankly, if it’s a public terminal, you shouldn’t be putting any information you care about keeping confidential into it. What part of public do people fail to understand?

This issue was initially raised on SomethingAwful.com, where a forums user requested that the site stop offering to helpfully fill in his credit card information in the forums store. The near-immediate response from the (non-technical) lead of the forums? That they’d gladly disable their psychic ability to guess your credit card number. Even without fragmaster’s comment, an argument ensued about whether or not this was the responsibility of the site. My argument is that a site’s legal responsibility to protect user data extends only as far as the data link used to connect the client to the server.

The very nature of the web demands this: we, as web developers, have no control over the client being used to connect to our system. Attempts to limit which client browsers can connect can be thwarted with relative ease. It is our responsibility to ensure that we’re not serving up confidential information to the wrong people, to ensure that the data in our possession is properly secured, and to ensure that the links on which that data travels are properly secured as well.

The PCI Data Security Standard, maintained by a credit-card industry council, requires that stored cardholder data be secured in a reasonable fashion, utilizing encryption and access restrictions and implementing a data retention/disposal policy for cardholder data. The PCI DSS is not a law, however, so while not following its guidelines can cause the merchant providers to revoke a merchant’s ability to use their service, there are separate sets of laws (and legal requirements) regarding data breaches.

The question becomes: who is liable for the security of this data? I argue that it is the cardholder who is ultimately responsible for who they provide their card information to. Once that card information has been provided, it is the responsibility of the storing party to secure it. As a web developer, I am not responsible for the security of your browser. I am responsible for securing my databases. I am responsible for protecting the SSL certificate that we use to communicate. I am responsible for any content served to you from my servers. But I am not responsible for ensuring that your browser or operating system is secure.

If an attacker is able to access your autocomplete files, then they can very likely mount another form of attack that circumvents the issue of autocomplete altogether, be it by sniffing traffic for web forms with credit card numbers, logging keystrokes, or any other mechanism. As a web developer, I can’t be held liable if your credit card number is intercepted due to malware installed on your machine, and in the case of Autocomplete, the browser is behaving in a malware-like manner. If it is any software vendor’s responsibility to ensure the security of data subject to autocomplete, it is the browser vendor’s responsibility.

Admittedly, Internet Explorer, Firefox, and Safari all support an extension to the <input> and <form> tags which can disable autocomplete. However, this is a non-standard extension, which causes failed validations and, frankly, disguises the risk inherent in autocomplete. A more standards-compliant mechanism would be to use JavaScript to set the autocomplete property to “off” (the value browsers actually recognize), like the following:

// Disables autocomplete on the Google search box
document.forms[0].q.autocomplete = "off";

Alternatively, registering an “onfocus” event handler on the input field can accomplish the same thing. For browsers that don’t support the option, nothing happens, and for browsers that do, this is a standards-compliant means to disable autocomplete for the given field.

It’s a relatively easy thing to do, adding only a small amount of bandwidth to each page load, on pages that aren’t accessed as frequently as many others. So, why do I feel it’s such a waste? Because it makes people feel that an inherently insecure feature is safer than it really is. Plus, my responsibility to protect their data should extend no further than my ability to control that data. Users can choose to override this option; a user’s credit card number might be stolen by someone looking over their shoulder as they enter it; someone could be watching the desktop through a VNC-like service.

This has not, to the best of my knowledge, been tested in court. Many organizations are going to cover their own asses by simply adding code to disable this, which may not be wholly unreasonable to do, though I’d suggest going the JavaScript route, for compatibility’s sake.

The question of who is responsible if a credit card is stolen from autocomplete data is a complicated one. Is it the application vendor? I feel that it obviously is not, since autocomplete is a feature of the browser, not the application. Is it the browser vendor? Probably not; they do ask the user whether they want autocomplete, though they make it easy for the user to zip by and enable it without thinking too hard about it. So, who’s left? The person who owns the computer from which the compromised data was recovered? If it’s a public terminal, whoever configured it should never have allowed autocomplete to be active. However, the person still expressed an implicit trust in the owner of the system when they put their credit card information into it. On a public terminal this is always dangerous. Using a computer owned by someone else always implies that you trust that person enough that they won’t misuse it.

Users need to be aware of the risks. Using autocomplete has the risk of revealing personal, confidential information. It is up to the user to ensure that they only entrust the data to people whom they feel are trustworthy. Once the seller has secured their database and provided encryption for all communication channels over which the card number will travel, then the merchant has done all that they can, and any autocomplete magic that they might attempt is doing nothing more than masking a real risk, or disabling a feature that some users might actually want.

Clinton Smells Blood, Tears into Obama

Hillary Clinton, smelling blood after Obama’s recent comments about working-class America, has wasted no time tearing into her opponent, calling him ‘elitist’. Senator Clinton, I’d like you to meet the Pot. Pot, meet Senator Clinton.

That aside, I can hardly blame Hillary for taking advantage of such a tactical error on Obama’s part. “And it’s not surprising then they get bitter, they cling to guns or religion or antipathy to people who aren’t like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations.”

Sure, Barack, Middle-Class America may be bitter, but they don’t hide behind gun control and religion to make up for their bitterness. These things are important to middle America because they are as much a part of the culture of white workers as anything else in this country. These are people whose families have spent generations working hard, usually earning a subsistence wage. They require guns to hunt, initially for survival, later as a rite of passage. It’s a part of who these people are, just as their faith in God is.

If Working-Class America is bitter, Mr. Obama, it’s because they’ve spent generations struggling to build something for their families. They’ve done this while battling the wealthy, who more often than not would seek to rise above by taking advantage of those they saw beneath them. This fight, and this ecosystem, led to the creation of labor unions, which helped the men of that time to ensure stable wages and satisfactory living conditions.

But then the ecosystem changed. The companies have learned that taking advantage of employees can cost them more in the long run, the unions have become the enemy of the people they represent (though most of those they represent don’t recognize it), and the government, largely through the policies of the Democratic Party, has begun taking ever larger amounts to support welfare programs.

Are all these programs bad? Certainly not. The opportunity for a hand up is something that I think everyone would be glad to receive. The problem is that we see countless examples of people who survive on these programs, making no effort to earn beyond them. The system is structured such that some people actually make more money on social welfare than off it. We hear stories of drug dealers collecting welfare while selling drugs, of families having more children simply to collect more benefits, of illegal immigrants taking advantage of social programs, sometimes without paying their share of the taxes.

Working-Class America doesn’t hide behind these issues. These issues are important to them because they have a direct effect on their way of life. These issues are a part of who these people are, and when politicians come and tell them that the things that are important to them, like their faith, and their pastimes like hunting (and the guns that go with it), are wrong and bad, then these people are going to get upset.

Democrats tend not to get the working-class vote, for many of the reasons that Senator Obama claimed they hide behind in bitterness. Such a comment, while quite possibly elitist, also fails to recognize the truth of life for the working class. And Hillary Clinton is trying to capitalize on Obama’s poor phrasing with her own special brand of lies and misinformation.

“She is running around talking about how this is an insult to sportsman, how she values the second amendment. She’s talking like she’s Annie Oakley,” Obama said, invoking the famed female sharpshooter immortalized in the musical “Annie Get Your Gun.”

Her record agrees with Obama’s criticism. Hillary Clinton is no friend of the working class. She panders to large corporate interests, and offers funds to the disadvantaged with little incentive for them to take control of their own lives. She is a carpetbagger, only interested in furthering her own agenda.

At least she’s honest about something. Her actions leave little room for anyone to think differently.

And in the meantime, McCain has also remarked on the poor quality of Obama’s comments, and that they show little understanding of the working class. To his credit, McCain has said little else.

The next few primaries are shaping up poorly for Obama due to this commentary, though it’s unclear if this is enough for Senator Clinton to actually get the nod from the DNC. Still, as the Senators continue to snipe at each other, they do a fantastic job of weakening their own party’s position for the upcoming election. I honestly begin to wonder if the party won’t use up all the anger directed at Bush that currently puts McCain at a disadvantage. I just don’t think that Obama and Clinton are thinking about the fallout all their current mudslinging is likely to have in the upcoming general election.

The WSU VPN on the Eee PC

I took my new Eee PC to work to show it off, and also to keep me entertained on my lunch break and while I wait for Catherine to finish teaching. Unfortunately, when I booted it up to show it off and tried to get it to connect to the Washington State University wireless network, things didn’t go quite as planned.

WSU uses a slightly different wireless setup than I am accustomed to: the wireless network itself is open to anyone who wishes to connect to it, but the only system a wireless user can reach is the VPN gateway for campus, which I use periodically from home when I want to search Google Scholar or something.

The system is really quite clever. This way users don’t need to register their device MAC addresses with central IT, but access through the wireless is restricted to users who can authenticate to the VPN. It’s a pretty simple system that is really quite effective, and it was pretty easy to set up and use on my Ubuntu desktop and Catherine’s Ubuntu laptop.

When I unpacked my Asus Eee PC, I quickly set up the VPN connection so that when I got into the office today, I would have no trouble connecting. The Eee PC comes preloaded with a PPTP VPN client, and configuration was pretty straight-forward. I was even able to configure the VPN connection to automatically open the wireless link before attempting to open the VPN link.

Unfortunately, the connection still didn’t work. When I ran the route command, I was greeted by two default routes. Apparently, the VPN configuration went great, except that it neglected to place itself above the unencrypted link as the default route, causing the system to try to route packets over the unencrypted, and mostly unroutable, link.

Until I can figure out how to make the VPN client behave correctly, I’ve put together a workaround, which is simple but not automatic. Functionally, I don’t want any traffic to go over the unencrypted link, so I simply deleted the unnecessary route using the following command at the terminal (reached with CTRL-ALT-T on the keyboard):

sudo route del default dev ath0

This ensures that all default-routed traffic heads out over the VPN link (which is ppp0 by default), and it’s been working like a charm. Not being a KDE user, I’m unsure if this is a KDE application issue or an Asus Eee issue, though I suspect it’s a KDE problem.

A bit annoying, though the workaround is simple.

For those who have never set up the VPN client under Linux before, the process is very simple.

First, choose the Wireless Networks icon from the Internet menu, and choose one of the WSUNEXUS connections. This is the unencrypted wireless network.

Second, open up the Networks icon under the Internet menu and click the Create button. This will bring up a list of connection types for you to select. Scroll down to the bottom of the list, select “Virtual Private Network - PPTP VPN”, and click Next.

[Screenshot: ConnWizard1.png]

Go ahead and tell the VPN connection to use the WSUNexus Wireless Connection created in step one and click next. The hostname of the VPN server is vpn-gateway.wsu.edu.

[Screenshot: ConnWizard3.png]

Leave the WINS server blank.

Enter in your username and password. You can choose if you want to save your password or not. I did, but I expect I’ll be the only one using my Eee.

[Screenshot: ConnWizard5.png]

Finally, click Finish on the last screen, and your connection will be saved.

From here on out, when you want to connect to the WSU VPN, you’ll need to open the Networks application, select your VPN connection, click the Connection button, and choose Connect. From there, the VPN should automatically open the wireless connection and then start the VPN.

Finally, to address the routing problem described above, open up a terminal window by pressing CTRL-ALT-T on the keyboard. In the terminal, type in:

sudo route del default dev ath0
This will remove the superfluous default route, and allow you to connect to the web through the VPN.

ASUS Eee PC

Given that I owed the Government a considerably smaller amount of money on my Income Taxes than I’d initially thought, I decided to buy myself an ASUS Eee PC 8G in Black. Since the difference between 3-day and 2-day shipping was less than $5, I opted to upgrade the shipping so that I’d get the little device sooner.

I immediately unpacked the device, plugging the battery and AC in and leaving the device off while Catherine and I went on a walk. About an hour later, we got home, my Eee was all charged, and I was ready to start it up. As they say, the device boots up in under 30 seconds, and the initial setup took only a couple of minutes, which included skimming through the EULA.

The standard interface has been well discussed. There are a number of standard applications: Firefox for web access, Adobe Acrobat for PDF reading, OpenOffice for word processing and spreadsheet work, and a set of games and educational apps, which show that the Asus Eee PC is really well suited for children.

Not that I think the Eee is a kids’ toy. Even though the screen may seem awfully small, at 7 inches and 800x480 resolution, it’s really quite comfortable to use. It’s surprisingly bright, so bright that I turned it down to about 33% of the highest setting, and it’s still crisp. The keyboard is small, but even with my large hands, I’m doing a pretty decent job touch typing out this post. My only complaint about the keyboard is that a few of the keys are in slightly different places, which I’m slowly getting accustomed to.

For the time being, I’m probably going to leave the device on the stock Xandros installation that comes preinstalled. I may eventually switch over to Debian Eee, which if nothing else will return me to my comfortable GNOME desktop. The Eee PC desktop can hardly be called KDE, though it does use KDE apps and Qt 4, and I miss things like Tomboy and a few other GNOME apps. Depending on how easy it is to install new software on my Eee, though, I’ll likely stick with the default install.

Speaking of installations, there were updates for nearly all the default applications, which the Eee makes fairly easy to install. The problem is that I have to upgrade each and every application individually. I should have been able to simply click a single button, or go through a set of check boxes to mark which updates I wanted.

The battery life appears to be as advertised, nearly 3 hours with the stock battery, and that’s with the wireless radio going, as well as some time on YouTube. Most surprising was that over 2 gigabytes of the 8 GB of flash were already filled; in retrospect, this is likely due to OpenOffice and some of the other applications. Knowing that Windows XP takes up 1.1 GB without any other software, I’m curious where the stock Xandros install sits.

I’m very happy with this device so far. Mind you, I’ve only had it for a few hours, but it should be great for blogging and the Internet on the go. The thing is tiny, and most of the photos that I’ve seen don’t really give a good sense of scale unless you’ve seen the thing in person.

Time Tracking Woes

I’ve got a complaint about UPS’s otherwise fantastic shipment tracking. It’s a fairly small thing, but it makes their website very, very difficult to use. I’ve noticed this on every UPS package I’ve ordered in recent memory, and I just can’t think of any cause for it.

Simply put, why does the UPS tracking site report times in “Local Time”? I’ve ordered many things where the package crosses all the US timezones over the course of its travels into my little hands, and it’s confusing to work out how long the package is actually taking to travel from point A to point B. Sure, I don’t really need to know this stuff, but it’s kind of fun to track and watch. But doing everything in “Local Time” means that not only do I need to convert timezones myself, I also need to figure out what timezone Dallas is in!

I do enjoy watching the package progress because it reveals some information about how UPS routes its packages. In essence, I like watching package progress for the same reason I’ll periodically run a traceroute from my computer to some random Internet host I’m connecting to. Determining where the links in the graph are reveals some implementation details about a distribution network, which I consider somewhat fascinating.

Ultimately, my desire here is a frivolous one. I don’t really need to watch my package progress. I don’t really need to pay attention to how long a package sits on the ground before being sent on its way. I don’t really need to care. But I do. And there are times when, if what I was shipping were important enough, or time-sensitive in some way, I really would care about that information. Plus, I’d bet that the reason UPS displays “Local Time” to me on the web is because that is what is stored in their database. Surely they wouldn’t bother to convert to a less precise time format simply for the hell of it. The only reasonable explanation is that they’re serving up the data in the most efficient manner possible, which means serving the data that is already in the database.

So, I ask you, how does UPS manage to watch their supply chains and determine where things could be improved? Their analysis software would have to examine each city and determine its timezone before it could normalize the data so that UPS knows exactly how long it took the truck to make it from Baldwin Park to Ontario, or the plane from Ontario to Dallas (clearly, this would be more of a problem with the trucks).

Organizations make this mistake all the time. They assume that the only time that matters is the local time. Now, I’m not suggesting we switch to Swatch’s Internet Time or anything. But if you’re an organization that spans timezones, why wouldn’t you want to standardize your company’s data storage on a single timezone, whether that’s UTC or whatever timezone your corporate office is in? At least by standardizing the time records in your central data, you always know what an event at 13:00 means. Your users usually shouldn’t need to know what timezone you’ve standardized on, as you can easily convert times to their ‘local time’ when you go to display them, but this task is nearly impossible without a standard to begin with.
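
As a quick sketch of what I mean (the scan times are invented, and Python’s zoneinfo module, available in 3.9 and later, is just one way to do the conversion): store every event in UTC, and only translate to a local timezone at display time.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo   # standard library in Python 3.9+

# Every scan is stored in UTC; "13:00" always means the same instant.
pickup_utc = datetime(2008, 4, 21, 2, 30, tzinfo=timezone.utc)
scan_utc = datetime(2008, 4, 21, 13, 0, tzinfo=timezone.utc)

# Comparing two events needs no per-city timezone lookups at all.
print("In transit for", scan_utc - pickup_utc)

# Convert to local time only when showing it to a particular viewer.
for city, tz in [("Dallas", "America/Chicago"), ("Baldwin Park", "America/Los_Angeles")]:
    local = scan_utc.astimezone(ZoneInfo(tz))
    print(city, local.strftime("%Y-%m-%d %H:%M %Z"))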

Subversion vs Distributed Source Control

Jeff Atwood over at Coding Horror had a post this weekend espousing the wonders of Subversion (SVN) on Windows. Atwood has long argued that developers should be using some form of source control (anything except Visual SourceSafe), a sentiment that I completely agree with.

However, I find myself disagreeing with Atwood on certain points. First off, I’ve actually met a developer who was happier with SourceSafe than Microsoft’s new Team Foundation Server. Of course, this was largely because of some bizarro practices at the company he worked for, where they had a multitude of database stored procedures (some of which handled things like sending e-mails), and SourceSafe allowed them to map a small selection of these stored procedures between repositories. From what I gathered, they had a repository that contained all the stored procs, and then individual project repositories. SourceSafe allowed them to map a selection of stored procedures to the individual project repositories in such a way that they only saw the stored procedures relevant to their project, but any changes made to a stored procedure would be shared across all projects that used that procedure. Team System doesn’t allow this. Neither would Subversion nor Perforce, for that matter. In both those systems (I don’t know enough about Team System to comment), there seems to be no easy way to map those files, as both of them would create a discrete copy of the procedure file, which would lose the auto-update capability.

Incidentally, PlasticSCM would support this sort of behavior through the use of custom selectors, though it would take some work to set up. Regrettably, I don’t know of any free source control systems capable of this, though there are several projects that I simply have no experience with. However, I’m willing to bet that if you have this level of interplay between repositories, there was likely a major design flaw at some point. Git’s ‘super projects’ support might fulfill this need, but I’ve not used it.

Linus Torvalds, creator of git and of course the Linux kernel, gave a talk at Google a little while back. Regrettably, git does not yet run effectively on Windows, due to some file-system weirdness between Unix and Windows. Git resulted from BitKeeper pulling its free licensing program for open source developers after an accusation that Andrew Tridgell had reverse engineered BitKeeper. Torvalds harbors a deep hatred of CVS, and by extension SVN. His talk is amusing, and it is clearly colored by his feelings. While I may not agree with him completely, I do agree with the general sentiment that distributed source control is a necessity.

Atwood makes the claim that, since source control has only recently become mainstream, the idea that most developers would even consider distributed source control is ridiculous. Frankly, I think once you get past the idea that source control is hard, distributed source control is a very easy step, and it’s incredibly useful.

With Plastic 2.0, it appears that Codice Software has taken large strides in allowing distributed development, as each developer can have their own Plastic server which they control entirely, and then simply create change packages to send to whoever needs them. But Plastic also supports centralized development, allowing a shop to do both.

Should you rush out and buy Plastic? Maybe not. For my development at home, I favor git without question. At work, I favored Plastic because we do Windows-based development, and centralized source control is more convenient for backup purposes as well as for keeping tabs on who is committing what. This is the real reason why I believe that distributed systems will have trouble catching on. By their very nature, true distributed systems make it harder to track statistical information about who is committing what, how often, and how much. Project managers live for this sort of statistical information (though this is changing as Agile development and Scrum catch on). In distributed systems, the only thing to measure a contributor on is the final product of the contribution.

Distributed Development just makes sense though, when you look at the new world of Agile Development. Everyone works in their own sandbox, able to check in and branch at will. Developers are free to share their in-progress changes (if necessary), otherwise integration issues only arise when you reach an integration step, which is the only place you want merge conflicts and issues to occur. Everywhere else development needs to be fluid. Plastic does a great job of bridging the gap between central and distributed development. It’s not a ‘perfect’ distributed system, and there are a few features it’s missing that I really want (mainly an API and ‘triggers’ system), but it allows for heavy branching, and distributed repository servers.

Ultimately, any source control is better than none. However, a distributed model has some really powerful benefits. Consider something like Plastic with its distributed support, git if you’re on Unix, or even Mercurial (which I’ve never used). Depending on central source control provides a single point of failure, as well as a single point of attack. If your central source repository is compromised, you’ll have to restore from backup (and depending on your backup scheme, you could lose data). With a distributed development system, everyone has the shared development history of the entire project, so you can grab the history from any other developer, losing only the non-shared changes you had been working on. Permissions aren’t an issue, since everyone is working on their own local systems, and all anyone has to submit are deltas to whoever requires them. To anyone who has begun using source control, the benefits of a system like this should be obvious.

Windows XP on the Asus Eee PC

The Asus Eee PC is an ultra-mobile computer designed by Asus last year. It’s a tiny device, smaller than a piece of paper, barely an inch thick, and it weighs less than two pounds. I want one.

Traditionally, they’ve been Linux devices, and to date you can still only buy the Linux version. They use a custom version of Linux, but Asus has made drivers available so that any Linux distribution can be used, provided you can fit it into 2, 4, or 8 GiB. I want one for anywhere-browsing, some minor hacking, and blogging.

Microsoft, of course, has no desire to let the ultra-mobile computing world go Linux. They’ve been trying to get onto the One Laptop Per Child project since its inception, and they were able to become the primary OS on Intel’s Classmate. And now, they’ve managed to get Windows XP working on the Asus Eee PC (though you can’t buy it with Windows yet).

Microsoft put a video up on Channel 9 detailing some of the technical work involved in getting Windows XP running in an environment that was rather unthinkable at XP’s inception. It is somewhat of a testament that, aside from some drivers, XP required very little change to work in this environment. Of course, it takes at least 1 gigabyte for the operating system (not including IE7 and Windows Media Player 10). That’s half the storage on the lowest-end model of the Eee.

To help Hardware and Software integrators, Microsoft has published some guidelines. In general, the current crop of ULPCs are well within the specs for Windows XP, and as long as applications are more careful about how often they write to storage, there really isn’t too much to be changed.

I can see some people wanting this. I don’t, but I can see some people desiring it. Personally, I just don’t see the point. I can get a full system, with web, office, media, and so on, in no more space than Windows wants for just the base operating system. And unlike Windows, Linux doesn’t want me to compress the disk (check the guidelines, page 6).

That’s right, because Windows takes up so much space, Microsoft wants you to add disk compression. They say that it will only increase your CPU usage by 1 to 5 percent. Of course, what they don’t tell you is that you’ll see a similar decrease in battery life, but I suppose that fact just isn’t important.

Actually, my biggest problem with the whole damn thing was how self-congratulatory the Microsoft guys were about it. Frankly, if they had been unable to get XP running on the thing, it would have been ridiculously embarrassing. The fact that they’ve had to continue to support Windows XP on ultra-lights seems embarrassing enough, especially when Vista’s replacement is practically just around the corner, which should also spell the end of Windows XP.

For those that desire it, the ability to run Windows on Ultra-Light computers like the Asus Eee PC is pretty nice. In my opinion this development only reduces the value of these hardware platforms, by leaving less usable space to the user, and reduces what the system can be used for. With Linux on the device, I can comfortably do some programming with it. If I only had the 2 GiB Eee, I’d be hard pressed to install the OS and any Dev tools with Windows. Admittedly, I’m not Microsoft’s target market with this development, but I’m guessing that most users who are buying Eee’s simply won’t bother trying to replace the stock Xandros OS.

OOXML Passes ISO Standardization Process...Probably

It appears that Microsoft’s Office Open XML document format has made it through the ISO standardization process, meaning we now have two XML-based document formats approved by ISO, both of which strive to solve similar problems. Great job, ISO. Ignoring the argument about whether or not the standard was even necessary, there seem to have been questions about some of the votes from several of the member nations of ISO (http://blog.abrenna.com/formal-protest-against-norways-yes-to-ooxml/). Will these questions be enough to reverse today’s announcement? Probably not, but it is still rather bothersome.

Microsoft is, of course, already spinning the brewing controversy as FUD fueled by competitors like IBM. Admittedly, IBM, whose Lotus suite utilizes the Open Document Format, has a bad history with Microsoft, and their support of open formats like ODF is steeped in calculated business decisions, but there are many more technical people than simply IBM who view OOXML as an inferior format.

And even where OOXML is better than ODF, and there are places where that is true, those improvements should have been worked into ODF, not into a competing format. Besides, don’t we have standards to prevent this sort of interoperability failure and difficulty? ISO failed us on this one. Not just because OOXML has been passed, but because they’ve passed two standards in less than two years that strive to solve the same problem. And both votes have been surrounded by questionable behavior, from Yes votes cast where 80% of committee members said No, to companies being paid to join ISO.

My views haven’t changed on OOXML, and while I have grown to love Microsoft Research, their business wing continues to earn my ire. Congratulations to Microsoft, I guess, for getting this ‘standard’ approved through ISO. And Damn You ISO for allowing it to happen. I’ve no doubt we’re far from the end of the fallout from this, and I shudder to think of the wreckage of ISO that will exist when all is said and done.