August 2010 Archives

Free Software Ethos

Last week, Jono Bacon posted a question to his blog asking why we’re passionate about Open Source. It’s a question I’ve been thinking about a lot lately, so shortly after the post went up, I decided I needed to put my own thoughts down.

I’ve been using computers since about the age of five. Admittedly, this was 1988, quite a bit later than many of my contemporaries, but such is life. At the time, we used to frequent a shareware shop, picking up 5 1/4” floppies with various programs (mostly games), or grabbing things elsewhere. Actually, I spent a lot of time playing Nethack, which means I’d also encountered my first violation of a free software license, since I know the source code was not included on the floppy of Nethack we had in our house.

My first real introduction to Free Software came in the form of Linux, specifically a Debian install disc included with Maximum PC (a long-defunct computer magazine), which, based on the Debian Timeline, I think must have been Debian 2.0 “hamm”, though it might have been “bo”, putting this sometime in 1998. At the time, I really messed up that first install, forcing Debian to install every package in the disc’s repository, even those that conflicted. Needless to say, the system was wholly unstable, but the little bit I could play with it, and what I’d read, made me intensely interested in Linux as a platform.

It was more than that. I’d known, practically since the first moment I sat down at a computer, that I intended to work with them. It didn’t take long before I was trying to learn everything I could about the system. Of course, we were running DOS on a 286, so the closest thing to a development experience we had was a GW-BASIC interpreter. Still, I played a lot with BASIC and tried to examine other things about the system. We didn’t have a modem at this time, so I was not very attached to the whole BBS scene, except through a friend or two, which, if anything, simply slowed down my introduction to hacker culture.

The time I was growing up was the heart of the “Don’t Copy That Floppy” battles for mind-share. For the record, I did my share of piracy as a kid, but part of that was simply lacking the resources to pay for things, especially shareware, which usually involved mailing checks to places. I think that’s part of what made Free Software so interesting. Here were people building an entire Operating System for the love of it, and even in the mid-90s there were a handful of people who were able to make a living at it.

There was something freeing in using Linux for me. First of all, development tools were simply at hand. Getting a C compiler on DOS was difficult and expensive, until I lucked upon a free copy of Borland C++ 4.0 that someone was looking to get rid of. But there was more to it than that: there was something compelling, exciting about using Linux. I felt as if I was in control.

It was around this time that I started my first open source project, a simple program to add, subtract, multiply and divide matrices, written in C because I didn’t trust that my TI-83 was doing the right thing. I was about to say I should see if it still compiles, but it looks like I borked the links to the tarball at some point. I’ll need to correct those…

It was exciting, releasing something that I’d written and posting it up to Freshmeat, but what was more exciting was receiving patches from contributors. I received a bunch of patches from a half dozen or so other contributors, fixing bugs and adding bits of functionality. I ended up with a fair number of changes, and I even packaged it up for Debian, though I never aggressively sought sponsorship for the package.

I switched to Linux as my primary OS when I got to college in 2001, and I’ve stuck with it ever since. Throughout school, I’d follow the various bug trackers for software I was using and try to do my part filing bugs, doing a bit of triage, and periodically trying my hand at fixing bugs, though it wasn’t until I was getting out of school that I felt comfortable actually fixing bugs in some of the larger programs that I used on a daily basis. Part of that was a lack of confidence; part of that was a lack of skill and time to understand how interconnected some of these projects can be. Whatever the reasons, I’d persisted in these communities on the fringes, typically the local expert among those I knew, but not deeply involved.

While I’ve never been as active as I want to be, the reasons for wanting to be involved have remained the same. It feels good contributing to an open source project. It’s exciting when a patch (or pull request these days) I’ve prepared gets accepted into a project. I like knowing that code I’ve written is being run across the globe, by a lot of people, but it’s most thrilling knowing that another developer has seen the value of my contribution.

There is more than simply the desire to be recognized, however. Spending time in Academia, particularly now that my wife is working toward an eventual PhD, I’ve seen how Free Software can be used to help drive knowledge in ways that have little to do directly with computers. I’ve seen how the fields of scientific computing seem to have come further in the last six years or so than they had in the decades prior, in part due to the availability of free software in their field. My interest in Free Software began very much as an academic issue. By contributing and working toward the common good, taking the best ideas (or at least the best ideas right now), knowledge can increase much more quickly than if we operate in vacuums. The dynamic and exciting environment that people have often associated with start-up companies has always existed around Free Software, because the structure affords people the freedom to come and go between projects, which, for many projects, helps ensure that the project stays exciting.

More recently, my devotion to the ethos has been ramped up by a technology landscape that has become increasingly hostile. We’ve got laws that make circumventing even the most ignorant DRM a crime. We’ve got software and hardware providers actively working to restrict access to their devices, in ways all but guaranteed to stifle innovation. Even relatively open platforms are being modified to close them up awkwardly tight. I’m looking at you, AT&T, with your removal of the “Third Party Sources” option on your Android handsets!

That’s the core reason I’m devoted to the ethos. Free Software doesn’t have anything to do with cost. Software is expensive to make, no matter how you slice it. It’s about the freedom to make mistakes. To do stupid things. But also to create amazing things. To go further than before. Sometimes you need to take a few steps back to be able to move forward, even if those steps back are the painful results of pretty bad mistakes. We need computing freedom for that world to exist, and while I’m not truly opposed to commercial software, I firmly believe that supporting Free Software, and its tenets, is the best way to keep us from being artificially crippled as we move forward.

Fitness Update

With my recent vacation, I missed a full three weeks at the gym, the third of those weeks lacking the excuse of being halfway across the country. Regarding body shape, I know that I put on some additional girth, as one would generally expect during vacation, and I assume that I must have lost a bit of muscle as well, since my weight didn’t seem to have changed much since I’d left. Where I really noticed the problem was how badly my endurance crashed while I was out of town. On Wednesday of last week, I swam laps for twenty minutes, and as I exited the pool, I got a pretty nasty head rush. I knew my heart had been beating harder than normal for the workout I’d done, but it still surprised me how hard it hit me.

Knowing that I needed to take things easier, to rebuild my stamina, I’ve adjusted my workouts accordingly. I hope to be back up to a reasonable level within a month, but we’ll see.

In addition, this week was the first opportunity I’d had to take advantage of the free body composition testing that the Student Rec Center periodically offers. Since this was my first opportunity, I was establishing my baseline for their particular test a lot later than I really wanted to, but at least I now have a baseline figure.

Now, the test as administered by the UREC Personal Training Staff is a trivially simple one. It was a skinfold test, but one that only required measurements from three locations: the chest (pectoral region, near the shoulder), the abdomen (about belly-button height, on roughly the same line as the chest), and the thigh. Each measurement was taken twice, the results were summed together, and a chart was consulted.

As I said, this isn’t exactly an accurate test. According to the chart, I have about 30.2% body fat, which, according to a 2002 reference, is about 8% more body fat than is appropriate for a man my age. Now, I kind of doubt some of this result, since based on footage of a great uncle (whose body shape I share) getting heart surgery, my barrel shape may not be indicative of large amounts of fat. But as long as the test is administered similarly in the future, it still serves as a metric that can be tracked.

I will, however, grant that this test is a hell of a lot more interesting than the Body Mass Index (BMI), which considers me morbidly obese. Thing is, with my body type, even if I were to get down to 8% body fat (the lowest ‘healthy’ body fat, per the chart I have), my BMI would still put me in at least Obese Class I, and probably Class II. And yet people try to use that bullshit test as a measurement of health.

I would be interested in having a Bioelectrical Impedance test, since that at least sounds like a more scientific approach. But even at this point I think I’ve made quite a bit of progress, and I know that I’m generally happy with the results. Now, I really ought to finish my import and analysis of the Star Trac data that I can get out of the cardio machines at the gym, so that I can more easily track progress (I don’t like the website I have available to me).

Barnes & Noble Nook


As I’d mentioned a few weeks ago, for my birthday I got a Barnes & Noble Nook Wifi, seeing as how the reduced price of $149 was finally within my comfortable price range for such a device. Yes, I know the Kindle is a bit cheaper at $139 for a similar model, but I liked several things about the Nook. First, that it is based on Android (albeit not a terribly current Android), which means that things like add-on apps should be easy to develop and add through an update from BN. But mostly, I liked that it was trivial to sideload EPUB and PDF books (such as those I buy from O’Reilly) via USB, and that storage is expandable via microSD.

A note about the rest of this review. I have not rooted my Nook. I’m thinking about it, but I haven’t. I do know others who rooted their Nooks within days, and for most Nooks in the wild, that looks pretty easy, but I’m trying to give the default software a chance before I take that step.

I went with the Wifi-only model, as I don’t spend too much time outside of an available wifi cloud. Which actually leads to my first criticism of the device: since the Nook team forked their version of Android early, it doesn’t support certain things that Android has supported since 1.6, such as WPA-PEAP, which of course is the wifi authentication used by WSU, and increasingly by other large enterprises. This is not an enormous problem, certainly, as I can load content via USB or while I’m at home, but right after I bought the device and wanted to play with it on campus, it was annoying. I emailed Nook support about this, but they requested I call their help line. I think I got that gentleman to send the problem up so that it can hopefully reach their software staff.

The touch screen has proven to be fairly easy to get accustomed to, even for typing in text, though Catherine is still debating getting a Kindle for its physical keyboard. Since she’ll probably use the device primarily for reading and annotating scientific journal articles, it’s possible that the Kindle may be better for her at this time. Annotation is a weak point on the Nook, but given that the Kindle has not met with much success trying to get onto University campuses, I suspect this may be a weak point of all eBook readers right now. Part of the problem is the relatively low refresh rate of the e-ink display. While this is not a problem for reading, it can make moving a cursor to select text somewhat cumbersome, since the Nook presents you with a directional pad on the touch screen to find the start and end points. Plus, for some reason, the cursor position for selecting the end point falls just before the word you want at the end of the highlight, even though the UI presents opening and closing square brackets for marking your selection.

Even once you’ve annotated some text, it can prove somewhat difficult to get back to that annotation, as the ‘View Notes’ option is located three menus down from the book’s main menu.

Currently, management of side-loaded content is a bit unpleasant. The files are presented as a simple alphabetical list, and I’d like to see a way to organize the documents into folders, or possibly by tags, in order to make it easier to find a book, or subset of books. With an 8GB microSD, that can be a LOT of side-loaded content. Actually, this problem applies to the content downloaded from B&N as well; their content is just organized by last access instead of alphabetically. The sort order is configurable; these are just the defaults the device uses.

I have had problems with the rendering of PDFs, particularly those for books laid out in a multiple-column-per-page format with sidebar information. While reading a GURPS book I’d purchased through e23 some time ago, there were sidebar headings positioned away from the content they were associated with. The Nook, in attempting to reflow the document, removed the vast majority of the visual cues that the original editors had embedded in the document, making it slightly more difficult to read. I suspect a device with a true letter-sized screen probably wouldn’t do this, but it’s a weakness of the device.

In spite of the flow issues in PDFs, reading on the device is pleasant. The refresh takes a bit to get used to, but I hardly notice it anymore. In the roughly four weeks I’ve had the device, I’ve finished three fiction books and at least two non-fiction technical books. I’ll be doing book reviews more regularly, I think. The point is that I’m reading a lot more since I got the device. It’s just convenient. It doesn’t weigh much, I have plenty of choices in what to read, and more content does not occupy more physical space in my home. Will it replace paper? For me, it might. The benefits so far seem to outweigh my concerns.

So, will I stick with the BN software stack, or root my Nook? I haven’t decided yet. Part of me wants to stick with the standard stack, to see what BN does, and try to work within their ecosystem, particularly for when they add app development and such. However, the Nook Dev people have done some cool stuff, and better management of my side-loaded content is likely to overwhelm my desire to stay vanilla. Either way, I love the device, and it’s fantastic for my needs. For a real academic like my wife, we’re a bit up in the air between this and the Kindle, but I’m exceedingly pleased with my purchase, even though I still would have waited for this more reasonable price point.

Ubuntu Maverick Alpha

It’s just under two months until the “Perfect 10.10” release of Ubuntu, Maverick Meerkat. I’ve been running the alphas on my Netbook since the first official alpha release and have been pretty happy with it, even though I had some configuration problems when I first attempted to install Unity, since some parts that I consider pretty important are listed as merely ‘recommended’ instead of ‘required’.

However, it’s finally installing Maverick on my desktop this week that’s made me really excited about the coming release. I’d been using GNOME Shell since earlier this year, when Lucid dropped. It’s exciting how GNOME themselves and Canonical are both working to push the envelope in UI design on the desktop, something which has not been done in quite a long time. It would be nice if Canonical were working more closely with GNOME in some of their work. The separation between the two has made it quite difficult for some of Canonical’s innovations to make it upstream, and I can’t blame GNOME for that, since Canonical has chosen to work outside of GNOME’s normal processes. Nor do I blame Canonical, since they are able to innovate without having to run everything through the rest of the GNOME Foundation.

I guess that’s part of what makes GNOME Shell so exciting. It doesn’t seem to have the same problems with lack of cohesion among the GNOME ecosystem.

With two months to go, there are still a few problems. If you’re using an nVidia chipset for your video, you’ll need to update your Xorg.conf. I had a small problem with a mis-named library to run Shell. But there were easy workarounds for these problems, and they should both be addressed before the release.

Maverick is still updating regularly, and for most users, I still wouldn’t recommend upgrading. It took me a little while to address these issues on my desktop, and more than once this alpha, X has completely refused to start after an upgrade on my Netbook. It’s an alpha, but this is shaping up to be an excellent release.

Information Systems Education

Money, it’s a crime / Share it fairly / But don’t take a slice of my pie

I think I was lucky, in that I began my post-College career working closely with Accounting. The common joke is that Accounting is the most important part of an organization because they’re responsible for the paychecks, but the accounting machine at even small organizations is dramatically more complex than that. Funds are transferred between General Ledger accounts, some of which represent cash-on-hand, others expenses or potential income, all of which carry various meanings that are very important to the financial health of the organization.

But outside of accounting, no one even knows about these accounts. Nor do they need to. To almost everyone in an organization, Accounting is a black box. Money goes in from customers, and comes back out again to employees and vendors. And sometimes Accounting seems like an impediment to the rest of the organization, as they may need to prevent a project from going forward (in conjunction with management, obviously) due to concerns about the money.

The point is, every single activity within an organization impacts one of these accounts, even though most people don’t understand how that happens, and generally don’t need to.

Usually, this isn’t a problem, but right now Washington State University is beginning the Fit-Gap analysis for replacement of the Student Information System across all of our campuses. However, while we are all glad to be replacing our aging student system, we are doing nothing at this time with our aging financial system, which I think is simply backward. There is a reason that the Kuali Project has released three major revisions of its Financial System (beginning with the General Ledger), but still hasn’t made the first release of its Student System (which should happen soon).

All over the PeopleSoft interface are “GL Interface” links and other such information that controls how the Academic Structure information we’re working on now drives those all-important General Ledger accounts. However, we haven’t really discussed that interface much, as it’s beyond the scope of our current discussion, which is fine. The problem is the number of people who assume that things can be set up differently than they have been in the past, without even considering how those sorts of decisions would bubble down to the Financial System.

For instance, we aggregate our schedule data at the Campus level, in addition to the Year and Term level, so that students and administrative staff can limit how much information they see when viewing the schedule. This is metadata that will still be present in the new system, absolutely, but there was some brief discussion about creating Term records unique to each campus. The problem is that there is linked information at the term level (from Academic Calendar data to Accounting Data) that is shared among all campuses. Plus, GL lines can be impacted by other selections, like the Campus and Organization that a given Course or Section is linked to, overriding options further up the chain.

This is a recurring theme in most information systems, but especially in Accounting systems: define all data at the highest level possible, and do it once. Let it be overridden later if necessary, but common defaults should be defined once, at the highest level, and work their way down to the lower-level information. This can lead to challenges when those defaults change late in the game, but that’s a communication and business policy discussion, not so much a technical one.
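
To make that concrete, here is a tiny sketch of the define-once, override-later idea, written in Perl6 since that’s what I’ve been playing with lately. The field names and account numbers are entirely made up for illustration; they are not actual PeopleSoft or Kuali configuration.

# Hypothetical configuration levels; a real system has many more,
# and none of these names come from PeopleSoft or Kuali.
my %term-defaults    = 'gl-account' => '1010-000', 'calendar' => 'FALL-2010';
my %campus-overrides = 'gl-account' => '1010-200';   # a campus-level exception
my %course-overrides;                                 # nothing overridden at this level

# The most specific defined value wins; everything else
# falls back to the default defined at the term level.
sub resolve($key) {
    %course-overrides{$key} // %campus-overrides{$key} // %term-defaults{$key};
}

say resolve('gl-account');   # 1010-200, from the campus override
say resolve('calendar');     # FALL-2010, straight from the term default

The point isn’t the code, it’s the shape: a setting defined once at the top, overridden only where a lower level genuinely needs something different.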

Most programmers end up working on Information Systems that are all very similar in their data integrity (and even structure) requirements. My introduction to many of these concepts came from working with Accounting systems, but any large system should be based on similar principles. My education on this topic was very much ‘in the field’, by analyzing existing systems. I learned far more about designing data schemas and other such systems after I graduated with my B.S. in Computer Science than I think I even had the opportunity to while in school. Given that most software developers will end up working on or with a large information system, I suspect that programs seeking to offer a more general (or even vocational) software engineering curriculum should talk about these systems more directly. Whether it be the design of an Accounting system like MAS90, larger ERPs like Kuali, or even project management systems like Launchpad, there are similar data problems to be solved across all these problem spaces.

Saint Louis' Forest Park

Catherine and I have been winding down our two-week vacation, which had us put nearly 2000 miles on our rental car as we drove from Branson, MO to Petersburg, KY, and a hell of a lot more in between. Our last few days we spent in Saint Louis, MO. We didn’t make arrangements to go see a Cardinals game (though we probably should have), since we weren’t entirely sure when we were going to be through town, and we had some other things we wanted to see.

Those other things consisted largely of the Saint Louis Zoo and the Saint Louis Science Center. It was a little odd, being nearly thirty and childless, going through the Science Center, but Saint Louis has a truly amazing facility with some really great exhibits, and both are in Forest Park, the site of the 1904 World’s Fair in Saint Louis.

And the zoo is clearly world class, particularly in some of its newer areas, where it boasts elaborate and accurate habitats for the animals that it keeps. Somehow, they do all of this without charging admission. It’s free to enter either facility and to view the vast majority of its exhibits. There were a ton of dedication plaques around the exhibits, which showed how much of the new construction and other things were financed, but for the ongoing costs (these facilities seem to employ many people, and they can’t all have been volunteers), it is clear that the city of Saint Louis, and its people, are immensely generous to these organizations.

We spent the majority of a day at the Zoo, though it was very hot in the early part of the day, and as we passed through the African Jungle exhibits, which featured Hyenas and the like, the animals that we did see were almost all sleeping in the dens dug into the habitat for them, desperately trying to stay cool. However, in that area of the park, they have an amazing Hippo exhibit, which is set up with an underwater viewing area, so you can watch the hippos swim (usually staying underwater for 5 minutes or more) and play, but also see how the cichlids that inhabit their tank coexist with the hippos, usually by picking at the hippos’ skin for mites and other things while the hippos lounged in the relatively cool water. Next were the Elephants, including one fairly large one we got to watch spray himself down with water using his trunk.

The highlight, in my wife’s mind at least, was the Stingray Pool, which did carry a cost of $3 per person, but given that the maintenance area for the pool was at least as large as the pool itself, and that a half-dozen people were attending the exhibit, it was still cheap. At the Stingray pool, you get the opportunity to reach into the water and pet the stingrays (which have had their venomous stings trimmed), of which there were two varieties and a range of ages. There was an opportunity to feed the rays, using small fish that they were selling, but we didn’t; the rays tended to approach us as if we might have anyway.

There was plenty to see, particularly as the temperature dropped some and the animals got to be a bit more active. Including the indoor exhibits (we only got to the bird house; the Monkey house and Herpetarium closed before we reached them), there is a ton to see at this facility, and the habitats were, as I said, excellent. There were also many interesting exhibits about the work that the zoo engages in, though a surprising number of them focused on reproduction inside of zoos. Apparently, a lot of females are kept on IUDs to keep them from getting pregnant, and that contraceptive pill that they’re working on for human men? The zoos are hoping to be able to apply it to their charges as well.

And all of that is important; after all, the last thing the zoos want is a bunch of inbred animals running about. And we got pretty close to seeing giraffe sex, as a female giraffe was, we believe, pretty clearly propositioning a male who was apparently just not in the mood.

But the Zoo is far from the only activity in the park. The Jefferson History Museum seemed to be mostly free, except for the visiting Catholic Artifacts exhibit, though we didn’t spend too much time there, or in the Art Museum. There are botanical gardens, which surround an area known as the ‘Jewel Box’, which I’m sure is beautiful in June, but was a bit crispy in August. There are paddleboats, an amphitheater, and a pavilion at the highest point in the park, overlooking Saint Louis. There are amazing statues all over the park as well. It’s nice to walk through, even in the heat.

But where we spent the majority of our second day in the park was the Saint Louis Science Center, which again is almost entirely free to the public. The Pacific Science Center in Seattle, which I’ve visited at least a dozen times, currently costs $??, so this was a really surprising deal. From the park, you enter through the Planetarium, which I think is the best way to enter the Science Center, as they have real Gemini space capsules, which were evidently built in Saint Louis, as well as spacesuits worn during those missions. As you walk toward the overpass, there is a detailed history of NASA, as well as a collection of toys and memorabilia from America’s obsession with space, ranging from Muppet Babies astronaut lunchboxes to Cognac decanters commemorating Apollo 11. There is even a Ground Control exhibit that you get to take the controls of; regrettably, this was closed for the entire day of our visit.

After the Space exhibit, there is a series on Structures, covering bridges (suspension and trestle), arches (including two large arches that you can build), and earthquake resistance. As a special exhibition (which did carry a cost), there was an exhibit about a pirate ship, the Whydah, which has been under recovery for the last twenty-five years or so, and which was taken by a storm off the coast of Massachusetts nearly three hundred years ago. The exhibit is set up to tell two stories: first that of the ship, as it began its journey hauling slaves; then the story of the man and crew who took her and changed her into a pirate ship.

The exhibit was really excellent, as the recovery efforts had turned up some really amazing artifacts, including a Sun King pistol in remarkable condition and several three- and four-pound cannons. The exhibit presented the pirates not in a romantic light, but in an honest one, making it clear why so many chose to ‘go on the account’ and join a pirate crew, when it was the best chance most sailors would ever have not only of getting wealthy, but of actually earning a wage, since many legitimate captains evidently did not pay their crews the wages they were promised. The pirates were democratic. Even the Captain was elected and held to the same articles as the rest, sleeping with the crew in common quarters. When the Whydah was captured, one of the first things the pirates apparently did with it was remove the cabins on deck, mostly to improve the ship as a fighting platform, but it also got rid of all the amenities of class common on other ships.

It was an amazing exhibit, well worth the $16 admission, which Catherine and I viewed as partially going to cover the cost of the rest of the visit to the science center.

We then proceeded to go through the ‘History of Life’ exhibit, which was a breath of fresh air after our visit earlier this week. The dioramas were excellent, and the detail on the animatronic T-Rex was pretty cool. There were some exhibits on the automobile and gearing, and the future of energy independence. But the Life Sciences exhibits really got Catherine excited. They have some really nice mountings of birds and smaller mammals for viewing, and a fossil room with a T-Rex metatarsal and other large dinosaur bones. There was even a lab targeting 10-12 year olds where they get the opportunity to do simple yet exciting experiments like DNA extraction, fingerprinting, and a few other things, all while wearing real lab coats.

The other exhibit we spent a lot of time in was, of course, Cyberville! Their exhibit on the history (and future) of computing was good, though I kind of wish there had been a few other things. There was a kiosk letting you play with the binary representation of ASCII, but the best of the binary exhibits was a train that let you send it commands, first for direction, then speed, then time. The instructions were AWFUL, but I worked with a ten-year-old to figure them out, and then we were able to do a basic job teaching boolean math to a few other kids. There was a robotics exhibit using LEGO Mindstorms, a laser harp, a room with a variety of sensors you could interact with, an art room with a simple 3-D modeler, a VR exhibit (which was not functional), exhibits on the basics of how the Internet works, and a cooperative game where you work in a team of three to create a building designed by one of the partners.

I spent WAY more time in this part of the museum than was probably necessary, but I had a lot of fun, and had a few mothers thank me for helping their kids understand the exhibits a bit better, which was nice.

Catherine and I are currently childless, but if you have children, Forest Park really seems to have something for just about anyone. And the vast majority of it is free. We really enjoyed our trip to Saint Louis, the people were nice, we found an amazing restaurant, and while it was extremely hot the majority of the trip, it was well worth the few days we got to spend in this city, and I look forward to getting to return.

Perl6 First Impressions


Perl6 looks pretty solid, from a language design standpoint. It cleans up a lot of the cruft people have long complained about in Perl, like having to change the sigil in front of a symbol to change the way you’re accessing that symbol, and it adds support for (optional) strong typing of variables, and so on. It’s a well designed language, and implementations, like Rakudo, are complete enough to be useful.

I decided to play around with the classic factorial problem to test some simple implementations. First, I did a basic, naive implementation.

sub fact($n) {
    return 1 if $n <= 1;
    return $n * fact($n - 1 );
}

This is a simple solution, but as I said, it’s naive. It does a lot of extra work: if you try to get all the factorials from 1 to 100, it has to recalculate each one individually, repeating a lot of work along the way. But Perl will memoize decently.

my @factorialCache = 1, 1;
sub fact($n) {
    @factorialCache[$n] = $n * fact($n - 1) unless @factorialCache[$n];
    @factorialCache[$n];
}

This does precisely the same thing, except that it caches results, so that if I’ve already calculated the factorial of 5, when I go to do the factorial of 6, it’s only two function calls instead of six. This is memoization, and it’s a cool functional technique: basically creating a lookup table in real time, instead of pre-computing all the values.

A few notes about some of the differences between Perl5 and Perl6 that are present in this code: when I’m accessing elements of the array “factorialCache”, I don’t need to use the scalar (‘$’) sigil to access the element as a scalar. The fact that I’m using the square brackets indicates I want scalar mode; Perl6 is a lot smarter than Perl5 about how I’m using variables. When declaring factorialCache, I also didn’t need square brackets to declare its initial values; those seem to only be required when you’re using an anonymous array, such as passing an array to a function when it wasn’t declared prior, as in the next code example. Small things, but they do reduce the ‘black magic’ impression that some people have taken away from Perl in the past.

Of course, @factorialCache is available globally in this implementation, and I was having trouble figuring out how to do the recursion inside of a closure to create a generic memoizer. I tried this:

sub memoizer($function, @initialValues) {
    my @cache = @initialValues;
    return sub($n) {
        @cache[$n] = $function($n) unless @cache[$n];
        @cache[$n];
    }
}

my $fact = memoizer( {
    return $^a * $fact($^a - 1);
}, [1, 1]);

This doesn’t compile, however, as Perl tries to reference $fact in the anonymous code block, but $fact hasn’t been initialized yet. The $^a syntax is interesting, because it’s a ‘placeholder argument’. I could have assigned $a with either my ( $a ) = @_; or my $a = @_[0];, but by using the caret, I’m telling Perl6 to fill those variables in from the arguments passed to the block, ordered by the placeholder names. It’s a tiny bit dangerous, but for small anonymous functions like this, it shouldn’t be a problem.
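
One workaround, I believe, is to declare $fact before assigning the memoized closure to it, so that the inner block closes over the variable and only looks it up when it’s actually called. I haven’t run this exact snippet through Rakudo, so treat it as a sketch rather than verified code:

# The memoizer from above, essentially unchanged.
sub memoizer($function, @initialValues) {
    my @cache = @initialValues;
    return sub ($n) {
        @cache[$n] = $function($n) unless @cache[$n];
        @cache[$n];
    }
}

# Declare the variable first, so the block below can close over it,
# then assign the memoized function to it.
my $fact;
$fact = memoizer(-> $n { $n * $fact($n - 1) }, [1, 1]);

say $fact(5);   # 120
say $fact(6);   # 720, found almost immediately thanks to the cache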

In Perl5, the classic way around this was something called the Y-Combinator. In Perl6, there is supposed to be something you can do with the &?BLOCK keyword, but I’m having trouble getting that to work.

That may just be Rakudo, since other things in the Perl6 spec aren’t complete yet, such as using the state keyword when creating an iterator with the gather/take syntax. I was a bit disappointed to find that didn’t seem to work in a way that allowed for elegant memoization of this problem, but that may also be me not fully understanding that feature.

It’s rare these days that I have cause to do much Perl. I’ve always been fond of the language, and I hope Perl6, with its cleaner and more powerful syntax, can help bring more people back to the Perl community.

The Creation Museum

This weekend, Catherine (my wife, and an Evolutionary Biologist) and I traveled four and a half hours (one way) from central Illinois to visit the Creation Museum, located in Kentucky just across the river from Cincinnati, OH. The Creation Museum is an enormous facility run by Answers in Genesis, a group founded by the Australian Ken Ham to push Bible-based education, partially by claiming that modern science can actually support Creationism. The Museum is a temple of misinformation about modern science, built to create, for those who are inclined, what could be a very believable picture of a mere 6000-year history of the world.

Note: If the tone of the first paragraph isn’t clear, I am not a creationist, and I think that anyone who is, is grossly misinformed. That doesn’t mean I believe faith is a bad thing, or that people of faith are stupid or not worthwhile, simply that they’re basing their world-view on incomplete information, and that their incomplete, faith-based world-view has no place in public school. Private schools, and what parents teach their own children? That’s fine, just not public.

Still, there was an enormous crowd at the museum, including at least a dozen Amish. And it was very clear that few (if any) others at the museum were approaching the trip from Catherine’s and my perspective. Unfortunately, I haven’t had an opportunity yet to go through all the photographs we took, and Catherine wants to save some of them for a possible presentation to the rest of her Department during Fall term. However, they will all go up online at some point, and I will update this post with more information when that happens.

Creation is actually a fairly small part of the museum, which is really trying to build an entire framework for belief in Creationism and a history of the world based on the Bible as a literal record. It does so by focusing on what Answers in Genesis calls the ‘7 C’s of History’, which I, who was active in a Church youth group through High School, had never heard of.

  1. Creation
  2. Corruption - Adam & Eve eat the Apple
  3. Catastrophe - Noah’s Flood
  4. Confusion - Tower of Babel and the spread of man
  5. Christ
  6. Cross
  7. Consummation - I believe this would be the Rapture, but the Six C’s and an R isn’t as catchy.

And there is a ‘walk-through’ history that focuses on each of these periods in order, with a beautiful Garden of Eden walk-through (complete with dinosaurs, more on that later), and the eating of the Apple, the first sacrifice, the hardness of life after the Garden and the murder of Abel by Cain. Then there is a huge Ark exhibit, complete with a talking Noah. The last three are not delved into much, but that’s fine.

The basic idea behind modern Creationism is that God, when he created all life, created ‘kinds’: Ape-kind, Horse-kind, Man-kind, Weasel-kind, etc. There was a single male-female pair of each kind, and each kind had the genetic material to form each and every species we see today, since the ‘kind’ is roughly at the ‘family’ level. Also, before the Corruption, all animals (including the dinosaurs) were herbivores. Either God decided as part of the Curse to add carnivorous behavior to animals, or the kinds always had that potential but it didn’t manifest until after the Corruption. And since there was no death (and probably no reproduction) before the Corruption, it couldn’t manifest until later.

When the flood hit, Noah didn’t have to take every species, merely a representative pair of each kind, which could then repopulate all the world. Okay, so Leopards and Lions are both born from the felid-kind pair that Noah took on the Ark. So…why don’t Lions give birth to Cheetahs today? In Creation-land, every mutation to DNA that happens is a loss of information. This is one of the first really obvious places where creationism completely ignores well understood science and redefines words to suit its own needs. A mutation is any change, and in fact many mutations add code. They may not actually do anything until paired with other mutations later on, but very few mutations are a true loss of information. Given that these are people who believe in the Rapture, and probably believe it’s coming sooner rather than later, the idea that the genome will become too corrupted to be viable probably isn’t much of a problem for them.


I plan to write more on the museum as Catherine and I review the photographs and prepare her talk. But I think it’s worth having a discussion of how this became such a big and popular museum, and why we’re having such a problem with people taking ‘creation science’ seriously.

For decades, the Scientific Community has treated creationists as foolish people who aren’t really worth addressing, because they’re wrong. And they may be wrong, but they’re not stupid, and their marketing is top notch. If the scientific community is going to be able to win this argument, they are going to need to get off their asses, take this threat seriously, and actually take the time to educate people on the truth. As it stands, creationists are using real science, but only part of that science, to make their points.

There was also a video we watched called ‘Men in White’, about a young girl questioning her life, and a pair of Angels, Mike and Gabe (obviously meant to be the Archangels Gabriel and Michael), trying to show her “God’s Truth”. It paints modern scientists as smug people who follow science as a religion, and who can’t deal with evidence that seems to contradict generally accepted scientific fact.

Ultimately, that is the difference. Creationists have built their worldview on a very rigid structure, the Bible as a literal history of the Universe, whereas Science bases its understanding on where the preponderance of evidence points. There are parts of Creationism that can never change, while in Science, if there is strong evidence that something is wrong or different than was believed, then the view can be changed. When Astronomers announced that Pluto was just a Kuiper Belt Object, and not really a whole Planet, it was not astronomers and other scientists who really argued that point; the evidence was fairly clear. Pluto is large, but not the largest thing in an area with plenty of other large things of similar composition. It’s an important Kuiper Belt Object, certainly, but not a planet.

I want to work with Catherine on a book that looks at these ideas behind Creationism and presents the rest of the story that the Creationists choose to ignore, in plain language, to help demonstrate that Creationism does not possess quite as much strength in argument as they want you to think.

Ultimately, we need to remember that these people are not stupid; they are misinformed. And we can no longer afford to ignore them, or treat them badly. There is a lot of work to be done in this country to counter the dangerous, insidious, pseudo-scientific thinking that has done so much damage to real science in recent years.