September 2009 Archives

Derived Property Data Binding in Silverlight

For our Silverlight-based Schedule Proofing application at work, we have a special requirement for Summer Session, which has course ‘blocks’: a set of predefined date ranges beyond just ‘full term’ and ‘whatever’. This required a few interesting changes, but mostly it required some interesting tweaks related to data binding that, unfortunately, I had to do in code.

The relevant XAML looks like this:

<my:DataGrid x:Name="Selector"
            AutoGenerateColumns="False"
            HeadersVisibility="Column"
            GridLinesVisibility="None"
            IsReadOnly="True"
            Visibility="Visible"
            FontSize="11"
            RowDetailsVisibilityMode="VisibleWhenSelected">
    <my:DataGrid.Columns>
        <my:DataGridTemplateColumn Header="Start Date">
            <my:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding StartDate}" Margin="5,4,5,4"/>
                </DataTemplate>
            </my:DataGridTemplateColumn.CellTemplate>
            <my:DataGridTemplateColumn.CellEditingTemplate>
                <DataTemplate>
                    <mye:DatePicker SelectedDate="{Binding StartDate, Mode=TwoWay}" />
                </DataTemplate>
            </my:DataGridTemplateColumn.CellEditingTemplate>
        </my:DataGridTemplateColumn>
        <my:DataGridTemplateColumn Header="End Date">
            <my:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding EndDate}" Margin="5,4,5,4"/>
                </DataTemplate>
            </my:DataGridTemplateColumn.CellTemplate>
            <my:DataGridTemplateColumn.CellEditingTemplate>
                <DataTemplate>
                    <mye:DatePicker SelectedDate="{Binding EndDate, Mode=TwoWay}" />
                </DataTemplate>
            </my:DataGridTemplateColumn.CellEditingTemplate>
        </my:DataGridTemplateColumn>
        <my:DataGridTemplateColumn Header="Block">
            <my:DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding BlockName}" Margin="5,4,5,4"/>
                </DataTemplate>
            </my:DataGridTemplateColumn.CellTemplate>
            <my:DataGridTemplateColumn.CellEditingTemplate>
                <DataTemplate>
                    <ComboBox ItemsSource="{Binding TermInfo.Blocks}" DisplayMemberPath="Name" SelectedItem="{Binding SelectedBlock, Mode=TwoWay}" />
                </DataTemplate>
            </my:DataGridTemplateColumn.CellEditingTemplate>
        </my:DataGridTemplateColumn>
    </my:DataGrid.Columns>
</my:DataGrid>

The relevant portions of the data structure look basically like this:

class SectionData : INotifyPropertyChanged
{
    private DateTime startDate;
    private DateTime endDate;
    private YearTerms termInfo;

    public YearTerms TermInfo
    {
        get
        {
            return termInfo;
        }
    }

    public DateTime StartDate {
        get { return startDate; }
        set
        {
            startDate = value;
            NotifyPropertyChanged("StartDate");
        }
    }
    public DateTime EndDate {
        get { return endDate; }
        set
        {
            endDate = value;
            NotifyPropertyChanged("EndDate");
        }
    }

    public SummerSessionBlock SelectedBlock
    {
        get
        {
            if (termInfo.Blocks == null) return null;

            var b = termInfo.Blocks.Where(q => q.Begin == startDate)
                .Where(q => q.End == endDate)
                .SingleOrDefault();
            return b ?? termInfo.Blocks.Where(q => q.Name == "Custom").Single();
        }
        set
        {
            var b = termInfo.Blocks.Where(q => q.Begin == startDate)
                .Where(q => q.End == endDate)
                .Any();
            if ((value.Name == "Custom" && b) || value.Name != "Custom")
            {
                startDate = value.Begin;
                endDate = value.End;
            }
            NotifyPropertyChanged("SelectedBlock");
            NotifyPropertyChanged("BlockName");
            NotifyPropertyChanged("StartDate");
            NotifyPropertyChanged("EndDate");
        }
    }

    public string BlockName
    {
        get
        {
            return SelectedBlock != null ? SelectedBlock.Name : string.Empty;
        }
    }
}

public class YearTerms
{
    public ObservableCollection<SummerSessionBlock> Blocks { get; set; }
}

public class SummerSessionBlock
{
    public int Year { get; set; }
    public int Term { get; set; }
    public string Name { get; set; }
    public DateTime Begin { get; set; }
    public DateTime End { get; set; }
}

Alright, that’s a lot of code, and I’m not going to address most of it here, since I’m assuming a basic understanding of data binding, dependency properties, and INotifyPropertyChanged. Basically, all those NotifyPropertyChanged calls ensure that the UI gets updated. Also, the YearTerms.Blocks property will, in my case, be null for all terms that aren’t a Summer Session.
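
For completeness, the NotifyPropertyChanged calls above assume SectionData implements INotifyPropertyChanged (from System.ComponentModel); a minimal sketch of that plumbing, inside SectionData, looks something like this:

// The standard INotifyPropertyChanged plumbing assumed by the bindings above.
public event PropertyChangedEventHandler PropertyChanged;

private void NotifyPropertyChanged(string propertyName)
{
    // Tell any bound UI elements that the named property has a new value.
    var handler = PropertyChanged;
    if (handler != null)
        handler(this, new PropertyChangedEventArgs(propertyName));
}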

Now, since blocks don’t mean anything outside of Summer Session, I don’t want that column to be visible outside of that case. This isn’t a big problem, since all the sections viewed on one instance of this data grid will, by definition, be from the same term. However, attempts to use XAML data binding failed with XAML parsing errors.

    <my:DataGridTemplateColumn Header="Block"  Visibility="{Binding BlockName, Converter={StaticResource VVC}}">

The VVC static resource is just a converter that returns Collapsed when BlockName is the empty string, and Visible otherwise. As I said, this fails with an obscure XAML parsing error. Unfortunately, things that can be quite simple in XAML can be a real pain in code when using Silverlight or (presumably) WPF, partially because of how you have to identify the column. The solution is fairly simple: when the DataContext of the DataGrid is modified, I determine what the visibility should be and call a function that sets the visibility.

public void SetColumnVisibility(string columnHeader, Visibility visibility)
{
    try {
        // Single() throws if no column (or more than one) matches this header.
        Selector.Columns.Where(c => c.Header.ToString() == columnHeader)
            .Single().Visibility = visibility;
    } catch (InvalidOperationException) {
        throw new ArgumentException("Column '" + columnHeader + "' does not exist.");
    }
}

Have I mentioned yet that I really love LINQ? This code is all wrapped up in a custom control, but I’m debating converting this function to an extension method on the DataGrid, since nothing quite like it is offered. The function is called from the web service callback responsible for setting the data context on this control, and it ensures that users only see this column when it makes sense to. It would have been nice to data bind this, but in reality, this approach only works because I know that the data which determines the column visibility is shared among all the rows in the grid, something the Silverlight runtime has no way of knowing for certain.
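
If I do go the extension-method route, a minimal sketch would look something like the following (this is just the same LINQ lookup moved onto the DataGrid, not what’s in the app today):

using System;
using System.Linq;
using System.Windows;
using System.Windows.Controls;

public static class DataGridExtensions
{
    // Show or hide a column identified by its header text.
    public static void SetColumnVisibility(this DataGrid grid, string columnHeader, Visibility visibility)
    {
        var column = grid.Columns.FirstOrDefault(c => c.Header.ToString() == columnHeader);
        if (column == null)
            throw new ArgumentException("Column '" + columnHeader + "' does not exist.");
        column.Visibility = visibility;
    }
}

With that in place, the call site becomes Selector.SetColumnVisibility("Block", Visibility.Collapsed) (or Visible), and the helper no longer has to live in the custom control.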

More annoying is the second problem. The date fields should not be modifiable directly by the user unless they’ve selected a special ‘custom’ option in the Blocks list. The ‘custom’ option is added at runtime, and is defined with start and end dates just within the boundaries of the full-term option. The obvious approach is controlling the IsReadOnly flag on the Start Date and End Date columns. Again, the data binding fails with a completely unhelpful XAML parsing error, and in this case, the data I’m binding against is unique to each row and doesn’t affect the rows around it, so I was kind of at a loss.

The problem with IsReadOnly on the DataGridColumn is that it affects all rows, which isn’t really what I want, so it’s out. But I don’t see a way to bind the IsEnabled flag on the cell using a data binding (I’m fairly new to Silverlight, so this is likely my failing). So, to code it is. The problem I found is that Silverlight doesn’t make it easy to access individual cells in a DataGrid. You can get access to columns easily, and to rows through a few events, but accessing a cell… that’s non-trivial.

private void SetDateEditability(DataGridRow dataGridRow)
{
    var section = dataGridRow.DataContext as SectionData;
    if (section == null) return;

    // The fields should be editable if either of these is true, but not otherwise.
    bool editable = section.BlockName == "Custom" || section.BlockName == string.Empty;

    // Should only throw if the columns can't be found.
    try
    {
        // I need the two date columns, identified by their headers.
        var dateColumns = Selector.Columns
            .Where(c => c.Header.ToString() == "Start Date" || c.Header.ToString() == "End Date");
        // For each column in the LINQ query above...
        foreach (var column in dateColumns)
        {
            // ...get the cell content for the argument row,
            var content = column.GetCellContent(dataGridRow);
            // ...then the cell itself, and set its IsEnabled flag.
            var cell = content != null ? content.Parent as DataGridCell : null;
            if (cell != null) cell.IsEnabled = editable;
        }
    }
    catch (Exception) { }
}

Now, the question is, when does this method need to be called? It needs to be called any time the selection in the Block column changes, which is accomplished through the CellEditEnded event on the DataGrid. For my purposes, I check that the edited cell is in the Block column, to save just a bit of time, but you can decide how necessary that is in your own application. However, this isn’t sufficient, since it doesn’t affect the rows as they load, so you’ll also need to add a LoadingRow event handler that calls into this method for each row as it loads.
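
Wired up, that looks roughly like this (a sketch; attach the handlers wherever you already configure the grid):

// Re-evaluate date editability when the Block selection is edited.
Selector.CellEditEnded += (sender, e) =>
{
    if (e.Column.Header.ToString() == "Block")
        SetDateEditability(e.Row);
};

// Rows are created as they load or scroll into view, so set editability then as well.
Selector.LoadingRow += (sender, e) =>
{
    SetDateEditability(e.Row);
};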

So, there you have it: how I created a virtual property driven by two backing properties, and tied the editability of the backing properties to the selection. I’ll be the first to admit it’s not the prettiest solution in the world, and I’m sure it could be done better, but I was under a deadline, and really, it works pretty cleanly for a solution from a Silverlight newbie.

Pollan Protests in Wisconsin

Apparently, this is going to be a particularly good year for Michael Pollan, at least in terms of book sales, as both Washington State University and the University of Wisconsin-Madison have chosen his books for their freshman Common Reading programs. I don’t know what the freshman class at UW-Madison looks like this year, but at WSU that equates to around 3300 copies of The Omnivore’s Dilemma just for freshmen. The Bookie was also offering 20% off list to anybody else who wanted to buy the book (students/faculty/staff/parents/alumni/etc.), so all told, I really have no idea how many copies of the book are now floating around the WSU campus, but it’s not insignificant.

And, it almost didn’t happen.

Due to financial shortfalls here in Washington, the University felt it was going to have to cut the Common Reading program entirely, largely because the program has historically involved an author visit, which the school couldn’t afford. There were rumours that the real reason for the cut was pressure from agribusiness, which I’ve never seen any credible proof of, though certainly Pollan’s book is critical of the sort of agriculture common in the Palouse (read: monoculture), an agriculture which WSU has been instrumental in creating through years of wheat genetics and hybridization. The University of Wisconsin-Madison, which chose Pollan’s later In Defense of Food: An Eater’s Manifesto, is no doubt just as tied to agribusiness as Washington State.

Bill Marler, whom I’ve written about before, decided to provide a donation to WSU to keep Pollan on the agenda, and the Common Reading program has been moving forward, with a planned visit from Pollan on January 13th of 2010. Having read both Omnivore and Defense, I’m looking forward to the visit, though I suspect that WSU’s experience with Pollan will be no less controversial than his visit to Madison.

Civil Eats has already broken this story down, and I’m planning on simply adding to what Paula Crossfield has had to say on the issue, though perhaps not as kindly as she has been.

First, the Defense of Farmers group. Anyone who claims that Pollan is anti-farmer is a fucking idiot. In both Omnivore and Defense, Pollan routinely says that the small amount Americans spend, and expect to spend, on food is downright ridiculous. Currently, it’s about 10% of our disposable income. Prior to 1933, that figure was closer to 25%, per the Salem-News article linked above.

And what’s allowed for that? Mechanization. Hybridization. Specialization. The hallmarks of modern agribusiness, and they’ve done a fantastic job of churning out massive numbers of calories. However, the food that’s resulted is quite easily shown to be not as good for us. We have higher rates of obesity, diabetes, heart disease, and a whole host of other problems than we have ever had historically (but the problems in the health care industry are totally the insurance companies, right?). These problems are affecting more people every year. And younger people. In the last thirty years, the incidence of diabetes among people under the age of forty has gone up a full percentage point, nearly tripling. Among older people (45+), it’s at least doubled. And when you look at Type 2 diabetes, the type most often linked to weight and nutrition, it’s figured that over 13% of African Americans, 9% of Latino Americans, and 8% of Caucasian Americans suffer from it.

And why would the numbers be more prevalent among racial minorities? Generally, these people are poorer than Caucasian Americans (this is a generalization, and one I don’t intend to discuss the reasons for here; it’s an injustice, a pretty disgusting one, but it’s also a completely different discussion). And heavily processed foods tend to be cheaper, so they tend to be consumed more by poorer people. If agribusiness were restructured along the lines presented in Pollan’s book (which is highly improbable, and possibly impossible), these people would be getting more fresh fruits and vegetables, and fewer corn-derived food products. And, yes, they’d be paying more for it. Which might not be a bad thing. More money on food, less on cable TV and other non-necessities.

And, there would be more jobs for farmers. Higher prices paid to farmers (particularly if farmers’ markets can become more prevalent). It might actually be possible for a farmer in the corn belt, or the heart of the dairy belt, to survive without government subsidy, and to make more money than they do now. And we’d be taking better care of the soil, and ourselves. No more hypoxic algae blooms. No more nitrate contamination of drinking water (there is no corroborating evidence of the so-called ‘blue baby syndrome’, but artificially high nitrate levels have been shown to negatively impact aquatic life). And, most likely, a decrease in the incidence of obesity and its host of related health problems that have begun to plague the developing world.

But we live in a post-consumer economy. It’s hard to sell people on the idea that we need, in the words of Arthur Sinclair, to “get a lot of white collars dirty.” We need more farmers, and we need people who are willing to spend more money on food. And those two goals… might be tough to reach. Still, neither of those goals is anti-farmer. In fact, they’re far more pro-farmer than any agricultural policy we’ve seen in a long time. They’re ideas that seek to make sure that farmers can actually earn a living wage, and not be dependent on government subsidies, because the current food system is a boon to Monsanto, Tyson, and other large-scale agribusiness, not to the American farmer.

The second point I’d like to address is the comments by John Lucey, a Madison professor and food scientist, who was quoted by the AP complaining that Pollan says food science has merely broken foods down to nutrients, while completely missing the strides food science has made in food preservation, food safety, and meal preparation time. Now, the food safety claim is kind of a joke. Just read Bill Marler’s blog: we have one of the most unsafe food systems in the world, and most of it is because of the work of modern food science, which has created the single-supplier system we have. A food safety problem at a single plant can make people sick nationwide, something that was never possible before the shelf stabilization and preservation additives that Dr. Lucey is claiming are so great. Which is to say nothing of the other risks such additives occasionally carry.

A lot of these problems are not the fault of food scientists. And not all additives have been linked to potential health risks from overexposure (and it’s really easy to be overexposed to food chemicals these days). However, food science has not proven to be a panacea, and while there have been good things to come from it, it hasn’t made us any safer than we used to be. If anything, food science has caused most people to lose any sort of cultural knowledge of food, instead trusting it to the supermarket and the labels therein, which is the real crime in Pollan’s eyes.

Ruby Programming Challenge for Newbies

[Ruby Learning](http://rubylearning.com/) has finally decided to jump into offering challenges to support people learning Ruby. I’m a big fan of programming challenges for learning, even if I did miss both the ICFP and Google Code Jam this year. And, I’m a total Ruby Newbie who’s been looking for something to dig into.

Hence, the Ruby Programming Challenge for Newbies (RPCFN). The first challenge is to parse a subtitle file for a video player and apply a consistent time shift to the entire file: for instance, shifting all the times forward (or backward) by 2.5 seconds.

It’s a fairly simple problem, even with the requirement to parse command line options, and I’m planning on using TDD in my solution. A test-first, red-green approach is something I’ve been meaning to try for a while.

If you’re new to programming, or like me and just new to Ruby, this looks like a great opportunity to dip into the language.

Shadowgrounds

When Linux Game Publishing first announced Shadowgrounds Survivor, I didn’t think much of it. I love LGP, but my initial reaction was simply that it was another game I’d never heard of, and that it didn’t really sound too compelling, especially since most LGP games tend to be close to the $50 price point. I’d try the demo, but I made no plans to buy.

Then, Michael Simms announced that they were getting the prequel, Shadowgrounds, basically for free out of the port, so they’d arranged to distribute the prequel as well. Even better, since it was an unexpected windfall, they’d be selling Shadowgrounds for about $10.

That is within my impulse range when it comes to supporting Linux game developers, so I bit, springing for the physical copy with the download now option. I love that the industry has gone the instant gratification option with download now.

The plot line of the game is pretty basic for an overhead shooter: Humanity has learned to terraform, and has expanded to Mars and Ganymede. Your character was working for the security forces on Ganymede, but an accident at the colony’s power plant got him fired, and now he’s a mechanic. The game opens with a power failure that you’re ordered to investigate, and in pretty short order you find out that the colony is under attack by aliens, and you have to help what military presence is still around survive.

Unfortunately, my initial experience with the game was not positive. Load times were high, audio would cut out completely, and the game would periodically crash. Now, I can’t really blame all this on Shadowgrounds, as my computer never quite ran right with Ubuntu 9.04. I was having frequent performance issues, and I suspect they were related to my video card and its drivers.

I’d been thinking about upgrading to the 9.10 alphas, so I decided to do that, since the recent issue that left systems unbootable had been solved. Much to my surprise, and pleasure, almost all of my performance issues have been resolved, and I’m hopeful that an impending RAM upgrade (up from 2 GiB) will help alleviate the rest.

Anyway, the game was now playable without any noticeable issues, a fact I quickly found myself grateful for. Shadowgrounds’ gameplay is vaguely Gauntlet-like, in that it’s controlled from an overhead angle, and control of your character is as simple as choosing an angle to face and firing. There are some events where you have to use items in the environment, or fix broken items in between waves of enemies.

It’s simple, it’s not terribly original, but it works, and it works well. The game designers do a good job of adding twists to the mechanic from time to time, which generally make sense, and they never really overuse any of the little puzzle elements.

The weapons are pretty well balanced, with even the trusty infinite ammo pistol being useful late into the game, and upgrade units, which act as currency to buy upgrades to your weapons, are plentiful enough to keep things interesting, even if you can’t build a surplus.

The writing and story are good. Not great, but they at least make sense, and the characters are all interesting enough that you want them to survive.

Unfortunately, this game was built for co-op, but the only multiplayer it supports is with two keyboards and mice on the same host. What, no network play?

This would be acceptable, if SDL had support for multiple input devices. Apparently, this will be fixed in a forthcoming version of the library, and LGP has promised a patch to the games.

For me, Shadowgrounds was an easy purchase at the price LGP is asking. And yes, I purchased it sight unseen because I really do want to support native gaming on Linux. Luckily for Michael Simms and crew, Shadowgrounds has been good enough (if a bit short, given how close to the end I believe I am) that I’m definitely planning to buy the sequel, Shadowgrounds Survivor, a game I thought I had little interest in.

Can’t wait for the co-op mode to be enabled…

Maladapted Organizational Non-Standards

Washington State University has a marketing group that has put together a set of guidelines and templates for web site maintainers (I don’t use ‘developers’ here, because quite a few of the users aren’t developers). I’m not a huge fan of these templates, but I do completely support the idea of standardizing the web presence for the institution.

Unfortunately, it’s a non-standard. Marketing Communications has no real teeth to force the templates on anyone, and a lot of departments (the one I work for, for instance) haven’t updated. We’re in the process now, but we still haven’t finished it. However, if you follow the link to the University website at the top of the article, you’ll note that the homepage for the institution doesn’t look anything like the way a landing page is supposed to look.

Now, I don’t think anyone really thought too much of this. Frankly, the new homepage is really nice. So, when Information Technology chose to replicate it, or at least something like it, most developers thought it was pretty cool. So did the Vice President who asked for it.

Marketing Communications didn’t feel so good about it. They complained heavily, and still are from what I can tell. Apparently, they feel they’re the only ones who should be able to have an interesting home page. Personally, I think it’s close enough to the home page to be reasonable, and that Marketing Communications should make the template guidelines more flexible. Hell, I’m pretty much tempted to write a similar widget in YUI3 and deploy it on at least one of our sites, just to stir the pot a bit.

Standards need to be flexible, and it doesn’t make sense to claim that a certain standard is perfectly okay for its authors to violate, but not for anyone else, when they could simply standardize a look and feel for this sort of landing page and allow other people the sort of rich interaction they want to provide on the institution’s own home page. Standards work best when they offer their best parts to all of their users, not just the few you’ve decided to bless to break them.

Especially when the standard you’ve been tasked to create is viewed as little more than a suggestion by the people who are expected to follow it.

Do-It-Yourself Flooring

Catherine and I bought a Condo not long ago (a large part of the reason I’ve been so sparse about updating this lately), and we bought it knowing we’d have some work to do. The place had been previously occupied by smokers, so we had to tear out all the carpet, draperies, and repaint. We knew that going in, and bought a flooring product, a floating engineered hardwood floor, that we’d be able to install. However, we ran into some issues.

First, we had a frost heave in the slab (we’re a ground-floor unit), a heave which is several decades old, given that there was vinyl glue in the crack in the kitchen. It wasn’t a problem with carpet, but it would have really messed up our floor. Plus, it turns out we had a 1/2” to 3/4” slope in the floor along one of the walls we share with a neighbor. While I had planned on installing the floor, I had not planned on doing concrete work to prepare the sub-floor, and it was something that I was willing to pay to have done.

Unfortunately, every contractor in the Pullman area was booked out 6+ weeks, which was simply untenable for us at this time. So, we resolved to grind down the heave and fix it ourselves. We rented a concrete grinder from a local hardware store: a large motor with teeth at the bottom, wedged in with wooden shims. Unfortunately, this device had a tendency to loosen the teeth and hurl the three-pound chunks of steel at high speed across the room. Eventually, I had to give up and return the tool (at no charge to me, thank you, Moscow Building Supply). After calling around, we ended up renting an angle grinder from a Home Depot nearly a hundred miles away, and using that. It was rough, but it was a hell of a lot safer, and we got the job done.

Then, we had to pour the floor patcher. Now, I described the problem to the people at the hardware store, and they sold me on a self-leveling floor underlayment product…that turned out to be meant for pouring over the entire floor (damn you, Moscow Building Supply!). The product was 100% concrete, while what I really wanted would probably have been mostly epoxy. With this product, I needed to do a lot of spreading and trowel work, though that might also have been a problem of getting the right consistency in the mix. Oh, about that: if you need to mix concrete, use a corded drill. My DeWalt cordless is a great little drill, but it’s not up to mixing concrete; the batteries just can’t hold out. It’s fine for one 50 lb. bag of cement, but when you have three… you’re kind of fucked.

Once we got the floor poured and gave it a few days to dry, we got the underlayment down (we used one with a vapor barrier, since we’re ground floor) and began putting down the floor. The learning curve on the floor is deceptively high. For one thing, until you get the third row in, the floor lacks most of its structural rigidity. However, once you get those first few rows, it does tend to go in pretty fast. We were able to do the majority of the three rooms we were working on (some 700+ square feet) in a few days of work (most of them weekdays, too). There were only two really difficult parts. First, we had to cut strips using a table saw for certain portions of the floor, because the rooms just weren’t quite narrow enough (or wide enough, I suppose) to do full boards. Second, around the closet frames in the bedrooms, we had to cut notches into the boards on both sides. If we’d had a jigsaw, it probably wouldn’t have been too bad, but using the Dremel saw attachment meant cutting from both sides (not ideal). However, the floor is in, and it really does look great.

Now, we’re on to trim, and cutting trim is an interesting experience, because it comes in 12-foot lengths (and you want the long lengths), so it’s tricky to work with, and cuts are generally at a 45-degree angle. This is a tricky problem, because you need to make sure you cut the correct 45-degree angle, so that boards match up correctly along the wall, or at corners. There were some miscuts, and a lot of my wife and me standing around waving our hands, trying to make sure we were about to make the correct cut. But we got most of the big pieces in, and now I’ve just got some work to do in the closets and the entryway.

Am I glad we did this? Absolutely. The floors are beautiful. My concern is that it might not have saved us any money, between the hours we’ve spent on it and the two house payments we’ll have made before we start moving, which won’t be until this weekend at the earliest. I do know that we’ve had to spend several hundred dollars more on this project than I’d originally thought, though to be fair, quite a bit of that was me being naive about the problems. Renovation is hard, and while I don’t think I’ve been really foolish about it, the fact that we’re coming up on two months of being home owners and still haven’t started moving is amazingly depressing. Sure, the end is in sight, but both Catherine and I want nothing more than to be moved.