Mad, Beautiful Ideas
Paper Review: Robust Flexible Handling of Inputs with Uncertainty

This week I read A Framework for Robust and Flexible Handling of Inputs with Uncertainty, written by Carnegie Mellon researchers Julia Schwarz, Scott E. Hudson and Jennifer Mankoff, and Microsoft Research's Andrew D. Wilson. I grabbed this paper because I've seen several references to it recently from people working on touch interfaces, such as Canonical with the new multitouch support they've made a priority for Ubuntu 11.04. However, while the information in this paper is relevant to touch interfaces, I think there are plenty of lessons in it for writers of UI frameworks in general.

The basis of the methods explored in this paper is fairly simple. Knowing that some events are hard to pin down to a single target, like a user touching a spot that overlaps multiple buttons in a UI, the framework tracks multiple possible interpretations of an event, updating each one's probability as conditions change, until it can determine which one should actually be handled. The first example provided is a user touching near the edge of a UI window that also has a desktop icon under the window edge. Is the user trying to move the desktop icon, or resize the window? Under the framework, both interpretations are kept alive until it can be determined which one was the 'real' event. Too much vertical motion? The user was almost certainly not trying to resize the window, so move the desktop icon. Can't determine? Don't do either.
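To make that concrete, here's a rough Python sketch of the core loop as I understand it. The names, the threshold, and all the probability numbers are my own inventions for illustration, not the paper's actual API:

    THRESHOLD = 0.8  # hypothetical: confidence required before committing

    class Candidate:
        """One possible interpretation of an uncertain input event."""
        def __init__(self, name, probability):
            self.name = name
            self.probability = probability

    def resolve(candidates):
        """Commit to an interpretation only when one clearly dominates."""
        total = sum(c.probability for c in candidates)
        if total == 0:
            return None  # nothing plausible: do nothing
        best = max(candidates, key=lambda c: c.probability)
        if best.probability / total >= THRESHOLD:
            return best
        return None  # still ambiguous: keep all interpretations alive

    # The window-edge example: resize vs. move-icon start out equally likely.
    candidates = [Candidate("resize-window", 0.5), Candidate("move-icon", 0.5)]
    print(resolve(candidates))  # None -- can't determine, so do neither

    # Vertical motion makes a horizontal window resize very unlikely.
    candidates[0].probability = 0.05
    print(resolve(candidates).name)  # move-icon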

Certainly, this makes a lot of sense for touch. The current thinking when designing touch UIs is that touchable controls should be large enough that the user is unlikely to miss. One of the working examples in the paper involves the user's finger essentially covering three small buttons, one of which happens to be disabled, and the framework correctly determining the right course of action, which based on their example might not have worked out if that third button had been enabled.
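Here's a toy version of how that disambiguation might work. The button names, overlap areas, and the simple area-weighting scheme are my own illustration, not taken from the paper:

    def button_probabilities(buttons):
        """buttons: list of (name, overlap_area, enabled) tuples.
        Weight each button by the finger's overlap with it; disabled
        buttons get zero weight, then normalize to probabilities."""
        weights = {name: (area if enabled else 0.0)
                   for name, area, enabled in buttons}
        total = sum(weights.values())
        return {name: w / total for name, w in weights.items()} if total else {}

    touch = [
        ("save",   0.40, True),
        ("cancel", 0.35, False),  # disabled, so it gets zero weight
        ("help",   0.25, True),
    ]
    print(button_probabilities(touch))
    # {'save': 0.615..., 'cancel': 0.0, 'help': 0.384...}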

What made this research most interesting to me is that the authors were also able to provide examples of how this system could improve the user experience for vision- and motor-impaired users. Plus, it applies to simple mouse interactions as well. In the default GNOME themes for Ubuntu since 10.04, the border of a window is almost impossible to hit for resizing, so you're effectively stuck grabbing the corners, which may not always be what you want.
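A probabilistic hit-test could fix that without any theme changes. Something like this sketch, where the Gaussian falloff and the 4-pixel tolerance are assumptions on my part:

    import math

    def resize_probability(pointer_x, border_x, sigma=4.0):
        """Probability the user meant to grab the border, falling off
        with distance in pixels; sigma is a made-up tolerance."""
        distance = abs(pointer_x - border_x)
        return math.exp(-(distance ** 2) / (2 * sigma ** 2))

    for d in (0, 2, 5, 10):
        print(d, round(resize_probability(100 + d, 100), 3))
    # 0px -> 1.0, 2px -> 0.882, 5px -> 0.458, 10px -> 0.044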

The nice thing about this paper is that it describes a genuine framework, though I'm a bit disappointed that their .NET code doesn't seem to be available. Since it's a framework, the bulk of the development would need to be done by UI toolkit developers, and sensible default probability functions could be defined that most application developers would never need to modify.
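I imagine the split looking something like this: the toolkit ships an exact-hit default, and a widget author overrides it only where fuzziness helps. The class names and the 8-pixel tolerance here are entirely hypothetical:

    class Widget:
        def __init__(self, x, y, w, h):
            self.x, self.y, self.w, self.h = x, y, w, h

        def contains(self, px, py):
            return (self.x <= px <= self.x + self.w
                    and self.y <= py <= self.y + self.h)

        def input_probability(self, px, py):
            """Toolkit-supplied default: classic exact hit-testing."""
            return 1.0 if self.contains(px, py) else 0.0

    class FuzzyWidget(Widget):
        """A widget author's override: tolerate near-misses."""
        def input_probability(self, px, py):
            if self.contains(px, py):
                return 1.0
            # Distance from the pointer to the widget's nearest edge.
            dx = max(self.x - px, 0, px - (self.x + self.w))
            dy = max(self.y - py, 0, py - (self.y + self.h))
            distance = (dx * dx + dy * dy) ** 0.5
            return max(0.0, 1.0 - distance / 8.0)  # 8px tolerance (assumed)

    print(Widget(0, 0, 10, 10).input_probability(12, 5))       # 0.0
    print(FuzzyWidget(0, 0, 10, 10).input_probability(12, 5))  # 0.75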

In a sense, I'm a bit disappointed by all the attention this paper has gotten in the 'touch' interface communities, because I really think it's just as important, if not more so, for accessibility, improving ease of use for impaired users and even the rest of us. It may add a bit of overhead, both for the system and for developers, but the authors mention that they built an accessible set of text inputs in about 130 lines of code, using regular expressions to determine whether input made sense for a given field. Adding support for voice input took about another 30 lines of code.
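Their code isn't available, but the idea is easy to reconstruct. Here's my own minimal take on a regex-validated field that filters a recognizer's alternative interpretations; the date example and confidence values are made up:

    import re

    def plausible_alternatives(field_pattern, alternatives):
        """alternatives: list of (text, recognizer_confidence) pairs.
        Keep only candidates that make sense for this field, renormalized."""
        pattern = re.compile(field_pattern)
        kept = [(t, p) for t, p in alternatives if pattern.fullmatch(t)]
        total = sum(p for _, p in kept)
        return [(t, p / total) for t, p in kept] if total else []

    # A date field: the recognizer's top guess contains a letter 'O', which
    # can't appear in a date, so the framework falls back to the next guess.
    alternatives = [("1O/28/2010", 0.6), ("10/28/2010", 0.4)]
    print(plausible_alternatives(r"\d{2}/\d{2}/\d{4}", alternatives))
    # [('10/28/2010', 1.0)]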

My hope, after reading this paper, is that the multitouch work happening in Ubuntu over the next few months will impact the rest of the system positively as well. I think that's reasonable, since I've heard there is a huge accessibility focus at the developer summit this week. There is still a lot of work to be done in this space, particularly as we move more toward gesture-based inputs, but there are definitely places where it could be applied today, and I think it would best be done at the UI toolkit layer, so that it's available as broadly as possible.

Next week's paper: Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews.