All 4 entries tagged UI
July 18, 2008
I’m currently writing a few dialogue and interaction menus for my current AIR project, and it occurred to me that, having established a reasonable methodology for handling and displaying modal dialogues within my Cairngorm-based app, I was perhaps using them almost by default, without thinking too carefully about whether a modal dialogue was the most appropriate means of interaction. By modal in this context we mean “a state of a dialogue that requires the user to interact with the dialogue before interacting with other parts of the application or with other applications”.
At the same time, Chris and I have been talking about metadata recently (another entry to come, but the premise was that persuading users to input metadata about assets is hard to incentivise). Related to that, Chris sent me this great link to an entry by Jeff Atwood that in turn talks about an entry by Eric Lippert on how dialogue boxes are perceived by users:
* Dialog boxes are modal. But users do not think of them as “modal”, they think of them as “preventing me from getting any work done until I get rid of them.”
* Dialog boxes almost always go away when you click the leftmost or rightmost button.
* Dialog boxes usually say “If you want to tech the tech, you need to tech the tech with the teching tech tech. Tech the tech? Yes / No”
* If you press one of those buttons, something happens. If you press the other one, nothing happens. Very few users want nothing to happen—in the majority of cases, whatever happens is what the user wanted to happen. Only in rare cases does something bad happen.
In short, from a user perspective, dialog boxes are impediments to productivity which provide no information. It’s like giving shocks or food pellets to monkeys when they press buttons—primates very quickly learn what gives them the good stuff and avoids the bad.
I liked that, especially the bit about “teching the tech” – while it’s quite funny, it’s also a pretty accurate reflection of my experience as a user.
This is closely related to what Chris and I were discussing about metadata: fields that have no obvious purpose and slow down the primary task (upload, publish, or whatever it is the user is trying to do) are likely to be ignored. If those fields or dialogues are modal or conditional, it’s worth thinking carefully about whether there are alternative ways to complete the operation or gather the information. That’s harder to do, of course, and there are cases where modal dialogues are appropriate, e.g. where the application is about to do something destructive like deleting or overwriting a file. But there are alternatives, like the way IE and Firefox avoid breaking the flow of interaction when blocking certain actions.
June 12, 2007
Writing about web page http://www.bumptop.com
I really like this example of an experimental desktop UI. The subtle use of physics to create a number of effects makes the interactions seem more natural, and the metaphors used (shelves, piles, marking for attention etc.) work better than I imagined they might. I can see some omissions at the moment, though: the icons are just icons (something akin to file thumbnails would be better), and having to drag a file around others so it doesn’t knock a pile over could be annoying, but I imagine these could be fixed. I’d also guess that the physics engine doesn’t add much overhead. Nice experiment, and it’s particularly interesting that Apple has just introduced a similar concept with its Stacks model for Leopard.
There’s also a good demonstration on TED here by one of the creators.
June 01, 2007
So I said I wanted to find out a little more about how Surface interfaces are actually programmed, and I’ve begun to find out a few interesting snippets. Ars Technica has a good article covering some of the mechanics of the interface and a little about the development workflow:
Surface applications can be written in Windows Presentation Foundation or XNA. The development process is much like normal Vista development, but custom WPF controls had to be created by the Surface team due to the unique interface of Surface. Developers already proficient in WPF have been trained in the idiosyncrasies of writing Surface apps and should be available to customize Surface deployments for the large hotels, casinos, and restaurants at which the machines will first be deployed.
So that’s quite a good reason to start looking at WPF, I guess, and the XNA bit is interesting, but I’m an ActionScript kind of guy, and Didier Brun from ByteArray has already embarked on an AS3 mouse-gesture recognition project, which I’m now playing with at home. Combined with this excellent guide to gesture programming in AS2 at Gamedev.net, I’m starting to understand what’s involved (which is amazing in itself). Gesture recognition is not simple by any measure – there’s some horrible mathy-type stuff to deal with, mainly because of the amount of variability in user input: scale, position, proximity and variation all complicate the recognition process. But once you create ways of handling and rationalising that variation, the rest becomes more straightforward.
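To make that “rationalising the variation” idea concrete: this isn’t Didier’s API (his is AS3; the function names here are mine), but a rough language-neutral sketch in Python of the usual normalisation steps – resample the stroke to a fixed number of evenly spaced points (removes speed variation), scale it to a unit box (removes size variation), and move its centroid to the origin (removes position variation). After that, comparing two gestures is just an average point-to-point distance:

```python
import math

def resample(points, n=32):
    """Resample a stroke to n evenly spaced points (tames speed variation)."""
    pts = list(points)
    lengths = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
    interval = sum(lengths) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            # interpolate a new point exactly one interval along the path
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point rounding
        out.append(pts[-1])
    return out[:n]

def normalise(points):
    """Scale to a unit box and centre on the origin
    (tames scale and position variation)."""
    xs, ys = zip(*points)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    pts = [(x / scale, y / scale) for x, y in points]
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return [(x - cx, y - cy) for x, y in pts]

def distance(a, b):
    """Mean point-to-point distance between two normalised strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
```

With this in place, a gesture and a scaled, translated copy of it normalise to (almost) identical point sets, so `distance` between them is near zero, while a genuinely different shape scores higher – which is the basis for matching against a library of gesture templates.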
I’m currently working Didier’s package into a simple Flex app on my laptop, but I might also take a closer look at WPF now.
May 30, 2007
Writing about web page http://www.microsoft.com/surface/
News about the Microsoft touch-interface Surface seems to have finally hit the web this week – the MS site (done in Flash – where’s the Silverlight??) gives some pretty exciting examples of how it might be used. I especially like the CD-track picker example – very cool. Creating new kinds of UIs for this application would be very exciting indeed. I love the logo they’ve created too.