I’m fascinated by watching the developments in touch and movement based computer interfaces over the past few years. From the Apple iPhone to the Nintendo Wii, it seems that there is a great deal of excitement over these new interfaces. Nearly every week I see something interesting in this domain. Here, for example, is a neat little video demoing how to do IR tracking with the Nintendo Wii’s sensor and some IR reflecting tape on your fingers:
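The video doesn’t come with source, but the core trick is conceptually simple: the Wii remote’s IR camera reports the positions of a few bright spots per frame, and tracking fingertips amounts to matching each new spot to the nearest spot from the previous frame. Here is a minimal sketch of that nearest-neighbor matching step; the function name, the coordinate lists, and the greedy matching strategy are my own illustration, not anything from the video.

```python
def match_blobs(prev, curr):
    """Greedily pair each previous IR blob with its nearest current blob.

    prev, curr: lists of (x, y) tuples from consecutive frames.
    Returns a dict mapping each index in prev to the index of its
    match in curr, or None if curr has run out of blobs.
    """
    unused = set(range(len(curr)))
    matches = {}
    for i, (px, py) in enumerate(prev):
        if not unused:
            matches[i] = None  # fewer blobs this frame: finger left the view
            continue
        # Pick the closest still-unclaimed blob (squared distance is enough).
        j = min(unused, key=lambda k: (curr[k][0] - px) ** 2 + (curr[k][1] - py) ** 2)
        matches[i] = j
        unused.discard(j)
    return matches

# Two fingertips moving slightly between frames: their identities persist
# even though the camera happened to report them in swapped order.
frame1 = [(100, 200), (400, 210)]
frame2 = [(405, 215), (103, 198)]
print(match_blobs(frame1, frame2))  # {0: 1, 1: 0}
```

Greedy matching like this can mis-pair fingers that cross each other; a real implementation would use a globally optimal assignment, but for two or three well-separated reflective dots the greedy version is plenty.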
Whenever I see these new interfaces, I immediately think of the cortical homunculus, a representation of the body in the primary motor cortex that scales each body part in proportion to the amount of cortex devoted to controlling or sensing it. Pandering to our evolved overloading of certain motor and sense skills seems necessary for a successful interface. Necessary, but certainly not sufficient. Personally, for example, I find mice to be a quite cramped interface. Sure, it works great, but the strange disconnect between my hand movements and the cursor movement still, after all these years of using a mouse, feels odd. I mean, when I was a monkey, and I poked a banana, I would never expect that poke to cause hairs on a neighboring monkey to move. Which is why I find touch and other more direct pointing interfaces so intriguing. They appeal to the talented parts of our sense and motor skills, but they also restore a sense of direct feedback from the screen (or whatever device is being manipulated).
The real question, of course, is how long before these interfaces allow me to be a wizard. Thrashing my hands around in the air to cast a spell that will execute on my computer really appeals to my inner geek.