Microsoft recently announced their surface computing platform. It’s where you get to manipulate objects with your hands on a screen. I think after Minority Report, everyone wanted something that let them manipulate virtual objects. Since then, there’s been a gradual realization of that vision. But few of us had the imagination/drive/ability to actually do something about it.
I had seen simple demonstrations of this type of interface at malls, where a projector and a camera would use occlusion to calculate interactions with the objects. But it was kinda like poking at objects with a stub–you couldn’t pick them up and manipulate them. Microsoft’s surface computer seems to have done away with that, and added a sense of interaction between real objects and the virtual ones on the surface of the desktop–so one can load photos simply by dragging the photo ‘into’ the camera.
As for the multitouch sensing aspect of surface computing, it’s not the first. The idea has been around since the 80’s, if not earlier. However, the first demonstration that permeated the web was Jeff Han’s demo at TED. Multitouch sensors weren’t available commercially, so you had to build your own. In the talk, Jeff said it was low cost and scalable. It makes me suspect that many EEs could have built it. But we didn’t.
“People who are really serious about software should make their own hardware.” – Alan Kay
I used to think that this quote was only applicable in the days when software was much closer in abstraction to hardware; when people were writing in assembler and C. Nowadays, the only people that seem to do that are embedded programmers, and having done embedded programming for sensor networks, I can say it’s not half as fun as web or application programming. Having to manage memory, or build your own malloc, wasn’t fun, to say the least. It was kinda like having to time the spark plugs in your engine yourself, instead of just pushing on the gas pedal.
However, I’ve taken a new view of the quote. When I think about software, all of it processes information in some way. The input has to come from somewhere, and the output has to go somewhere to realize the bits in some form. But the inputs are limited by what humans are willing to enter, and, more importantly for this post, by what kinds of hardware will collect this data.
I can’t wait until clothes keep track of themselves and match themselves. Technically, it’s possible now to write the software, but one would have to enter the information by hand before the computer could do the tracking and matching. If there were hardware for clothes to serve this information themselves, it would expand the space of information for software to operate on.
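To make that concrete, here’s a minimal sketch of the “matching” half, with hand-entered data standing in for what garment-attached hardware (say, an RFID tag per item) could serve automatically. The `Garment` type, the color-compatibility table, and the wardrobe itself are all hypothetical illustrations, not a real system:

```python
from dataclasses import dataclass

@dataclass
class Garment:
    name: str
    kind: str   # "top" or "bottom"
    color: str

# A rough, hand-entered color-compatibility table -- exactly the kind of
# data you'd rather have hardware (or a service) provide for you.
COMPATIBLE = {
    ("white", "blue"), ("white", "black"), ("gray", "blue"),
}

def matches(top: Garment, bottom: Garment) -> bool:
    # Treat the compatibility relation as symmetric.
    pair = (top.color, bottom.color)
    return pair in COMPATIBLE or (pair[1], pair[0]) in COMPATIBLE

def outfits(wardrobe):
    # Pair every compatible top with every compatible bottom.
    tops = [g for g in wardrobe if g.kind == "top"]
    bottoms = [g for g in wardrobe if g.kind == "bottom"]
    return [(t.name, b.name) for t in tops for b in bottoms if matches(t, b)]

wardrobe = [
    Garment("oxford shirt", "top", "white"),
    Garment("tee", "top", "gray"),
    Garment("jeans", "bottom", "blue"),
]
print(outfits(wardrobe))  # [('oxford shirt', 'jeans'), ('tee', 'jeans')]
```

The software part is trivial; the bottleneck is that someone has to type in every garment. That’s the gap hardware would fill.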
In this light, I can see where the quote is applicable. To expand the reach of software to information that currently exists only in the physical world, you have to be willing to build hardware.