A couple of really good videos from TED. They are repetitive, so you need not watch both. What it boils down to is the ability to better integrate the natural world and the digital world. A combination of an off-the-shelf camera and projector with the processing power of a mobile phone makes it possible to project information onto everyday objects and use natural motions to access and manipulate digital information. More after the break.
Pranav Mistry is an MIT grad student. In the first video he makes a couple of very interesting points that trace his thought process. The first, not especially novel, idea is that we should use natural gestures to interact with the digital world. One could argue this is the same idea behind the pinch-and-zoom of the iPhone, or the interaction paradigms of Microsoft Surface. The second idea is a lot more intriguing: why limit pixels to the boundaries of a screen? Bring the two together and you have a natural, gesture-driven conversation with unbounded pixels.
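The pinch-and-zoom paradigm mentioned above reduces to something very simple: track two fingertips and scale the content by the ratio of their current separation to their separation when the gesture began. A minimal sketch in Python — the function and variable names are illustrative, not taken from any real SixthSense or iPhone code:

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) fingertip positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_zoom_factor(start_a, start_b, now_a, now_b):
    """Scale factor implied by moving two fingertips apart or together.

    The content is scaled by (current separation) / (starting separation),
    so spreading the fingers zooms in and pinching them zooms out.
    """
    start_d = distance(start_a, start_b)
    if start_d == 0:
        return 1.0  # degenerate start; leave the scale unchanged
    return distance(now_a, now_b) / start_d

# Fingertips start 100 px apart and end 200 px apart: content scales 2x.
print(pinch_zoom_factor((0, 0), (100, 0), (0, 0), (200, 0)))
```

The interesting part of systems like SixthSense is not this arithmetic, of course, but recovering the fingertip positions from a camera feed robustly enough that the arithmetic can be applied at all.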
The contraption is obviously grossly under-powered (in both battery juice and processor cycles). I am equally sure none of the ideas themselves are ground-breaking. But bringing them together, and building enough forgiveness into the algorithms to recognize valid actions reliably, is what makes this such a fascinating study.
Watch the video, especially the latter half, where the demos actually start.