IKEA | DREAM KITCHENS FOR EVERYONE is a Flash site that uses Matrix-style action shots to show would-be buyers a kitchen from every angle.
Not that I’m particularly interested in kitchens, but it got me thinking about 3D photos. My guess is that the IKEA Flash designers set up a studio with two or three dozen cameras in a semicircle, triggered them all at once for the action shot, and then used Flash to stitch the pictures together so that it looks like you’re traversing space while time is frozen.
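The core of that freeze-time trick is surprisingly simple: since every camera fires at the same instant, the "video" is just the simultaneous frames played back in spatial order along the arc. Here's a toy sketch of that idea (the function name and the angle-keyed input are my own invention, just for illustration):

```python
# Hypothetical sketch of the "bullet time" playback order.
# Each camera on the arc fires at the same instant; the effect comes
# from replaying those simultaneous frames in angular order.

def bullet_time_sequence(shots):
    """shots: list of (angle_degrees, frame) pairs from simultaneously
    triggered cameras. Returns the frames sorted along the arc, which,
    played back, looks like the viewpoint moving while time is frozen."""
    return [frame for angle, frame in sorted(shots)]

# Toy "frames": labels standing in for actual images.
shots = [(30, "cam_B"), (0, "cam_A"), (60, "cam_C")]
print(bullet_time_sequence(shots))  # ['cam_A', 'cam_B', 'cam_C']
```

Real implementations also interpolate between neighboring views to smooth the motion, but the ordering step is the heart of it.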
I remember my dad buying a four-lens camera a long time ago. If I’m not mistaken, it was a crude take on a plenoptic camera. A real plenoptic camera has a single main lens plus a microlens array. Anyway, my dad’s camera took four photographs at once, so that when you had them developed you got an actual sense of depth in the pictures. Or rather, it looked as if four layers of photos had been cut together and put into a hologram of sorts. The film was expensive to develop, and I think we only had a couple of rolls done.
However, with digital technology, I think this is wholly possible with just one lens. Given a lens that can change focus fast enough, you can capture a stack of images focused at different depths. I haven’t read his paper, but that’s roughly what Ren Ng’s light field camera seems to do: it can focus on different planes AFTER you take the picture. There’s also Ramesh Raskar’s line-drawing camera, which fires four flashes and uses the shadows they cast to create line drawings and videos.
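To make the focus-sweep idea concrete, here's a toy sketch (my own, not Ng's actual light-field method, which captures everything in a single exposure through a microlens array): given a stack of images focused at successive depths, score each pixel's sharpness in each frame and record which focus plane wins. That per-pixel winner index is a crude depth map, and picking each pixel from its winning frame gives an all-in-focus image.

```python
import numpy as np

def sharpness(img):
    """Local sharpness via a 4-neighbour Laplacian magnitude:
    in-focus regions have strong local contrast, blurred ones don't."""
    p = np.pad(img, 1, mode="edge")
    return np.abs(4 * img - p[:-2, 1:-1] - p[2:, 1:-1]
                  - p[1:-1, :-2] - p[1:-1, 2:])

def depth_index(stack):
    """stack: list of grayscale images focused at successive depths.
    Returns, per pixel, the index of the focus plane where that pixel
    is sharpest -- a crude depth map from a single-lens focus sweep."""
    scores = np.stack([sharpness(img.astype(float)) for img in stack])
    return scores.argmax(axis=0)

# Toy stack: a checkerboard (sharp everywhere) vs. a flat image (no detail).
checker = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
flat = np.zeros((4, 4))
print(depth_index([checker, flat]))  # all zeros: plane 0 is sharpest
```

This is focus stacking rather than true light-field refocusing, but it shows why a fast-focusing single lens already buys you depth information after the fact.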
However, unlike the Matrix-style photos, there seems to be no information about the backside of an object captured by a CCD array when you take a picture of it.
And yet, I wonder. The reason we can see anything at all is that light is reflected off the object. Sometimes light bounces off multiple objects before it reaches our eyes, and yet we only have information about the last object it bounced off of. I don’t know enough about optics (maybe it’s time to learn), but could the light that reaches our eyes or a camera still carry information about all the objects it has bounced off of? Unless the object is a mirror, light scatters off it, so we would probably get only partial information. But if we could collect partial information from enough different sources, it might be possible to piece it together into a cohesive picture of the backside of an object, not unlike how network coding pieces packets back together at the sink.
So if light still carries information about an object after scattering off it, and if we can detect that information, I’m guessing it would be possible to take Matrix-style photos from a single vantage point.