Introducing lftextures

Download: Windows (3M)
Linux 32-bit / 64-bit (3M)
Mac (5M)
Datasets: Path (2M)

Here are some binaries of our simple light field viewer, lftextures. When Dan and I first tried to capture a light field, manipulating the data was a real pain: how were we to know whether our data was any good without a simple test showing we had captured the scene? So we implemented the easiest operation, digital refocusing. Here’s the product:

An example of digital refocusing

The controls are simple: use the mouse wheel or Home/End keys to scroll through the scene; the application refocuses as you scroll.

Scroll up = move to background, Scroll down = move to foreground.

And finally, a few macros (for making movies)

Once you’re satisfied scrolling through the scene, try hitting “M” until the on-screen display says “M: 2, 3D” then click and drag the scene back and forth:

An example of a 3D light field representation

  1. DrAltaica

    on December 19, 2009 at 21:26

    What algorithm are you using to refocus the image? I don’t get why objects appear to move right as the focus moves further away.

  2. matti

    on December 19, 2009 at 23:06

    To tell the truth, Dan and I are still mystified that the algorithm I wrote works so well. It nudges each image to the right, but each successive image gets pushed further than the one before it.

    for (i = 0; i < NUM_PIC; i++)
      glTranslatef(offset*i, 0, 0);  /* drawing of image i elided */

    I could have moved the screen as the image moved, but I didn’t.

    If you hit ‘m’ once, you get ‘slide’ mode, which doesn’t really work but was supposed to be the same idea. Slide mode fixes the center image and moves the other images around it: the images that come before it go one way, and the images that come after it go the other.

  3. DrAltaica

    on December 20, 2009 at 08:56

    glTranslatef is an OpenGL function?

    So you are just shifting each image over by X * the camera number, then averaging them together?

  4. matti

    on December 20, 2009 at 13:09

    Yes, and yes. That is exactly how the rendering algorithm works.

    Surprising, huh? It was the very first thing I tried, as a debug view to check that all the images were on screen, shifted, and blended with each other. Imagine my surprise when the debug routine actually refocused the image!

  5. » Welcome! > FUTUREPICTURE

    on December 21, 2009 at 13:19

    […] GIFs look really bad. Here are some links to higher-resolution images if you don’t want to install our software and play with the included images (or your […]

  6. Jeremy

    on December 21, 2009 at 15:17

    Linux 64 bit is 404

  7. matti

    on December 21, 2009 at 15:50

    Fixed the 64-bit link, and the landscape dataset.

  8. Jeremy

    on December 21, 2009 at 15:45

    My friend and I were discussing how the out-of-focus portions of the photo contain multiple copies of an object. We’ve decided to dub it “meta-bokeh”.

  9. DrAltaica

    on December 22, 2009 at 00:19

    I redid your refocusing technique in Photoshop.

    Then I took a tomograph of the light field (at the plane of focus of the refocused image I made).

    Not much of a difference between them

    If you guys could take a picture of a flat object, like a book or a pizza box, just far enough away that it’s completely in view of all the cameras and perpendicular to the optical axis of your camera (so it’s the same shape in the photo as it is in real life), it would help.

    I think it’s the ratio of the Entrance Pupil’s size to the distance to the Plane of Focus.

    I wonder how you are going to compensate for the fact that you are not using a pinhole camera?

  10. DrAltaica

    on December 22, 2009 at 02:03

    I just realized I messed up on the refocusing.

    Here is the fixed refocusing,

    and the fixed difference.

    You can see the difference in the tire mark on the path.

Leave a Comment?

Send me an email, then I'll place our discussion on this page (with your permission).
