Projected Homography

Bouncing the calibration sequence off a mirror

This project’s aim is to replicate the results of J. Lee’s projective homography demonstration (citation at bottom).

The original project used costly fiber optic hardware. A projector shows a series of calibration images (a.k.a. "structured light", "gray-coded binary", etc.) that input a binary positional value to each of four light sensors, one at each corner of a rectangular surface (later extended to arbitrary surfaces).

The approach for this project is to reproduce that setup on the cheap: bare fiber optic cable, ambient light sensors, and an Arduino.

First order of business: acquire ambient light sensors and see if we can get a reading out of them:
An Arduino reading an ambient light sensor
Pictured: 10K resistor, 472 (4.7 nF) capacitor, Arduino.

const int LED = 13;
const int AMBIENT = A0;
void setup()
{
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);
  delay(500); // ms
  digitalWrite(LED, LOW);
  Serial.begin(9600);
}

void loop()
{
  // analogRead returns an int 0-1023
  // and takes about 0.1 ms on a stock Arduino
  int measurement = analogRead(AMBIENT);
  // units are AREF / 1024
  // when AREF = 5V, each unit is ~4.9mV, i.e. 49 tenths of a millivolt
  // The AREF source can be changed via analogReference()
  // cast before multiplying so 1023 * 49 can't overflow a signed int
  unsigned int decimillivolts = (unsigned int)measurement * 49;
  Serial.println(decimillivolts);
  delay(500); //ms
}

With our preliminary step complete, we move on to the more interesting problem: How to interface a fiber optic cable with our cheap sensor?

Cutting the fiber is tricky… each cleave yielded inconsistent results, so I just kept cutting until things looked "good enough". Heat might have helped, but anecdotes from a Google search suggested otherwise. Ditto for cutting the tops off the ambient light sensors to produce a flat surface.

I chose non-gel superglue to adhere the cleaved cable to the light sensor.

The superglue worked out well enough that no other adhesives were experimented with.

Last, the light sensors are wrapped so that light not coming from the fiber optic can't trigger them.
A fiber optic cable is attached to the ambient light sensor

At this point I immediately noticed that the output from the sensors dropped by a factor of roughly 10-12 (3.1V down to 0.28V). I wrote up a small procedure to accumulate light readings over time to compensate for the greater effect of noise on the output:

const int LED = 13;
const int AMBIENT = A0;

// integrate over a fluorescent bulb's flicker cycle, usually 100
// or 120 Hz. analogRead() takes about 0.1 ms, so 120 reads span
// at least one full cycle.
const unsigned int MEASURES = 120;
unsigned int measurements[MEASURES];
const unsigned int FIRST = 8;        // samples averaged per block
const unsigned int FIRST_SHIFT = 3;  // log2(FIRST)
const unsigned int SECOND = 15;      // number of blocks: 120 / 8

void setup()
{
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);
  delay(500); // ms
  digitalWrite(LED, LOW);
  // we could zero out measurements[]... nah
  Serial.begin(9600);
}

void loop()
{
  Serial.println("# measure");
  // analogRead returns an int 0-1023
  // requires 0.001 ms to read
  for (unsigned int i=0; i < MEASURES; i++)
  {
    measurements[i] = analogRead(AMBIENT);
  }
  Serial.println("# accumulate");
  // We will now accumulate and average, using the
  // fact that our measures are 0-1023 and
  // an unsigned int can hold 64 (or fewer) accumulated
  // values before overflowing.
  for (unsigned int i=0; i < (MEASURES >> FIRST_SHIFT); i++)
  {
    unsigned int block = i << FIRST_SHIFT;
    for (unsigned int j=1; j < FIRST; j++)
    {
      measurements[block] += measurements[block + j];
    }
    measurements[block] >>= FIRST_SHIFT;
  }

  // second pass: average the fifteen block averages into measurements[0]
  {
    for (unsigned int j=1; j < SECOND; j++)
    {
      measurements[0] += measurements[j << FIRST_SHIFT];
    }
    measurements[0] /= SECOND;
  }

  // average is now in measurements[0]

  // units are AREF / 1024
  // when AREF = 5V, each unit is ~4.9mV, i.e. 49 tenths of a millivolt
  // The AREF source can be changed via analogReference()
  unsigned int decimillivolts = measurements[0] * 49;
  Serial.println(decimillivolts);
  delay(500); //ms
}

Next up was a bit of elbow grease; I made three more, tested them, then installed them in the four corners of a plywood panel:
A surface with four embedded fiber optic cables

I then set up a laptop and projector. In order to test the sensitivity of each sensor, I wrote a small web page that I could move around the screen:
Using a projector to test the readings of each sensor

<style>
	.binary
	{
		height: 33%;
	}
	.on
	{
		background-color: white;
	}
	.half
	{
		background-color: #777;
	}
	.off
	{
		background-color: black;
	}
</style>
<div class="binary on"></div>
<div class="binary half"></div>
<div class="binary off"></div>

...and used this to calibrate (find the expected output of) each sensor when exposed to #FFFFFF from the projector.
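
A sketch of that idea, assuming raw 0-1023 readings and a cutoff at half of each sensor's full-white reading (the names and the 0.5 cutoff here are arbitrary, not part of the actual rig):

# sketch only: the cutoff fraction is an illustrative assumption
WHITE_FRACTION = 0.5  # halfway between a "white" and a "dark" reading

def white_thresholds(white_readings):
  '''white_readings: one full-white (#FFFFFF) reading per sensor, 0-1023.'''
  return [reading * WHITE_FRACTION for reading in white_readings]

# e.g. four corner sensors with slightly different sensitivities
print(white_thresholds([610, 584, 630, 571]))  # [305.0, 292.0, 315.0, 285.5]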

Now that we have our rig completely set up, we can code a calibration sequence:
Part of the calibration sequence

And use the readings from the sensors to detect the surface in the projected view:
Seeing if we found the four corners of the surface

The output of the calibration reader is a bit unnecessary, but fun. I chose 8 scenes for each calibration sequence (one vertical sequence, one horizontal sequence, for a total of 16 scenes) with black screens between each.

A horizontal calibration scene. It's a binary pattern of alternating white and black stripes of equal width; at each step, the stripes subdivide.
Pictured: Horizontal calibration sequence showing calibration images #2-#5. #1 is a white image, and #6 & #7 are painful to view (seriously).
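
The scene images themselves are straightforward to generate. A sketch using a plain binary code that matches the decoder further down (the function, resolution, and the choice of which half is lit first are illustrative assumptions, not the actual calibration code):

# sketch only: one boolean per pixel column (or row); True = white
def stripe_scene(scene, length, bits=8):
  '''Scene 1 is all white; scene s >= 2 splits the axis into 2**(s-1)
  equal stripes, so each later scene subdivides the previous one.'''
  if scene == 1:
    return [True] * length
  white = []
  for p in range(length):
    code = (p * (1 << (bits - 1))) // length  # 7-bit position code, 0-127
    bit = (code >> (bits - scene)) & 1        # the bit this scene reveals
    white.append(bit == 1)                    # assumption: 1 means white
  return white

# scene 2 splits a 1024-pixel axis into a black half and a white half
halves = stripe_scene(2, 1024)
assert halves[:512] == [False] * 512 and halves[512:] == [True] * 512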

I first wait until the opening scene (all white) is detected by every sensor. Then, for each calibration scene, a bit is appended to each sensor's binary number: 1 if that sensor saw white, 0 if it did not. When finished I have two 8-bit numbers (one horizontal, one vertical) for each sensor:
An eight-bit binary number representing the position of the sensor in the projected view
Pictured: One of eight outputs ("151") of the calibration code. The most significant bit (Here labeled #1) will always be "on", since scene #1 is an all-white image. Each bit encodes whether or not white was detected at each step in the 8-step calibration sequence.
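
As a sketch of that accumulation step (the threshold and readings here are made up; the real reader works off live sensor values):

# sketch only: an illustration of the bit accumulation, not the actual reader
def accumulate_position(readings, threshold):
  '''readings: one sensor's value for each of the 8 scenes, in order.
  Shift in a 1 when the reading clears the white threshold, else a 0.
  Scene #1 is all white, so any valid result has its MSB set (>= 128).'''
  value = 0
  for reading in readings:
    value = (value << 1) | (1 if reading >= threshold else 0)
  return value

# white, black, black, white, black, white, white, white -> 0b10010111 = 151
print(accumulate_position([900, 20, 25, 870, 30, 880, 860, 910], 500))

Run through the decoder below, 151's lower seven bits are 0b0010111 = 23, which places that sensor at (127 - 23) / 128, i.e. about 0.81 of the window's extent along that axis.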

We take our encoded position and convert it to screen coordinates for our pyOpenGL application (non-working example):

import sys

from OpenGL import GL as gl

# minimal scaffolding so the example hangs together; the real window
# size and texture id come from the surrounding application
class Point(object):
  def __init__(self, x, y):
    self.x = x
    self.y = y

MSB = 7                      # readings are 8-bit values with the MSB always set
width, height = 1024, 768    # example window size
texture = 0                  # placeholder; the real app creates and fills a texture

def parse_bitwise_positions(y_0, y_1, y_2, y_3, x_0, x_1, x_2, x_3):
  '''
  our values range from 128-255.

  The binary representation of the value [i.e. bin(176) is '0b10110000']
  denotes its position in the scene. The MSB (here the 7th) denotes
  whether our sensor is on the projected surface or not (if this bit is
  not set we cannot continue, hence we check that all inputs are at
  least 2**7).

  The next bit (here the 6th bit) tells us whether we are in the top or
  bottom half of the screen, 0 is top, 1 is bottom. This halves our
  position space. The following bits continue to halve the position
  space until we run out of precision, at which point we are confident
  we know the encoded position.
  '''
  # coerce our string input to ints
  (y_0, y_1, y_2, y_3, x_0, x_1, x_2, x_3) = (int(y_0), int(y_1),
      int(y_2), int(y_3), int(x_0), int(x_1), int(x_2), int(x_3)
    )
  top_left = Point(0, 0)
  top_rght = Point(0, 0)
  bot_rght = Point(0, 0)
  bot_left = Point(0, 0)
  for i in (y_0, y_1, y_2, y_3, x_0, x_1, x_2, x_3):
    if i < 2**MSB:
      print "Bad sensor input!"
      return (0, 0) + (0, height) + (width, height) + (width, 0)

  for i in range(MSB):
    if y_0 & 2**i == 0:
      top_left.y += height / float(2**(MSB-i))
    if y_1 & 2**i == 0:
      top_rght.y += height / float(2**(MSB-i))
    if y_2 & 2**i == 0:
      bot_rght.y += height / float(2**(MSB-i))
    if y_3 & 2**i == 0:
      bot_left.y += height / float(2**(MSB-i))
    if x_0 & 2**i == 0:
      top_left.x += width / float(2**(MSB-i))
    if x_1 & 2**i == 0:
      top_rght.x += width / float(2**(MSB-i))
    if x_2 & 2**i == 0:
      bot_rght.x += width / float(2**(MSB-i))
    if x_3 & 2**i == 0:
      bot_left.x += width / float(2**(MSB-i))

  return (bot_left.x, bot_left.y) + (top_left.x, top_left.y) + (top_rght.x, top_rght.y) + (bot_rght.x, bot_rght.y)

def draw_quad(x_0, y_0, x_1, y_1, x_2, y_2, x_3, y_3):
  gl.glBindTexture(gl.GL_TEXTURE_2D, texture)
  gl.glBegin(gl.GL_QUADS)
  gl.glTexCoord(1, 0)
  gl.glVertex(x_0, y_0)
  gl.glTexCoord(0, 0)
  gl.glVertex(x_1, y_1)
  gl.glTexCoord(0, 1)
  gl.glVertex(x_2, y_2)
  gl.glTexCoord(1, 1)
  gl.glVertex(x_3, y_3)
  gl.glEnd()

draw_quad(*parse_bitwise_positions(*sys.argv[1:9]))

Et voilà:
And the output! A projected homography

The interesting property of this system is that it works with an OpenGL window of any size (so long as the first calibration image hits all sensors), with the projector upside down or turned on its side, or even with a mirror introduced into the path of the projection (!), since each sensor reports its own position directly in the projector's coordinate system no matter what path the light takes to reach it:
Bouncing the calibration sequence off a mirror
A projected homography bounced off a mirror
Pictured: The image aligned with the sensors on the projection surface. Below: the output of the laptop to the projector.

J. Lee, P. Dietz, D. Aminzade, and S. Hudson. "Automatic Projector Calibration using Embedded Light Sensors", Proceedings of the ACM Symposium on User Interface Software and Technology, October 2004. http://www.johnnylee.net/projects/thesis/

