
1st experience – December 2008


After the first workshop, we had a list of questions regarding the use of the software.

One of the first reactions was the confusion the mirror image induced. We decided to use the mirroring option of the beamer, so that we literally saw a copy of ourselves rather than a mirror image.
The catch was that all images then had to be introduced upside down, and the computer screen itself appeared upside down as well.
How did this work in Les Bains? Was it one of the questions?
Would it be possible to have this option embedded in the code?

“At Les Bains we had the projection at the back of the stage area and the
beamer near the front. This meant that left-to-right on the projection
matched left-to-right on the stage, top-to-bottom on the projection
mapped to backstage-to-frontstage, and images could be seen the normal way up.
It would be possible to introduce options to flip the video and images
in the software.”
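Until such a flip option exists in the software, the transform itself is simple. A minimal sketch in Python (the actual software is openFrameworks/C++, so this is purely illustrative), treating a frame as a nested list of pixel values:

```python
def flip_horizontal(frame):
    """Mirror each row (left-right flip, i.e. undo a mirror image)."""
    return [row[::-1] for row in frame]

def flip_vertical(frame):
    """Reverse the row order (top-bottom flip, i.e. undo an upside-down image)."""
    return frame[::-1]

# A tiny 2x3 "frame" of pixel values:
frame = [[1, 2, 3],
         [4, 5, 6]]

print(flip_horizontal(frame))  # [[3, 2, 1], [6, 5, 4]]
print(flip_vertical(frame))    # [[4, 5, 6], [1, 2, 3]]
```

Applying both flips in sequence is a 180° rotation, which is what the upside-down beamer arrangement amounts to.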

We tried to overlay an image using the ‘hide overlay’ action in the trigger OFF-options. We wondered whether it would be possible to control how long the image stays on screen. At the moment it flips away so quickly that we cannot play with it. Maybe this is due to our small playing field?

“Could be; the ‘hide overlay’ action just makes the screen go back to the
video, and the OFF actions are triggered when someone leaves the trigger
zone. I could look into the possibility of having a time duration on the image.”
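A minimum-display-time option like the one discussed could be sketched as a small timer check. This is a hypothetical illustration, not code from the actual software:

```python
import time

class TimedOverlay:
    """Keep an overlay visible for at least `min_duration` seconds,
    even if the OFF trigger fires earlier (hypothetical sketch)."""

    def __init__(self, min_duration):
        self.min_duration = min_duration
        self.shown_at = None

    def show(self):
        # Record when the overlay appeared.
        self.shown_at = time.monotonic()

    def may_hide(self):
        """Return True if the OFF trigger is allowed to hide the overlay now."""
        if self.shown_at is None:
            return True
        return time.monotonic() - self.shown_at >= self.min_duration

overlay = TimedOverlay(min_duration=2.0)
overlay.show()
print(overlay.may_hide())  # False right after showing
```

The software's event loop would then delay the ‘hide overlay’ action until `may_hide()` returns true, instead of hiding immediately on leaving the zone.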

It happened very often that we set up the trigger zones with a different image on each, and the machine got confused. When zone 1 was edited with the image of a coffee and zone 2 with the image of a music score, it would show the coffee in both zones and/or the image in zone 2 would remain inactive. I cleared the zones, started from a new zone, quit the program and restarted it, but it kept happening somehow.

“The images are always determined by the trigger sensors rather than the
zonemaps, so you need to explicitly set up triggers to hide and change
images when the zonemaps change.
The solution to this would be to have actions on the zonemaps themselves,
so that some things can be activated or deactivated when a zonemap
changes.”

We introduced images with a transparent background, in .png format and at 640 × 480 pixels, but our last series of images appeared as blank screens on the projector; there was no way to get them right. What I’m not sure of at this stage is whether we worked in inches or cm. Maybe that is a parameter.

“I am not totally sure what might have caused this, although I vaguely
remember a similar problem. I think it was due to the formatting of the
images. It might be that the last set of images were layered, in which
case the software can’t support that, but I can’t remember if PNG images
can be layered. It may also be that the last set of images were at a
different bit depth, i.e. 8-bit or 16-bit rather than 24-bit or 32-bit.
If you can check through the images and see what made them different
from the ones that worked, that would be useful to know.”
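The bit depth and colour type mentioned above can be read straight out of a PNG file's IHDR chunk, which sits at a fixed offset after the 8-byte signature. A small stand-alone check in Python (not part of the workshop software):

```python
import struct

def png_info(data: bytes):
    """Return (width, height, bit_depth, colour_type) from raw PNG bytes."""
    if data[:8] != b'\x89PNG\r\n\x1a\n' or data[12:16] != b'IHDR':
        raise ValueError('not a PNG file')
    # IHDR payload: width (4 bytes), height (4), bit depth (1), colour type (1)
    return struct.unpack('>IIBB', data[16:26])

# Minimal fake header for a 640x480, 8-bit-per-channel RGBA image
# (colour type 6 = truecolour with alpha):
header = (b'\x89PNG\r\n\x1a\n'
          + struct.pack('>I', 13) + b'IHDR'
          + struct.pack('>IIBBBBB', 640, 480, 8, 6, 0, 0, 0))
print(png_info(header))  # (640, 480, 8, 6)
```

Running this over the working and the blank images (by passing in `open(path, 'rb').read()`) would show whether the bit depth or colour type differs between the two sets.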

It would be great to work with sound, both as an output from the machine and as a field for trigger zones. How can we organise this? Nicolas, could I look into the OSC messages and Pure Data or Python with you? Simon, what does it mean to set up a second machine for sound?

“The main thing is that the machines need to be networked to communicate
with each other. It would be good to have a network hub at the workshop
that we can connect several machines to and then experiment with routing
messages to different software options on different machines (ie sound,
text, image).
For one option you could have a look at this:


it is a piece of software that I wrote that displays text and images and
can be controlled via OSC as well as send out OSC messages to
synchronise with other applications. The text and images can be changed
to whatever you wish to use. It would need to be run on a separate
machine with its own monitor or projector.
It is also written in openframeworks, but let me know if you have any
problems running it. The source is included if you need to compile, but
this was written with the new version of openframeworks (version 0.05)
so must be compiled against that.”
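OSC itself is a simple binary format (a null-padded address string, a type-tag string, then big-endian arguments), so the messages routed between machines can even be built by hand. A minimal encoding sketch in Python, independent of any of the software mentioned; the address used is made up for illustration:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad bytes to a multiple of 4, as OSC requires."""
    return b + b'\x00' * (4 - len(b) % 4)

def osc_message(address: str, *floats) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    packet = osc_pad(address.encode())                     # address pattern
    packet += osc_pad((',' + 'f' * len(floats)).encode())  # type-tag string
    for value in floats:
        packet += struct.pack('>f', value)                 # big-endian float32
    return packet

# e.g. a trigger-zone event sent to a sound machine (hypothetical address):
msg = osc_message('/zone/1/enter', 0.5)
print(len(msg))  # 24 bytes, always a multiple of 4
```

Such a packet would typically be sent as a UDP datagram to the second machine, where Pure Data or the text/image application above can receive and react to it.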

We felt we had to move slowly in order to follow ourselves on screen (there is a slight delay). I suppose it is part of learning how to ‘play’ with the machine.

“You can improve the speed a wee bit by ‘nice’-ing up the software, so
run it with a command such as (a negative niceness raises priority, and
needs root):

sudo nice -n -19 ./clickToLaunchApp.sh

Unfortunately there are a number of factors that determine the speed of
the process: the camera, the grabbing, the speed of the computer, the
speed of the video processing.”


We had a hard time with the light. We did not have the possibility to test it beforehand, but wondered whether the best lighting is not just above the camera. This time we used big theatre spots as ‘indirect’ light on the 4 m².
The floor was too shiny, which made our presences on the camera jump like fleas; when we covered the floor with a black cloth it worked a lot better. (I realise now that Simon mentions this in the description of the workshop at APT; I had forgotten about it, but it is really important.)

“Sorry I didn’t explain about this aspect, to be honest it slipped my
mind. Correct lighting, as you have discovered, is *really* important
and makes a big difference to the effectiveness of the software. It
sounds like you came up with the right solution however. Basically you
need to remove as much shadow as possible. Several lights focused on the
same area from different positions and at oblique angles is best (which
sounds like what you did). These should not be too strong and ideally if
some diffusion material can go over them that can help.”
