Thursday, September 23, 2004

Re: vannelle images (changes by Yoichiro Serita)

From: delfin@cc.gatech.edu
Date: September 20, 2004 7:11:21 PM EDT

Hi everyone,
Wow, I wish I could be there! I am really happy you are having fun with the smoke
patch, and I look forward to redefining/modifying/enhancing any part of the patch
based on your comments (and I am sure I speak for Maria too ;).

Harry, it sounds like you hurt your wrist. Are you ok??? I hope it's nothing
too bad...

Thank you for the images (I haven't seen the movies yet; I will install the codec right
after this email). Which picture shows the space for the membrane exhibit?
How dark will the space be? What kind of cameras will we have? Where will
they be positioned?

In this email I will try to address some of the comments re: visuals, membrane dynamics,
and the parameter space, but I would also love to talk to you live. Is there
a phone where I can reach you? Xinwei, does your US cell work in Europe?
Or you can reach me here on my cell: +33 6 60 09 41 76.

For the design of the space, I like the direction that evolves throughout the
blog: the idea of contiguous screens that reflect different states but are still linked by a
single thematic thread, the idea of flow, of memory, of making people
aware of the space around them, of the memory of the space's near and distant
past, and of making people aware of the tension between bodies, of the
perturbations they create in the empty space around them.

Some more specific ideas re: the blog:

* "i want to have Others bodies come _into_ definition in respoonse to my
gesturing":

We can play with the notion that gesturing is like "clearing" layers that have
condensed on a window fogged by time. Layer 0 is empty, layer 1 is past
silhouettes (the frame difference of past footage), layer 2 is the past footage
itself. Starting from darkness, as people begin gesturing their actions are
transformed into "clearing smoke" that reveals the past silhouettes; finally, as
the gesturing becomes more intense and/or lasts longer, we reveal the final
layer, the true video data (the real image of their bodies).
Technically, the density and velocity data would come from the frame
difference of the current video, but we would use the smoke image as a mask
on the alpha channel of the pixels, so that the smoke clears the current layer
to reveal the next layer buried underneath (and this can be applied
recursively). More layers can be added; we can play, for example, with
near and distant pasts.
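
Here is a minimal NumPy sketch of that layered compositing, assuming the smoke
solver already gives us a density field in [0, 1]; the layer contents and the
reveal() helper are made-up stand-ins, not the actual patch:

    import numpy as np

    def reveal(layers, smoke_density):
        """Use the smoke density as an alpha mask that clears each layer to
        expose the one underneath; applied recursively, low density shows
        layer 0, moderate density favors layer 1, and sustained/intense
        gesturing (high density) brings out the deepest layer."""
        if len(layers) == 1:
            return layers[0]
        alpha = np.clip(smoke_density, 0.0, 1.0)[..., None]   # HxWx1 mask
        return (1.0 - alpha) * layers[0] + alpha * reveal(layers[1:], smoke_density)

    h, w = 240, 320
    layers = [np.zeros((h, w, 3)),            # layer 0: darkness
              np.random.rand(h, w, 3) * 0.3,  # layer 1: stand-in for past silhouettes
              np.random.rand(h, w, 3)]        # layer 2: stand-in for past footage
    smoke = np.random.rand(h, w)              # stand-in for the solver's density field
    frame = reveal(layers, smoke)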

* Another idea: condensation, i.e. using the moving body as a sink for the smoke
particles (I think this was mentioned in the blog). Xinwei, Yoichiro, we
could do this by defining as the input a vector field that traps the smoke
around the silhouettes, sort of an inverse diffusion process.
So we would start with smoke, and as people gesture, the
smoke crystallizes around their silhouettes.
Mathematically, we can use the gradient of g(image), where g is the function
defined in equation 7 at
http://www.ai.mit.edu/people/delfin/8903-sp03/image.html

The vector image on that page shows the resulting vector field: the gradient of g
basically creates "valleys" around edges in the image, so the smoke would get
trapped in these valleys (a rough sketch follows at the end of this bullet)...

We can relax this condensation and allow bridges to form between different
bodies, perhaps using a string across the image as a path for the
smoke from one "valley" into another...
Or we could even use the string as the only density source (the source of smoke)
and the silhouettes as "sinks" for the smoke, so that they trap smoke while
people interact with the string.
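
To make the "valley" idea concrete, here is a rough sketch of such a
condensation field; since I am not reproducing equation 7 here, g is
approximated by a smoothed negative edge magnitude, which is only a stand-in
for the real definition on that page:

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def condensation_field(gray_image, sigma=3.0):
        """Return a velocity field (vx, vy) that points toward silhouette
        edges, so advected smoke drifts into the 'valleys' around moving
        bodies and gets trapped there."""
        edges = np.hypot(sobel(gray_image, axis=1), sobel(gray_image, axis=0))
        g = gaussian_filter(-edges, sigma)     # g has valleys where edges are strong
        gy, gx = np.gradient(g)
        return -gx, -gy                        # flow downhill in g, i.e. toward the edges

In practice (vx, vy) would be fed to the solver each frame as an external
velocity or force input; the string variant would just add density along the
string's path while keeping the same trapping field around the silhouettes.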

* Yet another way to use the silhouettes is to take the current video as the
density source and make the image come into focus where a silhouette appears
(for a certain time, so entire patches of the image can come into focus) by
setting the velocity vector field to zero wherever the frame difference is
non-zero, and to let everything diffuse wherever there has been no gesturing
(using, for example, random velocity vectors where the frame difference is
zero). So your own movements help crystallize past images and bring the past
into focus, for a certain time at least.
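
A hedged sketch of that "freeze where there is gesturing, diffuse everywhere
else" rule; the threshold and noise amplitude are invented knobs, and the real
patch may wire this differently:

    import numpy as np

    def focus_velocity(prev_frame, cur_frame, thresh=0.05, noise=0.5):
        """Zero velocity where the frame difference is non-zero (so imagery
        crystallizes and holds there), random drift where it is zero (so the
        rest of the image diffuses away); the current video feeds the density."""
        moving = np.abs(cur_frame - prev_frame).mean(axis=-1) > thresh
        vx = np.where(moving, 0.0, noise * (np.random.rand(*moving.shape) - 0.5))
        vy = np.where(moving, 0.0, noise * (np.random.rand(*moving.shape) - 0.5))
        density_source = cur_frame.mean(axis=-1)
        return vx, vy, density_source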

Just some thoughts; Maria and I can definitely push on some of these ideas
in October to see if we can get the desired effects...

For the parameter space, you guessed right that vel decay, dens decay,
dt, and diff have the most pronounced effect.
* vel decay applies a damping factor to the vector field from the
previous time step, so the vector field of the current time step stands out
more.
* dens decay does the same thing to the density field, so setting it to 0
means that no smoke is remembered.
* dt is basically a turbulence factor (the higher, the more chaotic).
* diff strongly damps the smoke; a small non-zero value (around 0.1,
I think) creates a "ghost" effect instead of a smoke effect.
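
To make that concrete, here is a toy per-frame update showing roughly how I
read those four parameters, assuming a Stam-style semi-Lagrangian step; the
names mirror the patch parameters, but the diff-to-blur mapping is my own
stand-in, not what the patch actually does:

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def step(density, vx, vy, dens_decay=0.99, vel_decay=0.99, dt=1.0, diff=0.0):
        vx, vy = vel_decay * vx, vel_decay * vy       # vel decay damps the previous field
        h, w = density.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        # dt scales the advection distance per frame (higher -> more chaotic motion)
        density = map_coordinates(density, [yy - dt * vy, xx - dt * vx],
                                  order=1, mode='wrap')
        if diff > 0.0:                                # diff smears the smoke into a "ghost"
            density = gaussian_filter(density, sigma=diff * 10.0)
        density *= dens_decay                         # dens decay = 0 -> no smoke remembered
        return density, vx, vy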

Let me know as the ideas for the visual design evolve and get finalized. I
look forward to working with Yoichiro and everyone else to refine the
smoke patch and build more visuals to get the desired effects...

talk to you soon!
Delphine


On Mon, 20 Sep 2004, Yoichiro Serita wrote:

vannelle still images have just been uploaded:

http://soleil.us/vannelle/Pictures.html
or just download
http://soleil.us/vannelle/vannelle.tar.gz
