Re: personal/hill/sensorium

Darren Kelly
Mon, 13 Oct 1997 14:08:39 +0200 (MST)

>We, Palindrome, is rehearsing at the moment, "Minotaur". A piece in which
>cameras and computers generate sound from the dancers movements. but to be
>quite precise, it is not the movement which triggers the sound, but the
>changes in position. This is not quite the same thing.

Dear Dance-tech,

This is not so much a reply to Palindrome, but rather an excuse to
mention something else.

The precise language to describe motion is maths, and the rules
of motion are physics. I don't mean that other languages for describing
movement (tai chi, kung fu, yoga, choreography notes, ...) are not
useful. However, physics and maths notation describe the most general
rules, and hence are incredibly compact without missing ANYTHING (as far
as one can measure).

And unfortunately you are all dealing with motion, so you're going to
have to slowly start using some maths and physics terms to tidy things
up. Having spied on your web pages and mail the last months, I've often
noticed how inexact dance-tech terms are. They are clumsy in the same
way I am when I attempt certain dance moves. I mean this in the nicest
possible way.
I hope then you all welcome my role as helpful watchdog and clarifier. On
occasion I will send mail such as this (but shorter) to help clarify
terms, put the physics straight, and maybe give you some different (more
exact) words to describe things. And I won't worry if people in turn use
physics terms inexactly.

I'm writing an article for CMJ about all possible mappings between
motions and triggers and modulations, which attempts to explain these
matters in exact mathematical notation and with examples. Hopefully
some of the terms I introduce in that article will unify discussions.

Or to put it another way: everything becomes frightfully simple, more
powerful and more accurate if you generalise things properly using maths.
The more you exploit this knowledge in your control software, the more
options you'll have, and the easier it will be to CONTROL AND UNDERSTAND
those options.
So back to Minotaur:
>We, Palindrome, is rehearsing at the moment, "Minotaur". A piece in which
>cameras and computers generate sound from the dancers movements. but to be
>quite precise, it is not the movement which triggers the sound, but the
>changes in position. This is not quite the same thing.

Pedantic mode

Neither cameras nor computers generate sound. Speakers do.
What you are doing is "triggering" sound modules, sound cards etc.,
with triggers selected by computer FROM signals FROM cameras.

camera -> computer (triggering algorithm) -> sound module -> speakers

The camera is miles away from the speaker (in signal space). Getting the
language right doesn't require any fancy jargon, it just requires you
to say EXACTLY what you do and EXACTLY what is there, as if an engineer
had to take your instructions and build the system. Your description
puts the cart before the horse, or the camera ends up attached to the
high-current speaker cables from an amplifier.

On motion

Movement IS defined as "change in position". They (movement and changes
in position) are exactly the same thing. Mathematically, velocity as a
function of time v(t) (a vector in 3D space) is the time derivative of the
position vector r(t):

       d r(t)
v(t) = ------
        d t

Whenever something changes position, a velocity is present.
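In sampled data (camera frames, sensor readings) that derivative becomes a finite difference between successive position samples. A minimal sketch in Python; the sample rate and position values below are made up for illustration:

```python
# Estimate velocity from sampled 2D positions by finite differences.
# r[k] is the position at sample k; dt is the sample interval
# (2 samples per second here, chosen for round numbers).
dt = 0.5
r = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]  # made-up position samples

def velocity(r, dt):
    """v[k] ~ (r[k+1] - r[k]) / dt, the forward-difference derivative."""
    return [((x2 - x1) / dt, (y2 - y1) / dt)
            for (x1, y1), (x2, y2) in zip(r, r[1:])]

print(velocity(r, dt))  # one velocity vector per pair of samples
```

Note that the velocity list has one fewer entry than the position list: a change in position needs two samples, which is exactly the point above.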

If I'm guessing correctly about what you are doing, the triggering
you are describing is mapping changes in position to action, but with
a TRIGGER RESET, i.e. you have to leave a position cell and return to
it to make it retrigger. The reason you perceive this as different
from "triggering movement" is that your cells are too big. Imagine
the cells were really, really tiny, infinitesimally small. Then
you have a thing r(t) with which to PLAY, the position at any given
time in 3D. Then, in order to perform an action such as sounding a
note whenever you move from one spot to another, you'd have to
MAP the continuous "movement" to discrete note bytes (0-127). Then
it would become apparent that you are actually triggering "movement".
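The cell-and-reset behaviour described above can be made concrete. The following is my own illustrative sketch, not Palindrome's actual code: positions are quantized into grid cells, and a trigger fires only on ENTERING a cell, so lingering inside a cell never retriggers.

```python
def cell_of(pos, cell_size):
    """Quantize a 2D position into a grid cell index."""
    x, y = pos
    return (int(x // cell_size), int(y // cell_size))

def cell_triggers(positions, cell_size):
    """Fire a trigger each time the tracked point enters a new cell.
    Staying inside a cell does not retrigger (the TRIGGER RESET);
    leaving a cell and returning to it does."""
    triggers = []
    last_cell = None
    for pos in positions:
        cell = cell_of(pos, cell_size)
        if cell != last_cell:
            triggers.append(cell)
            last_cell = cell
    return triggers

# Moving within one cell fires once; crossing into the next fires again.
print(cell_triggers([(0.1, 0.1), (0.4, 0.2), (1.2, 0.2)], cell_size=1.0))
```

Shrink cell_size towards zero and every change of position fires a trigger, i.e. the scheme converges on triggering movement itself.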

You trigger whenever something moves (has non-zero velocity), the velocity
being sufficient to move from cell to cell. One could actually measure
the velocity and "integrate" to get position, and thus establish whether
the velocity was sufficiently large to get from one place to another.
In this sense, your algorithm is just a VELOCITY THRESHOLD TRIGGER,
i.e. it indeed triggers "movement".
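The equivalent velocity threshold trigger is just as short. A hedged sketch (threshold and timing values are mine, for illustration): estimate the speed between successive samples and fire whenever it exceeds a threshold.

```python
import math

def speed(p1, p2, dt):
    """Magnitude of the finite-difference velocity between two samples."""
    return math.dist(p1, p2) / dt

def velocity_threshold_triggers(positions, dt, threshold):
    """Fire at every sample index where the estimated speed exceeds
    the threshold: a VELOCITY THRESHOLD TRIGGER, firing on movement."""
    return [k for k, (p1, p2) in enumerate(zip(positions, positions[1:]))
            if speed(p1, p2, dt) > threshold]

# Only the second step is fast enough to fire.
print(velocity_threshold_triggers(
    [(0.0, 0.0), (0.1, 0.0), (2.0, 0.0)], dt=1.0, threshold=1.0))
```

The threshold plays the same role as the cell size: it decides how much movement counts as "movement".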

On triggers

I'd like to encourage dance-tech to think more generally about triggers.

There is a whole art called "signal processing", a branch of mathematics -
widely used by engineers - for selecting TRIGGERs or modulation parameters
from SIGNALS, these triggers and modulation parameters being used to do
something (act) through an action mapping. This area has its own language,
and techniques.

Forget movement, sound, and light for a moment; signal processing as such
has nothing to do with those things whatsoever. There are just different
types of triggers, such as:

threshold triggers (upper and lower, the sort you know)
gradient triggers
continuous triggers (as required for modulation)
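To show how little machinery each type needs, here is an illustrative Python sketch of the three (my own stand-ins, not any particular product's API):

```python
def threshold_trigger(x, lower, upper):
    """Fires when the signal leaves the [lower, upper] band."""
    return x < lower or x > upper

def gradient_trigger(x_prev, x_curr, dt, slope):
    """Fires when the signal changes faster than a given rate."""
    return abs(x_curr - x_prev) / dt > slope

def continuous_trigger(x, x_min, x_max):
    """Not on/off at all: maps the signal continuously onto [0, 1],
    suitable for modulation (e.g. scaling to a controller value)."""
    return min(max((x - x_min) / (x_max - x_min), 0.0), 1.0)
```

The same signal can feed all three at once; which one you choose is a musical decision, not a technical one.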

In addition, there are MAPPINGS between the trigger quantities and output.
That's all there is.


The process is:
(1) read signals (such as those delivered by motion transducers)

(2) select triggers from signals according to criteria
(or modulation parameters)

(3) output a quantity (such as a Notebyte through a MIDI NoteOn
command), according to the desired connection between motion and
music/light/vibration/something-to-be-sensed (OUTPUT DEVICE).
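The three stages can be sketched end to end. A minimal illustration in Python; every name and number below is mine, for illustration only, and a real system would read camera signals and emit genuine MIDI NoteOn messages instead of printing:

```python
def read_signal():
    """(1) DAQ: stand-in for a transducer or camera,
    yielding 1D position samples in [0, 1]."""
    return [0.0, 0.05, 0.4, 0.9, 0.95]

def select_triggers(samples, dt=0.04, threshold=2.0):
    """(2) Trigger selection: fire on speed above a threshold,
    carrying along the position at which the trigger occurred."""
    return [x2 for x1, x2 in zip(samples, samples[1:])
            if abs(x2 - x1) / dt > threshold]

def to_note(x):
    """(3) Output mapping: map a position in [0, 1]
    to a MIDI note byte in 0-127."""
    return max(0, min(127, int(x * 127)))

notes = [to_note(x) for x in select_triggers(read_signal())]
print(notes)  # one note byte per sufficiently fast movement
```

Swap any one stage (a different transducer, a gradient trigger, a lighting output) and the other two are untouched, which is exactly why the three stages deserve separate names.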

Process (1) is denoted DATA ACQUISITION, usually abbreviated DAQ. It
doesn't matter whether you use transducers, video, motion capture. In the
end you acquire data (signals). This is exemplified by the enormous range
of transducers offered to supply signals to the ICUBE. It couldn't give a
tinker's cuss whether the signal comes from a motion transducer or a
battery. Even if you are using a camera to capture motion, it is still a
transducer. The ICUBE is a software-driven triggering and mapping
environment, although you probably think of it as a MIDI controller.

Controllers, triggers and mappings are much older than MIDI.

Motion-to-sense (MTS)

So where do the music and dance come back in? Through the CHOICE of
triggers and mappings, and only when you link all three stages. We don't
need any new terms for the stages, only for the COMBINATIONS.

I denote processes (1), (2) & (3) together an MTS algorithm. MTS stands
for Motion-To-Sense. By sense I mean ANY sense, but you might like to
think of just sound and light. For every signal type (transducer type,
performance mode, etc.) there is an MTS algorithm exploiting the
properties of that signal generation method, by implementing a chosen
trigger and output mapping. Of course, an MTS algorithm only has an
effect when it is cleverly coupled so that the input (dance motion) and
the output (sound and light) each play a role.

You might prefer DANCE-TO-MIDI or some other term, but this is
actually very specific, and I hope to do far more than just
translate human motion to music on stage with these methods,
so I prefer my quite general term MOTION-TO-SENSE. The techniques
have nothing whatsoever to do with dance, MIDI, music, lighting.
The implementation does.

So, with this example I'd like to invite people to use my term
MTS algorithm to describe the process of converting motion into
sound, light or what have you, irrespective of the device you
use as signal source. As you wish.



Darren Kelly | | |
| \ 0 / |
DESY -MPY- (Deutsches Elektronen-Synchrotron) | * o |
Notkestr.85 | - D E S Y - |
22603 Hamburg | o * |
Germany | / O \ |
| | |
phone: +49-40-8998-4569 | |
fax: +49-40-8998-4305 <--Note change | |
e-mail: | |
Amandastr.40a, | HOME |
20357 Hamburg | zu Hause |
phone: +49-40-4322602 | The PLAYhouse |