CfD_installation

Physical Properties

Continuous Interaction

Below are some pictures of the installation space, shown at two different distances (as denoted by the measuring tape on the ground). These distances represent the range of our long-range sensor: anyone who walks by within this distance of the wall will trigger the ‘continuous’ aspect of the interaction. The first image demonstrates a sensor range of 3 meters, and the second demonstrates 2 meters.

[Image: installation space with the 3-meter range marked by the measuring tape]

[Image: installation space with the 2-meter range marked by the measuring tape]

If we want a maximum distance of more than 2 meters, it will raise the price of the sensor (to roughly 30 dollars). However, we would only be using one or two, so this wouldn’t be the end of the world. Additionally, the longer-range sensor would let us work with any distance up to 5 meters. In other words, we can always detect a shorter distance with a longer-range sensor, but not the other way around.

Somewhere around 2.5-3 meters might be the sweet spot, but we would be interested in hearing additional opinions on this number.

When somebody walks by and triggers the sensor, it will play the continuous audio for several seconds (somewhere between 10 and 20) before fading out. The specifics of this audio are outlined in the next section. The duration is another value that can be easily changed (it comes down to changing a single numerical constant in the code), and it can be tuned during the testing phases to find the amount of time that feels best.
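
As a rough illustration of this trigger logic, here is a minimal Python sketch. Everything in it is a placeholder or an assumption: `read_distance_m` and `set_audio_level` stand in for whatever sensor and audio drivers we end up using, and the three constants are the tunable values discussed above.

```python
import time

# A minimal sketch, not a final implementation. read_distance_m and
# set_audio_level are hypothetical placeholders for the real drivers.

TRIGGER_DISTANCE_M = 2.5   # proposed "sweet spot" range; easy to change
HOLD_SECONDS = 15.0        # how long the continuous audio plays; tunable
FADE_SECONDS = 3.0         # length of the fade-out; tunable

def read_distance_m() -> float:
    """Placeholder: distance reported by the long-range sensor, in meters."""
    raise NotImplementedError

def set_audio_level(level: float) -> None:
    """Placeholder: set the continuous-audio gain, 0.0 (silent) to 1.0 (full)."""
    raise NotImplementedError

last_trigger = float("-inf")

while True:
    if read_distance_m() < TRIGGER_DISTANCE_M:
        last_trigger = time.monotonic()   # re-triggering restarts the hold

    elapsed = time.monotonic() - last_trigger
    if elapsed < HOLD_SECONDS:
        set_audio_level(1.0)
    elif elapsed < HOLD_SECONDS + FADE_SECONDS:
        # linear fade from 1.0 down to 0.0 over FADE_SECONDS
        set_audio_level(1.0 - (elapsed - HOLD_SECONDS) / FADE_SECONDS)
    else:
        set_audio_level(0.0)

    time.sleep(0.05)   # ~20 Hz polling
```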

Focused Interaction

Previously we had proposed using an array of IR sensors on both the horizontal and vertical sections of the window bevel to track the (x, y) location of a hand/object inside the window. However, we have slightly altered this to make the implementation simpler. The updated approach is to place IR sensors only along the bottom of the window, and use their distance readings to track the height of the hand/object. The interaction paradigm remains the same: height controls which ‘design context’ is ‘activated’, and horizontal position controls pitch.
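
As a sketch of how this mapping could work, assume one upward-facing IR sensor per horizontal position along the bottom bevel: the index of the triggered sensor gives the horizontal position (pitch), and its distance reading gives the height (design context). All names and counts below are illustrative, not final.

```python
NUM_SENSORS = 8            # assumed number of IR sensors along the bottom
WINDOW_HEIGHT_CM = 100.0   # assumed usable height of the window
NUM_CONTEXTS = 4           # assumed number of 'design context' bands

def read_ir_distance_cm(index: int) -> float:
    """Placeholder: one IR sensor's distance reading, in cm above the sill.
    Assumed to return >= WINDOW_HEIGHT_CM when nothing is in its beam."""
    raise NotImplementedError

def locate_hand():
    """Return (sensor_index, context_index) for the lowest detection, or None."""
    best = None
    for i in range(NUM_SENSORS):
        height = read_ir_distance_cm(i)
        if height < WINDOW_HEIGHT_CM:          # something is in this column
            if best is None or height < best[1]:
                best = (i, height)
    if best is None:
        return None
    index, height = best
    # Quantize the height into one of the design-context bands
    context = min(int(height / WINDOW_HEIGHT_CM * NUM_CONTEXTS), NUM_CONTEXTS - 1)
    return index, context   # index -> pitch, context -> activated band
```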

Visitors’ movements while using the installation could be intentional or random: one could deliberately target pitches and try to ‘play’ a melody, or simply sweep a hand (or any object) along the window and still experience some output.

The sensors for both the ‘continuous’ (long-range) and ‘focused’ (short-range sensors in the window bevel) interactions can be triggered by any type of object in their path, so the installation is not limited to a hand or a passerby.

Sound

The sound for the installation will be generated by granular synthesis, built from vocal samples that represent the contexts of design, as illuminated by the dataset. “Grains” of each sample will form a harmonious, ambient signal. At some interval, the full word will ‘emerge’ out of this cacophonous signal. This concept is further explored in the examples below.
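
For a sense of the underlying technique, here is a naive granular synthesis sketch in Python/NumPy; it is a simplified illustration, not the actual patch used for the recordings below. It scatters short, Hann-windowed ‘grains’ of a mono vocal sample across an output buffer.

```python
import numpy as np

def granulate(sample: np.ndarray, sr: int, out_seconds: float = 10.0,
              grain_ms: float = 80.0, density: float = 30.0) -> np.ndarray:
    """Naive granular synthesis: overlap-add Hann-windowed grains taken
    from random file positions in `sample` (assumed mono, longer than one
    grain) at random positions in the output buffer."""
    grain_len = int(sr * grain_ms / 1000)
    out = np.zeros(int(sr * out_seconds))
    window = np.hanning(grain_len)
    n_grains = int(density * out_seconds)   # grains per second * duration
    for _ in range(n_grains):
        src = np.random.randint(0, len(sample) - grain_len)  # file position
        dst = np.random.randint(0, len(out) - grain_len)
        out[dst:dst + grain_len] += sample[src:src + grain_len] * window
    return out / np.max(np.abs(out))        # normalize to [-1, 1]
```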

The audio for the continuous aspect of the interaction should be attention-grabbing and dynamic; this will ensure that the experience doesn’t get “stale” and has some sense of unpredictability. To demonstrate this, I recorded myself saying “artificial intelligence” and generated the following examples. The result was actually a 2-minute-long, continuous audio file, but I broke it up into multiple sections for ease of listening.

example 1
example 2
playing around with pitch of the resonator

The next two audio files exemplify how grain size can be modulated to make the word ‘emerge’ from the mix. They use the same ‘artificial intelligence’ recording. You will also hear some changes in pitch; these show how the signal (or parts of the signal) can be ‘played’.
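
Conceptually, the ‘emerging’ effect can be sketched as a slow LFO on grain size and file position: short grains at random file positions produce the abstract texture, while longer grains that march through the recording in order let the word surface. The following is a hypothetical NumPy sketch of that idea (assuming `sample` is a mono float array longer than the maximum grain size), not the exact process behind the recordings.

```python
import numpy as np

def emerging_word(sample: np.ndarray, sr: int, out_seconds: float = 8.0,
                  lfo_hz: float = 0.125) -> np.ndarray:
    """A slow LFO sweeps grain size from short (abstract texture) to long
    (intelligible speech); long grains advance through the file in order,
    so the spoken word periodically 'emerges' from the grain cloud."""
    out = np.zeros(int(sr * out_seconds))
    t = 0
    while t + 1 < len(out):
        phase = 0.5 * (1 + np.sin(2 * np.pi * lfo_hz * t / sr))  # LFO in 0..1
        grain_len = int(sr * (0.02 + 0.28 * phase))              # 20 ms .. 300 ms
        if phase < 0.5:
            # short grains: random file position -> cacophonous texture
            src = np.random.randint(0, len(sample) - grain_len)
        else:
            # long grains: march through the file in order -> word emerges
            src = int(t / len(out) * (len(sample) - grain_len))
        grain = sample[src:src + grain_len] * np.hanning(grain_len)
        end = min(t + grain_len, len(out))
        out[t:end] += grain[:end - t]
        t += max(grain_len // 2, 1)                              # 50% overlap
    return out / np.max(np.abs(out))
```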

When both of these are mixed together, they result in the following file. This exemplifies how the ‘playable’ portion of the interaction can be layered on top of, and merge with the ‘continuous’ portion of the interaction.

These samples were generated with the following effect chain:

Granular Synthesis -> Reverb -> Resonator

The resonator accentuates the frequencies of a Cmaj7 chord and its harmonics. This injects a sense of musicality into the mix, so that this randomized audio stream will always have some sense of harmony. Parameters such as grain size and the file position of the granulation are modulated by LFOs (low-frequency oscillators), as well as being slightly randomized.
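
One way to approximate such a resonator is a bank of feedback comb filters, one per chord tone, where each comb’s delay equals one period of its note so that it rings at that pitch and its harmonics. A rough sketch (not the exact effect used for these recordings):

```python
import numpy as np

# Comb-filter resonator sketch: one feedback comb per note of a Cmaj7
# chord (C4, E4, G4, B4). The per-sample loop is slow but keeps the
# filter recurrence explicit.

CMAJ7_HZ = [261.63, 329.63, 392.00, 493.88]  # C4, E4, G4, B4

def resonate(signal: np.ndarray, sr: int, feedback: float = 0.9) -> np.ndarray:
    out = np.zeros(len(signal))
    for freq in CMAJ7_HZ:
        delay = max(int(round(sr / freq)), 1)   # one period, in samples
        y = signal.astype(float).copy()
        for n in range(delay, len(y)):
            y[n] += feedback * y[n - delay]     # rings at freq and harmonics
        out += y
    return out / np.max(np.abs(out))
```

The `feedback` value controls how long each resonance rings; values closer to 1.0 give a more pronounced, sustained sense of the chord.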

The beauty of granular synthesis is that the flexibility of its parameters/implementation will allow us to modify many aspects of the sound. For example, if we want the words to become more understandable, we could decrease the overlap between grains, increase/modulate the level so that a word pops out of the mix more, use less reverb, crop the recording so there is more silence before and after, etc. There are many options for further tuning and customization of the sound.

In Conclusion

These are more or less mock recordings to show what direction we are moving in. We would love to hear feedback or opinions on how this sonification will fit into the space, and align with the goals of the installation.