
Animating in the Audio Domain [sound, code, synesthesia]



There are several situations where accurate event scheduling is needed. In a few of my recent projects I kept running into the same problem: how to keep things in sync and launch events and animations really accurately. These projects are mainly audio-based apps, so timing has become a key problem. One example is sequencing the sounds in Shape Composer, where different “playheads” with different speeds and boundaries read values from the same visual notation while the user modifies them. It is not that simple to keep everything in sync when the user can change all the parameters of a player.

As I am arriving at native application development and user interface design from the world of visual programming languages, I decided to port a few ideas and solutions that have worked really well for me; this post is the first in that series. Data flow languages for rapid prototyping contain a lot of interesting, unique solutions that might also be useful in text-based, production-ready environments (especially for people whose mindsets originate in non-text-based programming concepts). Nice examples are the concept of spreads in vvvv, or audio-rate controls in Pure Data. In Pure Data the name says it all: everything is data, and even otherwise complicated operations on audio signals can easily be used in other domains (controlling events, image generation, simulations, etc.). With the [expr~] object you can operate on audio signal vectors easily, but there is also a truly beautiful concept for playing back sounds: you read values out of a sound buffer using another sound. The values of a simple waveform are used to iterate over the array.


The image above shows how to use a saw wave, a [phasor~] object, to read values from an array with the [tabread4~] object. The phase (between 0 and 1, multiplied by the table length in samples, which is 44100 for a one-second buffer) reads individual sample values from the array. As the phase ramps, you iterate over the values: this is the “audio equivalent” of a for loop. The amplitude of the saw wave becomes the length of the loop slice, the frequency of the saw wave becomes the pitch of the played-back sound, and the phase is the current position within the sample. This method of using one audio signal to control another is one of the most beautiful underlying concepts of Pure Data, and it extends well into other domains.
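The phasor-into-table idea can be sketched outside Pure Data as well. Here is a minimal Python illustration of the same mechanism (not the patch itself): a 0..1 phase ramp drives lookups into a buffer, the way [phasor~] drives [tabread~]. The 64-sample table and the generator names are my own for this sketch; [tabread4~] would additionally interpolate between samples.

```python
import math

def phasor(freq, sample_rate=44100):
    """Generate a 0..1 saw (phase ramp), like Pd's [phasor~]."""
    phase = 0.0
    while True:
        yield phase
        phase = (phase + freq / sample_rate) % 1.0

def tabread(table, phase):
    """Read a value from a table using a 0..1 phase, like [tabread~]
    (nearest-sample lookup; [tabread4~] would interpolate)."""
    idx = int(phase * len(table)) % len(table)
    return table[idx]

# A 64-sample "buffer" holding one sine cycle.
table = [math.sin(2 * math.pi * n / 64) for n in range(64)]

# A phasor at one cycle per 64 samples sweeps the whole table once:
# shifting the phasor's frequency would change the playback pitch.
p = phasor(freq=44100 / 64)
out = [tabread(table, next(p)) for _ in range(64)]
```

Scaling the phase (amplitude) selects a shorter slice of the table, which is exactly the loop-slice behaviour described above.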


See these animations in motion here

The Accurate Timing example this post is about tries to bring this concept into other languages where you can access the DSP cycle easily. With openFrameworks it is easy to call functions within this cycle: anything you put in ofSoundStream's audioOut() callback runs at audio rate, since the callback delivers buffers of samples and your per-sample code executes at the selected sampling rate. Usually this is 44.1 kHz, which means your code runs 44100 times per second, far more fine-grained than making the call in an update() or draw() function. The audio thread keeps time even if your application's frame rate is jumping around or dancing like a Higgs. It can be really useful to keep time-based events in a separate, trustworthy thread, and the audio thread is perfect for this. This example uses a modified version of an oscillator class that is available in the AVSys Audio-Visual Workshop GitHub repository, which controls the events visible on the screen.
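To make the timing argument concrete, here is a small Python sketch (not the openFrameworks code itself) of what sample-accurate scheduling inside an audioOut()-style callback looks like: a sample counter decides when events fire, so the event grid is exact regardless of how often the visual thread runs. The Scheduler class, buffer size and interval are illustrative assumptions.

```python
SAMPLE_RATE = 44100

class Scheduler:
    """Sample-accurate event scheduling, as if inside audioOut()."""
    def __init__(self, interval_sec):
        self.interval = int(interval_sec * SAMPLE_RATE)  # period in samples
        self.sample = 0
        self.fired = []  # sample positions where the event fired

    def audio_out(self, buffer_size):
        # One audio callback: scan each sample for an event boundary.
        for _ in range(buffer_size):
            if self.sample % self.interval == 0:
                self.fired.append(self.sample)
            self.sample += 1

s = Scheduler(interval_sec=0.25)   # an event every 250 ms
for _ in range(100):               # 100 callbacks of 512 samples each
    s.audio_out(512)
```

Events land exactly every 11025 samples, no matter how irregularly the callbacks would be spaced in wall-clock time; an update()/draw() based timer could never promise that.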

When building things where time really matters (sequencers, automated animations, other digital choreography), a flexible, accurate “playback machine” can be really handy. The concept in this example is simply a way of mapping one domain onto another: the phase of a wave maps to the relative position of the animated value, the amplitude of the wave represents the boundaries of the changing value, and the frequency represents the speed. There are surely other interesting ways of animating time-sensitive artefacts, but one that uses the concepts and attributes of sound (and physical waves) can be truly inspiring for developing similar ideas and connecting concepts between previously isolated domains.
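The phase/amplitude/frequency mapping described above can be written down in a few lines. This is a hedged Python sketch of the idea rather than the oscillator class from the example; the function name and the 60 fps animation loop are my own illustration.

```python
import math

def wave_animator(t, freq, amplitude, center=0.0, phase=0.0):
    """Map wave attributes onto an animated value:
    frequency = speed, amplitude = range, phase = position offset."""
    return center + amplitude * math.sin(2 * math.pi * (freq * t + phase))

# Animate an x position between 100 and 300 pixels, one cycle per 2 s,
# sampled as if in a 60 fps draw loop for two seconds.
xs = [wave_animator(t / 60.0, freq=0.5, amplitude=100, center=200)
      for t in range(120)]
```

Changing freq while the animation runs speeds it up without ever breaking the phase, which is exactly why the wave metaphor keeps things in sync under live parameter changes.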

The source for this post is available at: Audio Rate Animations on Github

Workshops 2013/14 winter [Interaction, Process, Design]



Interaction Modalities, Poetic Interfaces (Bergen Academy of Art & Design)

A collection of resources, code snippets, material and references from the workshops at Bergen Academy of Art & Design, Norway, 2013. These workshops focus on the communication of ideas through different interaction modalities, creating models through making (prototypes), learning from mistakes, and diving into the unexpected.

The first session (Interaction Modalities) gives an overall introduction to the term interaction: the natural feedback between elements that affect each other in a conversation. Interaction involves a diverse set of fields such as perception, cognition, psychology, formal systems, logic, language (auditory, visual), the concept of data, memory, intuition, error, etc. Lecture slides can be accessed here:

The second session (Poetic Interfaces) is about building prototypes, mockups and art pieces where glitch, error and probability can take part in the compositional process. This is reminiscent of creative writing, where sudden and unwanted events can shape the resulting process; a real conversation is induced between the work and the people who create and observe it.

Procedural Drawing (Sandarbh Artist Residency, Partapur, India)

A follow-up to recent procedural drawing workshops, where participants practice computational thinking without the use of machines. This event was made together with Artus Contemporary Arts Studio. The language of the workshop is pseudocode. There is no specific type and no predefined syntax; we do not rely on computers or function libraries, only pens, lines, paper, logic, repetition, rules and process. We follow the path that many philosophers, engineers, inventors and mystics have followed before us: think and draw. The method relies heavily on Fluxus practice and the exercises of the Conditional Design team. The following images are a selection of executed, completed examples.


InBetween States [Sound, Process, Notation]



“Varying combinations of sounds and “non-sounds” (silence) through time, sounding forms (Eduard Hanslick), time-length pieces (John Cage). These definitions of music refer to both a concrete and a non-physical presence holding the vision of a special space-time constellation. The process of investigating the distribution of sounds leads us to the magical state of the musical thought. Waiting, along with the procedure of observation, constructs an extension of the listener's traditional relationship with music.

What states might a musical thought have? How does interpretation take part in the process? How is representation affected? What are the bases of the musical experience? The exhibition seeks these “in-between” states through the works of the New Music Studio, students of the Faculty of Music and Visual Arts, University of Pécs, and - of course - the presence of the visitors.” (from the exhibition catalogue)

Exhibition Space (2B Gallery)

New Music Studio is a group of musicians formed in the seventies in Budapest. They pursued a methodology and philosophy of contemporary music that differed from the canonized mainstream, bringing “Cageian” influences and the legacy of Fluxus into Eastern European society. The main founders of the group (László Sáry, László Vidovszky, Zoltán Jeney, Péter Eötvös, Zoltán Kocsis, András Wilheim et al.), among other creative minds, produced several interesting pieces that questioned traditional methods of musical interpretation, notation, performance and the articulation of musical expression. Their legacy is hard to interpret because of the large number of their pieces, which neither fit the traditional performative space nor adjust easily to gallery spaces.

The exhibition is a truly great reflection on this situation: installations, homage works and cross-references are introduced to the audience in order to shed light on these problematic aspects. The tribute works of the next generation are also deeply experimental, fresh pieces that use post-digital concepts: the aesthetics of raw, ubiquitous, programmable electronics, where people can focus on content and aural space instead of regular computers and the like. The spatial sound installations find their place perfectly in the gallery space. The collected documents and visual landmarks of the exhibition give hints at very interesting topics without aiming at completeness, which is a good starting point for diving deeper into this world. The exhibition is curated by Réka Majsai.

Hommage á Autoconcert (Balázs Kovács & students from University of Pécs)

“Hommage á Autoconcert” by Balázs Kovács (xrc) [link] and students from the University of Pécs is a sound installation constructed from several found objects and instruments. It is a tribute to “Autoconcert” (1972) by László Vidovszky, in which a diverse set of found objects fall from above at different time intervals during the performance. The present installation handles the sound events in a reproducible way: the objects do not fall but are hit by small sticks from time to time. These events construct a balanced aural space where tiny noises, amplified strings and colliding cymbals can be heard as an acoustic background layer. The sticks are moved by little servo motors, and the logic is controlled by a few Arduino boards.

Repeating and overlapping of three musical processes (Gábor Lázár)

“Repeating and overlapping of three musical processes” by Gábor Lázár is also a sound installation, made with custom microcontrollers and three speakers set in plastic glasses as acoustic resonators. The processes are based on ever-changing time manipulations between musical trigger events. Dividing time within these in-between temporal events causes ever-increasing pitch shifting in the auditory field. The overall sound architecture forms a very saturated and diverse experience.

Visual Notations (New Music Studio)

“Visual Notations” by the New Music Studio members themselves is in fact a set of notation pieces crafted in a beautiful way. Varying nonstandard notation techniques are introduced here: the graphical alignment of stellar constellations (Flowers of Heaven, Sáry), a spiral-like, self-eating dragon (Lap of Honour, Vidovszky) and ancient Hindu symbols (Yantra, Jeney) all take part in the semantics and development of these sounding images.

Three pieces by Sáry (reinterpretation & software by Agoston Nagy)

There is also an installation I have been involved with: interpreting three pieces by László Sáry. The triptych lets the observer see and hear the inner sound structures at the same time. The upper region of the screen displays the original notation, which can be read as a spatial constellation of the sounds to be played; each work has a different notation and different instructions on how to play the notes. The bottom region displays a live spectrogram of the actual sound, showing the temporal distribution of the events, which has different characteristics in each piece.

Musical Spectra of the three different pieces by Sáry

“Sunflower” (1989, based on Snail Play, 1975) has a spiral-like spatial distribution on the sheet, and these patterns are clearly visible in the musical spectrum: infinite ladders run up and down as the musicians play three marimbas. “Sounds for” (1972) is based on predefined notes that the performer can access in any order they prefer; larger, distant, sudden triggers define a stricter, cityscape-like spectrum. The final piece, “Full Moon” (1986), is based on a permutation procedure [link]: the individual notes are played back simultaneously by four (or more) musicians, and their individual routes define the final constellation of sounds. Its visible spectrum is a continuous flow where soft pitch intonations are in focus, in contrast to the two previous, rhythm-oriented works. The setup is made with free and open source tools, using openFrameworks on three Raspberry Pi boards.

Images of the exhibition opening

It is hard to describe all the aspects of the exhibited works and the legacy of the New Music Studio, as their works had many local influences in the East European region of their time. The states of “in-between-ness”, however, are clearly recognizable on several layers of the whole event: the political and cultural influences on the original artists, the continuous phase shift between the musical and the visual domain, and the tribute works whose creators stand halfway between authors and interpreters. The exhibition is altogether a really important summary of a niche cultural landmark in the Hungarian and international contemporary music scene.

Thanks to Gábor Tóth for the photos.

Tempered Music Experiments [theory, music, 2d space]



While trying out different ways of representing musical structures on an interactive 2D surface, I made some experiments as part of my thesis and client work. Three systems were created as “side effects” of my current research interests:

“Harmonizer” is a simple musical instrument for studying traditional scales through harmonic movement. Traditional scales are the basis of folk songs, jazz and classical music, including modal scales, pentatonic scales and different minor/major structures. Harmonic motion is based on simple trigonometric pendulums (sine, cosine, etc.) that describe really organic movement. The goal of Harmonizer is to collide these two concepts and see what happens.
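The collision of the two concepts can be sketched in a few lines of Python. This is not the Harmonizer code, just an illustration of the idea under assumptions of my own: a sine pendulum swings between -1 and 1, and its position is quantized onto a scale (a C major pentatonic octave here, chosen arbitrarily).

```python
import math

# C major pentatonic degrees as MIDI notes, one octave (an assumed scale).
PENTATONIC = [60, 62, 64, 67, 69]

def pendulum_note(t, freq=0.25):
    """Quantize a sine pendulum (-1..1) onto a scale degree."""
    x = math.sin(2 * math.pi * freq * t)                    # organic motion
    degree = int((x + 1) / 2 * (len(PENTATONIC) - 1) + 0.5)  # nearest degree
    return PENTATONIC[degree]

# Sample the pendulum ten times per second for four seconds.
notes = [pendulum_note(t / 10.0) for t in range(40)]
```

The pendulum never jumps, so the resulting melody sweeps up and down the scale smoothly, which is the organic quality the text describes.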

Then there is the more chaotic-sounding “Fragmenter”. It is “temporally tempered”: a sound sample is cut automatically into slices based on onset events. These snippets are then mapped to different elements in space. The observer can listen through them by simply drawing over them. The drawing gesture repeats itself, so every unique gesture creates a completely different loop.
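As a sketch of the slicing idea (not the app's actual onset analysis, which is surely more refined), here is a crude amplitude-threshold slicer in Python; the threshold value and the tiny test buffer are made up for illustration.

```python
def slice_on_onsets(samples, threshold=0.5):
    """Split a sample buffer at crude onsets: positions where the
    absolute amplitude jumps above a threshold after a quiet stretch."""
    onsets = [0]
    quiet = True
    for i, s in enumerate(samples):
        if abs(s) >= threshold and quiet:
            if i > 0:
                onsets.append(i)
            quiet = False
        elif abs(s) < threshold:
            quiet = True
    return [samples[a:b] for a, b in zip(onsets, onsets[1:] + [len(samples)])]

# Two loud hits in a quiet buffer give three slices (lead-in + two hits).
buf = [0.0] * 4 + [0.9, 0.8] + [0.0] * 4 + [0.7] + [0.0] * 3
slices = slice_on_onsets(buf)
```

Each slice can then be assigned to a point in space, and a drawn gesture becomes a playback order over those points.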

The last experiment is a “Verlet Music” machine where freehand drawings are sonified with an elastic string. The physical parameters of the string can be altered by dragging it around on the surface. The string is tuned to a reasonably comfortable scale; each dot on the string has a unique pitch, and when the dots pass over a drawn shape, they become audible.
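The elasticity comes from Verlet integration, where velocity is implicit in the difference between the current and previous positions. A minimal Python sketch of one string point (not the machine's actual physics; the stiffness, damping and time step are assumed values):

```python
def verlet_step(pos, prev, accel, dt, damping=0.99):
    """One position-Verlet step: velocity is implicit in (pos - prev)."""
    new = pos + (pos - prev) * damping + accel * dt * dt
    return new, pos

# A single string point pulled toward rest by a hypothetical spring force.
pos, prev = 1.0, 1.0          # start displaced, at rest
k = 40.0                      # spring stiffness toward 0
dt = 1.0 / 60.0               # one step per frame at 60 fps
trace = []
for _ in range(300):
    accel = -k * pos
    pos, prev = verlet_step(pos, prev, accel, dt)
    trace.append(pos)
```

The point oscillates and settles, and a full string is just many such points with neighbour constraints; dragging one point perturbs its (pos - prev) velocity, which is why the interaction feels physical.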

You can try out Harmonizer here. The synthesis is built with WebPd using the Web Audio API; sound might not work smoothly on slower machines.

Accidental Calculus [Nature, Code, Design]



What are the similarities between ancient Arabic systems for automation, origami folding, poetic code, adaptive design, the legacy of John Cage, and the Western music tradition, including gameflow-like dice music pieces by Johann Philipp Kirnberger or Wolfgang Amadeus Mozart? All of these examples represent a relation between the logic of the mind and some behavioral aspect of our external environment. On one hand, these activities and artists bring forward aesthetic decisions in which creative minds let independent elements into the flow of the creative process to take part in the creation of the piece: the designer defines frameworks and boundaries for semi-autonomous processes, which lead to the evolution of the final piece. On the other hand, these fundamental processes are not only aesthetic decisions but ways of thinking about our environment and about ourselves, borrowing inspiration from scientific, philosophical and mystic traditions that can be applied to many different fields and phenomena.

When dealing with formalized systems we always have to follow the rules of some procedure. Frameworks are usually constructed from a basic set of rules which can lead to complex structures when these instructions are iterated and repeated. Such simple sets of repetitive rules (algorithms) are applied to achieve different goals in different contexts and situations (think of any language, for example). Many people like to think that randomness adds value and complexity to their systems: since we cannot predict its behavior, it will surprise the observer with ever-changing, never-repeating outputs and combinations. According to a recent interview with Mitchell Whitelaw (I can't find the direct link, but it appeared in Neural), this is true in terms of variability, but it does not add qualitative value to a system. Random events are really useful for simulating unpredictable input values for a system under investigation. Let's observe a few basic distribution models.

fig. 1 - Random probabilities. Left to right: a) no random factor, linear steps. b) random walk with a small amount of randomness. c) random walk with increased randomness. d) full-range random steps

If we know the parameters of our goal exactly, we can get there easily with strict and precise instructions. Say we have a point on a plane: we can reach it by adding a simple vector from our starting point toward the given location. Simple and solved. If we know only one parameter of the point, we can converge on the desired location by adding some random probabilities to our system; this is less effective, but we will reach the goal sooner or later. Does this mean that if we know none of the coordinates we should use a totally random distribution? Not really, because a totally random distribution allows repetition (successive states do not depend on previous ones), so we might never reach the goal. If we really want to reach it, we still need a semi-random algorithm that combines order with chaotic behavior. This can be implemented in several ways. We can use a basic random walk, which is not very effective, or a far more useful path-finding algorithm. We can use a straight, ordered algorithm with no randomness at all, like a television scan line whose cathode ray moves from point to point over each line, sequentially. Or we can generate random numbers while avoiding previously generated ones, so that we eventually stumble onto the position of our point. There are many ways to solve a problem; which one we choose depends on the context of the situation (how much time we have, how precise we need to be, etc.).
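The contrast between a purely ordered search and a semi-random one can be demonstrated in a few lines. This Python sketch (my own illustration, with arbitrary target, step size and seed) seeks a point on a line by combining a deterministic pull toward the target with a tunable random component; with the noise at zero it degenerates into the strict, linear case.

```python
import random

def random_walk_to(target, step=1.0, noise=0.5, max_iter=10000, seed=1):
    """Seek a point on a line with biased random steps:
    a deterministic pull toward the target plus a random component."""
    rng = random.Random(seed)
    x = 0.0
    for i in range(max_iter):
        if abs(x - target) < step:
            return i                          # reached the neighbourhood
        direction = 1.0 if target > x else -1.0
        x += direction * step + rng.uniform(-noise, noise)
    return max_iter

steps_ordered = random_walk_to(50.0, noise=0.0)  # pure order: exact path
steps_noisy = random_walk_to(50.0, noise=0.9)    # semi-random: still converges
```

The ordered walk reaches the target in exactly as many steps as the distance requires; the noisy walk wanders but, because order and chaos are combined, it still arrives.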

Specific distribution models are easily observable in our regular daily routines, too. The temporal distribution of our activities is more or less predictable if we find patterns and repetitive occurrences between successive events. The Hungarian scientist Albert-László Barabási found an interesting model that describes key aspects of our behavior; he calls these clusters of successive events “bursts”. He and his colleagues observed how frequently people respond to mail, talk to each other, or even move between two locations, and found surprisingly big gaps of silence between active periods. It's obvious that we do not speak for a few hours, then meet someone and talk a lot; we do not read mail for a few days, then respond to all of it within a few hours. Our basic communication works like this. Similarly, though less obviously, movement between physical locations shows bursts and calm series too: the amount we travel is not a constant average. If you go on vacation you move a lot, then for several months you just move locally between the places of your regular life. This concept extends even to our routines, relationships and consumer behavior.

fig. 2 - Brownian motion (left), Levy Flight (right)

This distribution model is a well-known behavior in other natural processes and is called a Lévy flight. According to the blog Seeing Complexity:

“Lévy flights are described by a random walk, which is more or less any mathematical expression of a trajectory with successive random steps, distributed with a non-exponentially bounded probability distribution with theoretically infinite variance. So what does that mean? It is a description of motion. Take as a contrast Brownian motion, which is modeled after random movement of particles in a fluid. A Lévy flight is instead a cluster of movements, some short and some long. It was identified by the late Benoit Mandelbrot, as an outgrowth of chaos theory.”
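The “mostly short moves, occasionally a very long jump” character is easy to generate: draw step lengths from a heavy-tailed (power-law) distribution instead of the light-tailed one behind Brownian motion. A minimal Python sketch under my own assumptions (tail exponent, seed and step formula chosen for illustration):

```python
import math
import random

def levy_flight(n, alpha=1.5, seed=7):
    """2D walk with power-law step lengths (heavy tail): mostly short
    moves, occasionally a very long jump, unlike Brownian motion."""
    rng = random.Random(seed)
    x, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for _ in range(n):
        # Inverse-transform sampling gives a Pareto-tailed step length >= ~1.
        step = (rng.random() + 1e-12) ** (-1.0 / alpha)
        angle = rng.uniform(0, 2 * math.pi)
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        path.append((x, y))
    return path

path = levy_flight(500)
lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
```

Plotting such a path reproduces the cluster-and-jump structure of fig. 2: dense local scribbles connected by rare long flights.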

Why are these facts important from a designer's point of view? Think of procedures. Our regular routines are constructed subconsciously from trivial procedures; our current states are more or less predictable. If we work with this potential, we might build tools that are easier to understand and that adapt themselves to personalized usage. An object capable of measuring its environment and changing its internal structure based on these principles can adapt more seamlessly and easily to someone's internal representations. This might lead to adaptive, invisible interfaces where there is no UI or graphic design anymore, but something else: something that deals with rhythm, temporality, routines, changing interests, fluid contexts and self-regulating algorithms. One might call it contextual design, behavior design, or anything brought forward by a dynamic nature. Observing the rhythms around (and inside) us, temporal sequences, music, dance, human relationships is key to asking the right questions within these unexplored territories. Software has meaningful dynamic inputs and deeply reconfigurable internal representations, so all software can adapt its internal structure to its environment. Software should be open not only in the regular sense, referring to its source, but also in terms of its behavior. Think of responsive design, reactive architecture, or any open-ended process from the art world.

As Bret Victor, a great interventionist and interface designer, points out: interaction is good when dealing with manipulation software (editors, games, etc.). But once we are dealing with information software, where we are more likely to learn than to create, interaction turns into interruption. An interface that represents information should work with as little interaction as possible, in order to let the perceptual and cognitive flow go seamlessly. Such software might use environmental conditions and reconfigure its behavior when needed. This restructuring can be based on data from the user's history, physical position, current time and other environmental parameters, but can also include social integration, similar interests, rankings from relevant people, and so on. The interface must be in real conversation with the context of the user. This type of meaningful relationship was originally described by Gordon Pask, who adapted elements of human conversation when working with artificial, responsive input and output systems within (and beyond) the field of cybernetics.

fig. 3 - One sequence of the 64 hexagrams of the I Ching, or the Book of Changes. This ancient system is a method of divination based on a binary system and chance operations.
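The chance operation behind fig. 3 can itself be sketched as code. One traditional way of consulting the I Ching is the three-coin oracle; the sketch below is my own Python illustration of that binary procedure (heads counts 3, tails counts 2, an odd total per line is yang, an even total is yin), not a claim about any specific historical sequence.

```python
import random

def cast_hexagram(seed=None):
    """Build a hexagram with the three-coin oracle: each of six lines
    comes from three coin tosses; odd totals are solid (yang) lines,
    even totals broken (yin) lines."""
    rng = random.Random(seed)
    lines = []
    for _ in range(6):
        total = sum(rng.choice((2, 3)) for _ in range(3))  # 6..9
        lines.append(total % 2)  # 1 = yang (solid), 0 = yin (broken)
    return lines  # bottom line first: a 6-bit binary figure

hexagram = cast_hexagram(seed=64)
```

Six binary lines give the 2^6 = 64 hexagrams, which is exactly why the system reads so naturally as an ancient binary code.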

What strategies can we observe in procedural thinking and algorithmic art? Janet Zweig introduces three basic concepts that she found while studying mystical systems, procedural art and computer practices. In her essay “Ars Combinatoria” (pdf link) she distinguishes between permutation, combination and variation, three formalized concepts for manipulating an existing set of entities (numbers, musical sequences, words, etc.). Permutation means rearranging the elements of a set in different ways without adding or repeating any element. Combination means rearranging and selecting specific elements, reducing the original set. Variation means rearranging the elements while allowing repetition, redundancy and multiplication. Her interest in these methods goes far beyond practical definitions: she investigates the difference between spiritually based and purely process-based methods, asking whether someone permuting the letters of the alphabet (as with the gates of the Sefer Yetzirah or ancient representations of the Tree of Life) or permuting abstract binary symbols (as with the I Ching or modern computer programs) is undergoing a creative transformation or something more like a meditative activity.
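The three operations map directly onto standard combinatorics; here is a small Python illustration (the three-note motif is an arbitrary example of mine) showing how each one treats the same set differently.

```python
from itertools import permutations, combinations, product

motif = ("C", "E", "G")

# Permutation: rearrange all elements, no repetition, nothing added.
perms = list(permutations(motif))            # 3! = 6 orderings

# Combination: select a subset, reducing the original set.
combs = list(combinations(motif, 2))         # 3 unordered pairs

# Variation (with repetition): sequences that may repeat elements.
variations = list(product(motif, repeat=2))  # 3^2 = 9 sequences
```

The counts already show the qualitative difference: permutation preserves the set, combination shrinks it, and variation multiplies it.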

According to Zweig, these procedural systems went through a qualitative shift in the course of history, from a purely mystical state to a more formal, process-based methodology. Starting with the great “Mystical Universalist” systems, thinkers and philosophers introduced “Symbolic Logic” and logical games, which led to different “Semantic Interventions”. These form the basis for the concept of “Play” and the playfulness that can be found in a wide spectrum of our activities, including board games, contemporary music and educational systems.

fig. 4 - Spiral experiments: Generating Fermat (or Archimedean) spirals with modified divergence angles.

What happens if we use a purely ordered algorithm (such as a formula for a spiral) with no random probabilities? Can we predict the result? It turns out to be far less predictable than one would expect, at least in terms of visual representation. As you can see in this visual experiment, each shape shows different characteristics. A spiral shape is always present because of the clearly defined, ordered algorithm (r*r) = (a*a)*θ. However, as soon as you start tweaking a single parameter a tiny bit (in this case the divergence angle of the system, the variable "a" in the code below), completely different spatial distributions arise. An initial value is repeated and iterated over itself within the loop, so it has a multiplied effect which brings forward emerging shapes of visual complexity. A simple implementation in the Processing language:

float a = 2.39996;   // divergence angle in radians (golden angle here)
float c = 4;         // scaling factor
int N = 500;         // number of dots

float r, phi;        // polar coordinates
int j, i;            // row, column coordinates

void setup() {
  size(400, 400);
  background(255);
  fill(0);
  noStroke();
  for (int n = 0; n < N; n++) {
    phi = n * a;                     // rotate by the divergence angle
    r = c * sqrt(n);                 // Fermat spiral radius
    j = int( r * cos(phi) );
    i = int( r * sin(phi) );
    ellipse(width/2 + j, height/2 + i, 3, 3);  // draw a dot at each position
  }
}

The ellipse() call draws a small dot at each computed position, and the variables a, c and N are initialized at the top, so the sketch runs as-is; tweak a to see the different distributions.

Language is definitely a permutation of letters from an alphabet. However, meaning and the core concepts of our consciousness are not contained in the discrete elements or parameters, nor even in the configurations of these elements defined by syntactical rules and grammatical formulas. If we bring forward the world and our environment through concepts inherited from language, then language is more than just a communication tool. It is the incubator of analogies, the core working mechanism of human thinking, as Douglas Hofstadter points out in his recent book Surfaces and Essences. It is the landscape around and inside us (be it visual, textual or mental language), the transfer channel of memes, and it creates our internal representation of the world around and inside us. Some might say: the music is not in the piano.

