ULang is the result of a work-in-progress DLA research study at the MOME Doctoral Institute (Budapest) and is developed alongside a thesis on the language of Visual Sound Instruments (a text on interaction modalities within software-based sonic instruments). ULang itself is a playful, practical representation of some of the ideas found in the thesis. The text will be available after the completion of the author's DLA studies.
Visual Sound Instruments
Language, as a tool for manipulating abstract symbols, serves as the basis for my studies on visual sound instruments. Language is a tool for interpreting the world, communicating ideas, and describing thoughts and feelings at different levels of abstraction. Alongside natural languages, artificial languages play important roles in shaping different processes of the world. Language as a mimetic communication system is by nature generative, open, and ever-evolving.
Mimesis is the first form of understanding social signs. It is mainly based on episodic fragments, thus creating an analog-like form of information transmission. Unlike mimetic languages (speech, film, photography), digital or cryptographic languages (such as musical notation, computer code, or binary systems) work differently. Designing with cryptographic languages requires a different understanding than designing with mimetic ones.
Today's widely used programming paradigms are built on different abstraction layers. Starting from one-dimensional, text-based representation, followed by two-dimensional, visual representation, it turns out that at the higher levels of abstraction these systems are constructed of multidimensional interaction modalities, where behavior patterns, cognitive aspects, and the nonlinear nature of time also take part in the experience.
Traditional musical instruments naturally have their interaction dimension spaces embedded in them, shaped by their ergonomic aspects, cultural functions, and resonating bodies. In the case of software-based interfaces, these dimensions do not originate a priori from the physical parameters of the object, so they have to be built explicitly into these systems.
This means that the lack of a physical body induces a new kind of inner coherence in software-based visual instruments, one that drives both designers and players into unknown territory.
If we look at the history of software-based tools, there are shifts analogous to those found in the history of classical instruments with resonating bodies. The need for scalability and universality generates new paradigms in each lineage of tools. In classical music, one such paradigm shift was the adoption of twelve-tone equal temperament, theorized in the late Renaissance. This change made it possible to tune different types of instruments to a common pitch spectrum by reducing the set of natural sounds, trading away the importance of natural harmonics and overtone frequencies. A very similar development in digital tools was the invention of the MIDI protocol in the early 1980s, designed to synchronize different devices using 128 discrete values for each parameter. The tempered scale and the MIDI protocol were similar in spirit: both constrained musical creativity for the sake of universality.
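The parallel between the two constraints can be made concrete. The following sketch (not part of ULang; the function names are my own) shows how equal temperament maps MIDI's 128 discrete note numbers to frequencies, and how an arbitrary frequency, such as a natural overtone, gets snapped to the nearest tempered note:

```python
import math

def midi_to_freq(note: int) -> float:
    """Frequency in Hz of a MIDI note under equal temperament (A4 = 440 Hz = note 69)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def quantize_to_midi(freq: float) -> int:
    """Snap an arbitrary frequency to the nearest of MIDI's 128 discrete notes (0-127)."""
    note = round(69 + 12 * math.log2(freq / 440.0))
    return max(0, min(127, note))

# The 7th natural harmonic of A2 (110 Hz) is 770 Hz; equal temperament has no
# note at exactly that frequency, so it is quantized to the nearest tempered pitch.
print(midi_to_freq(69))         # 440.0
print(quantize_to_midi(770.0))  # 79 (a tempered G5, roughly 784 Hz)
```

Both systems discard in-between values: the harmonic at 770 Hz simply has no exact representation in either the tempered scale or the MIDI note range.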
Reconfigurable, dynamic media systems bring forth new kinds of creativity in which the aforementioned dichotomies are always present.
In such systems, the interface serves both as the playground for interaction and as a representational surface. The roles of composer, performer, and listener have also changed in mainstream culture, as with the rise of the progressive Fluxus community and related art movements in the middle of the twentieth century. Composers embed notational elements in their instruments, which listeners interpret while consuming them. The prescriptive nature of notation and the descriptive nature of recording and visualization merge: visual instruments become interactive notations at the same time.
The concepts of composer, performer, and audience are no longer valid in these systems. The artworks are tools themselves; the best analogy for describing them can be borrowed from the world of games. The relation between music and gameplay is deeply rooted in our language, too: think of the phrases "play the piano", "play some music", "when do you play?", and so on. The ecosystem of digital games is much better suited to distributing contemporary visual sound instruments than the heritage of music production: instead of buying heavy vinyl records or magnetic tapes, people get their tools and music for a few cents through online distribution channels. Since audiences obtain their material from these online entry points, real-time feedback is an essential component of consuming these artifacts. In the case of software-based instruments, audience feedback has more and more components that work like feedback in the consumption of games.
An immersive game flow is a key element in the experience of a sound instrument: exploration, chance, playfulness, and failure all take part in the whole. Game makers create sonic instruments (game levels, if you prefer) in which players navigate, and the composition unfolds during their personal journey.
On the one hand, ULang is an instrument for making music that requires no musical knowledge from the player. On the other hand, ULang is a playground for experienced musicians, too: it makes it easy for the performer to experiment with different rhythm ratios, overlapping repetitive expressions, tempered scales, and the like.
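To illustrate what "overlapping repetitive expressions with different rhythm ratios" means in practice (this is a hypothetical Python sketch, not actual ULang code), consider two repeating voices merged into one timeline:

```python
# Hypothetical sketch (not ULang syntax): two repeating voices with different
# periods, producing a 4:3 polyrhythm when merged into a single event timeline.

def loop_events(period: float, count: int, voice: str):
    """Events of one repetitive voice: it fires every `period` beats."""
    return [(round(i * period, 3), voice) for i in range(count)]

# Voice A repeats every 3 beats, voice B every 4 beats.
timeline = sorted(loop_events(3.0, 4, "A") + loop_events(4.0, 3, "B"))
print(timeline)
# The two voices realign every 12 beats, the least common multiple of 3 and 4.
```

The interesting musical material lies in the shifting offsets between the two voices before they realign, which is exactly the kind of pattern a player can explore interactively.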
The main purposes of ULang