Synaesthesia for all

Samara Mitchell at the Australasian Computer Music Conference

Klipp AV

For the electro-acoustic cross-media artists participating in the Australasian Computer Music Conference 2006, the term ‘computer music’ sits about as comfortably as most did in their conference chairs.

Conducted at the Elder Conservatorium of Music in the University of Adelaide, this ambitious and artful event housed a range of workshops, concerts and installations covering everything from experimental audiovisual club-technologies to gene splicing.

The title and theme of the conference, Medi(t)ations, refers to the capacity of computer-related technologies to act as the carrier and/or location for reflecting upon creative events. In a paper titled “Computers, Music and Intermedia: (Re)(Trans)lation”, one of the key coordinators of the conference, Christian Haines, posited that computational technology—as a translator of knowledge into data—has the power to liberate information from the perceptual and cultural biases intrinsic to the entanglement of medium and message. He illustrated this by describing the process of opening an audio file in image manipulation software such as Adobe Photoshop or, conversely, a JPEG image in audio editing software. Recognisable representations—a photo of a pet dog, a sample of Debussy’s Footsteps in the Snow—are converted into data patterns open to interpretation, or retranslation: a means of stripping an image or sound to its essential components so that it may freely pass across perceived media boundaries and, in turn, redefine them.
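The gesture Haines describes can be approximated in a few lines of code. The sketch below (the file names are hypothetical) simply reads the raw bytes of a JPEG and writes them out as audio samples, so that the image can be heard rather than seen:

```python
# A minimal sketch of the 'retranslation' Haines describes: the raw bytes of a
# JPEG are reinterpreted as 8-bit audio samples and written out as a WAV file.
# The file names below are placeholders.

import wave

def jpeg_to_audio(image_path, wav_path, sample_rate=44100):
    # Read the image as an undifferentiated stream of bytes.
    with open(image_path, "rb") as f:
        raw = f.read()

    # Reinterpret those bytes as unsigned 8-bit mono samples.
    with wave.open(wav_path, "wb") as out:
        out.setnchannels(1)           # mono
        out.setsampwidth(1)           # 8-bit samples
        out.setframerate(sample_rate)
        out.writeframes(raw)          # the image data, heard rather than seen

jpeg_to_audio("pet_dog.jpg", "pet_dog.wav")
```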

A similar sentiment was evident in the opening keynote address from artist and writer Mitchell Whitelaw, a senior lecturer in the School of Creative Communication at the University of Canberra. He discussed an increasing trend across electro-acoustic and audiovisual artforms (VJ-ing, for example) towards a synthetic version of neurological synaesthesia. Our survival as a species is predicated upon the ability to recognise complex patterns, and on a perceptual level our brains tend to dole out little electro-chemical sherbet pops of pleasure when we can produce a recognisable pattern from what at first appears to be raw, chaotic ‘data.’ This “Aha!”, or epiphany, is part of what perceptual theorists call Gestalt Theory. As an example Whitelaw presented a well-known image, credited to RC James, of what at first glance appears to be a sea of black and white blobs. Further scrutiny flips an optical switch to reveal a Dalmatian sniffing across a floor covering of leaves, or perhaps dappled light. Once seen, the Dalmatian cannot be unseen, a perceptual process Whitelaw refers to as “perceptual binding.”

Most of the conceptual and technological approaches discussed in the course of the Medi(t)ations conference connected in some way with artists who have developed creative technologies that mimic the neurological phenomenon of “cross-modal” perceptual binding, in which the melding of 2 or more senses (such as aural and visual) generates the subjective experience of illumination. Audiovisual club and live performance technologies released over the last decade have become increasingly adept at simulating neurological synaesthesia. Several of these were on show at Earpoke, a series of satellite performances presented at the Jade Monkey club and curated by Michael Yuen. Earpoke was an opportunity for local and international guests to share their wares: Adelaide performance artists such as Darren Curtis, DJ Trip, Iain Dalrymple and Supaphatass teamed up with VJ Sustenance.

Putting a number of elegant theories into practice, audiovisual laptop duo, Klipp AV (Swedish for ‘cut apart’) played an enigmatic suite of polyrhythmic grooves that charmed the senses and confused the dancers. During their keynote address, Sweden’s Fredrik Olofsson and Nick Collins discussed a variety of performance tools and techniques that enabled them to capture, remix and play fresh samples of live audio and video in any given environment. Put simply, the set-up consists of one laptop with a DV camera dedicated to live video capture and mixing, and one laptop and microphone dedicated to capturing and processing live and sampled sound. The performance is played out through in-house speakers and data projectors.

Taking a geek-peek backstage is an entirely different matter. This is the place where poetry, musicianship and computational processing arrive in a different costume each night and improvise. Using a complex combination of algorithmic splicing and ‘on-the-fly’ trans-coding, the movements of dancers (or anything, really) can be filmed, edited and remixed in real time. This collage of real-time imagery may also flow back to inform the nature of the audio. Nick Collins suggested that working with dancers could add a rich dimension to future collaborations. Of course, the level of sophistication that goes into the programming and mapping of Klipp AV’s live performances would amount to very little if it weren’t for their consummate stagecraft, which enables the duo to make entertaining and aesthetic decisions in response to the fluctuating moods of audience and venue.
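Klipp AV’s own software was not on display, but the general principle of video informing audio can be sketched schematically. In the toy example below, the amount of movement between successive frames (synthetic stand-ins for a live camera feed) modulates the loudness of a simple tone:

```python
# A schematic sketch (not Klipp AV's actual software) of analysed video feeding
# back into the audio: inter-frame motion modulates the amplitude of a tone.

import numpy as np

def motion_envelope(frames):
    """Mean absolute difference between consecutive frames, scaled to 0-1."""
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    env = np.array(diffs)
    return env / (env.max() or 1.0)

# Stand-in 'camera': 25 random greyscale frames.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(120, 160)) for _ in range(25)]

# Map each frame's motion value to one short burst of a 220 Hz tone.
sr, burst = 44100, 2048
t = np.arange(burst) / sr
audio = np.concatenate([amp * np.sin(2 * np.pi * 220 * t)
                        for amp in motion_envelope(frames)])
```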

Whilst artists such as Klipp AV are emphatic that live performance is the most important element of their work (they don’t record any of their events), the archiving of much electro-acoustic work, according to composer Peter McIlwain, is problematic for several reasons. He analysed common methods for composing electronic or computer-based works, making the excellent point that, unlike traditional musical composition, there is no universal language for notating and transcribing these works. The difficulty of preserving them is heightened further by the rate at which the technologies involved become redundant.

A huge array of tentatively related subjects emerged in the conference and workshops. Many sessions were dedicated to A-Life, or generative programming, as a means of introducing chance elements into an artwork or musical composition. Pierre Proske’s installation, Synchronised Swamp, uses customised software to mimic the phenomenon of natural synchronisation—such as pendulums swinging at different rates falling into the same tempo—or, in the case of his installation, the dynamic coupling of bird chirps and frog croaks.
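The principle at work is nicely captured by the classic Kuramoto model of coupled oscillators, in which each oscillator is nudged towards the phase of its neighbours until the group falls into step. The sketch below illustrates that general phenomenon only; it is not Proske’s software:

```python
# A minimal Kuramoto-model sketch of natural synchronisation: oscillators with
# slightly different natural rates pull each other into a common tempo.

import numpy as np

def kuramoto(n=10, coupling=1.5, steps=2000, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    freqs = rng.normal(1.0, 0.1, n)        # each 'pendulum' has its own rate
    phases = rng.uniform(0, 2 * np.pi, n)  # and starts out of step
    for _ in range(steps):
        # Each oscillator is nudged towards the phase of every other one.
        diffs = phases[None, :] - phases[:, None]
        phases += dt * (freqs + (coupling / n) * np.sin(diffs).sum(axis=1))
    # Order parameter: 0 = incoherent, 1 = perfectly in step.
    return abs(np.exp(1j * phases).mean())

print(kuramoto())   # approaches 1.0 as the oscillators fall into the same tempo
```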

Gordon Monro’s Evochord is an evolutionary breeding program of colour and semitones based on a genetic algorithm. Each semitone is assigned a colour according to whether its pitch is low, intermediate or high. The colours and sounds that come to dominate are the semitone and colour pairings that occur together in the greatest numbers, and these accumulate into a chord. Each cycle of Evochord is different, due to the random way in which the chord mutates over a variety of pre-determined time scales.
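The flavour of such a breeding process can be suggested with a toy example (purely illustrative; Monro’s actual algorithm and mappings are his own): a population of semitone and colour pairings is repeatedly selected and mutated, so that the pairings occurring in the greatest numbers come to dominate as an emergent chord.

```python
# A toy Evochord-like process: (semitone, colour) pairs are selected in
# proportion to how common they are, then occasionally mutated.

import random
from collections import Counter

SEMITONES = list(range(12))          # one octave of semitones
COLOURS = ["red", "green", "blue"]   # stand-ins for low/intermediate/high

def evolve(pop_size=60, generations=40, mutation_rate=0.05):
    pop = [(random.choice(SEMITONES), random.choice(COLOURS))
           for _ in range(pop_size)]
    for _ in range(generations):
        counts = Counter(pop)
        # Selection: individuals that match many others are likelier to breed.
        weights = [counts[ind] for ind in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
        # Mutation keeps each cycle different.
        pop = [(random.choice(SEMITONES), random.choice(COLOURS))
               if random.random() < mutation_rate else ind
               for ind in pop]
    return Counter(pop).most_common(4)   # the emergent 'chord'

print(evolve())
```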

Whilst many of the sessions were theoretically and technically dense, the applications of the tools and methods discussed were at times subversively humorous. Rene Wooller’s research into live electronic and automatic music technologies led to the eventual development of the Morpheus project. Morpheus is the musical equivalent of digital image morphing software, whereby 2 images are used as source material, with the transition from one image to the other plotted and then animated into a morphing sequence. Musical morphing of a kind has become increasingly familiar through decades of turntablism and remixing. Wooller’s masterful design is a playful experimentation with the potential effect on the ear (and in live performance) of streamlining these transitions. In a conference presentation he closed with a jocular suggestion that such morphing technology could be used to meld the national anthems of warring nations, perhaps at the Olympic Games.
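Wooller’s note-level morphing is far more sophisticated than anything that fits on a page, but the basic idea of plotting a gradual path from one piece to another can be sketched as follows (the ‘anthems’ here are hypothetical four-note phrases):

```python
# A toy note-level morph: each beat is drawn from the source melody with
# decreasing probability and from the target melody with increasing
# probability. An illustration of the principle, not Wooller's algorithm.

import random

def morph(source_notes, target_notes, steps):
    length = min(len(source_notes), len(target_notes))
    out = []
    for step in range(steps):
        weight = step / (steps - 1)          # 0.0 = all source, 1.0 = all target
        beat = [target_notes[i] if random.random() < weight else source_notes[i]
                for i in range(length)]
        out.append(beat)
    return out

anthem_a = [60, 64, 67, 72]   # hypothetical four-note phrases (MIDI pitches)
anthem_b = [62, 65, 69, 74]
for beat in morph(anthem_a, anthem_b, steps=5):
    print(beat)
```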

In a poetic reference to intermedia and trans-coding, performance artist Catherine Fargher, with composer Terumi Narushima, delivered an entertaining and intelligently subversive paper/performance called “Evolution, Mutation & Hybridity: the influence of Biotechnology Practices in the Development of Chromosome Knitting” (see Shady Cosgrove, “Knit two together”). Advances in technology and science have often been beta-tested through household consumption, and Fargher explains chromosome complexities in the kind of demonstration one might witness in a supermarket or on a home shopping channel. Whilst knitting a quaint crocheted chromosome, ostensibly from lab-acquired genetic material she keeps in her fridge, Fargher is accompanied by Narushima, who produces an ingenious score based upon musical translations of genetic coding and knitting patterns. For artists such as Fargher, the future of artistic cross-modality is perhaps in nanna-technology.
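By way of illustration only (this is an invented mapping, not Narushima’s), a genetic sequence might be ‘translated’ into notes by assigning each DNA base a pitch and grouping codons into three-note cells:

```python
# An invented base-to-pitch mapping, shown only to illustrate the idea of
# translating genetic code into musical material.

BASE_TO_PITCH = {"A": 57, "C": 60, "G": 64, "T": 67}   # MIDI note numbers

def sequence_to_notes(dna):
    dna = dna.upper()
    # Group the sequence into codons and map each base to a pitch.
    return [[BASE_TO_PITCH[b] for b in dna[i:i + 3]]
            for i in range(0, len(dna) - len(dna) % 3, 3)]

print(sequence_to_notes("ATGCGATACGTT"))
```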

ACMC06, Australasian Computer Music Conference, Elder Conservatorium, University of Adelaide, July 11-13

RealTime issue #75 Oct-Nov 2006 pg. 55

© Samara Mitchell; for permission to reproduce apply to realtime@realtimearts.net

1 October 2006