Computer music: software aesthetics

Greg Hooper

Paul Doornbusch opened the Australian Computer Music Conference with talk of the early days. Sydney, 1950 or ‘51, and Geoff Hill programs the very first music to come out of a computer (one of the first—memory stored as acoustic pulses washing about within 5 foot long, lacquer-coated tubes filled with mercury). The piece was probably Greensleeves. Hill played it over the phone to his mum, who thought it sounded like a kazoo. Doornbusch has reconstructed the sounds and history in a CD and book. At a later talk Rob Esler gave glimpses of his project to revive some of the classics of electronic composition. Great to be able to actually hear these early works rather than just hear about them.

Conference was busy: concerts, talks, installations, workshops, an informal performance space at night. Ideas in the talks often turn up later in the concerts. The concert hall is a large black barn of a space. In the centre, ringed by speakers, people on chairs are arranged into a tight grid. Arms are folded. Everyone listens. Piece ends, silence, then applause. Repeat until finished. Listening to a machine in company always strikes me as strange. Without a performer there is no need to be with other people except as a convenience—in this case the conference is the only time most of these pieces can be heard. Very different to experiencing music as a sociable (or socialising) medium situated in the active body.

Computer music has been around long enough to have generated its own tradition of sounds and ways of articulating those sounds. Things speed up, things slow down, perhaps it’s time to put away the delay lines for a while. Much would not have been out of place in the (analogue) soundscape of the Barrons’ Forbidden Planet. Contrast the maturity in sound generation with the ongoing problem of maintaining interest in compositional structures across a range of scales—the problem of form in computer music. Computational methods can lead to work that obsesses on novelty in the micro details and excludes any audible evolution across larger time scales. It ends up a random-ish succession of sounds. Unfortunately, the information flow of a random series is constant at all scales, and we are much more likely to respond to changes and patterns in the information flow of music than to the individual bits of information themselves. Hence we habituate to random-sounding music, lose interest, nod off, don’t buy it much.

Luke Harrald successfully tackled musical form by avoiding modelling the audible structure of music directly. Instead, he modelled the “social dynamics involved in music performance” with a system of generative composition based on the tradition of performance indeterminacy developed by Cage, Christian Wolff and others. Under this sort of system the performer’s musical behaviour is constrained and encouraged by a set of compositional rules rather than dictated to by using a strict and determined score. Harrald uses an extension of the Prisoner’s Dilemma equations, normally used to model social situations where cooperation amongst people works out best in the long run (keeping in mind that everyone might be about to shaft everyone else and maybe you’d better get in and shaft them first). The result is the delicate and gentle Surroundings, the highlight of the concert series, sustaining interest both in the moment and the whole.
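Harrald’s actual equations and mappings aren’t described in detail, but the flavour of the approach can be sketched. In an iterated Prisoner’s Dilemma, two players repeatedly choose to cooperate or defect, and their interaction history shapes what each does next; a composer can then map those social dynamics onto musical parameters. The payoff matrix below is the standard textbook one, while the strategies and the pitch mapping are invented here purely for illustration — this is a minimal sketch of the general technique, not Harrald’s system.

```python
import random

# Standard iterated Prisoner's Dilemma payoffs: (my move, their move) -> my score.
# "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they shaft me
    ("D", "C"): 5,  # I shaft them first
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def random_player(opponent_history):
    """An unpredictable 'performer'."""
    return random.choice(["C", "D"])

def play(rounds, strat_a, strat_b):
    """Run the iterated game; return the move sequence and both scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    moves = []
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees only the opponent's history
        b = strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
        moves.append((a, b))
    return moves, score_a, score_b

def to_pitches(moves, base=60):
    """Toy musical mapping (hypothetical): mutual cooperation lands both
    voices on a unison; any defection splits them by a tritone."""
    return [(base, base if (a, b) == ("C", "C") else base + 6)
            for a, b in moves]

moves, sa, sb = play(16, tit_for_tat, random_player)
print(to_pitches(moves)[:4], sa, sb)
```

The point of modelling the game rather than the music is that consonance and dissonance emerge from how the “performers” treat each other over time, rather than from a fixed score.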

Other works mixed performer and machine, and most used the spatial sound array to great effect. Rob Esler was terrific to watch as the frenetic wild man percussionist doing Foley. Angelo Fraietta delivered some excellent manipulation of sounds in space using a very home-made looking, circuit-boards-protruding, guitar-like controller. Jon Drummond used real time video, projecting dye dropped into sugary water onto the large backdrop screen—the diffusion of the dye drove the evolution of the music. Lovely to look at, hard to make the link between the visuals and the sounds. Scott Sinclair and Joe Musgrove went oppositional with a brutal assault of video and audio feedback that bordered on the unethical. Andrew Brown’s software generated a score that pumped out a few bars at a time to the waiting musicians. It improved as it went along. But for Brown, and the computationally focussed composer, aesthetic judgement is often not an end point but an input into the theory of possible musics their software expresses.

Australian Computer Music Conference 2005, Creative Industries Precinct, Queensland University of Technology, Brisbane, July 12-15

RealTime issue #69 Oct-Nov 2005 pg. 42

© Greg Hooper; for permission to reproduce apply to realtime@realtimearts.net

1 October 2005