Generative & Visual Music

Introduction to Generative Music & Visual Music

Week 2 UTS – 50847 Visualisation and Sonification Studio

Lecture notes by Damian Castaldi

With such an expansive area to cover, I will be touching on specific aspects of generative music and visual music for the artist/composer, the programmer and the audience/listener/viewer. The listener/viewer in some cases is also the user, with a form of control over a device or interface to create electronic music, composition, sound design, live performance, interactive applications and audio for games. The device/interface can be seen as a new type of electronic musical instrument or new performance tool that provides new ways for musical and visual expression and exploration.

The 3 main areas I would like to cover today and on Friday include:

1) What is generative music?

2) Human interaction & audio-visual technology. How do we interact with generative music and visual music? What are the applications used for?

3) What is Visual Music?

1) What is Generative Music?


A brief look at the work of Brian Eno


In 1996 Brian Eno released the album titled “Generative Music 1” with SSEYO Koan software. In doing so he popularised the term “generative music”, describing music that is ever-different and changing, created by a system.

More recently he has collaborated with Peter Chilvers to produce the Bloom generative music app for iPhone and iPad –

Demonstrate the App on your iPad …

The Koan software has since developed into the Mixtikl, Noatikl and Liptikl applications for the iPhone/iPad/iPod touch and for Mac and Windows computer systems; both trial and free versions are currently available on the Intermorphic website.

Intermorphic (IM) was incorporated at the beginning of 2007 by Pete Cole and Tim Cole. They have a passion for, and a long history of, developing and making available their own cross-platform generative music technologies, audio technologies and applications, and have seen these deployed on mobile devices (Windows Mobile, Symbian, TaoOS [Linux, Nucleus, other embedded OS]) and on PCs (Windows, Mac, VST, AU and browser plugins).

Brian Eno 1996

“Some very basic forms of generative music have existed for a long time, but as marginal curiosities. Wind chimes are an example, but the only compositional control you have over the music they produce is in the original choice of notes that the chimes will sound. Recently, however, out of the union of synthesisers and computers, some much finer tools have evolved. Koan Software is probably the best of these systems, allowing a composer to control not one but one hundred and fifty musical and sonic parameters within which the computer then improvises (as wind improvises the wind chimes).

The works I have made with this system symbolise to me the beginning of a new era of music. Until 100 years ago, every musical event was unique: music was ephemeral and unrepeatable and even classical scoring couldn’t guarantee precise duplication. Then came the gramophone record, which captured particular performances and made it possible to hear them identically over and over again.

But now there are three alternatives: live music, recorded music and generative music. Generative music enjoys some of the benefits of both its ancestors. Like live music it is always different. Like recorded music it is free of time-and-place limitations – you can hear it when and where you want.

I really think it is possible that our grandchildren will look at us in wonder and say: “you mean you used to listen to exactly the same thing over and over again?” (Eno, 1996)

Brian Eno on Music for Airports: “At least one of the pieces on there is structurally very, very simple. There are sung notes, sung by three women and myself. One of the notes repeats every 23 1/2 seconds. It is in fact a long loop running around a series of tubular aluminum chairs in Conny Plank’s studio. The next lowest loop repeats every 25 7/8 seconds or something like that. The third one every 29 15/16 seconds or something. What I mean is they all repeat in cycles that are called incommensurable – they are not likely to come back into sync again.”
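The arithmetic behind those loops is easy to sketch. If we treat the quoted loop lengths as exact fractions (an assumption – Eno himself says “or something like that”), the three loops do eventually realign, because rational periods are strictly commensurable, but only after the least common multiple of their periods, which works out to roughly 27 days:

```python
from fractions import Fraction
from math import gcd, lcm

# Loop lengths as quoted in the lecture, treated as exact fractions.
loops = [Fraction(47, 2),    # 23 1/2 seconds
         Fraction(207, 8),   # 25 7/8 seconds
         Fraction(479, 16)]  # 29 15/16 seconds

# Loops with rational periods realign after the least common multiple
# of the periods: lcm(numerators) / gcd(denominators).
realign = Fraction(lcm(*(f.numerator for f in loops)),
                   gcd(*(f.denominator for f in loops)))

print(f"All three loops realign only every {float(realign):,.1f} seconds")
print(f"(about {float(realign) / 86400:.0f} days)")
```

So within any single listening, the loops drift freely against each other, which is the effect Eno is describing.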



GENERATIVE MUSIC is music that is ever-different and changing, created by a system. How the system works and what you feed into the system is most important. The best systems allow you to control hundreds of musical and sonic parameters within which the computer (system) then improvises. Some of these parameters might include: harmony, scales, movement from note to note (large or small steps), instrument selection ….

Generative music is generally associated with the styles/genres: ambient, industrial ambient, techno, electronica, computer-generated music, algorithmic composition and interactive music.

The rules of Generative music are probabilistic – they are rules that define a kind of envelope of possibilities. The machine improvises within that set of rules.
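A minimal sketch of such a probabilistic rule set, in Python (the parameter names here are hypothetical, not taken from Koan or any other system): the rules define the envelope of possibilities, and the program improvises within it.

```python
import random

# Hypothetical probabilistic rules: an envelope of possibilities.
RULES = {
    "scale": [60, 62, 64, 67, 69],   # C major pentatonic, as MIDI notes
    "max_step": 2,                   # move at most two scale degrees at a time
    "rest_probability": 0.3,         # chance that an event is silence
}

def improvise(rules, length=16, seed=None):
    """Generate one realisation of the rules; each run is different."""
    rng = random.Random(seed)
    scale = rules["scale"]
    degree = rng.randrange(len(scale))
    events = []
    for _ in range(length):
        if rng.random() < rules["rest_probability"]:
            events.append(None)                       # a rest
        else:
            step = rng.randint(-rules["max_step"], rules["max_step"])
            degree = max(0, min(degree + step, len(scale) - 1))
            events.append(scale[degree])
    return events

# Same rules, different realisations:
print(improvise(RULES, seed=1))
print(improvise(RULES, seed=2))
```

Every note falls inside the envelope (the chosen scale, with bounded melodic steps), but no two runs are the same: the rules are under control, the output is not.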

Generative music is the opposite of putting on a record and knowing you are going to hear the same thing you heard last time. It is unpredictable music, which you can play whenever you want just like a record, but it won’t be the same thing each time.

Generative music uses very simple rules which, clustered together, can produce very complex and beautiful results: very simple beginnings with very complex endings that never repeat themselves and will never be identical. It has an organic quality, with some sense of movement and change; every time you play it, something slightly different happens.

It is anything that uses a very small amount of information smartly and economically: like planting seeds in your computer and then letting the computer grow those seeds for you.

In the words of Brian Eno some of the differences between classical or symphonic music and generative music are:

Classical music, like classical architecture, like many other classical forms, specifies an entity in advance and then builds it. Generative music doesn’t do that; it specifies a set of rules and then lets them make the thing. In the words of Kevin Kelly’s great book Out of Control, generative music is out of control, classical music is under control.

Now, out of control means you don’t quite know what it’s apt to do. It has its own life. Generative music is unpredictable; classical music is predicted. Generative music is unrepeatable; classical music is repeatable. Generative music is unfinished – that is to say, when you use a generative system you implicitly don’t know what the end of it will be.

This is an idea from architecture too, from Stewart Brand’s book How Buildings Learn: the move of architecture away from the job of making finished, monumental entities toward the job of making things that would then be finished by the users – constantly refinished, in fact, by the users. This is a more humble and much more interesting job for the architect.

Generative music is sensitive to circumstances – that is to say, it will react differently depending on its initial conditions, on where it’s happening and so on – where classical music seeks to subdue them. By that I mean classical music seeks a neutral battleground, the flat field: a fixed reverberation, not too many emergencies, and people who don’t cough during the music, basically.

Generative forms in general are multi-centered. There’s not a single chain of command which runs from the top of the pyramid to the rank and file below. There are many, many web-like nodes which become more or less active. You might notice the resemblance here to the difference between broadcasting and the Internet, for example.

You never know who made it. With this generative music, am I the composer? Are you, if you buy the system, the composer? Are Tim Cole and his brother Pete, who wrote the software, the composers? Who actually composes music like this? Can you describe it as composition, exactly, when you don’t know what it’s going to be?

Generative pieces of music use the moiré principle, but in a more sophisticated way. They make moirés of different types of elements – not only of single notes or similar sounds, but moirés of, basically, rules about how sounds were made. In physics, a moiré pattern is an interference pattern created, for example, when two grids are overlaid at an angle, or when they have slightly different mesh sizes (Eno, 1996). See the image below.
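The moiré analogy can be sketched in a few lines: overlay two short loops of different lengths and the combined pattern repeats only at the least common multiple of the two lengths (a toy illustration, not any particular piece).

```python
from math import lcm

# Two short rhythmic loops of different lengths.
loop_a = "X..."      # an accent every 4 steps
loop_b = "O...."     # an accent every 5 steps

period = lcm(len(loop_a), len(loop_b))   # the combined pattern's true period
row_a = loop_a * (period // len(loop_a))
row_b = loop_b * (period // len(loop_b))

# Overlay the rows; '*' marks the rare step where both accents coincide.
overlay = "".join("*" if x != "." and y != "." else (x if x != "." else y)
                  for x, y in zip(row_a, row_b))
print(row_a)
print(overlay)
print(row_b)
```

Each loop is trivially simple on its own, yet their superposition drifts continuously and coincides only once per 20 steps: simple rules, interfering, producing a pattern longer and richer than either part.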


Brian Eno discusses an early influence in generative music with Steve Reich’s ‘It’s Gonna Rain’ –

Play an excerpt of ‘It’s Gonna Rain’

It’s Gonna Rain by Steve Reich (not available on YouTube), but look at the documentary Phase to Face.


2) Human interaction & audio-visual technology. How do we interact with generative music and visual music? What are the applications used for?

The device or interface? Personal control of the music/sound; enjoyment (fun, entertainment); emotional, intuitive, creative, instinctive, intellectual, physical and psychological interaction; medical and therapeutic uses.

Some questions to think about:

– is it technology driven?

– is it participatory/collaborative?

– is it a performance?

– does there have to be an audience?

Some good examples:

Björk Biophilia on the iPad (demo) –

• iSyn Virtual Music Studio – A full-featured virtual music studio for Apple iPhone and iPod Touch >>

• Amon Tobin ISAM –

What are the applications used for? Composition, live performance, gaming, sound design, interactive applications.


Nodal >>

Nodal download –

Tutorials –

Download the free app. We will look at this application in the tutorial.

• Make your own iPhone app without coding skills.


3) What is Visual Music? Seen Sound?

The Electronic Arts Experimentation and Research Centre (CEIArtE) of the National University of Tres de Febrero hosts a symposium called Understanding Visual Music (UVM). On their website they offer the following description of visual music.

The term “Visual Music” is a loose term that describes a wide array of creative approaches to working with sound and image. It’s generally used in a field of art where the intimate relationship between sound and image is combined through a diversity of creative approaches typical of the electronic arts.

It may refer to “visualized music”, in which the visual aspect follows the sound’s amplitude, spectrum, pitch, or rhythm, often in the form of light shows or computer animation. It may also refer to “image sonification”, in which the audio is drawn, in some way, from the image.

Sometimes visual music describes a non-hierarchical correlation between sound and image, in which both are generated from the same algorithmic process, while in other instances, they are layered without hierarchy or correlation altogether.

Sound and image may be presented live, on a fixed support or as part of an interactive multimedia installation. (UVM, 2013)
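The “visualized music” mapping described above can be sketched in a few lines of Python: compute the amplitude of a toy signal frame by frame and let a visual parameter (here, simply the width of a bar of ‘#’ characters) follow it. All values are arbitrary toy numbers.

```python
import math

SAMPLE_RATE = 100      # samples per second (toy value)
FRAME = 20             # samples per analysis frame

# A toy signal: a 5 Hz tone with a decaying envelope, one second long.
signal = [math.exp(-n / 40) * math.sin(2 * math.pi * 5 * n / SAMPLE_RATE)
          for n in range(SAMPLE_RATE)]

# For each frame, measure the RMS amplitude and draw a bar whose
# width follows it - the visual tracks the sound.
for start in range(0, len(signal), FRAME):
    frame = signal[start:start + FRAME]
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    print(f"{start / SAMPLE_RATE:4.2f}s " + "#" * round(rms * 60))
```

The same structure generalises: swap RMS amplitude for spectral content or pitch, and swap the text bar for colour, brightness or geometry, and you have the basic visualized-music pipeline.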

Understanding Visual Music (UVM) 2013 was particularly interested in processes where research and creation through interdisciplinary collaboration across different fields of art, science and new technologies become key to the artistic result. Animation, electroacoustic music, image processing, sound design and the digital arts in general can crossfade with the most diverse techniques and technologies, or even with unexpected fields of science, generating the complex interweave that results in a “visual music” work. >>

Further Organisations / Research / Opportunities

hexagram concordia – Center for Research-Creation in Media Arts and Technologies –

• Canadian Electroacoustic Community – CEC eContact! 15.4 – Videomusic: Overview of an emerging art form –

The Electric Canvas

Further reading / Organisations / Research groups / Opportunities

• SSEYO Koan Pro, Generative Music 1, Ark website. Article by Brian Heywood, published in Sound On Sound, October 1996


• What Technology Wants – by Kevin Kelly (co-founder of WIRED)


• Virtual Music – Computer Synthesis of Musical Style by David Cope


• The Long Now Foundation


IRCAM – Institut de Recherche et Coordination Acoustique/Musique


IRCAM is an internationally recognised research center dedicated to creating new technologies for music. The institute offers a unique experimental environment where composers strive to enlarge their musical experience through the concepts expressed in new technologies. These technologies are developed as a result of challenges posed both by new musical ideas and the new domains investigated by scientific teams.

NIME – New Interfaces for Musical Expression >>

Music, Mind and Machine Group – MIT Media Lab


The Music, Mind, and Machine Group at the MIT Media Laboratory is a research group focusing on music and the cognitive investigations describing it. Ranging from the automatic detection of features in existing audio content to the future of efficient transmission over the internet, the group’s mission is to determine and characterise music cognition on both micro and macroscopic scales. Its wide variety of projects and research use empirical and theoretical methods to learn more about the structure of music (tempo, timbre, rhythm, etc.) as well as its effect on sociological groups (genre determination, human emotional response, prediction of human rhythm). The group envisages a new future of audio technologies and interactive applications that will change the way music is conceived, created, transmitted and experienced.

STEIM – the Studio for Electronic-Instrumental Music


Centre for research & development of instruments & tools for performers in the electronic performance arts. Laboratory, workshop, international meeting place, artist hotel, production office, live electroacoustic music, DJ’s, VJ’s, theatre and installation makers, video artists and nomad studio.

solange kershaw & damian castaldi