@DBG3D @t36s Okay, I found another nice excerpt, a bit more minimal than the one above, but maybe also one where it's clearer to hear the approach described earlier. Just to explain once more: all the samples used are one-shot single notes (produced by Simon Pyke/Freefarm). All melodies, chords, chord progressions, rhythms and the overall arrangement are fully generated, mostly (but not exclusively) via cellular automata. The composition system also had other means of creation/control, e.g. probabilistically triggering the recording of notes/events from selected tracks/channels for a few bars and then replaying these phrases later, possibly at a different time scale, transposed, mirrored and/or with different instruments... This proved highly effective (and musical) for building longer progressions and more interesting, multilayered compositions. Some phrases were kept in a memory pool for up to 12 hours (the piece ran for 3 months)...
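(Purely to illustrate the phrase pool idea, here's a minimal TypeScript sketch. This is NOT the original installation code; all names and values are hypothetical:)

```ts
// Hypothetical sketch of the phrase memory pool (not the original code):
// recorded note events can be replayed later, time-scaled, transposed
// and/or mirrored (retrograde).

interface NoteEvent {
    time: number;     // offset within phrase (in beats)
    pitch: number;    // MIDI note number
    velocity: number; // 0..127
    channel: number;  // track/channel ID
}

interface Phrase {
    events: NoteEvent[];
    recordedAt: number; // timestamp (ms)
}

// phrases expire after 12 hours (the piece ran for 3 months)
const MAX_AGE = 12 * 60 * 60 * 1000;

class PhrasePool {
    phrases: Phrase[] = [];

    add(events: NoteEvent[], now: number) {
        this.phrases.push({ events, recordedAt: now });
    }

    // drop phrases older than MAX_AGE
    prune(now: number) {
        this.phrases = this.phrases.filter((p) => now - p.recordedAt < MAX_AGE);
    }

    // pick a random stored phrase (undefined if the pool is empty)
    pick(): Phrase | undefined {
        return this.phrases[(Math.random() * this.phrases.length) | 0];
    }
}

// replay-time transformations
const timeScale = (events: NoteEvent[], f: number) =>
    events.map((e) => ({ ...e, time: e.time * f }));

const transpose = (events: NoteEvent[], semitones: number) =>
    events.map((e) => ({ ...e, pitch: e.pitch + semitones }));

// mirror a phrase in time (retrograde)
const mirror = (events: NoteEvent[]) => {
    const end = Math.max(...events.map((e) => e.time));
    return events
        .map((e) => ({ ...e, time: end - e.time }))
        .sort((a, b) => a.time - b.time);
};

// e.g. replay a random phrase at half speed, up a fifth, reversed:
// const phrase = pool.pick();
// if (phrase) play(mirror(transpose(timeScale(phrase.events, 2), 7)));
```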
As you can hopefully tell, the visuals for that installation were audio-responsive (not responding to the audio per se, but to the events emitted by the composer). Likewise, if the visuals became too agitated/intense, an event would be sent to the composer to quickly dial down/thin out the musical intensity (e.g. trigger a tempo change, mute tracks, lower velocities etc.). This hybrid, coupled two-way feedback worked very well in practice and there were so many moments I wish I had recordings of...
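(Again, just a hypothetical sketch of that two-way coupling, not the actual code; threshold and reactions are made up:)

```ts
// Hypothetical sketch of the two-way feedback (not the original code):
// the composer emits note events to the visuals; when the visuals get
// too agitated, they ask the composer to thin out the music.

interface ComposerEvent {
    pitch: number;
    velocity: number;
    channel: number;
}

interface Visuals {
    onComposerEvent(e: ComposerEvent): void;
    intensity(): number; // current visual agitation, normalized 0..1
}

const INTENSITY_LIMIT = 0.85; // hypothetical threshold

class Composer {
    tempo = 120;
    muted = new Array<boolean>(8).fill(false);
    velocityScale = 1;

    // randomly pick one of several intensity-reducing reactions
    dialDown() {
        switch ((Math.random() * 3) | 0) {
            case 0: // slow down
                this.tempo = Math.max(60, this.tempo - 10);
                break;
            case 1: // mute a random track
                this.muted[(Math.random() * this.muted.length) | 0] = true;
                break;
            case 2: // play more quietly
                this.velocityScale = Math.max(0.25, this.velocityScale * 0.8);
                break;
        }
    }
}

// per-tick coupling check
const update = (composer: Composer, visuals: Visuals) => {
    if (visuals.intensity() > INTENSITY_LIMIT) composer.dialDown();
};
```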
#GenerativeArt #GenerativeMusic #MusicComposition #CellularAutomata #AudioReactive #Installation #VictoriaAlbertMuseum #Video