Algorithmic Music Generation
Project Overview
An interactive art installation.
A table holding an assemblage of odd objects sits at the center of an illuminated space. As people approach the table, a synthwave bassline emerges from hidden speakers. As objects are handled, elements of the composition smoothly mix into the surrounding soundscape and lightscape, following a cycle of discovery and interaction.
Sensors
The objects on the table are simple toys and common items repurposed to act as tactile sensors: water pistols, egg-beaters, staplers, the table itself... plus hidden sensors tracking the movement of people.
Each object hides one or more sensors and a small ESP32 module powered by a rechargeable battery. Every sensor value is communicated over the MQTT backbone. Collectively, the assemblage of objects enables a wide variety of inputs:
- Inertial sensors can detect movement or orientation
- Binary switches can detect presses
- Proximity sensors in the table can detect the presence of objects or people
- The spatial location of people can be mapped onto up to three analog axes
- Together, interaction with the space yields many continuously changing binary and analog data streams
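One lightweight way to organize these many streams is a per-object MQTT topic with a small JSON payload. This is only a sketch: the `table/<object>/<sensor>` topic layout and the object names are assumptions for illustration, not the installation's actual scheme.

```python
import json

def sensor_message(obj, sensor, kind, value):
    """Build an MQTT topic and JSON payload for one sensor reading.

    kind is "analog" (a raw ADC reading) or "binary" (0/1).
    The table/<object>/<sensor> topic layout is an assumption.
    """
    topic = f"table/{obj}/{sensor}"
    payload = json.dumps({"kind": kind, "value": value})
    return topic, payload

# Example: an egg-beater's rotation sensor reporting a raw analog value
topic, payload = sensor_message("eggbeater", "rotation", "analog", 612)
```

Publishing each sensor on its own topic lets the bridge program subscribe selectively and keeps the payloads tiny, which suits battery-powered ESP32 modules.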
All sensor data is sent to an MQTT broker acting as the data backbone. The broker runs on the same Raspberry Pi that performs the music generation step.
A separate Python program connects to the MQTT broker and receives any changes in the sensor values. Analog sensor readings (e.g. 0-1023 from a 10-bit ADC) are scaled to 0-127 MIDI control-change (slider) values. Switches and other binary sensors are mapped to MIDI key presses on particular channels. The MIDI data is then sent on to guide the algorithmic music generation step and/or the lighting.
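The mapping step can be sketched as two pure functions, assuming 10-bit ADC readings and the standard 7-bit MIDI ranges. The real bridge would wrap these in an MQTT client (e.g. paho-mqtt) and a MIDI output library (e.g. mido); those parts, and the default note/channel numbers below, are assumptions not taken from the source.

```python
def analog_to_cc(raw, raw_max=1023):
    """Scale a raw ADC reading (0..raw_max) to a 7-bit MIDI CC value (0-127)."""
    raw = max(0, min(raw, raw_max))      # clamp out-of-range readings
    return round(raw * 127 / raw_max)

def binary_to_note(pressed, note=60, channel=0):
    """Map a switch state to a raw MIDI note-on or note-off message.

    Returned as a (status, note, velocity) tuple; 0x90/0x80 are the
    standard note-on/note-off status bytes, OR-ed with the channel.
    """
    status = (0x90 if pressed else 0x80) | channel
    velocity = 127 if pressed else 0
    return (status, note, velocity)
```

For example, `analog_to_cc(512)` lands near the middle of the CC range, and a stapler press could emit `binary_to_note(True, note=60, channel=2)` on its assigned channel.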
Music Generation
Music is generated algorithmically in SonicPi using multiple interacting live loops. The intended output is a coherent, aesthetic musical "composition" guided by listener interaction in real time.
The music is not generated entirely randomly; rather, the composition is semantically arranged into movements and concepts that should work well together. SonicPi allows these high-level concepts to be coded in Ruby.
The top-level song is broken into parts, here called movements, such as chorus, build, massive drop, teeth-grinding peak, spacey intro, and outro.
Compositional elements are also arranged into groups: for example, bassline, percussion, squanky synths, etc. Elements morph in character depending on which movement they are in, but retain the same basic qualities. The MIDI signals controlling a given element should also all come from the same sensor-laden object; this helps users learn which musical elements their interactions with the space are affecting.
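The movement-by-element arrangement can be sketched as a lookup table. In SonicPi this would live in Ruby hashes inside the live loops, but the structure is language-agnostic; the movement names, synth/sample names, parameter values, and object bindings below are all illustrative assumptions.

```python
# Per-movement character for each compositional element (illustrative values).
ARRANGEMENT = {
    "spacey_intro": {
        "bassline":   {"synth": "subpulse", "cutoff": 60, "amp": 0.4},
        "percussion": {"sample": "soft_kit", "amp": 0.2},
    },
    "build": {
        "bassline":   {"synth": "subpulse", "cutoff": 90, "amp": 0.8},
        "percussion": {"sample": "hard_kit", "amp": 0.7},
    },
}

# Each element is steered by one physical object, so users can learn
# the mapping between their interactions and the music.
ELEMENT_OBJECT = {"bassline": "water_pistol", "percussion": "eggbeater"}

def element_params(movement, element):
    """Look up how an element should sound in the current movement."""
    return ARRANGEMENT[movement][element]
```

Keeping the element's identity (here, the `subpulse` bassline synth) constant across movements while only its parameters change is what preserves "the same basic qualities" as the piece moves between sections.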
The MIDI stream generated by the user interaction table is used to steer the SonicPi algorithm, shifting between movements.
Analog signals, arriving as MIDI slider values, can amplify elements in the mix, change filter and phaser parameters on various effects, shift through more intense sample loops, etc.
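Driving an effect parameter from a slider is a linear rescale of the 7-bit CC value onto that parameter's useful range. A minimal sketch; the example cutoff range of 30-130 (SonicPi-style MIDI note values) is an assumption, not a value from the source.

```python
def cc_to_range(cc, lo, hi):
    """Rescale a 7-bit MIDI CC value (0-127) onto the interval [lo, hi]."""
    cc = max(0, min(cc, 127))            # clamp malformed CC values
    return lo + (hi - lo) * cc / 127

# e.g. a proximity-derived slider driving a low-pass filter cutoff
cutoff = cc_to_range(64, 30.0, 130.0)
```

The same helper covers mix amplitude (`cc_to_range(cc, 0.0, 1.0)`) or phaser depth; only the target range changes per parameter.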




