MMM : Multi-Track Music Machine

Multi-Track Music Machine (MMM) is a generative music creation system based on the Transformer architecture, developed by Jeff Ens and Philippe Pasquier. The system generates multi-track music while giving users fine-grained control over iterative, machine-learning-driven resampling.

Built on an autoregressive model, the system can generate music from scratch using a wide range of preset instruments. Existing tracks can condition the generation of new ones, so MIDI input from either the user or the system can be resampled into further layers of a composition.
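
Conceptually, track-conditional generation amounts to priming the model with the tokens of the existing tracks and then sampling a new track autoregressively. The sketch below illustrates that idea under assumed names; the model, tokenizer, and token labels are hypothetical, not MMM's actual API:

```python
# Minimal sketch of track-conditional autoregressive sampling.
# All names (model, tokenizer, token labels) are illustrative
# assumptions, not MMM's actual interface.

def generate_track(model, tokenizer, existing_tracks, instrument, max_tokens=512):
    """Sample a new track conditioned on the tracks already present."""
    # The existing tracks form the prompt; the new track's header tokens
    # (TRACK_START plus an instrument choice) prime the model to continue it.
    prompt = []
    for track in existing_tracks:
        prompt += tokenizer.encode_track(track)
    prompt += [tokenizer.token("TRACK_START"),
               tokenizer.token(f"INST={instrument}")]

    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_tokens:
        next_token = model.sample_next(tokens)  # one autoregressive step
        tokens.append(next_token)
        if next_token == tokenizer.token("TRACK_END"):
            break
    # Decode only the newly generated portion back into MIDI events.
    return tokenizer.decode_track(tokens[len(prompt):])
```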

MMM allows the note density (i.e., the number of note onsets) to be specified for each track, shaping the results of resampling. The system also allows a set of bars to be selected, edited, and resampled, affording the user a fine degree of control, as sketched below.
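
Bar-level resampling can be pictured as an inpainting loop: the selected bars are masked out, and the model regenerates just those spans at the requested note density. The following is a hypothetical illustration of that control surface; the function and parameter names are assumptions, not the project's interface:

```python
# Hypothetical illustration of bar-level inpainting with a per-track
# note-density control; names are assumptions, not MMM's interface.

def resample_bars(model, tokenizer, piece, track, bars, density):
    """Regenerate the selected bars of one track at a given note density."""
    tokens = tokenizer.encode_piece(piece, densities={track: density})
    for bar in bars:
        # Mask each selected bar so the model must fill it back in.
        tokens = tokenizer.mask_bar(tokens, track=track, bar=bar)
    filled = model.infill(tokens)  # resample only the masked spans
    return tokenizer.decode_piece(filled)

# e.g. regenerate bars 4-7 of track 2 at a higher density:
# new_piece = resample_bars(model, tokenizer, piece, track=2,
#                           bars=range(4, 8), density=8)
```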

The project is currently in active development. Forthcoming Max for Live plugins and further extensions will pave the way for experimental collaboration with musicians, composers, and sound artists, inviting them to bring the system into their own creative processes.

To see the system in action, watch the demonstration videos, listen to some of the examples, or try the demo yourself on Google Colab.

In one demonstration, MMM is used to generate a 16-bar multi-track progression.

In another, the project starts with MIDI information from The Beatles' "Here Comes the Sun", which is resampled to generate new instrumental tracks. An additional track is created with MIDI data from "Pump Up the Jam" by Technotronic, and selected bars are resampled to instantly create new variations.

Abstract

In contrast to previous work, which represents musical material as a single time-ordered sequence, where the musical events corresponding to different tracks are interleaved, we create a time-ordered sequence of musical events for each track and concatenate several tracks into a single sequence. This takes advantage of the attention mechanism, which can adeptly handle long-term dependencies. We explore how various representations can offer the user a high degree of control at generation time, providing an interactive demo that accommodates track-level and bar-level inpainting, and offers control over track instrumentation and note density.
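
As a rough illustration, a two-track, one-bar piece under this representation serializes track by track rather than interleaving events across tracks. The token names below are abbreviated and follow the paper's description only loosely; the exact vocabulary differs:

```python
# Sketch of the track-concatenated sequence described above.
# Token names are illustrative; the paper's exact vocabulary differs.
piece = [
    "PIECE_START",
    # Track 1 (piano): all of its bars appear contiguously...
    "TRACK_START", "INST=0", "DENSITY=4",
    "BAR_START",
    "NOTE_ON=60", "TIME_SHIFT=4", "NOTE_OFF=60",
    "NOTE_ON=64", "TIME_SHIFT=4", "NOTE_OFF=64",
    "BAR_END",
    "TRACK_END",
    # ...then track 2 (bass) is concatenated after it, never interleaved,
    # while attention can still relate simultaneous events across tracks.
    "TRACK_START", "INST=33", "DENSITY=2",
    "BAR_START",
    "NOTE_ON=36", "TIME_SHIFT=8", "NOTE_OFF=36",
    "BAR_END",
    "TRACK_END",
]
```

Because each bar occupies a contiguous span of tokens, selecting a bar or a whole track for inpainting reduces to masking and regenerating that span.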

For more details, you can read the paper.

Research Papers

Ens, Jeff, and Philippe Pasquier. "MMM: Exploring Conditional Multi-Track Music Generation with the Transformer." arXiv preprint arXiv:2008.06048 (2020).
