Metacreation Lab Newsletter | February 2025
Introducing MIDI-GPT at AAAI 2025
We’re excited to share that our paper on MIDI-GPT, a generative system for computer-assisted music composition, has been accepted at the 39th Annual AAAI Conference on Artificial Intelligence (AAAI). The conference will take place from February 25 to March 4, 2025, in Philadelphia, Pennsylvania.
MIDI-GPT leverages the Transformer architecture to infill musical material at both the track and bar levels, with controls for instrument type, style, note density, polyphony, and more. Our experiments show that it generates original, stylistically coherent compositions while avoiding duplication of its training data. The system is already making waves through industry collaborations and artistic projects.
Introducing GigaMIDI Dataset
We’re excited to announce the release of GigaMIDI, the largest symbolic music dataset available for research, featuring over 1.4 million unique MIDI files, 1.8 billion note events, and 5.3 million tracks! Perfect for music information retrieval, computational musicology, and generative music research.
Identifying expressive performances in MIDI can be tricky, but GigaMIDI introduces new heuristics—Distinctive Note Velocity Ratio (DNVR), Distinctive Note Onset Deviation Ratio (DNODR), and Note Onset Median Metric Level (NOMML)—to detect expressive tracks. Our curated expressive subset includes 1.65 million tracks (31% of the dataset), making it the largest expressive MIDI collection to date.
Read more here: https://www.metacreation.net/projects/gigamidi-dataset
We’re continuously refining GigaMIDI, adding new features, and expanding with additional subsets—stay tuned for more updates in 2025.
Upcoming Autolume Workshop in Berlin
Metacreation Lab PhD student Arshia Sobhan will host a two-day workshop on Autolume in Berlin as part of the MANIFEST:IO Symposium, on February 22 and 23. Autolume is a no-code generative AI system that empowers artists to train and create with their own datasets, offering greater creative control and avoiding the biases of large pre-trained models. This hands-on workshop guides participants through model training, real-time generation, and interactive art creation, including audio-reactive visuals. It is perfect for artists of all backgrounds eager to explore personalized generative AI workflows.
In collaboration with Joshua Rodenberg, Arshia will also perform "Reprising Elements", an audiovisual performance that blends the traditional art of Persian calligraphy with generative AI and sound art. Autolume plays a key role in this performance, driving the generative visual elements.
Philippe Pasquier at CBC Radio Canada
The director of the Metacreation Lab, Professor Philippe Pasquier, has been contributing to CBC Radio Canada's Panorama program (a French-language segment) with bi-monthly segments on AI applications and their implications. Listen to his latest discussion with host Grégory Bernard, aired on February 4, from the link below.
Our Sponsors and Supporters