
Automatic Sound Design

According to the International Organization for Standardization, a soundscape is “an acoustic environment as perceived or experienced and/or understood by a person or people, in context.” The concept of the soundscape was first proposed by Simon Fraser University (SFU) professor R. Murray Schafer in the 1960s. Schafer, together with SFU professor Barry Truax and their student Hildegard Westerkamp, founded the influential World Soundscape Project, a body of groundbreaking work in sound studies. Since then, SFU has been a leader in soundscape studies.


At the Metacreation Lab, we focus on automatic sound design. The goal is to build a system that automatically retrieves, analyzes, selects, processes, and mixes soundscape recordings to compose emotionally evocative artificial soundscapes for video games, movies, virtual-reality environments, and sound art. Our papers and artworks can be found at the following links, and a simplified sketch of the pipeline appears after them:

http://audiometaphor.ca/

http://metacreation.net/sound-synthesis-2/

http://metacreation.net/project/impress/

http://metacreation.net/emo-soundscapes/
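
As a rough illustration of the retrieve–analyze–select–process–mix pipeline described above, the sketch below expresses each stage as a plain Python function. Everything here is a hypothetical placeholder: the function names, the single "arousal" feature, and the synthetic clips are illustrative assumptions, not the lab's actual implementation.

```python
# Hypothetical sketch of an automatic sound-design pipeline:
# retrieve -> analyze -> select -> process -> mix.
# All names and the toy emotion feature are illustrative placeholders.
import numpy as np

SR = 44100  # sample rate in Hz


def retrieve_recordings(query: str) -> list:
    """Stand-in for querying a soundscape database; returns synthetic noise clips."""
    rng = np.random.default_rng(0)
    return [rng.standard_normal(SR * 5) * 0.1 for _ in range(8)]


def analyze(clip: np.ndarray) -> dict:
    """Toy feature extraction: RMS loudness as a crude proxy for arousal."""
    return {"arousal": float(np.sqrt(np.mean(clip ** 2)))}


def select(clips, features, target_arousal: float, k: int = 3):
    """Pick the k clips whose features best match the target emotion."""
    ranked = sorted(zip(clips, features),
                    key=lambda cf: abs(cf[1]["arousal"] - target_arousal))
    return [c for c, _ in ranked[:k]]


def process(clip: np.ndarray, gain: float) -> np.ndarray:
    """Minimal processing stage: apply gain plus a short fade in and out."""
    out = clip * gain
    fade = np.linspace(0.0, 1.0, SR // 10)
    out[:fade.size] *= fade
    out[-fade.size:] *= fade[::-1]
    return out


def mix(clips) -> np.ndarray:
    """Sum equal-length layers and peak-normalize to avoid clipping."""
    layered = np.sum(clips, axis=0)
    peak = np.max(np.abs(layered)) or 1.0
    return layered / peak


if __name__ == "__main__":
    clips = retrieve_recordings("rain, distant traffic")
    feats = [analyze(c) for c in clips]
    chosen = select(clips, feats, target_arousal=0.08)
    soundscape = mix([process(c, gain=0.8) for c in chosen])
    print(soundscape.shape, soundscape.max())
```

A real system would replace each stage with learned models (e.g., emotion recognition for analysis and selection), but the data flow between stages stays the same.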


Jianyu Fan is currently a Ph.D. candidate in the Metacreation Lab at Simon Fraser University. Previously, he worked as a research assistant at the Bregman Media Laboratory at Dartmouth College. His research interests lie in the fields of Affective Computing, Machine Listening, Human-Computer Interaction, and Computational Creativity. In particular, he builds intelligent systems that model human perception and cognition of sound design and video-editing practice, in order to better understand the motivations and intentions behind the creative process. As an artist, he has been playing the piano for over 23 years, and his piano works have been presented at international conferences and art festivals.

Quantifying the Quality of Generated Music

In many cases, generative models are trained on a curated collection of musical works, with the aim of teaching the model to compose novel music that exhibits the stylistic characteristics of the selected works. In contrast to human composers, who create music at a relatively fixed rate, generative models can produce a large quantity of music: computers do not need to take breaks, and generation can be parallelized across several machines with relative ease. This poses a problem for evaluating the quality of the generated music, as humans cannot listen critically for extended periods of time.

In order to address this issue, we developed quantitative methods for evaluating the quality of generated music. StyleRank ranks a collection of generated pieces based on their stylistic similarity to a curated corpus. Experimental evidence demonstrates that StyleRank is highly correlated with human perception of stylistic similarity. It can be used to compare different generative models, or to filter the output of a single model, automatically discarding low-quality pieces; a simplified illustration of this kind of style-based ranking follows below.
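
The sketch below illustrates only the general idea of style-based ranking — scoring each generated piece by its feature-space similarity to a reference corpus — and is not the actual StyleRank algorithm, whose feature set and similarity measure are described in the paper. The pitch-class-histogram feature, the distance measure, and the toy note sequences are all assumptions made for illustration.

```python
# Illustrative sketch of style-based ranking: score generated pieces by
# feature-space similarity to a reference corpus. This is NOT the exact
# StyleRank algorithm; the feature and distance used here are assumptions.
import numpy as np


def pitch_class_histogram(notes: list) -> np.ndarray:
    """Toy feature: normalized histogram of pitch classes (0-11)."""
    hist = np.bincount([n % 12 for n in notes], minlength=12).astype(float)
    return hist / hist.sum()


def style_score(piece_feat: np.ndarray, corpus_feats: np.ndarray) -> float:
    """Similarity to the corpus: negative mean Euclidean distance between
    the piece's features and the features of each corpus piece."""
    return -float(np.mean(np.linalg.norm(corpus_feats - piece_feat, axis=1)))


def rank_pieces(generated: list, corpus: list) -> list:
    """Return indices of generated pieces, most to least corpus-like."""
    corpus_feats = np.stack([pitch_class_histogram(p) for p in corpus])
    scores = [style_score(pitch_class_histogram(p), corpus_feats)
              for p in generated]
    return sorted(range(len(generated)), key=lambda i: scores[i], reverse=True)


if __name__ == "__main__":
    corpus = [[60, 62, 64, 65, 67], [60, 64, 67, 72]]     # C-major-ish corpus
    generated = [[61, 63, 66, 68], [60, 62, 64, 67, 69]]  # chromatic vs diatonic
    print(rank_pieces(generated, corpus))  # the diatonic piece ranks first
```

Ranking by similarity rather than assigning an absolute quality score sidesteps the need for a ground-truth quality label: only the curated corpus and the generated pieces themselves are required.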


Jeff Ens holds a BFA in music composition from Simon Fraser University, and is currently completing a Ph.D. at the School of Interactive Art and Technology as a member of the Metacreation Lab. His research focuses on the development and evaluation of generative music algorithms.