Audio Metaphor

Audio Metaphor is a research project aimed at designing new methodologies and tools for sound design and composition practices in film sound, game sound, and sound art. We continue to identify the processes involved in working with audio recordings in creative environments, and address these in our research by implementing computational systems that assist human operations. We have successfully developed Audio Metaphor for retrieving audio file recommendations from natural language texts, and have even used phrases automatically generated from Twitter to sonify the current state of Web 2.0. Another success has been the segmentation and classification of environmental audio with composition-specific categories, which was then used in a generative systems approach that lets users generate sound design simply by entering text. As we point Audio Metaphor toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation and, moreover, be instrumental in the design and implementation of new tools for sound designers and artists.
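
To make the pipeline concrete, here is a minimal sketch in Python of the core idea: keywords are extracted from a natural language phrase and matched against a catalogue of recordings that have been pre-labelled as background or foreground sound. All identifiers (Clip, SOUND_DB, recommend_clips) and filenames are hypothetical illustrations for this page, not the project's actual implementation or API.

    # Minimal sketch of a text-to-soundscape recommendation step.
    # The catalogue, labels, and function names below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Clip:
        filename: str
        label: str  # "background" or "foreground", e.g. from a BF classifier

    # Hypothetical keyword-indexed catalogue of pre-labelled recordings.
    SOUND_DB = {
        "rain":    [Clip("rain_on_roof.wav", "background")],
        "market":  [Clip("street_market.wav", "foreground")],
        "traffic": [Clip("city_traffic.wav", "background")],
    }

    def recommend_clips(query: str) -> list[Clip]:
        """Return clips whose catalogue keyword appears in the query text."""
        words = {w.strip(".,!?").lower() for w in query.split()}
        return [clip
                for keyword, clips in SOUND_DB.items() if keyword in words
                for clip in clips]

    if __name__ == "__main__":
        # e.g. a phrase drawn from a natural language source such as Twitter
        for clip in recommend_clips("Rain over the market traffic tonight"):
            print(f"{clip.label:>10}: {clip.filename}")

In the actual system, the recommended recordings would then be segmented, classified, and mixed by the generative composition engine described in the papers below.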

Members: Miles Thorogood, Jianyu Fan, Philippe Pasquier, Arne Eigenfeldt.

Papers & Posters:

  • Thorogood, Miles, Jianyu Fan, and Philippe Pasquier. “Soundscape Audio Signal Classification and Segmentation Using Listeners Perception of Background and Foreground Sound”. Journal of the Audio Engineering Society, Special Issue on Intelligent Audio Processing, Semantics, and Interaction, Oct. 2016.
  • Fan, Jianyu, Miles Thorogood, and Philippe Pasquier. “Automatic Soundscape Affect Recognition Using A Dimensional Approach”. Journal of the Audio Engineering Society, Special Issue on Intelligent Audio Processing, Semantics, and Interaction, Oct. 2016.
  • Bizzocchi, Jim, Arne Eigenfeldt, Philippe Pasquier, and Miles Thorogood. “Seasons II: a case study in Ambient Video, Generative Art, and Audiovisual Experience”. Electronic Literature Organization Conference, British Columbia, Canada, Jun. 2016.
  • Bizzocchi, Jim, Arne Eigenfeldt, Miles Thorogood, and Justine Bizzocchi. “Generating Affect: Applying Valence and Arousal values to a unified video, music, and sound generation system”. Generative Art Conference, 2015, pp. 308–318.
  • Thorogood, Miles, Jianyu Fan, and Philippe Pasquier. “BF-Classifier: Background/Foreground Classification and Segmentation of Soundscape Recordings”. In Proceedings of the 10th Audio Mostly Conference, Greece, 2015.
  • Fan, Jianyu, Miles Thorogood, Bernhard Riecke, and Philippe Pasquier. “Automatic Recognition of Eventfulness and Pleasantness of Soundscape”. In Proceedings of the 10th Audio Mostly Conference, Greece, 2015.
  • Eigenfeldt, Arne, Miles Thorogood, Jim Bizzocchi, and Philippe Pasquier. “MediaScape: Towards a Video, Music, and Sound Metacreation”. Journal of Science and Technology of the Arts 6, 2014.
  • Thorogood, Miles, and Philippe Pasquier. “Computationally Generated Soundscapes with Audio Metaphor”. In Proceedings of the 4th International Conference on Computational Creativity (ICCC), Sydney, Australia, 2013.
  • Thorogood, Miles, and Philippe Pasquier. “Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment”. In Proceedings of the 13th International Conference on New Interfaces for Musical Expression (NIME), Daejeon + Seoul, Republic of Korea, 2013.
  • Thorogood, Miles, Philippe Pasquier, and Arne Eigenfeldt. “Audio Metaphor: Audio Information Retrieval for Soundscape Composition”. In Proceedings of the Sound and Music Computing Conference (SMC), Copenhagen, Denmark, 2012.

Websites: audiometaphor.ca, http://digital-media.ok.ubc.ca/projects/aume/
