MAVi: Aesthetic Movement Visualization Tool

Movement data is fascinating to data artists for its expressive richness and great potential. We explore this kind of data for generating video sequences and created MAVi, a new video-creation tool that allows movement data visualization, real-time manipulation, and recording. The tool is implemented in Unity and currently supports […]
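
MAVi itself is a Unity application, but the core idea, turning movement data into rendered frames, can be illustrated with a minimal Python sketch. Everything here (the fake joint data, the frame layout) is a hypothetical stand-in, not MAVi's actual pipeline.

```python
# Illustrative sketch only: render fake motion-capture joints to image frames,
# which could then be stitched into a video (e.g., with ffmpeg).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_frames, n_joints = 60, 15
# Stand-in movement data: (frames, joints, xy), a slow random drift.
frames = np.cumsum(rng.normal(0, 0.01, (n_frames, n_joints, 2)), axis=0)
frames += rng.uniform(-1, 1, (1, n_joints, 2))

for t, joints in enumerate(frames):
    fig, ax = plt.subplots(figsize=(4, 4))
    ax.scatter(joints[:, 0], joints[:, 1], s=30)
    ax.set_xlim(-2, 2); ax.set_ylim(-2, 2); ax.axis("off")
    fig.savefig(f"frame_{t:03d}.png", dpi=100)
    plt.close(fig)
```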

MASOM: Musical Agent based on Self-Organizing Maps

Musical Agent based on Self-Organizing Maps (MASOM) is a musical software agent for live performance. MASOM plays experimental music and free improvisation. It learns by listening to audio files such as recordings of performances or compositions. MASOM extracts higher-level features such as eventfulness, pleasantness, and timbre to understand the musical form of what it […]
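
The self-organizing map at MASOM's core can be sketched in a few lines of NumPy. This assumes each sound event has already been reduced to a feature vector (eventfulness, pleasantness, timbre descriptors); it illustrates only the SOM step, not MASOM itself.

```python
# Minimal self-organizing map: cluster audio feature vectors onto a 2D grid.
import numpy as np

rng = np.random.default_rng(1)
features = rng.random((500, 3))          # stand-in per-event audio features
grid_w, grid_h, dim = 8, 8, features.shape[1]
weights = rng.random((grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

for it in range(2000):
    lr = 0.5 * (1 - it / 2000)           # decaying learning rate
    sigma = 3.0 * (1 - it / 2000) + 0.5  # decaying neighborhood radius
    x = features[rng.integers(len(features))]
    # Best-matching unit: grid node whose weights are closest to the input.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Pull the BMU and its neighbors toward the input.
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    influence = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * influence * (x - weights)

# At performance time, an incoming sound's features map to a grid cell, and
# the agent can respond with material stored near that cell.
print("BMU for first event:", np.unravel_index(
    np.argmin(np.linalg.norm(weights - features[0], axis=-1)),
    (grid_w, grid_h)))
```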

Longing + Forgetting

Artificial intelligence is omnipresent. From video games to online love, from driving cars and flying planes to online trading and state-of-the-art military strategy, the notion of the ‘artificial agent’ is now the prevalent paradigm for embodying machines in virtual bodies. From the actions of multiple simple agents, complex movement and assemblages can emerge. Longing + Forgetting presents […]
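
The emergence claim above can be made concrete with a toy flocking sketch: each agent follows two trivial rules (drift toward the group, avoid crowding), yet coherent collective movement appears. This is purely an illustration, not the installation's actual agent model.

```python
# Toy multi-agent emergence: cohesion + separation produce group movement.
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(0, 1, (30, 2))
vel = rng.normal(0, 0.01, (30, 2))

for step in range(200):
    center = pos.mean(axis=0)
    cohesion = (center - pos) * 0.005          # drift toward the flock center
    diff = pos[:, None] - pos[None, :]         # pairwise offsets
    d2 = (diff ** 2).sum(-1) + np.eye(len(pos))  # avoid divide-by-zero on self
    separation = (diff / d2[..., None]).sum(1) * 0.0005  # push from neighbors
    vel = 0.95 * vel + cohesion + separation
    pos += vel

print("spread after 200 steps:", pos.std(axis=0))
```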

Mova: Using Aesthetic Interaction to Interpret and View Movement Data

Mova is an interactive movement tool that uses aesthetic interaction to interpret and view movement data. This publicly accessible, open-source, web-based viewer links with our movement database, allowing interpretation, evaluation, and analysis of multi-modal movement data. Mova integrates an extensible library of feature extraction methods with a visualization engine. Using an open-source web-based […]
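
A sketch of the kind of feature extractor such a library might plug into the viewer: simple kinematic descriptors computed from joint positions over time. The feature names and data layout are illustrative assumptions, not Mova's API.

```python
# Kinematic movement features from a (frames, joints, 3) position array.
import numpy as np

def movement_features(positions, fps=30.0):
    vel = np.diff(positions, axis=0) * fps
    acc = np.diff(vel, axis=0) * fps
    jerk = np.diff(acc, axis=0) * fps
    speed = np.linalg.norm(vel, axis=-1)
    return {
        "quantity_of_motion": float(speed.mean()),  # overall activity level
        "peak_speed": float(speed.max()),
        # Mean jerk magnitude, negated so higher values mean smoother motion.
        "smoothness": float(-np.linalg.norm(jerk, axis=-1).mean()),
    }

rng = np.random.default_rng(3)
mocap = np.cumsum(rng.normal(0, 0.005, (120, 20, 3)), axis=0)  # fake 4 s clip
print(movement_features(mocap))
```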

Affective Movement Recognition and Generation

We build models to teach machines to recognize and synthesize affect expression through human movement. In particular, we investigate questions such as: Can people perceive valence and arousal in motion capture without facial expressions? Do people agree on what they see? Can we train machine learning models using human ratings to classify movements of different […]
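
The classification step described above can be sketched with scikit-learn: movement features per clip, labels derived from human valence/arousal ratings, an off-the-shelf classifier. All data here is synthetic; the project's actual features and models are not specified in this summary.

```python
# Classify movements into valence/arousal quadrants from rated clips.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.random((200, 8))                    # stand-in movement features per clip
# Stand-in labels: quadrant of the valence/arousal plane from mean ratings.
valence, arousal = rng.random(200) - 0.5, rng.random(200) - 0.5
y = (valence > 0).astype(int) * 2 + (arousal > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```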

Impress

Impress is an automatic soundscape affect recognition system aimed at helping sound designers find a more streamlined workflow for adding suitable sound effects to films, assisting soundscape composers in creating emotional soundscape compositions that evoke a target mood, and enabling engineers to design mood-enabled recommendation systems for the retrieval of soundscape recordings. We will continue to […]
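
Soundscape affect recognition can be framed as regression from audio features to listener ratings. The crude spectral statistics and synthetic signals below are assumptions for illustration; the real system's features and model are not described in this summary.

```python
# Predict a pleasantness rating from simple spectral features of a signal.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)

def spectral_features(signal, sr=22050):
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    centroid = (freqs * spec).sum() / spec.sum()                  # brightness
    flatness = np.exp(np.log(spec + 1e-12).mean()) / spec.mean()  # noisiness
    rms = np.sqrt((signal ** 2).mean())                           # loudness
    return [centroid, flatness, rms]

# Synthetic "recordings" with made-up mean listener ratings in [0, 1].
X = np.array([spectral_features(rng.normal(0, a, 22050))
              for a in rng.uniform(0.1, 1.0, 100)])
y = rng.random(100)

model = Ridge().fit(X, y)
print("predicted pleasantness:", model.predict(X[:3]))
```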

Automatic PureData Patch Generation

Synthesizers are hardware or software instruments designed to generate sounds. Given a set of target sounds, the question is: what is a (or the best) synthesizer capable of producing them? This research explores a method for automated synthesizer design to reproduce a given target sound. The synthesizer’s architecture and its parameters are grown using Genetic […]
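
A toy genetic-algorithm sketch of the idea: evolve the parameters of a tiny FM "synthesizer" until its output spectrum matches a target sound. The real work also grows the patch architecture itself; this sketch keeps the architecture fixed and is purely illustrative.

```python
# Evolve FM synth parameters to match a target spectrum.
import numpy as np

rng = np.random.default_rng(6)
SR, N = 8000, 4096
t = np.arange(N) / SR

def synth(params):
    carrier, mod, index = params
    return np.sin(2 * np.pi * carrier * t + index * np.sin(2 * np.pi * mod * t))

def spectrum(x):
    return np.abs(np.fft.rfft(x * np.hanning(N)))

target = spectrum(synth([440.0, 110.0, 2.0]))   # pretend this is the target

def fitness(params):
    return -np.linalg.norm(spectrum(synth(params)) - target)

# Population of candidate patches: [carrier Hz, modulator Hz, mod index].
pop = rng.uniform([50, 10, 0], [2000, 500, 10], (40, 3))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]        # keep the 10 best patches
    # Children: mutated copies of random elites.
    children = elite[rng.integers(10, size=30)] + rng.normal(0, [20, 5, 0.2], (30, 3))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best carrier/mod/index:", best)
```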