Metacreation Lab Newsletter | May 2024
New Paper at the Third Generative AI and HCI Workshop during CHI 2024
We are excited to share the news of an upcoming paper presentation by Metacreation Lab's PhD student Ahmed Abuzuraiq and the lab's director, Philippe Pasquier, at the third Generative AI and HCI workshop during CHI 2024 in Honolulu, Hawaiʻi.
Their paper, titled "Seizing the Means of Production: Exploring the Landscape of Crafting, Adapting and Navigating Generative AI Models in the Visual Arts," offers an insightful exploration of the intersection of visual art and artificial intelligence. In it, they map out the landscape of options available to visual artists for creating personal artworks, including crafting, adapting, and navigating deep generative models. They also argue for revisiting model crafting, defined as the design and manipulation of generative models for creative goals, and make the case for studying and designing for model crafting as a creative activity in its own right.
Introducing Autolume: A No-Coding, Neural-Network-Based Visual Synthesizer
After months of dedicated research and development, the Metacreation Lab for Creative AI is thrilled to introduce Autolume: a neural-network-based visual synthesizer.
Autolume is a no-coding, user-friendly visual synthesizer that leverages the artistic potential of Generative Adversarial Networks (GANs) without requiring technical skills. It streamlines dataset preprocessing, model training, and real-time generation, making these accessible to non-technical users. Its unique features include interactive art generation through OSC messaging and a focus on a small-data approach to generative AI, enabling artists to use personal datasets for greater creative control and authenticity.
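For the curious, OSC (Open Sound Control) messages are simple binary packets sent over UDP, which is what makes it easy to drive a synthesizer like Autolume from external controllers or scripts. The sketch below hand-encodes a minimal single-float OSC message using only the Python standard library; the address "/autolume/latent/0" is purely hypothetical and does not reflect Autolume's actual OSC namespace.

```python
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    s += b"\x00"
    while len(s) % 4:
        s += b"\x00"
    return s

def build_osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message:
    padded address + ",f" typetag string + big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

# Hypothetical address for illustration only.
packet = build_osc_message("/autolume/latent/0", 0.5)
# The packet could then be sent with socket.sendto(packet, (host, port)).
```

In practice a library such as python-osc handles this encoding for you; the point here is just how lightweight the protocol is for real-time control.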
Autolume Mzton at the 2024 NoiseFloor Festival
Autolume Mzton, a collaboration between the Metacreation Lab's alumnus Jonas Kraasch and the lab's director Philippe Pasquier, will be shown at the NoiseFloor festival on May 27, 2024.
Autolume Mzton explores dystopian themes through generative music and AI-driven video. The piece exemplifies media art at its peak of automation: generation is entirely autonomous and algorithm-driven, with the human creator at a distance. The process mirrors biological growth, enhancing its dystopian, post-human essence. At the same time, elements such as musical gestures, patching, training data, and coding reflect human creativity, while the generative visuals evoke horizons, sunsets, and notions of utopia, contrasting with the underlying theme.
NoiseFloor, founded at Staffordshire University in 2010, is an event that showcases international electronic music and includes both performances and academic presentations. It has evolved to reflect the artistic and research interests of the university's staff, providing a platform for professional artists and students alike. NoiseFloor 2024 will be hosted by the Lisbon School of Music (Escola Superior de Música de Lisboa, ESML).
Introducing Calliope v0.11: A Co-Creative Interface for Multi-Track Music Generation
We recently released a new version of our Calliope web environment for computer-assisted MIDI musical pattern generation. The system is built on MMM, our Multi-Track Music Machine model, which generates, re-generates, or fills in new musical content based on existing tracks and their instrumentation. In this way, you can use existing musical content, yours or that of others, as a prompt for your generation requests. The interface offers controls for polyphony, note density, and note length for each track of the piece.
Call for Participants
We are seeking participants to help evaluate the potential for adopting music composition systems by both novice and experienced composers. Participants will be asked to create a short 4-track musical piece using a Digital Audio Workstation (DAW) and MIDI, followed by completing a survey. The study, which takes approximately 2 hours, is open to anyone with basic music software knowledge, regardless of their composing experience. As per our research ethics approval, you must be 19+ to participate.
For any questions or further inquiries, please contact researcher Renaud Bougueng Tchemeube directly at rbouguen@sfu.ca.
Our Sponsors and Supporters