A Use Case for Diffusion Models in the Generation of Hybrid AI, Multi-Modal Live Performances

Authors

  • Voyce Sabrina Durling-Jones
  • Aigars Ceplītis

Keywords:

text-to-image, image-to-image, Stable Diffusion, hybrid AI performance, practice-led research, 30th Annual CEEMAN Conference

Abstract

In September 2022, one month after Stability AI's Stable Diffusion was released to the public, the authors of this text presented a Hybrid AI Multi-Modal Live Performance (AI MMLP) at the 30th Annual CEEMAN Conference in Bled, Slovenia, where a sequence of animations based on keynote addresses and generated with Stable Diffusion was projected on two large screens. The animations were experienced in conjunction with a musical score and an interpretive ballet solo, all designed to enhance the hybrid, inter-medial nature of the piece. While text-to-image and image-to-image machine learning models are now common in the mainstream, at the time they were just beginning to gain momentum among tech-savvy visual artists. This article offers insight into the importance of experimentation by artists as new AI approaches become accessible in the public sphere and provides an example of how once-experimental techniques are now deployed across disciplines to produce novel and impactful approaches to generating moving-image visualizations through human-computer creative collaboration.

Reflecting on the performance of 22 September 2022 from the viewpoint of practice-led researchers interested in experimenting with humanistic applications of AI, this article presents a use case for Stable Diffusion in hybrid AI performances and offers commentary on how an audience of business education rectors, deans, and administrators perceived the experience as viewers and, in the case of the keynote speakers, as contributors.
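
To illustrate the kind of technique the abstract refers to, the sketch below chains Stable Diffusion text-to-image and image-to-image calls so that each generated frame seeds the next, producing a slowly drifting frame sequence from a single line of prompt text. This is a minimal illustration, not the authors' production pipeline: it assumes the Hugging Face diffusers library, the runwayml/stable-diffusion-v1-5 checkpoint, and an illustrative prompt, frame count, and strength value.

# Illustrative sketch only (not the authors' pipeline): generate a seed frame
# from text, then feed each frame back through image-to-image so successive
# frames drift gradually, yielding material for a short animation.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint

txt2img = StableDiffusionPipeline.from_pretrained(model_id).to(device)
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id).to(device)

# Hypothetical prompt standing in for a line drawn from a keynote address.
prompt = "abstract visual interpretation of a keynote on leadership, theatrical stage lighting"

frame = txt2img(prompt, num_inference_steps=30).images[0]
frames = [frame]
for _ in range(23):
    # Moderate strength keeps each frame close to the previous one.
    frame = img2img(prompt=prompt, image=frame, strength=0.45,
                    num_inference_steps=30).images[0]
    frames.append(frame)

for i, f in enumerate(frames):
    f.save(f"frame_{i:03d}.png")

Lower strength values keep consecutive frames closer together; higher values let the imagery wander more quickly between frames.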

Published

21-12-2023

Section

Articles