

Creative AI

Creative AI Session 5

East Ballroom C

Jean Oh · Marcelo Coelho · Lia Coleman · Yingtao Tian

Thu 12 Dec 11 a.m. PST — 2 p.m. PST


Full Presentation
Autonoetic Intelligence: A Human-AI System for Being with the Time Being

Peggy Yin · Pat Pataranutaporn · Kavin Winson · Auttasak Lapapirojn · Pichayoot Ouppaphan · Monchai Lertsutthiwong · Patricia Maes · Hal Hershfield

Our futures often feel estranged from us, leading us toward poor, short-sighted decision-making. In this paper, we present human-AI interactions with future-self agents that foster deeper emotional relationships with our psychological future selves, helping individuals cultivate better long-term thinking. We investigate how human-AI interactions can scaffold — and inhibit — creative future simulation, wayfinding, and self-continuity. Finally, we call for the incorporation of more autonoetic elements into human-AI systems, to orient the timescape of human experience towards long-term wellbeing.


Full Presentation
Be the Beat: AI-Powered Boombox for Music Generation from Freestyle Dance

Zhixing Chen · Ethan Chang

Throughout history and across cultures, dance has traditionally been guided by music, yet the idea of dancing to create music is rarely explored. In this paper, we introduce Be the Beat, an AI-powered boombox designed to generate music from a dancer's movement. Be the Beat uses PoseNet to describe movements for a large language model, enabling it to analyze dance style and query APIs to find music with a similar style, energy, and tempo. In our pilot trials, the boombox successfully matched music to the tempo of the dancer's movements and even distinguished the subtleties between house and hip-hop moves. Dancers interacting with the boombox reported having more control over their artistic expression and described the boombox as a novel approach to creative choreography. Be the Beat embodies the ambiguity of human and machine creativity, inviting a reexamination of the traditional dynamic between dance and music. With this boombox, the lines between creator and creation, leader and follower, human and AI, are continually blurred.
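The abstract does not detail the implementation, but the pipeline it describes (PoseNet keypoints, a textual movement summary, LLM style analysis, then a music search by style, energy, and tempo) can be sketched roughly as below. The helpers run_llm and search_music are hypothetical placeholders, not part of the authors' system.

```python
# Rough sketch of the pipeline described in the abstract:
# PoseNet keypoints -> textual movement description -> LLM style analysis
# -> query a music-search API by style, energy, and tempo.
# `run_llm` and `search_music` are hypothetical placeholders.
from statistics import mean

def describe_movement(keypoint_frames):
    """Turn PoseNet keypoint frames (flat lists of coordinates) into a coarse text description."""
    displacements = [
        mean(abs(a[i] - b[i]) for i in range(len(a)))
        for a, b in zip(keypoint_frames, keypoint_frames[1:])
    ]
    energy = "high" if mean(displacements) > 0.05 else "low"
    return f"Dancer shows {energy}-energy movement across {len(keypoint_frames)} frames."

def match_music(keypoint_frames, run_llm, search_music):
    """Ask an LLM to infer dance style/energy/tempo, then query a music API."""
    description = describe_movement(keypoint_frames)
    analysis = run_llm(
        "Given this movement description, estimate the dance style "
        f"(e.g. house, hip-hop), energy, and tempo in BPM:\n{description}"
    )
    # `analysis` is assumed to be a dict like {"style": ..., "energy": ..., "bpm": ...}
    return search_music(style=analysis["style"],
                        energy=analysis["energy"],
                        tempo=analysis["bpm"])
```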


Full Presentation
Dialogue with the Machine and Dialogue with the Art World: A Method for Evaluating AI as a Tool for Creativity

Rida Qadri · Piotr Mirowski · Aroussiak Gabriellan · Farbod Mehr · Huma Gupta · Pamela Karimi · Remi Denton

This paper proposes a novel dialogic and experimental evaluation method for generative AI tools in the context of creativity. Expanding beyond traditional evaluations such as benchmarks, user studies with crowd-workers, or focus groups conducted with artists, we draw on sociologist Howard Becker's concept of Art Worlds to demonstrate dialogue as a methodology for evaluation. We present two mutually informed dialogues: 1) 'dialogues with art worlds', placing artists in conversation with experts such as art historians, curators, archivists, and AI researchers, and 2) 'dialogues with the machine', facilitated through structured artist- and critic-led experimentation with state-of-the-art generative AI tools. We demonstrate the value of our method through a case study with artists and experts steeped in non-Western art worlds, specifically the Persian Gulf. We trace how these dialogues help create culturally rich and situated forms of evaluation for the representational possibilities of generative AI, mimicking the reception of generative artwork in the broader art ecosystem. They also allow artists to shift their use of the tools to respond to their cultural and creative context. Our study can provide generative AI researchers with an understanding of the complex dynamics of technology, human creativity, and the socio-politics of art worlds, to build more inclusive machines for diverse art worlds.


Full Presentation
DreamLLM-3D: Affective Dream Reliving using Large Language Model and 3D Generative AI

Pinyao Liu · Keon Ju M. Lee · Alexander Steinmaurer · Claudia Picard-Deland · Michelle Carr · Alexandra Kitson

We present DreamLLM-3D, a composite multimodal AI system behind an immersive art installation for dream re-experiencing. It enables automated dream content analysis for immersive dream-reliving, by integrating a Large Language Model (LLM) with text-to-3D Generative AI. The LLM processes voiced dream reports to identify key dream entities (characters and objects), social interaction, and dream sentiment. The extracted entities are visualized as dynamic 3D point clouds, with emotional data influencing the color and soundscapes of the virtual dream environment. Additionally, we propose an experiential AI-Dreamworker Hybrid paradigm. Our system and paradigm could potentially facilitate a more emotionally engaging dream-reliving experience, enhancing personal insights and creativity.
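As an illustration of the described mapping from dream analysis to the virtual environment, the sketch below parses a structured LLM response and converts sentiment valence into a point-cloud colour. The JSON schema and the valence-to-colour blend are assumptions for illustration, not the authors' implementation.

```python
# Illustrative only: one way to map an LLM's dream analysis (entities,
# interactions, sentiment) onto visual parameters of a 3D scene.
import json

def parse_dream_report(llm_response: str) -> dict:
    """Parse a structured dream analysis assumed to be returned by the LLM as JSON."""
    report = json.loads(llm_response)
    return {
        "entities": report.get("entities", []),        # characters and objects
        "interactions": report.get("interactions", []),
        "valence": float(report.get("valence", 0.0)),  # -1 (negative) .. +1 (positive)
    }

def valence_to_rgb(valence: float) -> tuple:
    """Blend from a cool blue (negative) to a warm amber (positive) point-cloud colour."""
    t = (valence + 1.0) / 2.0
    blue, amber = (0.2, 0.4, 0.9), (0.95, 0.7, 0.2)
    return tuple(b * (1 - t) + a * t for b, a in zip(blue, amber))
```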


Artwork
E(*)star, navigating the landscape of emotion language

Ferdinand Kok · Marieke M. M. Peeters · Stefan Leijnen

This cross-disciplinary installation merges artificial intelligence (AI), linguistics, and interactive art to explore the rich, ambiguous terrain of languages describing human emotion across cultures. At its core is an AI system that guides visitors through a vast, multilingual emotional lexicon. The installation explores the inherent ambiguity of human experience and emotion. Language, our tool of choice for describing feelings, paradoxically both clarifies and complicates our emotional understanding. It collapses the infinite spectrum of feelings into discrete terms while simultaneously giving rise to new emotions tied to abstract concepts. Language itself is an ambiguity: a dynamic system born of consensus, connecting sounds and symbols to experiences and phenomena. We find that, as we create systems to manage ambiguity, we inadvertently generate new ambiguities. Humanity seems caught in an endless tail-chasing effort to crystallise ambiguity into something concrete, creating even more ambiguity along the way. This (dare I say, ambiguous) play is an intriguing part of life, and connecting AI to it should allow even more daring ambiguities to come into being.


Full Presentation
Exposed to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions

Zoe Zhiqiu Jiang

In this paper, we explore the paradox of trust and vulnerability in human-machine interactions, inspired by Alexander Reben's BlabDroid project (2012–2018). In this project, small, unassuming robots successfully elicited personal secrets from individuals, often more effectively than human counterparts. This phenomenon raises intriguing questions about how trust and self-disclosure operate in interactions with machines, even in their simplest forms. We study how trust in technology changes by analyzing the psychological processes behind such encounters. The analysis applies theories such as Social Penetration Theory and Communication Privacy Management Theory to understand the balance between perceived safety and the risk of exposure when secrets are shared with AI. Additionally, philosophical perspectives such as posthumanism and phenomenology serve as a means of engaging with broader questions concerning privacy, trust, and what it means to be human in the digital age. The rapid incorporation of AI into our most private spheres challenges us to rethink and redefine our ethical responsibilities.


Artwork
For All Mankind

Rishabh Chakrabarty

Rishabh Chakrabarty (b. 2000), For All Mankind, 2023. Large-scale immersive installation: 3D stereo-generative projection on screen (6.64 m x 3.75 m), augmented by 5.1 4D audio and 3D glasses.

For All Mankind is a large-scale immersive experience in which individuals embark on a journey as interplanetary explorers, traversing the alien terrains of Mars. It leverages the first stereo-3D generative AI model trained on extraterrestrial imagery of the Martian surface, together with holographic glasses and four-dimensional spatial soundscapes that capture the eerie silence and potential mystic whispers of this distant world. At the core of the project is a first-of-its-kind stereo-generative diffusion model, trained on stereo pairs of Martian terrain. Our model generates synthetic 3D anaglyph images that surpass the capabilities of current state-of-the-art models to date. The installation at the Bright Festival in Florence augmented audiences' senses with holographic 3D glasses and a spatial immersive composition, and aimed to test whether audiences could distinguish between real and generated imagery. It highlights the increasing difficulty of discerning real from generated content, exploring how this ambiguity affects our concept of objective truth, human agency, and interpersonal connections. The work posits that constant exposure to ambiguous reality may significantly impact human cognition, decision-making processes, and overall psychological well-being. For All Mankind represents a significant advancement in the field of Creative AI, combining cutting-edge technology with artistic vision to explore the profound implications of AI-generated realities. It not only showcases technical innovation but also initiates crucial discussions about the future of human perception, agency, and societal dynamics in an increasingly AI-driven world. For immersive viewing, use any 3D glasses and 5.1 surround-sound-compatible headphones.
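The work's stereo outputs are presented as anaglyphs. Independent of the artist's generative pipeline, the standard red-cyan compositing step for a stereo pair looks roughly like this (the left view supplies the red channel, the right view the green and blue channels):

```python
# Generic red-cyan anaglyph compositing from a stereo pair (left/right views).
# This is the standard technique, not the artist's specific pipeline.
import numpy as np
from PIL import Image

def make_anaglyph(left_path: str, right_path: str, out_path: str) -> None:
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]      # red channel from the left-eye view
    anaglyph[..., 1:] = right[..., 1:]   # green and blue from the right-eye view
    Image.fromarray(anaglyph).save(out_path)
```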


Artwork
Memoroscope

Keunwook Kim

Imagine you’ve gone on a trip with your family and friends. What if you could revisit those precious memories without sifting through hundreds of photos dumped into a shared album that you never actually look back at? Memoroscope is an innovative memory-blending device that merges the deeply human experience of face-to-face interaction with advanced AI technologies to explore and create collective memories. Inspired by how we use microscopes and telescopes to examine and uncover hidden details, Memoroscope allows two users to “look into” each other’s faces, using this intimate interaction as a gateway to their shared memories. Through this process, the device leverages AI models such as those from OpenAI and Midjourney, which introduce different aesthetic and emotional interpretations, resulting in a dynamic and evolving collective memory space. This space transcends the limitations of traditional shared albums, offering a fluid, interactive environment where memories are not just static snapshots but living, evolving narratives shaped by the ongoing relationship between users. Memoroscope therefore stands at the intersection of technology and art, redefining how we perceive, share, and engage with our memories, and making the act of remembering a collaborative and emotionally resonant experience.


Artwork
Navigating Ambiguity: Investigative gameplay with a Large Language Object in "A Mystery for You"

Haoheng Tang · Mrinalini Singha

"A Mystery for You" is an educational game designed to develop critical thinking and fact-checking skills in young learners. The game combines a Large Language Model (LLM) with a tangible interface, creating a generative investigative experience where players act as citizen fact-checkers. By eliminating traditional touchscreen interactions, the game promotes thoughtful engagement through a haptically rich Large Language Object (LLO). The LLM generates unique and ambiguous scenarios in each playthrough, while the physical mechanics of the LLO provide structured interactions, balancing the unpredictability of AI with player agency. This innovative approach leverages and manages ambiguity to enhance the investigative experience and game replayability factor while building media literacy skills.


Artwork
Neural Artefact Black

Immanuel Koh

Neural Artefact Black is arguably the world’s first built physical public art-bench generated directly in 3D with a custom Stable DreamFusion model fine-tuned via DreamBooth, and fabricated in an artisanal way with 100% upcycled wood. Commissioned in March 2023 by Arts House Ltd (on behalf of Singapore’s National Arts Council) and completed in July 2023, Neural Artefact Black (or ‘Re-Store’) forms part of the Civic District Placemaking and Public Art Bench project called ‘Benchmarks’ (https://artshouselimited.sg/cvd-whats-on/benchmarks/benchmark-details/re-store and https://www.youtube.com/watch?v=A_fEpYoz368). The art-bench is sited in front of the Asian Civilisations Museum and along the historic Singapore River, situating itself conceptually among the antique Peranakan wooden furniture collection in the former and the long-disappeared small wooden boats (sampans) of the latter. The artistic intention is to blend learnt features of both types of artefacts – digitally with their scanned imagery, and materially with the use of abandoned wooden furniture and retired boats.


Artwork
Neural MONOBLOC Black

Immanuel Koh

Neural MONOBLOC Black is a series of eight furniture pieces generated directly in 3D with a custom ProlificDreamer model fine-tuned via DreamBooth, and fabricated in an artisanal way with 100% upcycled wood. It was completed and launched as an exhibition (23 April – 7 May 2024) at Singapore’s National Design Centre, with support from the DesignSingapore Council and the University of the Arts Singapore (UAS), and as part of a satellite event at Computer-Aided Architectural Design Research in Asia 2024 (https://caadria2024.org/). Neural MONOBLOC Black reflects on the world’s most widely, cheaply, and quickly produced and disposed-of chair: the typically white stackable plastic MONOBLOC chair. The Monobloc chair is also the most common chair imagery on the internet, and thus automatically finds its way into the datasets used to train today’s most powerful foundation AI models such as ChatGPT and Stable Diffusion. The exhibition presents three acts of aesthetic subversion through the Monobloc and raises questions about our all-too-human conception and perception of what design is and can be.


Artwork
Neural Tides: Oceanic Neural Granular Synthesizer

Sabina Hyoju Ahn · Ryan Millett · Seyeon Park

Neural Tides is a multi-granular synthesizer that uses an artificial neural network trained on sound samples from the coastal areas of isolated islands—Hakrim-do and Ulleung-do—in South Korea. These islands have been mapped as sound particles within a latent space, enabling users to freely explore and listen to their coastal environments. The synthesis process mimics the natural degradation of styrofoam, blending nature with artificial elements to reflect the integration of marine debris into the natural world. We employed granular synthesis to illustrate how plastic is broken into small pieces by waves and wind, merging with natural materials on the coast. This process mirrors granular synthesis, where sound is divided into particles and new sounds are created by adjusting grain size. The instrument is designed for precise manipulation, allowing users to navigate the latent space and select sound particles. The custom interface is user-friendly, featuring six knobs, a motion sensor, and a touchpad for easy control, enabling users to navigate sound sources, adjust time segmentation, and add effects. The instrument’s case has been 3D printed with algae-based filament, making it biodegradable. Neural Tides transforms the visual experience of the sea into an auditory one. Our aim is to promote environmental awareness in a lighthearted way with this playful instrument while also sharing and connecting our ideas with others.
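For readers unfamiliar with the technique the abstract builds on, a minimal granular synthesis sketch is given below: the source signal is sliced into windowed grains that are overlap-added with an adjustable grain size. This shows the generic technique only, not the installation's trained neural model or hardware interface.

```python
# Minimal granular synthesis: slice a source signal into windowed grains
# and overlap-add them back, with optional positional jitter per grain.
import numpy as np

def granulate(signal: np.ndarray, grain_size: int, hop: int,
              jitter: int = 0, seed: int | None = None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_size)
    out = np.zeros(len(signal) + grain_size)
    for start in range(0, len(signal) - grain_size, hop):
        # Optionally displace each grain's read position to blur the original ordering.
        src = int(np.clip(start + rng.integers(-jitter, jitter + 1),
                          0, len(signal) - grain_size))
        out[start:start + grain_size] += signal[src:src + grain_size] * window
    return out[:len(signal)]
```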


Artwork
Poespin - Wherever your body reach, there is a poetry

Yiqing Li · Yihua Li · Yetong Xin · Allison Parrish · Hongyue Chen

PoeSpin is a human-AI co-creative writing system that transforms pole-dancing movements into poetry. It challenges societal prejudices against pole dance by transforming movement into poetry through AI, embodying the theme of "Ambiguity". This human-AI collaboration blurs the lines between physical performance and literary creation, questioning traditional notions of authorship and artistic expression. Using three approaches - AI-generated circular poetry, 3D semantic-space mapping, and vector transformations of motion data - PoeSpin creates surreal, evocative poems that defy clear interpretation. This ambiguity invites viewers to reconsider their perceptions of pole dance, challenging the boundary between 'high' and 'low' art forms. By recontextualizing pole dance as a profound, poetic medium, PoeSpin explores the ambiguous interplay between human creativity and machine intelligence, ultimately aiming to liberate this misunderstood art form from stigma and objectification.
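One way to read the "vector transformations of motion data" approach is as a projection from motion features into a word-embedding space, retrieving nearby words as raw material for a poem. The sketch below illustrates that idea with a placeholder projection matrix and embedding table; PoeSpin's actual mapping is not specified in the abstract.

```python
# Sketch: project a motion feature vector into a word-embedding space and
# retrieve the nearest words by cosine similarity. The projection matrix and
# embedding table are placeholders, not PoeSpin's actual components.
import numpy as np

def nearest_words(motion_vec: np.ndarray, projection: np.ndarray,
                  word_vecs: np.ndarray, vocab: list, k: int = 5) -> list:
    query = projection @ motion_vec                    # map motion space -> word space
    query /= np.linalg.norm(query) + 1e-9
    sims = word_vecs @ query / (np.linalg.norm(word_vecs, axis=1) + 1e-9)
    return [vocab[i] for i in np.argsort(-sims)[:k]]   # top-k cosine matches
```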


Short Presentation
RECITYGEN: Interactive and Generative Participatory Urban Design Tool with Latent Diffusion and Segment Anything

mo di · Mingyang Sun · Chengxiu Yin · Runjia Tian · wu yanhong

Urban design profoundly impacts public spaces and community engagement. Traditional top-down methods often overlook public input, creating a gap between design aspirations and reality. Recent advancements in digital tools, such as City Information Modelling and augmented reality, have enabled a more participatory process involving more stakeholders in urban design. Further, deep learning and latent diffusion models have lowered the barriers to design generation, providing even more opportunities for participatory urban design. Combining state-of-the-art latent diffusion models with interactive semantic segmentation, we propose RECITYGEN, a novel tool that allows users to interactively create variations of street-view images of urban environments using text prompts. In a pilot project in Beijing, users employed RECITYGEN to suggest improvements for an ongoing urban regeneration project. Despite some limitations, RECITYGEN has shown significant potential for aligning designs with public preferences, indicating a shift towards more dynamic and inclusive urban planning methods. The source code for the project is available in the RECITYGEN GitHub repository.
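A plausible way to combine Segment Anything with latent-diffusion inpainting for this kind of interactive street-view editing is sketched below using the segment-anything and diffusers libraries. The checkpoint paths, click coordinates, and prompt are illustrative assumptions rather than RECITYGEN's actual implementation.

```python
# Sketch: segment a user-selected region with SAM, then regenerate only that
# region from a text prompt with a latent-diffusion inpainting pipeline.
# Checkpoints, coordinates, and prompt are illustrative placeholders.
import numpy as np
import torch
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry
from diffusers import StableDiffusionInpaintPipeline

image = Image.open("street_view.jpg").convert("RGB")

# 1. The user clicks a region (e.g. a blank wall); SAM returns a mask for it.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)
predictor.set_image(np.array(image))
masks, _, _ = predictor.predict(point_coords=np.array([[320, 240]]),
                                point_labels=np.array([1]))
mask = Image.fromarray((masks[0] * 255).astype(np.uint8))

# 2. Regenerate only the masked region according to the text prompt.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(prompt="a small pocket park with benches and trees",
              image=image, mask_image=mask).images[0]
result.save("street_view_edited.jpg")
```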


Artwork
Symbiosis

Runjia Tian

"Symbiosis" is an exploration of the evolving relationship between artificial intelligence (AI) and human creativity, presented through a real-time generative art installation. The work envisions AI not as a passive tool, but as an active participant in the creative process—a digital entity that interacts with and responds to human expression. In "Symbiosis," users engage with the installation through movement and verbal descriptions, which are captured and interpreted by AI to generate real-time visual responses. This collaboration between human and machine blurs the boundaries between creator and creation, prompting reflections on how AI might redefine the nature of artistic expression in a world where technology and humanity are increasingly intertwined.Through this interactive experience, "Symbiosis" challenges the notion of AI as merely a functional device, suggesting instead that it has the potential to become a partner in creative storytelling. By translating human gestures and spoken words into dynamic visual forms, AI in "Symbiosis" reveals a new dimension of communication—one where emotions and intentions are transformed into visual art, creating a dialogue between the physical and the digital, the human and the artificial.


Full Presentation
Text2Tradition: From Epistemological Tensions to AI-Mediated Cultural Co-Creation

Pat Pataranutaporn · Chayapatr Archiwaranguprok · Phoomparin Mano · Piyaporn Bhongse-tong · Patricia Maes · Pichet Klunchun

This paper introduces Text2Tradition, a system designed to bridge the epistemological gap between modern language processing and traditional dance knowledge by translating user-generated prompts into Thai classical dance sequences. Our approach focuses on six traditional choreographic elements from No. 60 in Mae Bot Yai, a revered Thai dance repertoire, which embodies culturally specific knowledge passed down through generations. In contrast, large language models (LLMs) represent a different form of knowledge—data-driven, statistically derived, and often Western-centric. This research explores the potential of AI-mediated systems to connect traditional and contemporary art forms, highlighting the epistemological tensions and opportunities in cross-cultural translation. Text2Tradition not only preserves traditional dance forms but also fosters new interpretations and cultural co-creations, suggesting that these tensions can be harnessed to stimulate cultural dialogue and innovation.
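The abstract does not specify how prompts are mapped to the six choreographic elements, but one minimal sketch of the translation step is to constrain an LLM to compose only from that fixed vocabulary and then filter its output. The element IDs below are placeholders (the abstract does not name the elements), and run_llm is a hypothetical text-generation callable.

```python
# Sketch: translate a user prompt into a sequence drawn from a fixed vocabulary
# of six choreographic elements. ELEMENTS uses placeholder IDs and `run_llm`
# is a hypothetical LLM callable; this is not the authors' implementation.
ELEMENTS = ["element_1", "element_2", "element_3",
            "element_4", "element_5", "element_6"]

def prompt_to_sequence(user_prompt: str, run_llm, max_len: int = 8) -> list:
    instruction = (
        "Translate the following prompt into a dance phrase using ONLY these "
        f"element IDs, comma-separated: {', '.join(ELEMENTS)}.\n"
        f"Prompt: {user_prompt}"
    )
    raw = run_llm(instruction)
    # Keep only valid element IDs, in order, and cap the phrase length.
    sequence = [tok.strip() for tok in raw.split(",") if tok.strip() in ELEMENTS]
    return sequence[:max_len]
```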