
A.I. Anne: Neurodiverse Inclusion in Creative Collaborations Between Humans and A.I. Entities

Janet Biggs, BFA: Artist, director, researcher

Richard Savery, Ph.D. candidate (2017-2021): Composer and music technologist

NEW INC, the New Museum’s cultural incubator, NYC: Institutional affiliation

 

Abstract:

The creative and performing arts have often been effective in communicating the range of challenges faced by autistic individuals, but the arts have rarely been used as a tool to facilitate and drive research, communicate rigorous scientific inquiry, and spotlight the potential of creative collaborations with the autistic community.

Janet Biggs, Richard Savery and Mary Esther Carter have presented live and live-streamed performances that feature “A.I. Anne,” a machine learning entity that exhibits a range of responses based on Biggs’ autistic aunt’s behaviors and communication methods. The performances test novel ways of identifying and considering neurodiversity and equity. A.I. Anne was created to drive the inclusion of neurodevelopmental diversity in building and expanding technology.

Our team has been training and patterning an AI entity on one autistic individual’s phenotypic data to produce a series of innovative performances that explore the inclusion of neurodiversity in creative collaborations with technology. The performances combine improvisation, vocalization, and physical movement interwoven with storytelling.

Our research and performances drive partnerships meant to improve and enable greater public understanding of autism spectrum disorder (ASD). Prompted by memories of lived experiences with her aunt, artist Janet Biggs asked composer and music technologist Richard Savery to create an A.I. algorithm that includes some of her aunt’s phenotypic traits. Biggs’ aunt vocalized and hummed, but never communicated through spoken language due to apraxia. A.I. Anne can vocalize, communicate emotions and respond to human emotions, but does not use traditional spoken language. A.I. Anne can “hear” and respond in real time, allowing for interactive exchanges that range from utterances to sophisticated tonal duets. These exchanges elicit deep emotional responses from both performers and audience members, while acting as a feedback loop for future research and subsequent generations of the system.

The current generation uses state-of-the-art convolutional conditional variational autoencoders to react dynamically to emotional responses from singer Mary Esther Carter. These interactions currently support only a narrow form of emotional exchange, limited by the range of the input data. Larger datasets will enable Biggs and Savery to leverage more diversity in response type and drive new understandings and performative outcomes.
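To illustrate the conditioning idea behind a conditional variational autoencoder, the following is a minimal sketch, not the project's actual model: a forward pass in NumPy where both the encoder and decoder receive a one-hot emotion label alongside the audio features. All dimensions, weights, and the emotion labels are hypothetical placeholders standing in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 audio features, 4 emotion classes, 8 latent dims.
N_FEAT, N_EMO, N_LAT = 32, 4, 8

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(0, 0.1, (N_FEAT + N_EMO, 2 * N_LAT))  # outputs mean and log-variance
W_dec = rng.normal(0, 0.1, (N_LAT + N_EMO, N_FEAT))

def one_hot(label, n=N_EMO):
    v = np.zeros(n)
    v[label] = 1.0
    return v

def encode(x, emotion):
    """Encode audio features conditioned on an emotion label."""
    h = np.concatenate([x, one_hot(emotion)]) @ W_enc
    return h[:N_LAT], h[N_LAT:]  # mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

def decode(z, emotion):
    """Decode a latent sample, again conditioned on the emotion label."""
    return np.tanh(np.concatenate([z, one_hot(emotion)]) @ W_dec)

# A "heard" input (e.g. features of the singer's phrase) labeled with emotion 2.
x_in = rng.normal(size=N_FEAT)
mu, log_var = encode(x_in, emotion=2)
x_out = decode(reparameterize(mu, log_var), emotion=2)
print(x_out.shape)  # (32,)
```

Because the same label conditions both encoding and decoding, a trained model of this shape can generate feature vectors for a requested emotion rather than only reconstructing its input.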

Biggs and Savery have expanded their collaborative network to include experience designer and creative director Kate Machtiger, who is autistic, and improvisational violinists Earl Maneein and Mylez Gittens.

In addition to musical emotional interactions, widening our datasets will expand A.I. Anne’s language processing and understanding. Currently, A.I. Anne can listen to text and respond musically based on its semantic meaning and vocal prosody, but it has a limited range of responses and a confined understanding of input. Preliminary work has focused on A.I. Anne generating language; however, this work is based only on a generalized, non-specific text dataset.
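The routing described above, combining semantic meaning with vocal prosody to pick a musical response, could be sketched as follows. This is a deliberately crude, hypothetical illustration (the word lists, pitch-contour test, and response names are all invented), not the project's actual pipeline:

```python
# Hypothetical sketch: route a heard utterance to a musical response by
# combining a crude semantic score with a prosody (pitch-contour) feature.

POSITIVE = {"love", "joy", "yes", "together"}
NEGATIVE = {"no", "alone", "afraid", "stop"}

def semantic_score(text):
    """Count positive minus negative keywords as a stand-in for semantics."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def prosody_rising(pitches):
    """True if the pitch contour (Hz) trends upward, i.e. question-like prosody."""
    return pitches[-1] > pitches[0]

def respond(text, pitches):
    """Map semantics plus prosody to a named (hypothetical) musical gesture."""
    score = semantic_score(text)
    if prosody_rising(pitches):  # question-like: answer with a cadence
        return "rising-answer" if score >= 0 else "falling-answer"
    return "major-hum" if score > 0 else "minor-hum" if score < 0 else "neutral-hum"

print(respond("we sing together", [220.0, 230.0, 247.0]))  # rising-answer
```

A real system would replace the keyword counts with a learned language model and the two-point contour test with continuous prosody features, but the structure of the decision, semantics and prosody jointly selecting a musical reply, is the same.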

Our project will not only facilitate broader avenues of communication and representation, but will also lay the groundwork for future inclusion of neurodiversity in building and expanding technology while promoting innovative research and understanding.

 

Documentation of performances:

https://vimeo.com/488543925

 

References:

Shimon Sings - Robotic Musicianship Finds its Voice

Richard Savery, Lisa Zahray, Gil Weinberg

Handbook of Artificial Intelligence and Music, Springer, 2020

 

A ConvNet for Ethical Robotic Musical Generation and Interaction

Richard Savery, Lisa Zahray, Gil Weinberg

Artificial Intelligence & Creative Music Practice, Routledge, 2021

 

Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication

Richard Savery, Lisa Zahray, Gil Weinberg

Trust, Acceptance and Social Cues in Human-Robot Interaction, Ro-MAN 2020

 

Emotional Musical Prosody: Validated Vocal Dataset for Human Robot Interaction

Richard Savery, Lisa Zahray, Gil Weinberg

2020 Joint Conference on AI Music Creativity