Ben Hayes & Hector Plimmer

GMD Senior Lecturer Craig Burston caught up with GMD graduate Hector Plimmer to hear more about his imminent performance, alongside a musician and AI, as part of the 2022 Grace Jones-curated Meltdown Festival.

CB: I’m really looking forward to next Sunday, when you will be performing with your creative partner Ben Hayes as part of Meltdown. I read you were incorporating Artificial Intelligence into the project and performance. What is the appeal of AI to you as an artist?

HP: There is something incredibly magical about hearing or seeing something that is completely new, yet strangely familiar. AI has the ability to provide that, and people are constantly pushing the limits of what’s possible. I think seeing human culture through an AI lens is fascinating and says a lot about not only the technology but humans as well.

CB: And following on, are you hoping to test or prove anything with the introduction of AI?

HP: I think a lot of people worry that AI will take over in some way. We want to show that there is a wealth of inspiration to be found by using AI in a collaborative sense. For this performance we have generated hours of completely new, never-before-heard sounds, which have provided the starting points for each track in the show. A lot of AI-generated music is pretty unhinged and crazy, but within it are some genuinely amazing moments, including full-on instrumental solos, people ‘talking’ and even clapping. We use those moments and build upon them in order to create something neither Ben, nor I, nor the AI could create alone.

CB: If you cast your mind back to your degree show work, you produced / drew / constructed some fantasy non-Earthbound landscapes that were translated by other musicians into scores. Is the new project an extension of that for you?

HP: In some sense you could say it is! At the root of it is collaboration and sharing ideas whether that be with humans or algorithms. In this example AI is providing us with the stimulus and we are building upon that. I was thinking another interesting idea could be to generate AI music and then see if that could be replayed by humans… maybe next time!

CB: Since you graduated from GMD at LCC you have now released two full solo albums. How does this project with Ben differ from your previous or recent work? What opportunities does it bring?

HP: This project allows me to explore the more dance-focused realm of music; a lot of the generated music just lends itself so well to that for some reason. I should also mention that I would not be able to be involved with this project at all if Ben were not involved. I landed in the world of AI music kind of by accident, not knowing what any of it means. I still don’t really. Ben is currently doing a PhD in Artificial Intelligence and Music, so he is really the brains of this whilst also being an amazing musician himself.

CB: Which other artists using or integrating AI into the creative process inspire you?

HP: Holly Herndon & Arca are two big ones; it really shows in how unique their art is. I’m sure there are lots of people doing incredible work using AI; I’ve only really scratched the surface.

CB: As you are both a musician and a visual artist/designer, how important or valuable is it to assert a degree of control over both aspects of live performance, and are you relinquishing some of that control to AI?

HP: The last time Ben and I did this show I made all the visuals to accompany the music. I wanted them to react to the music in order to differentiate between what we were doing on stage and what the AI was doing. Ultimately this approach wasn’t very effective in serving that purpose. I think when people come to see a show incorporating AI they want to be weirded out to some degree, and I don’t think my flashing geometric shapes provided that. This time around all of the visuals have been generated by AI using two different models. One is called a GAN, or “generative adversarial network”, which actually uses a kind of collaborative process in itself to learn to create images: one side of it has the role of ‘creator’ and the other side has the role of ‘critic’, which tries to judge whether an image is real or fake. So as it generates images it’s constantly checking how accurate they are against the examples it’s been trained on, and adjusting as it goes. The other model is a Guided Diffusion model, a text-to-image generator: you feed it text prompts and it generates the image it thinks best fits the prompt.

We’ve put this whole performance together from scratch, so it has been a bit of a relief to let AI do a lot of the visual legwork. The results are perfect for what we’re going for, too.
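(For readers curious how the ‘creator’/‘critic’ back-and-forth Hector describes looks in practice, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. The network sizes, the random stand-in “real” data and names like `latent_dim` are placeholders for illustration only, not the models used in the show.)

```python
# Toy GAN sketch: a generator ("creator") learns to fool a critic,
# while the critic learns to tell real images from generated ones.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64  # assumed toy sizes for illustration

# The "creator": turns random noise into a fake image (here, a flat vector).
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh(),
)

# The "critic": guesses whether an image is real or generated.
critic = nn.Sequential(
    nn.Linear(image_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(32, image_dim)   # stand-in for real training images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Critic step: label real images 1 and generated images 0.
    c_loss = loss_fn(critic(real), torch.ones(32, 1)) + \
             loss_fn(critic(fake.detach()), torch.zeros(32, 1))
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()

    # Generator step: try to get its fakes judged as "real".
    g_loss = loss_fn(critic(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```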

CB: Finally, I’d love Grace to sign a couple of my records. Any chance you could put me in touch?! 🙂

HP: She’s actually playing at the same time as us on Sunday, so bring your records… you might bump into her!

Ben Hayes & Hector Plimmer, live at the Purcell Room, Southbank Centre, London, Sunday 19th June 2022.
https://www.southbankcentre.co.uk/whats-on/gigs/ben-hayes-hector-plimmer?eventId=900528
Hector’s solo work can be found here: https://hector-plimmer.bandcamp.com
