After seeing Shoeg’s project Infiltrate at LEV Matadero, we caught up with him in Barcelona to find out more about his work, to try to decipher the fascinating performance we saw, and to discover the technologies he uses to create his live AV shows.

Primarily, I understand you would consider yourself to be a musician, am I right? Or how would you label yourself? When did you decide to experiment with the A/V side of your show?

In recent years I’ve changed the way I see myself, so I would say I’m an artist. It’s not only sound anymore; I really feel I’m trying to express myself through my code, my visuals, even my movements. I also collaborate with dance companies, where it’s quite important to know how you move on stage, and that made me aware of it. So, for example, I try to play without a table and computer blocking the visual line to the audience. I have also changed my relationship with sound, focusing more on textured layers than on pitch.

I started as a “musician”, but my visual side has always been there. I’ve been working for 15 years as a video editor, and I have always been fascinated by the synchronicity and feedback between image and sound.

Image from Shoeg’s project – Oudeis

Have you created the visual part of the show yourself or collaborated with a visual artist? (If so, who and why?) If not, tell us about how you developed the project and any challenges you faced in dealing with both elements of the performance.

I almost always create my own stuff. I’m not closed to collaborating with other people, but I have tried to involve other artists in the past and, for one reason or another, it almost never happened, except at the very beginning of the project, when I worked with Ana Drucker. After that I spent two or three years without a visual show, and I really missed it. At some point I wanted it back, and I decided I had to refresh my coding knowledge to achieve what I wanted. I had studied Computer Science for a couple of years, so at least I had a starting point, more or less.

I wanted to build a real-time reactive visual system that could be completely autonomous in a live set. The idea was to set up a bunch of rules and make something sound-reactive that could last 45 minutes in a live set without getting boring. So the first challenge in this process was choosing which tools suited my needs best. I tried, for example, openFrameworks, which was a bit too complicated for my coding skills. Later I found out about game engines like Unreal and Unity, which are free and let you do a lot of things through scripting, which is easier to code. It’s also great to have such a good amount of documentation and work done by other people online. I’m curious now about what TouchDesigner can do, but for the moment Unity gives me precise control over what I need.
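To give a rough idea of the kind of sound-reactive rule Shoeg describes, a Unity script can sample the live audio spectrum every frame and map a band’s energy onto some visual property. The sketch below, which scales an object with low-frequency energy, is a hypothetical minimal example, not code from his project; the class name and mapping constants are our own.

```csharp
using UnityEngine;

// Hypothetical sound-reactive rule: read the live audio spectrum each
// frame and drive the scale of this object with low-frequency energy.
public class AudioReactiveScale : MonoBehaviour
{
    const int SampleCount = 512;                 // power of two, 64..8192
    readonly float[] spectrum = new float[SampleCount];
    Vector3 baseScale;

    void Start()
    {
        baseScale = transform.localScale;
    }

    void Update()
    {
        // FFT of everything the scene's AudioListener currently hears.
        AudioListener.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Sum the lowest bins as a crude "bass energy" measure.
        float energy = 0f;
        for (int i = 0; i < 16; i++)
            energy += spectrum[i];

        // Ease towards a scale proportional to the energy, so the motion
        // stays smooth rather than jittering from frame to frame.
        float target = 1f + Mathf.Clamp01(energy * 10f);
        transform.localScale = Vector3.Lerp(
            transform.localScale, baseScale * target, Time.deltaTime * 8f);
    }
}
```

A set of rules like this, layered across many objects and frequency bands, is one plausible way a system could run autonomously for a whole set, as he describes.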

Image from Shoeg’s project – Container

On the other hand, I wanted to work with objects from the real world within a 3D aesthetic. I could have modelled them in Blender, but I had no idea how. So I learned some 3D capture techniques, like photogrammetry and 3D scanning. I remember wanting something more “perfect”, but I discovered almost by accident the beautiful imperfections these techniques introduce into the models.

We recently saw the performance of your latest project, ‘Infiltrate’, at LEV Matadero. What tools and setup are you using for the show?

All the sound was generated using a couple of Etee sensors that the guys at Tangi0 lent me for a couple of months. These devices capture my hand and finger motion, as well as pressure data, and that is converted into MIDI through a Max/MSP patch. Finally, the MIDI is sent to the Virus and the Digitakt. I had to bring hardware synths to the live sets because I need a lot of polyphony to build these big layers of sound, and I couldn’t achieve it with virtual synths. Then the visual side is a Unity app that reacts to the sound mix.
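Shoeg does this conversion in Max/MSP, but the core idea, scaling a normalised sensor reading to a 7-bit MIDI control-change value and sending it to a synth, can be sketched in a few lines. The following hypothetical C# example uses the NAudio library; the controller number, channel, and hard-coded pressure reading are illustrative assumptions, not details from his patch.

```csharp
using System;
using NAudio.Midi;

// Hypothetical sketch of a sensor-to-MIDI mapping (Shoeg's real version
// is a Max/MSP patch). A normalised pressure reading in [0, 1] becomes
// a 7-bit control-change value sent to a hardware synth via NAudio.
class SensorToMidi
{
    static void Main()
    {
        using var midiOut = new MidiOut(0);      // first MIDI output device

        // Placeholder value; a live patch would poll the sensor here.
        float pressure = 0.73f;

        int value = (int)(Math.Clamp(pressure, 0f, 1f) * 127f);

        // Control Change on channel 1; controller 1 (mod wheel) is an
        // arbitrary choice for the example.
        var cc = new ControlChangeEvent(0, 1, MidiController.Modulation, value);
        midiOut.Send(cc.GetAsShortMessage());
    }
}
```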

Infiltrate at LEV Matadero, photo by Hayley Cantor

How does the use of this technology improve or add to the quality and experience of your show for you, as an artist?

It allows me to express myself in ways I could never have imagined. I had never performed as comfortably, or with as wide a palette of possibilities, as I have since I discovered motion sensors combined with the computer. The ability to map any behaviour to any response lets you optimise your abilities in order to get what you want. That can never happen with “traditional” instruments, where you have to adapt to the instrument’s rigidness and background. I also see the coding process as a prosthesis, an extension able to repeat mechanical operations while you pierce through them.

What does the future hold for Shoeg in the world of live performance?

In the near future, I have to improve a lot of things: I want to make my hands more prominent on stage and be less computer-dependent. People keep asking what is happening with the sensors, and I want to make it a bit more understandable. I also have a long list of ideas to code which I don’t have time to make, and I would also like to collaborate with other people. But before that, I want to record a new album. I hope I’ll be able to work on it over the next few months.

You can find out more about Shoeg’s work through his artist page.

Hayley Cantor

Hayley is a multidisciplinary graphic designer and VJ whose career has followed a colourful journey: from working in optics to graduating in Psychology, through mental health and homeless services, to VJing and graphic design. She has a long-standing passion for musical and new media art projects. She makes her mark with technicolour live visual performances under the artistic name VJ AYL.
