STOLEN MUSIC at IRCAM

Stolen Music was originally conceived as my final project for the IRCAM Cursus, and presented in Paris on 16th September 2022, but my interest in stealing material (often in morally dubious ways) goes back to at least 2012, with my pieces Tweet Piece #2 and CrowdScoreSing.

There is virtually no original material in Stolen Music.

[aims]

The basic aim of the project is to explore two main ideas that matter to me.

[thanks]

This project owes a huge debt to Claudia Jane Scroccaro, who was my main supervisor, as well as to the whole pedagogy department at IRCAM, particularly Pierre Jodlowski, Sébastien Naves and Jean Lochard, all of whom contributed directly to this project in important ways.

My contribution to this concert unfolded in and around the other pieces in the concert, which were by Julie Zhu, Aida Shirazi (in close collaboration with the dance artist Stefanie Inhelder), Qingqing Teng, and Filippos Sakagian. In this case, their material was taken and used with their blessing, and they were consulted about my plans for it - though mainly because we are all friends, and it wouldn't have been right to misuse their material in a concert of such importance to all of us. They are also all due a big thank you for their generosity in agreeing to let me work with their music in this context! In general, I have no qualms about going against the wishes of other composers (and I expect others to feel free to treat me in the same way).

PRE-CONCERT: FREED SOUNDS INSTRUMENT

As the audience entered the concert space, they were given an envelope with mysterious contents...

Kazoo in envelope

They were also given a small card inserted into the programme notes which contained a QR code linking to a webpage containing an instrument for playing random sounds from freesound.org, alongside one or two event pieces from a set of 32. The instrument is also embedded in this page (click the red text to expand it).

The audience were given the option of using this webpage while they waited, and it was silenced remotely right before the concert began.

[PLAY THE INSTRUMENT]

[What to do / How it works]

If you wish to reproduce the effect of this part of the concert, play this soundfile of event sounds in the background.

  • use your internet-connected device to play sounds
  • you are now an improvising performer taking part in a distributed loudspeaker performance
  • the events on the page handed to you are from a set of 32.
  • they may be used to inform your improvisations (or you can ignore them)
  • this website will be deactivated before the concert begins
  • all the sounds are streamed live from freesound.org, where they were uploaded by people from all over the world
  • the sounds are chosen at random and have not been curated (the script picks a random sound ID between 1 and 100000 each time - see the sketch after this list)
  • Special thanks to the team behind the Freesound API for allowing and facilitating this kind of use.
  • You can find the full set of Events for Freed Sound here
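
For anyone wanting to rebuild something similar, here is a minimal sketch of the random-sound logic in Python (the concert version was JavaScript running in the browser; the endpoint and field names follow the public Freesound API v2, and the key handling is an assumption, so check the current docs):

```python
# Minimal sketch. Assumptions: a valid API key in FREESOUND_TOKEN, and the
# Freesound API v2 sound-instance endpoint/field names as publicly documented.
import os
import random

import requests

API_ROOT = "https://freesound.org/apiv2"
TOKEN = os.environ["FREESOUND_TOKEN"]

def random_preview_url(max_id=100000):
    """Try random sound IDs until one exists; return a streamable preview URL."""
    while True:
        sound_id = random.randint(1, max_id)
        resp = requests.get(f"{API_ROOT}/sounds/{sound_id}/", params={"token": TOKEN})
        if resp.status_code == 200:  # many random IDs point at deleted sounds; skip them
            return resp.json()["previews"]["preview-hq-mp3"]

if __name__ == "__main__":
    print(random_preview_url())
```

The retry loop is the important part: because the IDs are uncurated random picks, a large fraction of requests simply miss, and the instrument has to shrug and roll again.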



Technology used in this section:

JavaScript, FreeSound API, printed materials

FIRST PIECE: BOX, by Julie Zhu

[A percussionist hidden in a box...]

This piece involved a percussionist (Olivia Martin) hidden inside a large wooden box, painted black and placed in the centre of the room. The percussionist made sounds using percussion instruments (especially bass drum and thundersheet) that were with her inside this claustrophobic space, as well as by drawing on the inside surfaces of the box. The contents of the box, as well as its surfaces, were amplified and spatialised across 12 loudspeakers in the room (plus two transducers attached to the thundersheet). The effect was partly to place the audience inside the box in their imagination, and partly to distort the reality of the space inside and outside it.

Julie and the team decided to have the percussionist begin making sounds before the concert, while the Freed Sounds Improvisations were still going on, meaning that we had already weakened the clear boundaries around each piece by the time the concert began.

IRCAM listing

TRANSITION: Doors Into Worlds

part one:

This video was projected on all four sides of the box used in Julie's piece. During the video, the percussionist opened the (previously invisible) door of the box and stepped out of it.

[about the material]

The sonic material is drawn from an array of YouTube videos of ASMR pen scratching (I couldn't use material that was *really* from Julie's piece, because all her material was processed live). The video part is, obviously, a bunch of clips of doors, mainly from famous films and TV shows. An additional element in the video is a Coke can: it was originally going to play a larger part in the project overall, but even in its reduced role it makes sense to use such an iconic trademark, and it foreshadows the red colour palette of later sections of the concert.

Because the box needed to move out of the centre of the room to allow space for the remaining pieces in the concert, I prepared a second video, which was projected onto the wall. This change happened seamlessly and once the audience's gaze had shifted, the production team began the process of safely moving this large box (which was on a platform with wheels) to the side of the hall, parting the audience as they went.

part two:


[about the material]

This clip was designed to hold space. It uses a looped scene from the film Les convoyeurs attendent, with the contents behind a door chroma-keyed out and all kinds of footage washily displayed behind it. Once the production team had moved the box, the video faded down, wherever in the loop it happened to be.


Technology used in this section:

MUBU library for Max (automated audio segmentation with onset detection; real-time concatenative synthesis) and Reaper DAW with various reverb settings. The audio part was made by loading in files, auto-segmenting them, and then improvising with the segments laid out on a graph according to audio descriptors. The resulting tracks were layered in Reaper and processed through different reverbs, aiming for a different space every time a door opened. The video is entirely fixed media, made in DaVinci Resolve, using the Fusion part of the software for chroma-keying some of the doors.
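
As a rough illustration of that segment-and-describe workflow: MUBU is a Max library, so this is a Python analogue using librosa rather than the actual patch, and the input filename is a placeholder.

```python
# Python analogue of the MUBU workflow described above: segment by onsets,
# then compute descriptors so each segment can be placed on a 2-D graph.
import librosa
import numpy as np

y, sr = librosa.load("door_asmr.wav", sr=None)  # placeholder input file
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
bounds = np.concatenate([[0], onsets, [len(y)]]).astype(int)

segments = []
for start, end in zip(bounds[:-1], bounds[1:]):
    seg = y[start:end]
    if len(seg) < 2048:  # skip fragments shorter than one analysis window
        continue
    segments.append({
        "start": start,
        "centroid": float(librosa.feature.spectral_centroid(y=seg, sr=sr).mean()),
        "rms": float(librosa.feature.rms(y=seg).mean()),
    })
# Plotting centroid against rms gives the kind of 2-D descriptor space
# described above: "playing" nearby points back-to-back is, in essence,
# concatenative synthesis.
```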

SECOND PIECE: Né entre corps, by Aida Shirazi
with choreography by Stefanie Inhelder

[shadows between body] [click for more...]

The beautiful central visual idea of this piece was to place the choreographer behind a canvas screen and throw her silhouette onto it with lighting from behind. The simple idea became far more than the sum of its parts by combining the shadows thrown from multiple light sources to abstract the body of the dancer (who was otherwise augmented only by some hair extensions). Inhelder was often a purely geometric shape, but always hinting at her human body, before eventually collapsing into a much more realistic silhouette towards the end of the piece.

Shirazi's sound emerged from an exploration of a bilingual text (Persian/French, the poem of the title) which is only ever heard in fragments or cast through heavy processing.

The combined movement and sound is heavily abstracted (especially to somebody who has limited French and no Persian), but the effect carries the weight of the text across, and there's a heavy intensity to the piece.

At the end of this piece, the silent opening of the next transition appeared immediately on the projection screen closest to this piece's position in the room.

IRCAM listing

TRANSITION: Shadow Puppets

Looking for a visual link that could lighten the atmosphere without parodying or attacking Aida and Stefanie's work, I hit on the idea of using shadow puppets. I found a set of whimsical videos online and realised I could build a weird interspecies love story out of them.

[more on the material]

A big preoccupation for me while working on these sections was that the tone of the pieces around my transitions was mostly dark, serious and intense. I wanted to find a way to freshen the audience up so that they were ready for another intense piece, without seeming to make fun of any of the works. It's a fun challenge to try to make short, playful work that still has a kind of weight or beauty of its own (whether I succeeded is for others to say!).

The dark/light contrast in the colour content of the material allowed me to play some more tricks with transparency (e.g. hiding different images in the light and shadow portions of a rabbit).

So the video here is predominantly shadow puppets, with some more "random" material creeping in, mostly foreshadowing later sections. The audio is entirely derived from Aida's piece: she sent me the full audio stems (each of her sounds separately), and I was able to batch-process them and use them in a kind of rudimentary musical instrument for improvisation. I made the video first, then tried to build long tumbling whispers tied to the gestures in the video.

The piece was projected on the wall, and at the end of the transition a separate projection of red fog came up on another screen, revealing the position of the performer for the next piece, which started immediately. The red fog was designed to loop and then fade down when ready.


Technology used in this section:

As in the Doors transition: the MUBU library for Max (automated audio segmentation with onset detection; real-time concatenative synthesis) and Reaper. The audio part was made by loading in Aida's stems, auto-segmenting them, and improvising with the segments laid out on a graph according to audio descriptors. I then chopped parts out of the resulting files, layering them into the tumbling whispers tied to the gestures in the video.

The video is entirely fixed media, made in DaVinci Resolve, using the Fusion part of the software for the fancy chroma-key effects, as well as for the particle fog at the end.

THIRD PIECE: Ghost shouting, Ghost screaming, by Qingqing Teng

Gui han, Gui jiao

This piece is for a singer in a really crazy, constrictive costume, sort of reminiscent of a snake. Qingqing has an amazing way of making super intense, loud, aggressive and bracing sounds - she says herself that she tries to create music with an energy that seems to flow naturally from within, and in this case it's definitely true, the energy ferocious and uncapped.

There's a video part which uses red bands of colour as an alternative to stage lighting, but which also turns into a large number of different abstract geometric motifs, as well as more connoted material that reflects the material explored in the piece.

The title is a literal translation of "Gui han, Gui jiao", an expression from Qingqing's hometown meaning something like a cry of indifference at a strange phenomenon (although I'm not sure I've done a good job of paraphrasing Qingqing's French text here). The singer represents Qingqing, a person swimming along in the ocean, unable to escape: it's one of those pieces that tumbles around in a complicated inner life.

IRCAM listing

The final sound of Qingqing's piece is this amazing jackhammerish pulse, and I used that as the start of my next transition.

TRANSITION: 3D-WORLD

The next part of the concert strings together a few short sections of my work. Each section is short and the transitions between them are abrupt and jarring. The first one takes the viewer inside a post-human 3D-world, into total immersion in my digital life, through my YouTube history:

[Read more...]

This next section is where I took over the concert for a while and stopped making small transitions between other people's work. I knew that it made sense for Filippos' piece to go last, which meant there were really only three spots for me in the programme. While it might have been more radical or cohesive for me to deal ONLY in the interstitial moments of the concert, there weren't really enough of them to allow me to make a full statement. So I began to think about other material I could steal, and my mind went back to YouTube (I had some previous unfinished work which made use of masses of YouTube clips). As prep for the project, I downloaded my entire YouTube history and extracted a bunch of 5-second clips from all the videos, which served as part of my new material (a sketch of that pipeline follows). Needless to say, none of this material was taken with permission.
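
For the technically curious, a pipeline along these lines can be sketched with yt-dlp and ffmpeg. These are stand-ins - the text above doesn't depend on any particular tools - and the Google Takeout JSON format and random in-point logic are my assumptions:

```python
# Hypothetical reconstruction: walk a YouTube history export, download each
# video, and cut a 5-second clip from it. Tool choices and file names are
# placeholders, not the setup actually used for the piece.
import json
import random
import subprocess

with open("watch-history.json") as f:  # Google Takeout export (assumed format)
    entries = json.load(f)
urls = [e["titleUrl"] for e in entries if "titleUrl" in e]

for i, url in enumerate(urls):
    src = f"source_{i}.mp4"
    subprocess.run(["yt-dlp", "-f", "mp4", "-o", src, url], check=False)
    start = random.uniform(0, 60)  # naive random in-point, in seconds
    subprocess.run(["ffmpeg", "-y", "-ss", f"{start:.2f}", "-i", src,
                    "-t", "5", "-c", "copy", f"clip_{i}.mp4"], check=False)
```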

The effect is supposed to be an overwhelming media-saturation, with content flowing by too quickly to grasp any significant amount of its meaning, so that our brains flit between picking up on patterns of colour, movement or sound, and grabbing on to salient moments in the mess. Of course there's an obvious parallel with the way we all live our digital lives (is this really THAT much more intense than Instagram??), but by placing thousands of different copyrighted videos in this context, the relevance of the specific content becomes virtually nil (a point previously made in related work by Peter Ablinger, among many others; although the most direct point of comparison I can think of is the TV series Chuck).

Technically, the video for this section was extremely demanding and took most of the summer to work out (described below, for nerds). It's the work I am proudest of from that perspective.

The audio here is derived exclusively from the video clips used, which means there is a great degree of chance in the initial sound, and a very slow (quite sculptural) mixing process to try and coax the interesting sound out of the texture.



Technology used in this section:

Max, Jitter, Python, JavaScript, OSC, Orchidea, Reaper, DaVinci Resolve

The audio in most of this section is simply the audio of the underlying videos, with a lot of post-processing in Reaper.

When you hear orchestral sounds, these have been derived by sending the main sound (from the videos) through the Orchidea orchestration assistant, essentially creating an approximation of the soundfile using a database of orchestral samples. I wrote custom Python scripts to run batch processes so that I could work more quickly with Orchidea; I found the results much better in the command-line version than in the Max patch.
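
A batch driver in that spirit might look like the following. The Orchidea executable name and argument order here are placeholders - check your own install's documentation for the real command-line interface:

```python
# Sketch only: run a command-line orchestration tool over a folder of target
# sounds. "orchidea" and its arguments are assumed, not documented usage.
import pathlib
import subprocess

ORCHIDEA = "orchidea"                 # placeholder executable name
CONFIG = "orchestration_config.txt"   # placeholder config file

out_dir = pathlib.Path("orchestrations")
out_dir.mkdir(exist_ok=True)

for target in sorted(pathlib.Path("targets").glob("*.wav")):
    # one orchestration per target soundfile
    subprocess.run([ORCHIDEA, str(target), CONFIG, str(out_dir / target.name)],
                   check=False)
```

The point of the batch script is simply speed: rendering dozens of candidate orchestrations unattended, then auditioning them afterwards, rather than driving the tool one file at a time.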

The videos are created through (a) a set of procedural generative video programs I designed and (b) layering in DaVinci Resolve.

The moving 3-D clouds of videos were created in Max, combining elements of the new jit.gl.polymovie object that comes with Max 8.2+ with a 3-D image-world patch released by Federico Foderaro. Foderaro's patch only works with still images; swapping in polymovie was conceptually simple, but took significant modification to get working. Finally, this patch was augmented with a complex file-loading system that allowed me to load in tagged videos from my library (e.g. by searching for "cat"). This portion of the patch worked in combination with a separate Python file.
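
A hypothetical sketch of the Python side of that file-loading system: Max sends a tag over OSC and Python replies with matching file paths (using python-osc; the index file, OSC addresses and ports are all my assumptions, not the actual patch):

```python
# Assumed setup: Max sends /search <tag> to port 7401; Python answers with one
# /video/load <path> message per match, sent to Max on port 7400.
import json

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

with open("tags.json") as f:  # assumed index: {"cat": ["cat_1.mp4", ...], ...}
    TAGS = json.load(f)

to_max = SimpleUDPClient("127.0.0.1", 7400)  # port Max listens on (assumed)

def search(address, tag):
    """Reply to /search <tag> with the paths of all matching videos."""
    for path in TAGS.get(tag, []):
        to_max.send_message("/video/load", path)

dispatcher = Dispatcher()
dispatcher.map("/search", search)
BlockingOSCUDPServer(("127.0.0.1", 7401), dispatcher).serve_forever()
```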

The 2-D video collages were created using a Python script I developed that relies on the MoviePy library. Various parameters (size of videos, position on screen, duration of clips) are controlled over time from a separate Max patch. The great disadvantage of using Python here is that it precludes "improvisation", as there is no real-time element, which makes experimenting slow and costly. However, even with my limited programming skills, there is almost no practical limit to the number of videos I can use with this script, whereas the limits when working with live video can be hit within a few minutes of beginning to play (NB: there may be some memory-management tricks I am missing in Jitter, but they seem to be pretty advanced if so). A toy version of the collage idea is sketched below.
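
In the real setup the parameters arrived from the Max patch over time; in this sketch (MoviePy 1.x API) they are hard-coded random choices, and the source file names are placeholders:

```python
# Toy 2-D collage: several clips, each resized, positioned and staggered in
# time, composited onto one canvas. Parameter values stand in for the ones
# the Max patch supplied in the real system.
import random

from moviepy.editor import CompositeVideoClip, VideoFileClip

SOURCES = ["clip_0.mp4", "clip_1.mp4", "clip_2.mp4"]  # placeholder files

layers = []
for i, path in enumerate(SOURCES):
    clip = (VideoFileClip(path)
            .subclip(0, 3)                            # duration of each clip
            .resize(width=random.randint(200, 600))   # size of video
            .set_position((random.randint(0, 1300),   # position on screen
                           random.randint(0, 600)))
            .set_start(i * 0.5))                      # staggered entrances
    layers.append(clip)

CompositeVideoClip(layers, size=(1920, 1080)).write_videofile("collage.mp4", fps=25)
```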


TRANSITION: TIK-TOK KAZOO LESSONS

(See the previous video c. 1:43 for this section.)

For reasons that will become clear later, the real point of my piece was to get the whole audience playing kazoos along with me, so naturally I needed to teach everybody how to play one.

[read more...]

Subverting the formality of a new-music context is always fun, but there's an interesting serious point in using videos generated with TikTok in the context of IRCAM. The building is a kind of mecca for people interested in making music with technology. In the early days, one of the important things about IRCAM was that it had a supercomputer; the relevance of that to most computer musicians has eroded, and centres all over the world are technologically equal or superior by now. In fact, the laptops most normal people work on have been capable of creating innovative computer music for a decade or two. Still, the field has remained the domain of the expert who can afford to spend time (and often money) learning arcane software. Apps like TikTok, by contrast, offer sophisticated audio effects and video filters (like deep-learning-driven movement tracking) for free, with an interface designed to be intuitive and immediate enough for children to use in a spare moment.

It's still probably unwieldy to make complex multimedia art exclusively in apps like TikTok (although I want to try this...), but the reality is that we are getting close to the point where this will be practical.

Anyway, that last clip is a one-minute crash course in kazoo playing told through TikTok videos (which are also available individually on my TikTok account @kazooteacher69).

Technology used in this section:

TikTok, DaVinci Resolve, Reaper

This is one section of the piece where original recordings were used - however, they sit in the context of viral TikTok memes, built from readily available combinations of audio tracks and video filters, as well as the notorious text-to-speech voice available in the app. Some of these filters are very sophisticated.

The videos were layered on top of one another in DaVinci Resolve, with some minimal additions using the same video patches mentioned in the 3D-World section.

PRESENTATION: HOW TO PLAY IRISH FOLK MUSIC

Immediately after the TikTok videos ended, a light came up in front of the projection position, and I walked out channeling the spirit of Steve Jobs (although it might be more accurate to say I was Elizabeth Holmes 2.0).
My mission was to sell the audience the idea that, firstly, the kazoo was an Irish folk instrument, secondly, that the music we were about to play was an ancient Irish folk tune, and thirdly that we were going to play it using a revolutionary, AI-driven karaoke machine I had created during my studies at IRCAM.

[RANT ON AI LOGOS]

MY AMAZING SLIDESHOW

A mockup of my IRCAM presentation.

Good evening. Thanks so much for coming out tonight to the one and only, revolutionary Cursus concert 2022. Pause. I think I speak for everybody when I say, it’s truly great to be here.
I hope you’ve all enjoyed and understood our little infomercial. By now, you all know how to play the kazoo – but I'm really here to teach everybody how to play a little Irish folk music.
Not a lot of people know that an early version of the kazoo was first brought to the USA by Irish immigrants during the Great Hunger, Pause, hit play on video... and in fact, the instrument originated by combining and miniaturising the Irish bodhrán and the traditional Irish humming-flute.
So, as an Irish musician in Paris, it makes a lot of sense to showcase my own culture at this once-in-a-lifetime IRCAM event.

So, with your gracious help, we're going to play a well-known Irish folk song together using some groundbreaking technology I've developed here during my Cursus – and of course, using our kazoos. Pause, pick up kazoo, hold it aloft and point at slide...

But first, I want to make totally sure you've all understood how to play the kazoo, and, more importantly, teach you how this AI-driven performance is going to work.

Are you all ready?

IF YOU HAVE A KAZOO IN YOUR HOUSE, YOU SHOULD RETRIEVE IT NOW!

So, first, please let me demonstrate – and then we’ll all hum along together afterwards.
Click the button below for an interactive kazoo learning experience.
Now we'll make it a little bit more complicated and play a famous tune together.

Click the button again when you're ready.

God, that's gorgeous! Sounds like we have some musicians in the house! audience
OK now, together with the world class research teams at IRCAM, I have developed a game-changing system for real-time sing-along folk sessions, and this is the first ever public demonstration.
Our neural-net-driven system is called KARAOK.AI
it’s the only one of its kind in the world,
and together, we are about to MAKE HISTORY!
immediately move to next slide and play video without taking a beat.
This is KARAOK.AI. Pause.
Watch how the red kazoo bounces intuitively across the screen to keep your singing in time with the music.
Click to the next slide and we'll play along with it.
Play along using your kazoo or the handy kazoo keyboard below. Pause.
OK, now that you know how our intuitive new system works, we can get to grips with some Irish folk music.
This is one of my country's most treasured and ancient folk songs,
"Níor aimsigh mé a bhfuil á lorg agam go fóill".
One more thing... Please enjoy the Karaok.AI.

Of course, the reality was that my KARAOK.AI software (which you saw above) was just a fixed-media video put together in a video editing programme. But it would have been impossible to prove that in the concert hall, had anybody called me out!

As seen above, the very end of the show directs the audience's attention to a website. This is a real site. At the time of the concert, it was a relatively minimal page containing a notice that the entire project was dedicated to the public domain, as well as some further information about the band Negativland. The site will be added to on an ongoing basis over the next year or two: the aim is for it to become a hub linking to information about public-domain music and to discourse about copyright in the arts, and also a repository of interactive generative music software relying on a corpus of copyrighted material.

The website can be found here.


FINAL PIECE: Dionysian Skin by Filippos Sakagian

[electricity & intensity]"Skin" refers to sound, and "Dionysian" refers to intensity. The piece is about energy and intensity. I have the feeling here of setting chaos in motion. I'm not trying to turn it into non-chaos, or give it form. I'm not trying to make sound something it isn't. So it will not be music but electricity. IRCAM listing