
Design Insight: Q&A with Sound Designer Cricket Myers

Michael S. Eddy • Design Insight • November 2020 • November 4, 2020

Recently, Tri Cities Opera, in collaboration with Luma Projection Arts Festival, Enhance VR, and Opera Omaha, co-produced Miranda: A Steam Punk VR Experience. The production was presented in Virtual Reality (VR), as well as streamed on YouTube, as a 20-minute condensed version of the original 80-minute opera, with story and score by composer Kamala Sankaram. This headlining event of the LUMA Festival featured live motion-captured performances during every showing. It was a real-time interactive narrative best experienced via a VR headset, but also available on a PC.

Miranda transports viewers to a dystopian future where growing class disparities have reached epic proportions and the criminal justice system serves only as a parody of what it once was. When a wealthy woman dies under mysterious circumstances, the three suspects’ lives are on the line. They each testify in an aria for the chance at freedom. The online audience served as judge and jury to decide the guilty party. Miranda was originally commissioned, developed, and produced at HERE through the HERE Artist Residency Program (HARP).

For this unique presentation of live performance viewed in a virtual world, LA-based theatrical sound designer Cricket S. Myers, along with her sound engineer, Christian Lee, handled the audio aspects. The two drove from LA to Rochester, NY, bringing the sound system and audio gear with them to support Miranda. We caught up with Myers as she and Lee drove back to LA after the production.

VR arena from Miranda: A Steam Punk VR Experience

What were some of the challenges of audio for this production?
One of the challenges of doing it in VR was understanding how the virtual reality world handles sound. We came into it with the intention of doing a lot of spatial audio, which feels more natural in the VR world. I don’t think we saw at first that we didn’t have the time to really spend working on it. The first time we got everything functioning, the spatial audio curve was set in such a way that if you got more than, say, two or three meters away from the singer, their volume dropped off dramatically. We couldn’t have that, because in a room like the void scene the characters are 60 feet away, and I heard no voices. Well, we knew that couldn’t happen. Matt [Gill], the VR developer, was amazing; he said, ‘Let me just flip the curve the other way.’ Then you had to be more than 150 meters away in the VR before anything happened, which worked great, but really we didn’t solve the spatial audio. We had to stop trying; there just wasn’t the time to really spend working on it. The volume stayed the same, but the voice was locked into more of a stereo pan. Even when the people in VR headsets turned their heads, the voices stayed in the same location and moved with their head instead of staying with the characters. We got the audio to where it ran the way we wanted as best we could, but we didn’t have another day to experiment with the spatial audio. If we get to do this again, I want to make the spatial audio work. In speaking with the composer, we also want to make the orchestra spatial in the next iteration. Then, as the audience moves forward or back, the orchestra moves in relation to them. That’s what I want to fix.
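
Myers doesn’t name the engine or its exact attenuation settings, but the curve flip she describes can be sketched as a simple distance rolloff function. A minimal sketch in Python; the min/max distances below are hypothetical stand-ins for the two curves:

```python
import numpy as np

def rolloff_gain(distance_m, min_dist, max_dist):
    """Linear distance rolloff: full volume inside min_dist,
    silent beyond max_dist, fading linearly in between."""
    d = np.clip(distance_m, min_dist, max_dist)
    return 1.0 - (d - min_dist) / (max_dist - min_dist)

# First curve: voices dropped off sharply past ~2-3 m, so a
# character 60 ft (~18 m) away was inaudible.
steep = rolloff_gain(18.0, min_dist=1.0, max_dist=3.0)     # -> 0.0 (silent)

# The "flipped" curve: no audible rolloff until ~150 m, so
# everyone in the scene stayed at full volume.
flat = rolloff_gain(18.0, min_dist=150.0, max_dist=300.0)  # -> 1.0 (full)
```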

With the limited amount of time we had, getting the facial movement right while keeping the balance of the orchestra to the singers was most important. So we let the spatial audio go and ended up going with a more ambient sound for everybody, and then dealt with how the audio transferred and how it synced. There were also all the different ways we output it: we had feeds going straight to VR headsets and we had a YouTube feed, so keeping those in sync and keeping the sound balance the same across each platform was another challenge.
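
The interview doesn’t say how the team kept the platforms aligned; one common approach, sketched here with an invented 120 ms offset, is to measure the latency difference between output paths and delay the faster one by a fixed amount:

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz

def delay_feed(audio, offset_ms):
    """Delay one feed by a fixed offset so it lines up with a
    slower output path (e.g. an encoder with more latency)."""
    pad = int(SAMPLE_RATE * offset_ms / 1000)
    return np.concatenate([np.zeros(pad, dtype=audio.dtype), audio])

# Invented example: if the VR path ran 120 ms ahead of the
# YouTube encode, delaying the VR feed by 120 ms re-aligns them.
program = np.random.randn(SAMPLE_RATE * 2).astype(np.float32)  # stand-in audio
vr_feed = delay_feed(program, 120)
youtube_feed = program  # already the slower path
```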

Since audio engineer Christian Lee was in the car during the interview with Myers, we asked him about a challenge specific to working in VR.

LEE: There were a lot of wireless frequencies between the audio and the VR sensors on this production. It turns out that the VR motion capture sensors are really sensitive to having metal around, and of course we promptly put two metal mic packs on each singer. So there was some maneuvering of where we could put our packs on the singer’s body in relation to their sensors. Also, if our two packs got too close together, sometimes their audio would go wonky or our IEM would go out. We managed to find places on the body where everything was stable.

How did the audio design support the narrative?
As for how it relates back to the narrative, so much of that goes back to the music. In opera, the music tells the story. So, for the music direction, I spent a lot of time working with the tracks, changing the mix and changing the orchestration, as all the music was prerecorded before we got into rehearsals.

In rehearsals, we did a lot of work because we took an 80-minute opera and trimmed it down to 20 minutes, or 18 minutes, really. As the music director found they were missing emotions and missing moments where we needed to breathe, where we needed to spend a little more time, the two of us worked together to edit the tracks to get those moments back in a way that the music could really resonate and let the story be told.

VR room from Miranda: A Steam Punk VR Experience

Speak about how you created the room tones.
They sent us pictures of what all the rooms [virtual rooms/the sets] looked like, and then Christian sat with all the different reverbs and effects in the console and set different sounds for each room. A lot of them were very naturalistic, but we talked a lot about the courtroom. We wanted it to feel grand, even though technically there were no walls around it, so there wouldn’t be a lot of reverb. We didn’t want it to sound big and full, because the music was more percussive, so we didn’t want to go too washy with it. But the room we called ‘the void’ held a very emotional scene. There we went even bigger and fuller, with a longer reverb and a more dramatic sound, because the music leads to such sounds as well.

Originally, we didn’t have any reverb on the orchestra at all, just what was there when it was mixed. The orchestra is supposed to be in the same room in the virtual piece, even though we don’t see it that much, so we put the same reverb on them; then it felt like they were in the same room, and that worked fine.
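
The article doesn’t detail the console’s reverb engine, but the idea of per-room tones, short and controlled for the courtroom, long and dramatic for the void, can be sketched as presets driving a toy convolution reverb. All parameter values here are invented:

```python
import numpy as np
from scipy.signal import fftconvolve

SAMPLE_RATE = 48_000

# Hypothetical presets in the spirit Myers describes: the
# courtroom stays short and controlled so the percussive score
# doesn't wash out; the void gets a long, dramatic tail.
ROOM_PRESETS = {
    "courtroom": {"decay_s": 1.2, "wet": 0.2},
    "void":      {"decay_s": 4.0, "wet": 0.5},
}

def toy_reverb(dry, decay_s, wet):
    """Convolve with an exponentially decaying noise burst,
    then blend the tail back in with the dry signal."""
    n = int(SAMPLE_RATE * decay_s)
    ir = np.random.randn(n) * np.exp(-5.0 * np.arange(n) / n)
    tail = fftconvolve(dry, ir)[: len(dry)]
    tail /= np.max(np.abs(tail)) + 1e-9  # keep levels sane
    return (1.0 - wet) * dry + wet * tail

voice = np.random.randn(SAMPLE_RATE)  # stand-in vocal
p = ROOM_PRESETS["void"]
voice_in_void = toy_reverb(voice, p["decay_s"], p["wet"])
```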

Talk about mixing it all together—the pre-recorded orchestra and the live singers.
Conductor and musical director David Bloom had a member of his company mix the tracks together and sent me a stereo track, as well as all of the tracks separately, which was great when we went back to re-edit and change things. We could move instruments if we needed to. We used [Figure 53’s] QLab to combine the music with all the prerecorded vocals, because two of the five singers were prerecorded and did not perform live. Then all of that went into the console, and everything came in on different faders, so Christian mixed the pre-records and the live singers.
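
The actual balancing happened on the console’s faders, but the underlying operation, summing stems with per-stem gains onto one bus, reduces to a weighted sum. A minimal sketch; the stem names and gain values are invented:

```python
import numpy as np

def mix_stems(stems, gains):
    """Sum equal-length stems with per-stem gains, the way
    separate console faders combine onto one bus."""
    out = np.zeros_like(next(iter(stems.values())))
    for name, audio in stems.items():
        out += gains.get(name, 1.0) * audio
    return out

# Hypothetical stems standing in for the QLab outputs: the
# orchestra track plus the two prerecorded vocal parts.
n = 48_000 * 5
stems = {
    "orchestra":   np.random.randn(n),
    "vocal_pre_1": np.random.randn(n),
    "vocal_pre_2": np.random.randn(n),
}
gains = {"orchestra": 0.8, "vocal_pre_1": 1.0, "vocal_pre_2": 1.0}
bus = mix_stems(stems, gains)
```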

The three performers who sang live were each in an isolated booth built with draped walls. That was partly for sound isolation and partly for COVID reasons: they could take their face masks off when they were singing or speaking in their isolated area. Each of those performers wore a motion capture suit as well as the helmet/tablet rig.

Talk about the pluses and minuses of working with the performers in the curtained booths.
When they were designing the booths, I was part of the conversation, so I could make sure that we weren’t going to get crazy reflections off the walls. They used the drapes that are normally on the stage to hang around the booths. Two of the singers were prerecorded in the space, but when we played them back, they felt drier; we heard the room less on the pre-records than we did on the three live singers. So we added a little bit of reverb in QLab to make the prerecorded parts sound closer to the live ones. The live singers had a lot of freedom to move around in their booths, which was nice and felt safe and isolated from everyone. I really liked having the live performers to work with.

Tell us about your wireless mics and in-ear monitors selections.
Each actor had their own feed so they could get the balance they wanted. It was interesting to watch the opera singers figure out how much of their own voice they wanted to hear, because they’re not used to having that option. Some of them thought it was cool right away; some who didn’t want it at first came back later and took the feed in the end. It was the same with how much of each other they wanted to hear compared to the tracks. It was nice to have the flexibility and the ability to give them their own feeds. We also had a feed to stage management, on top of the ones that went out to VR and YouTube. We used the Shure PSM 900 wireless monitor system for the IEMs. For the wireless mics, we used the Shure Axient Digital system paired with DPA 4061 lav mics. All the Axient was run via Dante, which was nice.
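
A personal monitor feed is, at bottom, a weighted sum of sources with gains chosen by the performer. A minimal sketch with hypothetical singers and balances (“self” would really be each singer’s own mic):

```python
import numpy as np

def build_iem_feed(source_audio, balance):
    """Weighted sum of sources for one performer's in-ear feed."""
    n = len(next(iter(source_audio.values())))
    feed = np.zeros(n)
    for name, gain in balance.items():
        feed += gain * source_audio[name]
    return feed

# Invented balances: one singer wants lots of their own voice,
# another prefers mostly the tracks and the other singers.
iem_balance = {
    "singer_a": {"self": 1.0, "others": 0.6, "tracks": 0.7},
    "singer_b": {"self": 0.2, "others": 0.8, "tracks": 1.0},
}
n = 48_000
source_audio = {
    "self":   np.random.randn(n),  # stand-in sources
    "others": np.random.randn(n),
    "tracks": np.random.randn(n),
}
feed_a = build_iem_feed(source_audio, iem_balance["singer_a"])
```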

What were some of your other key equipment choices?
The DPA 4061 lavs were definitely my highest priority. I had a lot of volume to deal with, because these singers are used to filling 2,000-seat houses with no microphones. I wanted to make sure that we had a mix, and a mic element, that could handle that volume. For the console, we ended up with the Yamaha QL1. I wanted a larger console because we wanted more flexibility with the outputs; we were walking in not knowing exactly how the setup was going to be. Flexibility was important to me, but budget is important too, so budget-wise we ended up downsizing to the QL1. In hindsight that was great, because we ended up driving the gear across the country and the QL1 fit better in the car. But it did mean that we were pretty much maxed out on our mixes and our physical outputs. We rented the audio gear from Launch out of Los Angeles. They were fantastic. Dan [Tator] let us come into their shop and set up and test everything before we loaded it into the car and drove it across the country.

Leela Subramaniam as Miranda

I understand that instead of one continuous track, you split the audio into five separate segments. Why did you take that approach?
We discovered one of those things we didn’t know happened in VR, until it happened. The VR computer gets a ton of information thrown at it really quickly, and it was shifting the sound in an attempt to keep everything in sync. If the images are struggling to keep up, the system will slow the sound down to stay keyed to the images and then speed it back up. Which, of course, can’t happen in opera; we needed the music to sound like the music. We ended up splitting the one 18-minute track into five pieces. We took the four points in the show where the computer consistently choked and cut the track there. The scene would end, and the stage manager would run all the transition cues until the computer was in the next scene. All the warping that it might have done happened in silence instead of over music, so that worked. Then it meant we had to go back and rethink the transitions, and we created music for those transitions. There were a lot of late nights and early mornings working it out!
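
Mechanically, the fix reduces to slicing one long track at chosen timestamps so each piece plays as its own cue, with the transitions covering the gaps. A small sketch; the cut points are invented:

```python
import numpy as np

SAMPLE_RATE = 48_000

def split_at(audio, cut_points_s):
    """Split one long track into segments at the given
    timestamps, so each segment can run as its own cue."""
    cuts = [int(t * SAMPLE_RATE) for t in cut_points_s]
    bounds = [0] + cuts + [len(audio)]
    return [audio[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

# Short stand-in; the real track ran about 18 minutes. The four
# cut points are invented, marking where the computer choked.
track = np.zeros(SAMPLE_RATE * 18)
segments = split_at(track, [4.0, 8.0, 11.0, 15.0])  # -> five segments
```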

Was there anything on this project you didn’t expect?
It was a new technology and a new way of doing things. That’s what’s exciting and challenging to me. I admit that I enjoyed it because I knew I had Christian backing me up; when I came to him and said, ‘We’re changing everything,’ he could do it. Every day there was something I did not expect, but that’s the joy of it.

See more of Myers’ work at www.cricketsmyers.com
