
A Lone Voice: Joshua Reid’s Sound Design for Jefferson Mays’ One-Person A Christmas Carol

Michael S. Eddy • Sound Perspective • January 6, 2021

Jefferson Mays in Charles Dickens’ A Christmas Carol

For Charles Dickens’ A Christmas Carol, the beloved holiday classic, actor Jefferson Mays served up a tour de force performance, playing over 50 different roles. Conceived by Michael Arden and Dane Laffrey, this wonderfully moving adaptation of the classic tale by Mays, Arden & Susan Lyons originally premiered at Los Angeles’ Geffen Playhouse in 2018. For the 2020 holiday season it was restaged and filmed at New York’s United Palace theater and streamed from November 28 through January 3, with a portion of ticket sales benefitting partner theaters around the U.S. that have been devastated by the pandemic. This beautiful production is again directed by Arden, with a creative team that includes Laffrey as scenic and costume designer, lighting designer Ben Stanton, projection designer Lucy Mackinnon, and sound designer Joshua D. Reid.

We caught up with Reid right after filming to discuss how he approached his sound design to support the narrative and Mays’ performance of the show’s more than 50 roles. Joining the team early in the original rehearsals allowed him to help shape the soundscape. “I was very fortunate to be a part of the creation of this show from the first day of rehearsal, working with Michael and Jefferson with a sound system in the rehearsal room,” says Reid. “We discovered that with all of the characters Jefferson had to embody, the sound of the show needed to help make these transitions as seamless as possible. I was with them for about four weeks before we got into the tech process. So with tech and previews, that really gave me a seven-week window to help discover how sound became part of the show with them.”

Building an Aural Soundscape
The first way that they dealt with the character transitions was with vocal reinforcement, by creating vocal profiles and locations for each of the characters within the show. “Most of these are very subtle shifts in reverb and location placement; but sometimes they are more drastic, such as the ghosts,” Reid comments. “The more subtle shifts in how the voice is treated help the audience member understand either which character is speaking, or where the character is. For instance, the narrator has a different room resonance than Scrooge does when he is in the counting house; and Scrooge has a different resonance in his bedroom… and so on with the other 50+ characters of the show. Sometimes these are subtle changes in reverb and placement throughout the theater, and other times (such as the ghosts) they’re dramatic shifts in pitch, reverb, echo, and tone processing. All these vocal changes are built into the cue structure of the show, which the sound mixer took as manual cues.”
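The per-character “vocal profiles” Reid describes amount to a lookup of processing parameters keyed by character and location, with each cue selecting one. As a minimal illustrative sketch in Python (every parameter name and value here is hypothetical, not taken from the production):

```python
# Hypothetical vocal-profile table: each character/location pairs a reverb
# setting with a placement and optional pitch treatment. Values are invented
# for illustration only.
VOCAL_PROFILES = {
    "narrator":         {"reverb_decay_s": 1.2, "pan": 0.0,  "pitch_shift_semitones": 0},
    "scrooge_counting": {"reverb_decay_s": 0.6, "pan": -0.2, "pitch_shift_semitones": 0},
    "scrooge_bedroom":  {"reverb_decay_s": 1.8, "pan": 0.1,  "pitch_shift_semitones": 0},
    "marley_ghost":     {"reverb_decay_s": 4.0, "pan": 0.0,  "pitch_shift_semitones": -3},
}

def profile_for(cue: str) -> dict:
    """Return the processing parameters for a cue, falling back to the narrator."""
    return VOCAL_PROFILES.get(cue, VOCAL_PROFILES["narrator"])
```

The subtle shifts Reid mentions would be small deltas between neighboring entries (room decay, placement), while a ghost entry carries the dramatic pitch and reverb changes.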

The second way that Reid reinforced the show was with the various soundscapes and music choices. “We discovered in rehearsal that building an aural landscape around Jefferson helped our ability to tell the story,” explains the designer. “The audience could expand their imagination more on the script, the characters, and the acting when they could also imagine the world surrounding them. All the environments of the show therefore have some sort of naturalistic environment surrounding them; embodying where the location is—how busy are the street noises, what part of town are they in—and how the room would react to sound. The music choices were given a great amount of thought in conception as well; constantly asking ourselves, ‘Why is this piece here? What does it say about the story?’ That comes from a very heavy musical theater background, where we would ask ourselves why a character would use song and music to tell the story.”

Jefferson Mays in Charles Dickens’ A Christmas Carol

The last, and subtlest, element of the sound design is the one whose results Reid is most pleased with. “In recreating the aural environments of the show, we also felt the need for additional vocalizations to create these worlds. Elements such as the boy through the window, and the guests at Fred’s Christmas party are the most prominent examples of these; but placed throughout the show there are carolers on the street, Bob Cratchit’s conversation with Fred at the counting house door, young boys yelling greetings at each other outside of the schoolhouse, etc. In keeping with the tradition of a one-person show, all these vocalizations are recordings of Jefferson as an extension of his own performance onstage. All considered, we spent more than three full days of recording variations on these elements to use in the production.”

Processing it all Live and in Real Time
For the vocal profiles, Reid used Ableton as a live vocal processor for the show. Reid continues, describing the equipment and the process, “The ability for Ableton to work in real time was a necessity since we would be doing this in a live scenario. Maxine Gutierrez, the sound mixer for the production, operated the level of the vocal mic and the taking of the QLab cues. Built into the structure of the QLab cues is the control for which vocal profile Ableton is using to treat Jefferson’s voice. There are more than 50 vocal profiles and reverb treatments used throughout the show for the various characters and locations, with the ghosts being the most obvious vocal treatment applied. What was different with this was that all these effects needed to be processed live and in real time. There are a lot of great plug-ins that are available for post-production audio, but not all of them can be used within the real-time tolerance of a live performance. Finding the right combination of effects for each of the voices, that also fit within the tolerance of a live production, took a lot of time to get right.”
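The article doesn’t say how the QLab cues tell Ableton which profile to use; a common way to switch presets or racks in Ableton from QLab is a MIDI Program Change message fired by the cue. A hypothetical sketch of the raw message encoding (the channel and program numbers are invented):

```python
def program_change(channel: int, program: int) -> bytes:
    """Encode a raw MIDI Program Change: status byte 0xC0 | channel, then program."""
    if not (0 <= channel < 16 and 0 <= program < 128):
        raise ValueError("channel must be 0-15, program 0-127")
    return bytes([0xC0 | channel, program])

# A cue selecting hypothetical vocal profile #7 on MIDI channel 1 (0-indexed):
marley_cue = program_change(channel=0, program=7)
```

In practice a QLab MIDI cue would emit this message to a virtual MIDI port that Ableton listens on, so the mixer’s single “go” both fires playback and re-routes the vocal chain.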

While some might think that the voices of the ghosts—and the effects on them—are recorded, “they are live,” says Reid. “That was another thing that we toyed with during the rehearsal process, whether we record those ghosts and have him emulate along with them. And really what we found was that it became more natural when Jefferson had control over those voices live.” Fortunately for Reid, Mays really had all the different vocalizations down. “One of the things that’s really, really great about Jefferson is that he’s very in tune with his surroundings. The vocal treatment of the ghosts also gave Jefferson something new to experiment with as an actor. Because he could hear how his own voice was being changed and treated, he would change the timbre and tone of his own voice to reflect both the character and the output of the processed vocal. It became a great collaboration between Jefferson and me, to figure out how the voice needed to be treated and how his own vocal inflections would help his own transition between the different characters. A lot of that came through the rehearsal process. He was listening to the sound cues of the show from day one of rehearsal—not only at the Geffen, but also for the rehearsal process for the filming of the New York show.”

The next steps for Reid were to layer in the combination of Foley and sound effects needed to create and place the characters in the soundscape of the show. “The majority of the sound effects in the show have been pulled from sound effects libraries that I’ve collected and built over the years,” Reid comments. “A lot of the material was then changed to fit within the structure of the show, as well as matching them musically in tone and pitch to the aural landscape or music that was surrounding them. Of course, there are some sound effects that are so specific to our use in the production—walking across the floor, or the cane on the floor, or doors opening, closing, and doors latching—that we recorded, either in the rehearsal space or during the tech process.” 
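Matching a library effect “in tone and pitch” to the surrounding music usually means transposing it by some number of semitones; under equal temperament, resampling by 2^(n/12) raises the pitch n semitones. A small sketch of that arithmetic (the transposition approach is an assumption; the article doesn’t describe Reid’s tooling):

```python
def playback_rate(semitones: float) -> float:
    """Resample ratio that transposes audio by `semitones` in equal temperament:
    +12 doubles the rate (one octave up), -12 halves it (one octave down)."""
    return 2.0 ** (semitones / 12.0)
```

For example, nudging a door latch up three semitones to sit inside the key of the underscore would mean playing it back at roughly 1.19x speed.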

The creative team, which all had worked on the show from its beginnings, were very closely in tune, and worked to support each other’s departments. “We were very collaborative, in terms of expanding on ideas,” Reid explains. “For instance, when Marley’s ghost comes up the staircase and you hear the chain thuds, in cadence, Ben and Lucy really wanted to do something with lighting and projection there. So that turned into, with every footstep I sent a trigger to lighting and projection, which they jumped on board with, and did their own creative work on top of what we were doing in sound.”
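Reid doesn’t name the protocol behind those per-footstep triggers; OSC over the network is one common way lighting consoles and media servers accept such cues from a sound rig. A hypothetical pure-Python encoding of a trigger message (the address and argument are invented):

```python
import struct

def _pad4(b: bytes) -> bytes:
    """Pad to a multiple of 4 bytes, as OSC requires for strings."""
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, *args: int) -> bytes:
    """Encode a minimal OSC message carrying int32 arguments."""
    out = _pad4(address.encode() + b"\x00")                     # null-terminated address
    out += _pad4(("," + "i" * len(args)).encode() + b"\x00")    # type-tag string
    for a in args:
        out += struct.pack(">i", a)                             # big-endian int32
    return out

# One footstep trigger, which could be sent via UDP to the lighting/projection rigs:
trigger = osc_message("/footstep", 1)
```

Each footstep cue in QLab could fire one such packet, letting lighting and projection chase the chain thuds in lockstep with sound.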

The front-of-house sea of monitors, creative desks, and tech tables, socially distanced during filming at the United Palace theater.

Natural Reverberation
During the filming of the production at the United Palace, Reid and his sound team came up with a solution to deal with the void of the empty theater and its potential effect on capturing good audio. The theater sound team included Associate Sound Designer Daniel Lundberg, Assistant Sound Designer DJ Potts, FOH Audio Engineer (A1/Mixer) Maxine Gutierrez, and Deck Audio Engineer (A2) Mike Tracey. “Our process took place in a few steps,” says Reid. “The major new addition to this process was having Jefferson on in-ear monitors, so that we could lower the volume of playback in the house and concentrate on getting much cleaner vocals for film. During rehearsals, Jefferson was on microphone and using his in-ear monitors to get used to how he would hear the show. This continued during our technical rehearsal period. Toward the end of tech, we started turning down the stage and the house system until everyone on the production was hearing the show through in-ear monitors, headphones, program speakers, or the intercom system. This allowed us to get crystal clear, isolated vocals of the show. A great side effect of this is that the theater itself acted as a natural reverb chamber. Without amplification, we were also able to record audio from various points in the theater that resonated his natural voice differently.”

Key Equipment Choices
Reid outlines for us some of his key equipment choices for the production filming, with equipment supplied by Masque Sound and Production. “Our system primarily consisted of QLab for audio playback, Ableton for vocal processing, and Pro Tools for multitrack recording. At FOH, Maxine operated the show and controlled the vocal levels from a Yamaha CL5 Audio Console, and the system was networked together over Dante. Jefferson actually wears three wireless microphones for the production. Getting all those elements to not only work together within the crazy New York RF landscape, but also with each other, was really key. We used Sennheiser MKE1 elements, paired with Sennheiser SK6212 transmitters and EM6000 receivers. In addition, he also wears a Shure PSM1000 in-ear system (P10R & P10T) to hear the show and communicate with the director.”

Theater Versus Film
Coming from a theater background, Reid had to make some adjustments to deal with the filming. He feels that operating the show with in-ear monitors was a big adjustment, technically as well as creatively. “Because we had performed the production live at the Geffen for an audience, and because Jefferson is so in tune with the acting environment he is in, we focused a lot of time during the rehearsal and technical rehearsal making sure that what he was hearing was an accurate representation of the show in real time and as he would have heard it live on stage. It was important to re-create a binaural environment that he could feel comfortable and natural performing in.”

If Reid were speaking with another theater sound designer, he says, “my advice would be—try to figure out the medium in which your art is going to be delivered and work backward from there. In film, as in live theater, we construct the show based on what the audience hears, but it is constructed in different ways. With TV and film, the way audio is constructed is completely different than how we may do it in live theater. After all, we need to replicate a design eight shows per week; they only need to capture it correctly once. Our advantage with this show was that the production was already essentially built. We were able to discuss with our film crew what they needed from our audio team, and how to de-construct our production so that they could layer it back together. Having those conversations very early in the process was key as the process evolved and moved forward in the theater.”

The resulting captured performance is a truly remarkable production of A Christmas Carol. Having seen the stream, I can honestly say that even if you feel you have seen every possible version of A Christmas Carol, you have not until you have seen this one. Whether it is the streamed version or a remounted stage version in the future, Jefferson Mays’ performance and the work of the entire creative team make this stunningly singular production a must see—and hear.
