WFS can also be used for theatrical sound design: to play soundtracks, Foley or prerecorded speech, to give a sonic character to a prop, or to reinforce voices. But what are the limits of realism, both from a technical standpoint and from a dramaturgical one?
Wave field synthesis allows sounds to be placed anywhere on stage. It is particularly well suited to reinforcing an actor's voice, so that the audience can hear the softer nuances of the lines in a large venue without the amplified voice taking on a greater scale than the body. It is also possible to work on the directivity of the voice by tailoring the high frequencies to follow the orientation of the actor.
Without this, we get a wall-of-sound impression, with an amplified voice that feels disconnected from and disproportionate to the performer. WFS also greatly enhances the intelligibility of dialogue between several actors on stage, since the psychoacoustic mechanisms our hearing uses to localise sounds can analyse what comes out of the WFS sound system.
But is realism always what we want to have?
The disconnection between an acoustic voice and its amplified sound in a regular stereo system can also be part of the sound design. This wall of sound is something we are well used to nowadays, whether we are listening at home on a stereo or attending a live show.
The sound design might at times want us to hear the medium itself; at that point the artificial becomes part of the artistic process. Imagine a scene with a strong cinematographic intention where the depth of the stage is deliberately flattened, for instance.
On the other hand, WFS makes it possible to create depth without placing speakers on or above the stage. This limits spill into the actors' head-worn microphones. We can even place sounds beyond the walls of the venue. An atmospheric music cue can be placed behind everything, and its sound can be softened by working on its HF directivity and on the level and tone of its reverb.
We can also attempt the acoustic equivalent of a curtain opening. Imagine the music is very frontal, taking up the whole front of the stage with little reverb. If you move it upstage, increasing at the same time the reverb as well as the level and delay curvature of the sounds, you can go from a wall of sound to the sound of a prop on stage, such as a radio or a TV set. There is a wide range of sonic aesthetics to be explored.
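This "curtain opening" gesture is essentially a ramp over a few spatial and reverb parameters. A minimal sketch, with entirely hypothetical parameter names and ranges (the 8 m travel, the widths and the send levels are illustrative, not taken from any real rig):

```python
def curtain_opening(t):
    """Crossfade a frontal 'wall of sound' into an upstage prop (t in 0..1).

    Hypothetical parameter ramp: the virtual source recedes upstage and
    narrows while its reverb send rises, turning a flat frontal image
    into a small, distant, localised one.
    """
    t = max(0.0, min(1.0, t))  # clamp the automation position
    return {
        "depth_m":     0.0 + t * 8.0,    # source moves 8 m upstage
        "width_m":     12.0 - t * 11.0,  # from full stage width to a point
        "reverb_send": 0.1 + t * 0.6,    # dry wall -> wet, distant prop
    }
```

In practice such a ramp would be driven by a show-control timeline or a fader, with each output mapped to the corresponding WFS and reverb parameters.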
Let's consider the relation between the sounds produced by the actors and props on stage and the wave field synthesis sounds. WFS increases the precision of placement through the curvature of the delays across the different speaker arrays, the attenuation of sounds with the distance between the source and each speaker, and the high-frequency directivity.
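The first two of these mechanisms can be sketched in a few lines. This is a simplified illustration, not a production WFS driving function: the 1/sqrt(r) gain law is one common 2.5D choice, and the array geometry is invented for the example.

```python
import numpy as np

C = 343.0  # speed of sound in air (m/s)

def wfs_delays_gains(source_xy, speaker_xy):
    """Per-speaker delay (s) and gain for a virtual point source.

    Delays follow the curvature of the wavefront: each speaker fires at
    the moment the spherical wave from the virtual source would reach it.
    Gains fall off with distance (1/sqrt(r) as a simple 2.5D choice).
    """
    src = np.asarray(source_xy, dtype=float)
    spk = np.asarray(speaker_xy, dtype=float)
    r = np.linalg.norm(spk - src, axis=1)      # source-to-speaker distances
    delays = r / C                             # curvature of the delays
    gains = 1.0 / np.sqrt(np.maximum(r, 0.1))  # distance attenuation, clamped
    return delays, gains

# 8-speaker linear array along x, virtual source 3 m upstage of centre
speakers = [(x, 0.0) for x in np.linspace(-3.5, 3.5, 8)]
delays, gains = wfs_delays_gains((0.0, 3.0), speakers)
```

The centre speakers fire first and loudest; the curvature of the resulting delay profile is what lets the listener localise the virtual source behind the array.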
But as they are, recorded Foley sounds played through the WFS system lack realism compared to live sounds produced on stage. They are missing the acoustic coupling of the dry sound with the acoustics of the venue. Our hearing analyses all the reflections, in all their complexity, to understand where the different sounds are coming from. Without reflections, it is as if we were hearing the sounds in an anechoic chamber. They feel like ghost sounds, missing the material presence of the stage sounds. What is more, our hearing picks up the reflections that come from the speakers themselves and bounce in front of them, betraying the position of the speakers in the array: we hear the sounds coming from the line of speakers once again, rather than from their virtual positions. Increasing realism would mean calculating the reflections of the sounds against the stage floor and walls.
A first step is to use the WFS system itself to mix the feeds to the reverb units. For instance, you can place a pair of positions at the front of the stage, left and right, feeding a short, bright reverb full of early reflections; do the same upstage but with a longer reverb, and on the left and right sides too. For the sides you can have two pairs with different tone and length. This gives a more complex quality to the virtual acoustics, which in turn produces a better sense of the positions. The wet signals coming out of the different reverbs are reinjected into the WFS system for placement and mixing; they should be placed not far from the feed positions.
Make sure to avoid feedback: never route the reverb returns back to the reverb sends. It is also good to apply a local attenuation or compression when a source comes close to a reverb feed, to avoid hotspots.
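Both ideas, distance-based sends to the reverb feed positions and a local duck to avoid hotspots, can be sketched as follows. The feed coordinates, the rolloff constant and the hotspot radius are hypothetical values chosen for the example:

```python
import math

# Hypothetical feed positions (metres) for the reverb pairs described above:
# front L/R -> short bright reverb, upstage L/R -> longer reverb.
REVERB_FEEDS = {
    "front_L": (-4.0, 1.0), "front_R": (4.0, 1.0),
    "up_L":    (-4.0, 8.0), "up_R":    (4.0, 8.0),
}

def send_levels(source_xy, feeds=REVERB_FEEDS, rolloff=6.0):
    """Distance-based send level from a virtual source to each reverb feed."""
    sx, sy = source_xy
    levels = {}
    for name, (fx, fy) in feeds.items():
        d = math.hypot(sx - fx, sy - fy)
        levels[name] = 1.0 / (1.0 + d / rolloff)  # 1.0 at the feed, fading out
    return levels

def hotspot_attenuation(source_xy, feeds=REVERB_FEEDS, radius=1.5):
    """Duck the sends when the source sits on top of a feed position."""
    sx, sy = source_xy
    nearest = min(math.hypot(sx - fx, sy - fy) for fx, fy in feeds.values())
    return min(1.0, nearest / radius)  # 0 at the feed, 1 beyond the radius
```

The overall send for a given feed would be the product of the two functions, so a source parked on a feed position does not pile dry and wet energy into the same spot.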
Once well tuned, this gives a good enough impression of realism to mix virtual sounds with live stage sounds. These reverbs are not artificial effects here; they are used to match the dry recorded sounds with the acoustics of the venue. Other reverb units may still be needed for more creative effects.
The next step would be to generate early reflections in the WFS system itself, by placing symmetrical copies of the sources as if they were reflected by the walls and floor, with optional filtering according to the material of the surfaces and the angle of incidence.
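This is the classic image-source construction: each reflecting plane gets a mirror copy of the source. A minimal sketch for the four surfaces discussed below, assuming an illustrative coordinate convention (floor at z = 0, upstage wall at y = depth, side walls at x = ±half_width):

```python
def image_sources(src, room):
    """First-order image sources for a rectangular stage volume.

    Mirroring the real source across each reflecting plane gives the
    virtual position from which its first reflection appears to radiate.
    `room` is (half_width, depth) in metres.
    """
    x, y, z = src
    half_w, depth = room
    return {
        "floor":       (x, y, -z),                  # mirror across z = 0
        "upstage":     (x, 2 * depth - y, z),       # mirror across y = depth
        "stage_left":  (-2 * half_w - x, y, z),     # mirror across x = -half_w
        "stage_right": (2 * half_w - x, y, z),      # mirror across x = +half_w
    }
```

Each image source would then be rendered through the WFS driving function like any other virtual source, with a per-surface filter approximating the material's absorption.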
Here the number of calculations for the WFS processor skyrockets. With 4 surfaces (floor, upstage wall, stage-left and stage-right walls), the number of calculations is multiplied by 5 for first reflections only, and by 17 for first and second reflections. It then becomes important to have a finely tuned processing algorithm, or to go for parallel computing: FPGA? OpenCL/CUDA on the GPU?
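The factors of 5 and 17 follow from counting image sources: the direct sound plus every reflection path that never hits the same surface twice in a row (4 first-order images, then 4 × 3 = 12 second-order ones). A small sketch to verify the count:

```python
SURFACES = ["floor", "upstage", "stage_left", "stage_right"]

def source_count(max_order):
    """Number of sources to render: the direct sound plus every
    reflection path up to max_order that never repeats a surface
    twice in a row (an immediate re-reflection is geometrically
    impossible in the image-source model)."""
    total, paths = 1, [[]]  # order 0: the direct sound alone
    for _ in range(max_order):
        paths = [p + [s] for p in paths for s in SURFACES
                 if not p or p[-1] != s]
        total += len(paths)
    return total
```

With 4 surfaces, `source_count(1)` gives 5 and `source_count(2)` gives 17, matching the multipliers above; each additional order multiplies the new paths by 3, which is why the cost grows so quickly.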