Have you ever wondered about the processing behind the Spat Revolution Immersive Audio Engine? How latency is handled, or how Spat handles speaker alignment to achieve smooth spatialization? Or are you interested in learning more about how live theatrical show setups can be done using Spat Revolution?
A blog post by Hugo Larin, Business Development for Spat Revolution
POWERING THE SPAT ENGINE, THE AUDIO INTERFACE AND THE SYSTEM LATENCY
Spat Revolution is a real-time, stand-alone software application for processing immersive audio. Regarding the processing behind it: although some immersive audio solutions come with dedicated hardware with fixed capacities, Spat doesn’t run on any specific hardware. The power of today’s modern computers has proven what can be achieved. Instead, Windows and macOS are supported, with hardware recommendations provided (a solid graphics card, a sufficient amount of RAM and multi-core processing) in order to achieve good results.
The ability to run on generic hardware also means that a vast pool of audio interfaces is available for the system setup. You might then ask what the actual latency of the system is. In Spat Revolution the latency is defined by the audio hardware components (a local audio interface, or network AVB, Dante or AES67 virtual audio entities) and the buffer size. The system shows the total latency in a status window in the software, combining the OS-reported latency of the audio component with the Spat buffer setting. The actual Spat setup and configuration (number of sources, rooms and so on) has no impact on the latency. It’s predictable and fixed.
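As a rough illustration of how those two numbers combine, here is a minimal sketch of the arithmetic. It is the standard buffer-latency calculation, not Flux’s exact reporting code, and the example values are assumptions:

```python
def total_latency_ms(os_reported_ms: float, buffer_size: int, sample_rate: int) -> float:
    """Simplified model of a reported total latency: the OS-reported
    interface latency plus the time one processing buffer represents.
    (An illustration of the principle, not the actual Spat formula.)"""
    return os_reported_ms + 1000.0 * buffer_size / sample_rate

# e.g. a 128-sample buffer at 48 kHz adds roughly 2.7 ms on top of
# whatever the OS reports for the interface (5 ms assumed here)
latency = total_latency_ms(5.0, 128, 48000)
```

Note that the buffer term is fixed by the settings alone, which is why the total stays predictable regardless of how many sources or rooms are configured.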
Figure 1: Hardware IO and Devices in Spat Revolution
When latency is critical we do have the option of a very low buffer size setting (again, taking the hardware qualification into account). Spat has been tested in In-Ear Monitor scenarios, which is a good example of how low latency can be achieved, as latency becomes critical when delivering binaural content to a vocalist or musician.
Figure 2: Latency report in Spat Software
MULTIPLE ROOMS, PANNING TECHNIQUES AND DEALING WITH SPEAKER SETUPS FOR LIVE
SPEAKER ARRANGEMENT ALIGNMENT PROVIDED IN EACH SPAT ROOM
Although Spat is not intended to replace the loudspeaker management system for tuning, it does offer the ability to compute each speaker output’s gain and delay in order to compensate for the location compromises that may be necessary because of physical limitations. The auto-compute performs a delay and gain calibration to the central reference point, which matters when you want extremely smooth transitions and your sources are moving through the soundscape all the time. Note that the actual panning computation uses the compensated speakers, the virtual speakers, resulting in a very smooth feel when moving the sources.
Figure 3: The Speaker Configuration window and the ability to “Compute” Gain and Delay.
See the physical location of the speakers in grey and the virtual speakers in orange
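The principle behind such an auto-compute can be sketched as follows: nearer speakers are delayed and attenuated so that every arrival lines up at the reference point. The function name, the speed-of-sound constant and the 1/r gain model below are illustrative assumptions, not Spat’s actual code:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C (assumed)

def align_to_reference(speakers, reference=(0.0, 0.0, 0.0)):
    """Sketch of delay/gain compensation to a central reference point:
    delay each speaker so its sound arrives together with the farthest
    speaker's, and attenuate nearer speakers to match levels (1/r model)."""
    dists = [math.dist(s, reference) for s in speakers]
    d_max = max(dists)
    comp = []
    for d in dists:
        delay_ms = 1000.0 * (d_max - d) / SPEED_OF_SOUND  # nearer -> more delay
        gain_db = 20.0 * math.log10(d / d_max)            # nearer -> quieter
        comp.append((delay_ms, gain_db))
    return comp

# Two speakers on one axis, 3 m and 6 m from the reference point:
# the near one gets ~8.7 ms of delay and ~-6 dB of gain.
comp = align_to_reference([(3.0, 0.0, 0.0), (6.0, 0.0, 0.0)])
```

The panner then addresses these compensated "virtual speakers", which is what makes movement through the soundscape feel continuous.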
More information on custom speaker arrangements is available in the Custom Speaker Configuration article
HOW USERS ARE DEPLOYING SPAT WITHIN LIVE AND THE “VIRTUAL ROOM” SPEAKER ARRANGEMENTS
Spat Revolution is used more and more in live theatrical shows, and many of these systems share common workflows. QLab is often used as the playback and show control system, while for live sound the Avid VENUE S6L is often the console of choice. In these scenarios the Spat Send plug-in gives the mixing engineer access to the source parameters on the control surface, and allows source parameter changes to be automated via the snapshot system. The same can be achieved using generic OSC commands with other mixing systems that provide that ability, such as the DiGiCo SD consoles.
That said, a common scenario is using QLab network cues to change any parameter of sources, rooms and reverbs in Spat, including making 2D moves of sources in the soundscape. QLab is very common for show control, where its network and MIDI cues drive a variety of systems simultaneously. An interesting aspect of using QLab and Spat in the off-site creation process is that you can start the show creation, then move to separate computers and add the audio mixing console to the equation.
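For readers curious what such a network cue carries on the wire, here is a minimal sketch that hand-encodes an OSC message using only the Python standard library. The address path `/source/1/xy`, the IP and the port are hypothetical placeholders; consult the Spat Revolution documentation for the real OSC namespace and your configured port:

```python
import socket
import struct

def osc_padded(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments."""
    typetag = "," + "f" * len(floats)
    msg = osc_padded(address.encode()) + osc_padded(typetag.encode())
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32, per the OSC spec
    return msg

# Hypothetical address path and destination -- check the actual OSC
# namespace in the Spat Revolution documentation before use.
packet = osc_message("/source/1/xy", 0.5, -0.25)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 8000))
```

In practice a QLab network cue (or a console snapshot) builds and sends exactly this kind of message for you; the point is that any OSC-capable device can drive the same parameters.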
The subject of remote control deserves a blog post of its own!
The common integration varies from sending 32, 64 or more post-fader channel audio feeds from the console via MADI or network audio to Spat, feeding the sources. Speaker outputs then either return to the desk to feed the various matrixes of the mixing system, or are picked up directly via MADI or network audio and sent to the loudspeaker management system.
In a recent big top production, Spat handled a circular stage positioned at the center of the four big top masts. The audience sat on three of the faces, while the last face was the actual backstage. Each of the three audience spaces was covered by an L and R speaker cluster position on the masts, while a third speaker position sat in the center at a higher elevation. A single behind-stage L and R speaker cluster position was used as well (L and R when facing the main front face).
All around the big top were six rear “surround” speakers, positioned so that each pair of two speakers covered one audience space as rear speakers.
The multi-room concept was deployed for this. A first virtual room consisting of 5 outputs (L, C, R and 2 rears), similar to a surround layout, was used to create a multichannel bed that was delivered to the 3 audience spaces. This allowed an immersive base mix to be delivered to each of the spaces using the VBAP panning technique.
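For the curious, VBAP in its simplest two-dimensional, pairwise form solves a small linear system so that the two speakers flanking the source direction reproduce it with constant power. This is a textbook sketch of the technique, not Spat’s implementation:

```python
import math

def vbap_2d(source_deg: float, spk1_deg: float, spk2_deg: float):
    """Pairwise 2-D VBAP: express the source direction p as a weighted
    sum of the two speaker direction vectors, then power-normalize."""
    p = (math.cos(math.radians(source_deg)), math.sin(math.radians(source_deg)))
    l1 = (math.cos(math.radians(spk1_deg)), math.sin(math.radians(spk1_deg)))
    l2 = (math.cos(math.radians(spk2_deg)), math.sin(math.radians(spk2_deg)))
    # Solve [l1 l2] * (g1, g2) = p by Cramer's rule
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.sqrt(g1 * g1 + g2 * g2)  # constant-power normalization
    return g1 / norm, g2 / norm

# A source dead center between speakers at +30 and -30 degrees
# lands with equal gain in both
g_l, g_r = vbap_2d(0.0, 30.0, -30.0)
```

In a real layout the panner first picks the speaker pair (or triplet, in 3-D) that brackets the source direction, then applies exactly this kind of gain solve.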
While the rears were used for the base surround bed, a separate virtual room using the 6 surround speakers was deployed in order to create rear effects for the complete audience, for example when spinning a sound around the big top. This virtual room also included the backstage L and R speakers, and Spat’s gain and delay alignment aligned all speakers to their virtual positions. The point of adding the two backstage speakers was to give a feeling of sound coming from behind the stage or spinning behind it; again, the VBAP panning technique was used for this virtual room.
Another Spat room used all the single and cluster loudspeakers, this time with the KNN panning technique, allowing a source to be positioned anywhere in the audience and really giving the feeling that the sound emanated from wherever the designer wanted, or wherever the action was.
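The idea behind k-nearest-neighbour panning can be sketched simply: only the k speakers closest to the source position receive signal, weighted by proximity, which is what lets a source sit "in" the audience rather than on a ring around it. The weighting below is an illustrative assumption, not Spat’s algorithm:

```python
import math

def knn_pan(source, speakers, k: int = 3):
    """Illustrative KNN panning: route the source to its k nearest
    speakers with inverse-distance weights, power-normalized."""
    dists = [(math.dist(source, s), i) for i, s in enumerate(speakers)]
    nearest = sorted(dists)[:k]                       # k closest speakers
    weights = [(i, 1.0 / max(d, 1e-6)) for d, i in nearest]
    norm = math.sqrt(sum(w * w for _, w in weights))  # constant power
    gains = [0.0] * len(speakers)
    for i, w in weights:
        gains[i] = w / norm
    return gains

# Four speakers at the corners of a 10 m square, source near corner 0:
# corner 0 dominates, and the far corner stays silent with k=2
gains = knn_pan((1.0, 1.0), [(0, 0), (10, 0), (0, 10), (10, 10)], k=2)
```

Because unselected speakers stay fully silent, localisation is sharp near the chosen position, at the cost of the wide, even coverage a DBAP-style approach gives.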
To complement this, some sources could be patched into a binaural room, which became a feed that could be sent to recorders and provide a spatial experience on headphones.
Figure 4: All speakers with KNN panning
Figure 5: 5.1 soundscape delivered to multiple zones
Figure 6: Different speaker arrangements and panning types used simultaneously in one and the same project
Another theatrical project included the strategy of sending the 12 artist microphone post-fader feeds to a room for real-time tracking of these artists (the actors on stage). Spat’s ability to integrate with the tracking system allowed tracking devices (known as beacons) to be attached to the source actors. For this, the DBAP panning technique was chosen, as no assumption could be made as to where the audience would be sitting and a central listening point didn’t exist. This provided good signal distribution while still offering some localisation as the actors were tracked across this very wide stage. Only a portion of the speaker rig was used here: the 7 front clusters and their delays. Another room, for a different effect and using a different set of loudspeakers, was used too; it received sources from a fixed number of console aux buses (send buses to the Spat immersive audio software).
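DBAP’s appeal in exactly this situation (no sweet spot, audience anywhere) is that every speaker plays the source, attenuated by its distance with a chosen rolloff. A textbook sketch of the idea follows; the rolloff value and function names are assumptions, not Spat’s exact implementation:

```python
import math

def dbap_gains(source, speakers, rolloff_db: float = 6.0):
    """Distance-based amplitude panning (DBAP): all speakers receive the
    source, with levels falling off by `rolloff_db` per doubling of
    distance, then power-normalized across the whole array."""
    a = rolloff_db / (20.0 * math.log10(2.0))  # exponent from dB-per-doubling
    raw = [1.0 / max(math.dist(source, s), 1e-6) ** a for s in speakers]
    norm = math.sqrt(sum(g * g for g in raw))
    return [g / norm for g in raw]

# A source at the center of four equidistant speakers spreads equally
gains = dbap_gains((0.0, 0.0), [(1, 0), (0, 1), (-1, 0), (0, -1)])
```

Since no speaker ever drops to zero, every seat hears the source at a usable level, while the distance weighting still pulls the perceived position toward the tracked actor.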
Interested in this conversation? Follow our different blog articles!
Stay tuned – Subscribe to our newsletter