Topics covered in this article:
- A brief history of Dolby Atmos and concepts
- Dolby Atmos Content Creation Tools
- Consumer playback devices and services
A brief history of Dolby Atmos
In 2012, Dolby introduced a new sound experience for movies. For those who saw Disney Pixar’s animated film Brave in Dolby Atmos, the audio presentation went beyond conventional surround sound by providing a truly immersive sonic experience for the listener, with sound coming from above as well as from the front, sides, and back. That movie also marked the beginning of a new creative approach for audio content creators.
Dolby Atmos receives a lot of attention for its ability to reproduce sound from many speakers, including those placed overhead. It is equally important to understand the new approaches used in the process of creating Dolby Atmos content. Before diving in, there are some core concepts to unpack:
Music in Surround Sound vs Music in Dolby Atmos
Mixing music in Surround Sound has existed in various forms for years, starting with Quadraphonic sound. Since then, 5.1 surround has become the de facto surround format for music. 5.1 adds a Center channel and Left and Right Surround channels to the stereo pair, along with a dedicated LFE (low frequency effects) channel.
Mixing in Dolby Atmos changes that approach and frees the mixer from having to think about channels, instead allowing them to place sound anywhere in three dimensions. This creates a sound field that envelops the listener and that can translate to multiple playback environments.
Atmos First
Creating music in Dolby Atmos is both future-proof and backwards compatible. Dolby Atmos mixes sound amazing on a wide range of Dolby Atmos enabled playback devices, and they also sound great when played back in regular stereo on devices that are not Dolby Atmos enabled. Additionally, the Dolby Atmos workflow makes it easy to derive a stereo mix for delivery to traditional stereo platforms where Atmos is not supported.
The best results across formats are achieved by working in Dolby Atmos from the beginning and deriving stereo, rather than working in stereo first and embellishing the mix later to add immersive elements.
Bed and Object Audio
As discussed above, traditional Surround music is created by bussing or panning sounds between channels that correspond to speaker locations: Left, Center, Right, Left Surround, Right Surround and LFE (low frequency effects). For 7.1, Left Rear Surround and Right Rear Surround speakers are added.
Mixing in Dolby Atmos still allows for this time-tested approach, adding Left and Right overhead channels. In Dolby Atmos these channels are referred to as Bed audio. Bed audio can be from stereo up to 7.1.2 (the .2 being the Left and Right overhead channels). There are a variety of use cases for Bed audio depending on the creative process of the mixer, the capabilities of the DAW, and the delivery requirements for mastering.
Dolby Atmos introduces the concept of audio Objects. Objects aren’t panned to specific outputs; instead, they are panned in 3D space. Object audio is captured along with 3D positional X (left/right), Y (front/back) and Z (up/down) coordinates, as well as size, which expands the sound field of an Object. This size and positional information is captured as any changes are made in real time and is referred to as Object audio metadata.
A mixer can choose to create Dolby Atmos music with Beds, Objects or both, but it is audio Objects that allow for the most creative expression when creating immersive music.
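To make the idea of Object audio metadata more concrete, here is a minimal sketch that models a single metadata snapshot as a small Python data structure. The field names, value ranges, and coordinate conventions are assumptions for illustration only; the actual metadata is defined and carried by the Dolby Atmos panner and Renderer.

```python
from dataclasses import dataclass

@dataclass
class ObjectMetadataSnapshot:
    """Illustrative snapshot of one audio Object's panning metadata.
    Field names and value ranges are teaching assumptions, not the
    actual Dolby Atmos metadata format."""
    object_id: int   # which Object this snapshot belongs to
    time_s: float    # when the snapshot was captured, in seconds
    x: float         # left/right position (assumed range -1.0 left to +1.0 right)
    y: float         # front/back position (assumed range -1.0 back to +1.0 front)
    z: float         # up/down position (assumed range 0.0 ear level to 1.0 overhead)
    size: float      # 0.0 = point source, 1.0 = maximally expanded sound field

# As the mixer moves the panner in real time, a stream of snapshots is captured
# alongside the audio, e.g. a vocal Object drifting from front-left to overhead:
automation = [
    ObjectMetadataSnapshot(object_id=1, time_s=0.0, x=-0.8, y=0.9, z=0.0, size=0.1),
    ObjectMetadataSnapshot(object_id=1, time_s=2.0, x=-0.2, y=0.5, z=0.6, size=0.3),
    ObjectMetadataSnapshot(object_id=1, time_s=4.0, x=0.0, y=0.0, z=1.0, size=0.5),
]
```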
LFE and Bass Management
The topics of the LFE channel and Bass Management are not unique to Dolby Atmos, but they are important to understand for those without experience mixing in surround. The LFE channel has its roots in Cinema, where a separate track was required to carry low frequency audio without overloads. As the name implies, it is used for Low Frequency content and is reproduced by a subwoofer. However, the LFE channel is not the only audio reproduced by the subwoofer. Bass management is a process in which a crossover frequency is used to redirect low frequency content from any channel to the subwoofer.
The LFE channel is addressed using Bed audio only (Object audio is full range) and is discarded when Dolby Atmos content is played back in stereo or when deriving stereo PCM for delivery. In general, the LFE channel should be used sparingly and only for very low frequency audio that could cause clipping in Object audio. To ensure low frequency content will be heard in stereo, it is best to leave it in other Bed channels or Object audio.
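To give a rough feel for how bass management works (a minimal sketch, not any specific Dolby implementation), the example below splits one full-range speaker feed at an assumed 80 Hz crossover: content below the crossover is redirected to the subwoofer feed, while everything above it stays with the main speaker.

```python
import numpy as np
from scipy import signal

FS = 48_000          # sample rate in Hz
CROSSOVER_HZ = 80.0  # assumed crossover frequency; real systems vary

def bass_manage(channel: np.ndarray):
    """Split one full-range speaker feed into main-speaker and subwoofer feeds."""
    lp = signal.butter(4, CROSSOVER_HZ, btype="lowpass", fs=FS, output="sos")
    hp = signal.butter(4, CROSSOVER_HZ, btype="highpass", fs=FS, output="sos")
    sub_feed = signal.sosfilt(lp, channel)   # lows redirected to the subwoofer
    main_feed = signal.sosfilt(hp, channel)  # everything else stays on the speaker
    return main_feed, sub_feed

# Example: one second of a 40 Hz tone mixed with a 1 kHz tone.
t = np.arange(FS) / FS
test = 0.5 * np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
main, sub = bass_manage(test)
```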
Rendering
Dolby Atmos is an adaptive format, meaning that it can be played back on a wide variety of Dolby Atmos consumer devices. The process of adapting the Bed audio and Object audio to the playback device and environment is performed by an Object Audio Renderer (OAR); this is referred to as Rendering. Rendering takes place during both Dolby Atmos content creation and consumer playback. For listeners, the sound will be rendered to the optimal speakers to reproduce the most accurate immersive audio experience, true to the mixer’s creative intent.
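To give a feel for what a renderer does (a toy sketch only, not the Dolby OAR algorithm), the example below spreads an Object across an arbitrary speaker layout using simple distance-weighted gains, so the same Object position yields sensible speaker gains regardless of how many speakers the room has.

```python
import numpy as np

def naive_render_gains(obj_xyz, speaker_xyz):
    """Toy illustration: weight each speaker by inverse distance to the Object's
    position, then normalize for constant power. The real Dolby Object Audio
    Renderer uses a far more sophisticated algorithm."""
    obj = np.asarray(obj_xyz, dtype=float)
    speakers = np.asarray(speaker_xyz, dtype=float)
    dist = np.linalg.norm(speakers - obj, axis=1)
    gains = 1.0 / (dist + 1e-3)                  # closer speakers get more level
    return gains / np.sqrt(np.sum(gains ** 2))   # constant-power normalization

# Hypothetical 5.1.4 speaker positions (x = left/right, y = front/back, z = height),
# LFE omitted. An Object placed directly overhead favors the four height speakers.
layout_5_1_4 = [(-1, 1, 0), (1, 1, 0), (0, 1, 0), (-1, -1, 0), (1, -1, 0),
                (-1, 1, 1), (1, 1, 1), (-1, -1, 1), (1, -1, 1)]
print(naive_render_gains((0.0, 0.0, 1.0), layout_5_1_4))
```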
Binaural Rendering
When creating Dolby Atmos content, the mixer typically monitors over loudspeakers. Many consumers also listen to Dolby Atmos over loudspeakers (or the speakers built into TVs, soundbars, smart speakers, etc.).
Dolby Atmos is also an amazing experience over headphones. When creating Dolby Atmos content, mixers can monitor over headphones using the Binaural Renderer. The Binaural Renderer renders all of the Bed and Object audio to create a compelling immersive mix over headphones using head-related transfer function (HRTF) filters. This replicates the experience of listening to immersive audio on speakers as closely as possible while wearing headphones.
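Conceptually, binaural rendering filters each rendered element through a pair of head-related impulse responses (HRIRs), one per ear, corresponding to its position. The sketch below shows that core convolution step with placeholder impulse responses; a real HRTF dataset and the Dolby Binaural Renderer’s actual processing are considerably more sophisticated.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono_source: np.ndarray,
                hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Render one mono source to a 2-channel binaural signal by convolving it
    with the left-ear and right-ear impulse responses for its position."""
    left = fftconvolve(mono_source, hrir_left)
    right = fftconvolve(mono_source, hrir_right)
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Placeholder HRIRs; in practice these come from a measured HRTF dataset and are
# selected (or interpolated) per source position.
hrir_l = np.zeros(256); hrir_l[0] = 1.0   # direct, full-level path to the left ear
hrir_r = np.zeros(256); hrir_r[24] = 0.7  # delayed, attenuated path to the right ear
source = np.random.randn(48_000)          # one second of test audio at 48 kHz
binaural = binauralize(source, hrir_l, hrir_r)
```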
In general, a mix created while monitoring via loudspeakers will reliably translate to the binaural renderer. A mix created on headphones only via the binaural renderer can translate well to loudspeaker playback. However, best practice is to always verify a mix over loudspeakers. For the purposes of this training course, you can monitor using loudspeakers or over headphones using the Binaural Renderer.
To experience the difference between stereo audio and binaural audio, click the link below to access the Dolby Atmos Music visualizer which allows you to listen to music and switch between stereo and Dolby Atmos. Note that LFE audio is preserved with the binaural renderer.
https://www.dolby.com/atmos-visualizer-music/
Wider Dynamic Range
Dolby Atmos music spreads the sound field around and above the listener. This allows for greater space in the mix, letting subtle notes and timbres exist freely rather than needing to cut through louder sonic elements. You will find yourself reaching for less EQ and less compression, resulting in a more natural-sounding music track.
Loudness Targets
With traditional channel-based music production it is typical to aim for a mix to be as loud as possible without clipping. Mixers often employ master bus compression and limiting to achieve this.
Working in Dolby Atmos is different. Dolby Atmos Music mixes need more headroom, and Dolby Atmos Music delivery employs Dolby codecs that use metadata to ensure proper playback level.
The specified loudness target allows a single Dolby Atmos mix to play over speakers, in headphones, and in many other environments without modification, and it helps to ensure that the mix translates across them.
When mixing music in Dolby Atmos, traditional master bus dynamics processing is not available. Mixing to loudness targets may require attenuation control with groups and/or VCAs. This will be covered later in this course.
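As one way to check a mix against a loudness target, the short sketch below measures the integrated loudness of a stereo re-render using the open-source pyloudnorm library. The file name and the target value are placeholders; take the actual target and measurement method from the current Dolby Atmos Music delivery specification.

```python
import soundfile as sf     # pip install soundfile
import pyloudnorm as pyln  # pip install pyloudnorm

TARGET_LUFS = -18.0  # placeholder; use the value in the current Dolby delivery spec

# "rerender.wav" is a hypothetical stereo re-render exported from the Renderer.
audio, rate = sf.read("rerender.wav")

meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(audio)  # integrated loudness in LUFS

print(f"Integrated loudness: {loudness:.1f} LUFS "
      f"({loudness - TARGET_LUFS:+.1f} LU relative to target)")
```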
Dolby Atmos Content Creation Tools
To accommodate such a wide range of listening environments and playback devices, Dolby has created a variety of tools and technologies to allow for the creation of Dolby Atmos mixes for music, film, TV, and games.
A trial of the Dolby Atmos Renderer software, along with creation, delivery, and distribution specifications and other supporting resources, can be found at dolby.com/music/create.
Dolby Atmos content creation tools can be divided into a few different categories and functions. These tools can be combined to provide an optimal workflow that fits with how you currently work.
Core tools and functions for Dolby Atmos content creation are:
- DAW software to record, edit, and mix. The DAW software may have the ability to create Dolby Atmos object metadata natively, or a Dolby Atmos panning plug-in may be used.
- A Dolby Atmos Renderer for rendering the audio and metadata to the playback speakers and/or to headphones.
- A Dolby Atmos Renderer or DAW that can record or export (bounce) a finished Dolby Atmos master file for encoding and distribution.
DAWs with Native Dolby Atmos Panners
- Apple Logic Pro
- Avid Pro Tools Ultimate
- Blackmagic Design DaVinci Resolve
- Steinberg Nuendo
- Merging Technologies Pyramix Premium
DAWs using the Dolby Atmos Music Panner Plug-in
- Ableton Live
- Apple Logic Pro
- Avid Pro Tools (both Ultimate and non-Ultimate)
The Dolby Atmos Renderer software
- Dolby Atmos Production Suite (running as an internal Renderer on the same computer as the DAW)
- Dolby Atmos Mastering Suite (running as an External Renderer on a dedicated workstation with MADI or Dante I/O)
DAWs with Native Rendering to Speakers
- Apple Logic Pro
- Blackmagic Design DaVinci Resolve
- Steinberg Nuendo
DAWs with Native Rendering to Headphones
- Apple Logic Pro
DAWs that can import and export a Dolby Atmos Master file
- Apple Logic Pro
- Avid Pro Tools Ultimate
- Blackmagic Design DaVinci Resolve
- Steinberg Nuendo
Monitoring a mix in Dolby Atmos
Mixing Dolby Atmos Music should be done on loudspeakers when possible. The reference loudspeaker configuration for Dolby Atmos Music production is 7.1.4 (seven ear-level speakers, a subwoofer, and four overhead speakers).
Where the reference configuration is not practical, a smaller 5.1.4 layout can be used.
Quick Start Videos for specific DAWs:
The link below provides a series of quick start videos for Pro Tools Ultimate, Logic Pro, and Ableton Live. These can be useful as a visual guide to DAW setup for your first Dolby Atmos content creation experience.
Dolby Atmos internal Renderer signal flow
The diagram below provides a quick visual reference to signal flow using the Dolby Atmos Production Suite running internally while monitoring on headphones. Smaller session work and training can be accomplished using a laptop and headphones; however, for larger sessions and for finishing work on mixes, speakers should be used. These systems will be outlined later in the course.
Consumer playback devices and services
Dolby Atmos is available on an ever-expanding range of home theater devices, soundbars, and TVs as well as mobile phones, tablets, laptops, and gaming systems. All of these systems incorporate Dolby Atmos technology with content available from a multitude of streaming platforms spanning the globe. Dolby Atmos content is available as video with audio or as an audio only experience for playback over speakers or headphones.
In addition to understanding the tools used in Dolby Atmos content production, it is important to understand how the consumer can listen to Dolby Atmos content. Below are examples of the many available playback devices. This is a high-level overview, as the number of Dolby Atmos enabled devices is growing rapidly.
Digital Media Appliances (DMAs) and Services:
- Movies and TV
  - DMAs
    - Apple TV 4K
    - Roku
    - Amazon Fire TV Stick
    - Amazon Echo Studio
  - Services
    - Apple TV+, Apple Music
    - Netflix
    - Amazon Prime Video
    - HBO Max
- Atmos Music
  - DMAs and Services
    - Apple Music
    - Amazon Music via Amazon Echo Studio
    - Tidal
Playback Systems:
- Home Theater
  - Audio Video Receivers (AVRs)
  - Soundbars
  - Atmos enabled upward-firing speakers
  - TVs with integrated Atmos speakers
- Smart Speakers
  - Amazon Echo Studio
  - Apple HomePod
- Computers and smart phones
  - Apple devices: iPhone, iPad, laptops
  - Lenovo
  - Samsung