Topics covered in this article:
- The Dolby Vision Analysis tool
- L1 metadata
- Transitions and metadata
To deliver a Dolby Vision Master, the following steps must be completed:
- Prepare for Dolby Vision Analysis
- Create L1 metadata using Dolby Vision Analysis
- Check the L1 metadata using the Dolby Content Mapping Unit (CMU)
- Perform any necessary modifications to the metadata through the Trim Pass, thereby creating L2/L3/L8 metadata (optional)
Dolby Vision Level 1 (L1) Metadata
Level 1 or L1 is the first set of metadata that is created as you work towards delivering a Dolby Vision Master. L1 metadata is created using the Dolby Vision Analysis tool that is built into Dolby partner solutions such as Resolve, Baselight, Transkoder, and Cortex.
The Dolby Vision Analysis tool is:
- used to analyze the finished HDR grade/master and capture the creative intent in a digital/mathematical form.
- not a plug-in from Dolby, but natively built into the partner software solutions/systems based on guidance and specifications from Dolby.
The Dolby Vision Analysis tool performs a pixel-level, frame-by-frame analysis of each shot on the timeline and derives three values that together form the L1 metadata. The Dolby Vision Analysis happens automatically and does not require any user intervention.
The three derived values are:
- Minimum (min) – Lowest black level in the shot
- Average (avg/mid) – Average luminance level across the shot
- Maximum (max) – Highest luminance level in the shot
These values are created per shot, and at the end of the analysis pass, every shot on the timeline will have its own, unique L1 metadata attached to it.
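As a mental model of how the three values relate to the shot's pixels, the per-shot analysis can be sketched as below. This is an illustration only, not Dolby's actual implementation; the function name and data layout are mine, and each frame is assumed to be a list of pixel luminances already expressed as normalized PQ values (0.0–1.0), which is how L1 metadata is reported.

```python
# Illustrative sketch only -- not Dolby's implementation. Each "frame" is a
# list of pixel luminances as normalized PQ values (0.0-1.0).

def analyze_shot(frames):
    """Derive L1-style min/avg/max for one shot from its frames."""
    frame_mins, frame_maxes, frame_means = [], [], []
    for pixels in frames:
        frame_mins.append(min(pixels))
        frame_maxes.append(max(pixels))
        frame_means.append(sum(pixels) / len(pixels))
    return {
        "min": min(frame_mins),                      # lowest black level in the shot
        "avg": sum(frame_means) / len(frame_means),  # average luminance across the shot
        "max": max(frame_maxes),                     # highest luminance in the shot
    }

shot = [[0.05, 0.40, 0.75], [0.02, 0.35, 0.70]]  # two toy "frames"
l1 = analyze_shot(shot)
```

Because the values are derived per shot, a bright specular highlight in a single frame is enough to set the shot's max, which is why a single unrepresentative frame can skew the mapping.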
Creating L1 Metadata on Resolve
There are a few ways to run the Dolby Vision Analysis in Resolve, depending on the type of content and how the colorist prefers to work.
- Analyze All Shots – Automatically analyzes every clip/shot on the Timeline, from the beginning to the end, and stores the results individually for every shot.
- Analyze Selected Shot(s) – Only analyzes selected shots on the Timeline.
- Analyze Selected and Blend – Analyzes multiple selected shots and averages the result, which is then saved to each clip. It is very useful when analyzing multiple clips that have identical content and are intended to look the same.
- Analyze Current Frame – A quick way to analyze clips where a single frame is representative of the entire shot. This is useful when trying to obtain an optimal analysis in challenging lighting situations, such as a scene in a nightclub with flashing lights.
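The Analyze Selected and Blend behavior can be pictured as averaging each L1 field across the selected shots and writing the same result back to all of them. The sketch below is an illustration of that idea only, not the actual Resolve/Dolby algorithm; the function name is mine.

```python
# Illustrative sketch of the "Analyze Selected and Blend" idea -- not the
# actual Resolve/Dolby algorithm. Each shot's L1 dict holds normalized PQ
# values; the blend averages each field and copies the result to every
# selected shot so they all receive identical mapping.

def blend_l1(shots):
    keys = ("min", "avg", "max")
    blended = {k: sum(s[k] for s in shots) / len(shots) for k in keys}
    return [dict(blended) for _ in shots]  # same metadata for each shot

selected = [
    {"min": 0.00, "avg": 0.30, "max": 0.70},
    {"min": 0.02, "avg": 0.34, "max": 0.74},
]
result = blend_l1(selected)
```

Since every selected shot ends up with identical L1 values, identical content graded the same way will also map identically on every Dolby Vision display.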
Once you analyze a clip, the Min, Max, and Average fields automatically populate with the resulting L1 metadata; these fields are not editable. The numbers are expressed as a fraction of the PQ signal range, e.g. a max value of 0.750 indicates 75% of PQ, which corresponds to roughly 1000 nits.
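The relationship between a normalized PQ value and absolute luminance is defined by the SMPTE ST 2084 EOTF. A minimal sketch (function name mine, constants from the standard):

```python
# SMPTE ST 2084 (PQ) EOTF constants, as published in the standard.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(e: float) -> float:
    """Convert a normalized PQ signal value (0.0-1.0) to luminance in nits."""
    ep = e ** (1 / M2)
    num = max(ep - C1, 0.0)
    den = C2 - C3 * ep
    return 10000 * (num / den) ** (1 / M1)

nits = pq_to_nits(0.750)  # just under 1000 nits
peak = pq_to_nits(1.0)    # 10000 nits, the PQ ceiling
```

Note that 0.750 lands slightly below 1000 nits; the exact PQ code value for 1000 nits is a little above 0.75, which is why the 75% figure is an approximation.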
A few points to note about L1 metadata are:
- L1 metadata is mandatory for Dolby Vision content, and any content without L1 metadata will not be accepted for Dolby Vision encoding, playback, or distribution.
- L1 metadata is usually created per shot and is assumed to be unique for every shot on the timeline or in a piece of content such as a movie, an episode of a television series, or a documentary. There are, however, some exceptions:
  - Multiple shots or groups of shots can have the same metadata when a colorist copies the L1 from one shot to one or more other shots on the timeline. This is sometimes done to match and apply the same mapping to similar shots in a scene.
  - Multiple shots or groups of shots will also have the same L1 metadata if the colorist selects the shots and uses the Analyze Selected and Blend option.
- Dolby Vision metadata is created per frame when a smooth transition of the metadata across several frames is needed. A few scenarios where this applies are:
  - Dissolves or Cross Fades — When a dissolve or cross fade is applied between two shots, per-frame metadata is generated to create a smooth transition from one shot to the next. The per-frame metadata on each frame of the dissolve includes L1 (analysis) as well as L2/L3/L8 (trims), depending on the metadata that exists on either side of the dissolve.
  - Dynamic or Animated Trims — When a dynamic or animated trim is needed within a shot, due to a transition in the grade or in the light/color composition of the shot, per-frame metadata is generated to create a smooth transition from one state of the image to the other. The per-frame metadata on each frame of the animation includes L1 (analysis) as well as L2/L3/L8 (trims), depending on the trim parameters being changed across the range of frames.
[Note: Animated/dynamic trims are currently not supported on DaVinci Resolve (in v16.2.7). On other systems like Baselight and Nucoda, the colorist can assign one set of trims (keyframe) at the beginning of a shot and assign a different set of trims (keyframe) at the end of the same shot, and the system will automatically generate per-frame metadata that creates a smooth transition in the mapping from the beginning of the shot to the end.]
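One way to picture per-frame metadata across a transition is a simple linear interpolation between the L1 values on either side, as sketched below. This is an illustration only; the actual per-frame values are generated by the partner software according to Dolby's specification, and the function name and data shapes are mine.

```python
# Illustrative only: linearly interpolate L1 between the two sides of a
# dissolve to produce per-frame metadata. The real values come from the
# partner software per Dolby's specification.

def dissolve_l1(outgoing, incoming, num_frames):
    """Per-frame L1 metadata across a dissolve lasting num_frames frames."""
    frames = []
    for i in range(num_frames):
        t = i / (num_frames - 1)  # 0.0 at the first frame, 1.0 at the last
        frames.append({
            k: (1 - t) * outgoing[k] + t * incoming[k]
            for k in ("min", "avg", "max")
        })
    return frames

a = {"min": 0.00, "avg": 0.25, "max": 0.60}  # outgoing shot's L1
b = {"min": 0.04, "avg": 0.45, "max": 0.80}  # incoming shot's L1
ramp = dissolve_l1(a, b, 5)
```

The first frame of the ramp matches the outgoing shot's L1 and the last matches the incoming shot's, so the mapping changes gradually instead of jumping at the cut point.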
Navigation
Next: Module 2.7 – Checking Dolby Vision Level 1 (L1) Metadata using the CMU
Previous: Module 2.5 – Creating Dolby Vision Metadata – Preparation for Dolby Vision Analysis