<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE rfc SYSTEM 'rfc2629.dtd' [
-<!ENTITY rfc2119 PUBLIC '' 'https://xml2rfc.tools.ietf.org/tools/xml2rfc/public/rfc/bibxml/reference.RFC.2119.xml'>
-<!ENTITY rfc3533 PUBLIC '' 'https://xml2rfc.tools.ietf.org/tools/xml2rfc/public/rfc/bibxml/reference.RFC.3533.xml'>
-<!ENTITY rfc3629 PUBLIC '' 'https://xml2rfc.tools.ietf.org/tools/xml2rfc/public/rfc/bibxml/reference.RFC.3629.xml'>
-<!ENTITY rfc4732 PUBLIC '' 'https://xml2rfc.tools.ietf.org/tools/xml2rfc/public/rfc/bibxml/reference.RFC.4732.xml'>
-<!ENTITY rfc5334 PUBLIC '' 'https://xml2rfc.tools.ietf.org/tools/xml2rfc/public/rfc/bibxml/reference.RFC.5334.xml'>
-<!ENTITY rfc6381 PUBLIC '' 'https://xml2rfc.tools.ietf.org/tools/xml2rfc/public/rfc/bibxml/reference.RFC.6381.xml'>
-<!ENTITY rfc6716 PUBLIC '' 'https://xml2rfc.tools.ietf.org/tools/xml2rfc/public/rfc/bibxml/reference.RFC.6716.xml'>
+<!ENTITY rfc2119 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml'>
+<!ENTITY rfc3533 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.3533.xml'>
+<!ENTITY rfc3629 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.3629.xml'>
+<!ENTITY rfc4732 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.4732.xml'>
+<!ENTITY rfc5334 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.5334.xml'>
+<!ENTITY rfc6381 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.6381.xml'>
+<!ENTITY rfc6716 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.6716.xml'>
+<!ENTITY rfc6982 PUBLIC '' 'http://xml.resource.org/public/rfc/bibxml/reference.RFC.6982.xml'>
]>
<?rfc toc="yes" symrefs="yes" ?>
-<rfc ipr="trust200902" category="std" docName="draft-ietf-codec-oggopus-00">
+<rfc ipr="trust200902" category="std" docName="draft-ietf-codec-oggopus-03">
<front>
<title abbrev="Ogg Opus">Ogg Encapsulation for the Opus Audio Codec</title>
<code>V6B 1H5</code>
<country>Canada</country>
</postal>
-<phone>+1 604 778 1540</phone>
+<phone>+1 778 785 1540</phone>
<email>giles@xiph.org</email>
</address>
</author>
-<date day="19" month="November" year="2012"/>
+<date day="7" month="February" year="2014"/>
<area>RAI</area>
<workgroup>codec</workgroup>
<section anchor="terminology" title="Terminology">
<t>
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
- "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
- interpreted as described in <xref target="RFC2119"/>.
+ "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this
+ document are to be interpreted as described in <xref target="RFC2119"/>.
</t>
<t>
audio data packets.
Each audio data packet contains one Opus packet for each of N different
streams, where N is typically one for mono or stereo, but may be greater than
- one for, e.g., multichannel audio.
+ one for multichannel audio.
The value N is specified in the ID header (see
<xref target="channel_mapping"/>), and is fixed over the entire length of the
logical Ogg bitstream.
regular, undelimited framing from Section 3 of <xref target="RFC6716"/>.
All of the Opus packets in a single Ogg packet MUST be constrained to have the
same duration.
-The duration and coding modes of each Opus packet are contained in the
- TOC (table of contents) sequence in the first few bytes.
A decoder SHOULD treat any Opus packet whose duration is different from that of
the first Opus packet in an Ogg packet as if it were an Opus packet with an
illegal TOC sequence.
</t>
<t>
+The coding mode (SILK, Hybrid, or CELT), audio bandwidth, channel count,
+ duration (frame size), and number of frames per packet, are indicated in the
+ TOC (table of contents) in the first byte of each Opus packet, as described
+ in Section 3.1 of <xref target="RFC6716"/>.
+The combination of mode, audio bandwidth, and frame size is referred to as
+ the configuration of an Opus packet.
+</t>
+<t>
The first audio data page SHOULD NOT have the 'continued packet' flag set
(which would indicate the first audio data packet is continued from a previous
page).
This guarantees that a demuxer can assign individual packets the same granule
position when working forwards as when working backwards.
For this to work, there cannot be any gaps.
-In order to support capturing a stream that uses discontinuous transmission
- (DTX), an encoder SHOULD emit packets that explicitly request the use of
- Packet Loss Concealment (PLC) (i.e., with a frame length of 0, as defined in
- Section 3.2.1 of <xref target="RFC6716"/>) in place of the packets that were
- not transmitted.
</t>
+<section anchor="gap-repair" title="Repairing Gaps in Real-time Streams">
+<t>
+In order to support capturing a real-time stream that has lost or not
+ transmitted packets, a muxer SHOULD emit packets that explicitly request the
+ use of Packet Loss Concealment (PLC) in place of the missing packets.
+Only gaps that are a multiple of 2.5 ms are repairable, as these are the
+ only durations that can be created by packet loss or discontinuous
+ transmission.
+Muxers need not handle other gap sizes.
+Creating the necessary packets involves synthesizing a TOC byte (defined in
+ Section 3.1 of <xref target="RFC6716"/>), plus whatever additional internal
+ framing is needed, to indicate the packet duration for each stream.
+The actual length of each missing Opus frame inside the packet is zero bytes,
+ as defined in Section 3.2.1 of <xref target="RFC6716"/>.
+</t>
+
+<t>
+Zero-byte frames MAY be packed into packets using any of codes 0, 1,
+ 2, or 3.
+When successive frames have the same configuration, the higher code packings
+ reduce overhead.
+Likewise, if the TOC configuration matches, the muxer MAY further combine the
+ empty frames with previous or subsequent non-zero-length frames (using
+ code 2 or VBR code 3).
+</t>
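+<figure align="center">
+<preamble>
+As a non-normative illustration, the following C sketch (the function name
+ is hypothetical) builds such a packet for a single stream: a CBR code 3
+ packet holding 'nframes' zero-length frames of one TOC configuration, per
+ Section 3 of <xref target="RFC6716"/>.
+</preamble>
+<artwork align="center"><![CDATA[
+/* Sketch only: synthesize a CBR code 3 packet of zero-length frames.
+   The caller must keep the total duration at or below 120 ms. */
+static int synth_plc_packet(unsigned char *out, int config, int stereo,
+                            int nframes)
+{
+   if (nframes < 1 || nframes > 48) return -1;  /* code 3 frame count */
+   out[0] = (unsigned char)((config<<3) | (stereo<<2) | 3);  /* TOC */
+   out[1] = (unsigned char)nframes;   /* v=0 (CBR), p=0, frame count */
+   return 2;         /* zero-length frames contribute no further data */
+}
+]]></artwork>
+</figure>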
+
+<t>
+<xref target="RFC6716"/> does not impose any requirements on the PLC, but this
+ section outlines choices that are expected to have a positive influence on
+ most PLC implementations, including the reference implementation.
+Synthesized TOC bytes SHOULD maintain the same mode, audio bandwidth,
+ channel count, and frame size as the previous packet (if any).
+This is the simplest and usually the most well-tested case for the PLC to
+ handle and it covers all losses that do not include a configuration switch,
+ as defined in Section 4.5 of <xref target="RFC6716"/>.
+</t>
+
+<t>
+When a previous packet is available, keeping the audio bandwidth and channel
+ count the same allows the PLC to provide maximum continuity in the concealment
+ data it generates.
+However, if the size of the gap is not a multiple of the most recent frame
+ size, then the frame size will have to change for at least some frames.
+Such changes SHOULD be delayed as long as possible to simplify
+ things for PLC implementations.
+</t>
+
+<t>
+As an example, a 95 ms gap could be encoded as nineteen 5 ms frames
+ in two bytes with a single CBR code 3 packet.
+If the previous frame size was 20 ms, using four 20 ms frames
+ followed by three 5 ms frames requires 4 bytes (plus an extra byte
+ of Ogg lacing overhead), but allows the PLC to use its well-tested steady
+ state behavior for as long as possible.
+The total bitrate of the latter approach, including Ogg overhead, is about
+ 0.4 kbps, so the impact on file size is minimal.
+</t>
+
+<t>
+Changing modes is discouraged, since this causes some decoder implementations
+ to reset their PLC state.
+However, SILK and Hybrid mode frames cannot fill gaps that are not a multiple
+ of 10 ms.
+If switching to CELT mode is needed to match the gap size, a muxer SHOULD do
+ so at the end of the gap to allow the PLC to function for as long as possible.
+</t>
+
+<t>
+In the example above, if the previous frame was a 20 ms SILK mode frame,
+ the better solution is to synthesize a packet describing four 20 ms SILK
+ frames, followed by a packet with a single 10 ms SILK
+ frame, and finally a packet with a 5 ms CELT frame, to fill the 95 ms
+ gap.
+This also requires four bytes to describe the synthesized packet data (two
+ bytes for a CBR code 3 and one byte each for two code 0 packets) but three
+ bytes of Ogg lacing overhead are required to mark the packet boundaries.
+At 0.6 kbps, this is still a minimal bitrate impact over a naive, low quality
+ solution.
+</t>
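+<figure align="center">
+<preamble>
+Assuming mono streams and wideband audio (an assumption made only for this
+ illustration), those three synthesized packets could consist of the
+ following bytes, using the configuration numbers from Section 3.1 of
+ <xref target="RFC6716"/>:
+</preamble>
+<artwork align="center"><![CDATA[
+unsigned char p1[2] = {0x4B, 0x04}; /* config 9: 20 ms WB SILK, CBR
+                                       code 3, four zero-length frames */
+unsigned char p2[1] = {0x40};       /* config 8: 10 ms WB SILK, code 0 */
+unsigned char p3[1] = {0xA8};       /* config 21: 5 ms WB CELT, code 0 */
+]]></artwork>
+</figure>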
+
+<t>
+Since medium-band audio is an option only in the SILK mode, wideband frames
+ SHOULD be generated if switching from that configuration to CELT mode, to
+ ensure that any PLC implementation which does try to migrate state between
+ the modes will be able to preserve all of the available audio bandwidth.
+</t>
+
+</section>
+
<section anchor="preskip" title="Pre-skip">
<t>
There is some amount of latency introduced during the decoding process, to
- allow for overlap in the MDCT modes, stereo mixing in the LP modes, and
- resampling, and the encoder will introduce even more latency (though the exact
- amount is not specified).
+ allow for overlap in the CELT mode, stereo mixing in the SILK mode, and
+ resampling.
+The encoder may introduce additional latency through its own resampling
+ and analysis (though the exact amount is not specified).
Therefore, the first few samples produced by the decoder do not correspond to
real input audio, but are instead composed of padding inserted by the encoder
to compensate for this latency.
A 'pre-skip' field in the ID header (see <xref target="id_header"/>) signals
the number of samples which SHOULD be skipped (decoded but discarded) at the
beginning of the stream.
-This provides sufficient history to the decoder so that it has already
- converged before the stream's output begins.
-It may also be used to perform sample-accurate cropping of existing encoded
- streams.
-This amount need not be a multiple of 2.5 ms, may be smaller than a single
- packet, or may span the contents of several packets.
+This amount need not be a multiple of 2.5 ms, MAY be smaller than a single
+ packet, or MAY span the contents of several packets.
+These samples are not valid audio, and should not be played.
</t>
+
+<t>
+For example, if the first Opus frame uses the CELT mode, it will always
+ produce 120 samples of windowed overlap-add data.
+However, the overlap data is initially all zeros (since there is no prior
+ frame), meaning this cannot, in general, accurately represent the original
+ audio.
+The SILK mode requires additional delay to account for its analysis and
+ resampling latency.
+The encoder delays the original audio to avoid this problem.
+</t>
+
+<t>
+The pre-skip field MAY also be used to perform sample-accurate cropping of
+ already encoded streams.
+In this case, a value of at least 3840 samples (80 ms) provides
+ sufficient history to the decoder that it will have converged
+ before the stream's output begins.
+</t>
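+<figure align="center">
+<preamble>
+As a non-normative illustration, a player decoding at 48 kHz might discard
+ these samples as in the following C sketch, where decode_next_packet() and
+ play() are hypothetical helpers:
+</preamble>
+<artwork align="center"><![CDATA[
+int to_skip = pre_skip;  /* from the ID header, in 48 kHz samples */
+while (decode_next_packet(&pcm, &nsamples, &channels) > 0) {
+   float *start = pcm;
+   if (to_skip > 0) {
+      int skip = nsamples < to_skip ? nsamples : to_skip;
+      start = pcm + skip*channels;
+      nsamples -= skip;
+      to_skip -= skip;
+   }
+   if (nsamples > 0) play(start, nsamples, channels);
+}
+]]></artwork>
+</figure>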
+
</section>
<section anchor="pcm_sample_position" title="PCM Sample Position">
<t>
+<figure align="center">
+<preamble>
The PCM sample position is determined from the granule position using the
formula
-<figure align="center">
+</preamble>
<artwork align="center"><![CDATA[
'PCM sample position' = 'granule position' - 'pre-skip' .
]]></artwork>
For example, if the granule position of the first audio data page is 59,971,
and the pre-skip is 11,971, then the PCM sample position of the last decoded
sample from that page is 48,000.
-This can be converted into a playback time using the formula
<figure align="center">
+<preamble>
+This can be converted into a playback time using the formula
+</preamble>
<artwork align="center"><![CDATA[
'PCM sample position'
'playback time' = --------------------- .
This field is <spanx style="emph">not</spanx> the sample rate to use for
playback of the encoded data.
<vspace blankLines="1"/>
-Opus has a handful of coding modes, with internal audio bandwidths of 4, 6, 8,
- 12, and 20 kHz.
+Opus can switch between internal audio bandwidths of 4, 6, 8, 12, and
+ 20 kHz.
Each packet in the stream may have a different audio bandwidth.
Regardless of the audio bandwidth, the reference decoder supports decoding any
stream at a sample rate of 8, 12, 16, 24, or 48 kHz.
<t>Otherwise, if the hardware's highest available sample rate is a supported
rate, decode at this sample rate.</t>
<t>Otherwise, if the hardware's highest available sample rate is less than
- 48 kHz, decode at the highest supported rate above this and resample.</t>
+ 48 kHz, decode at the next highest supported rate above this and
+ resample.</t>
<t>Otherwise, decode at 48 kHz and resample.</t>
</list>
However, the 'Input Sample Rate' field allows the encoder to pass the sample
It is 20*log10 of the factor to scale the decoder output by to achieve the
desired playback volume, stored in a 16-bit, signed, two's complement
fixed-point value with 8 fractional bits (i.e., Q7.8).
-To apply the gain, a decoder could use
<figure align="center">
+<preamble>
+To apply the gain, a decoder could use
+</preamble>
<artwork align="center"><![CDATA[
sample *= pow(10, output_gain/(20.0*256)) ,
]]></artwork>
-</figure>
+<postamble>
where output_gain is the raw 16-bit value from the header.
+</postamble>
+</figure>
<vspace blankLines="1"/>
Virtually all players and media frameworks should apply it by default.
If a player chooses to apply any volume adjustment or gain modification, such
<t><spanx style="strong">Channel Mapping Family</spanx> (8 bits,
unsigned):
<vspace blankLines="1"/>
-This octet indicates the order and semantic meaning of the various channels
- encoded in each Ogg packet.
+This octet indicates the order and semantic meaning of the output channels.
<vspace blankLines="1"/>
Each possible value of this octet indicates a mapping family, which defines a
set of allowed channel counts, and the ordered set of channel names for each
mono (a single channel) or stereo (two channels) by appropriate initialization
of the decoder.
The 'coupled stream count' field indicates that the first M Opus decoders are
- to be initialized in stereo mode, and the remaining N-M decoders are to be
- initialized in mono mode.
+ to be initialized for stereo output, and the remaining N-M decoders are to be
+ initialized for mono only.
The total number of decoded channels, (M+N), MUST be no larger than 255, as
there is no way to index more channels than that in the channel mapping.
<vspace blankLines="1"/>
If 'index' is less than 2*M, the output MUST be taken from decoding stream
('index'/2) as stereo and selecting the left channel if 'index' is even, and
the right channel if 'index' is odd.
-If 'index' is 2*M or larger, the output MUST be taken from decoding stream
- ('index'-M) as mono.
+If 'index' is 2*M or larger, but less than 255, the output MUST be taken from
+ decoding stream ('index'-M) as mono.
If 'index' is 255, the corresponding output channel MUST contain pure silence.
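+<figure align="center">
+<preamble>
+As a non-normative illustration, these rules could be implemented as in the
+ following C sketch, where stream[i] holds the decoded (interleaved) output
+ of Opus stream i and copy_channel() is a hypothetical helper that copies
+ one channel of that output into an output channel buffer:
+</preamble>
+<artwork align="center"><![CDATA[
+static void map_output(float **out, float **stream,
+                       const unsigned char *channel_mapping,
+                       int C, int M, int frame_size)
+{
+   int c;
+   for (c = 0; c < C; c++) {
+      int entry = channel_mapping[c];  /* one octet per output channel */
+      if (entry == 255) {
+         memset(out[c], 0, frame_size*sizeof(float));     /* silence */
+      } else if (entry < 2*M) {
+         /* Coupled stream entry/2: even entries take the left channel,
+            odd entries take the right channel. */
+         copy_channel(out[c], stream[entry/2], entry&1, frame_size);
+      } else {
+         /* Uncoupled stream (entry - M), decoded as mono. */
+         copy_channel(out[c], stream[entry - M], 0, frame_size);
+      }
+   }
+}
+]]></artwork>
+</figure>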
<vspace blankLines="1"/>
The number of output channels, C, is not constrained to match the number of
<t>
After producing the output channels, the channel mapping family determines the
semantic meaning of each one.
-Currently there are three defined mapping families, although more may be added:
-<list style="symbols">
-<t>Family 0 (RTP mapping):
-<vspace blankLines="1"/>
+Currently there are three defined mapping families, although more may be added.
+</t>
+
+<section anchor="channel_mapping_0" title="Channel Mapping Family 0">
+<t>
Allowed numbers of channels: 1 or 2.
+RTP mapping.
+</t>
+<t>
<list style="symbols">
<t>1 channel: monophonic (mono).</t>
<t>2 channels: stereo (left, right).</t>
if stereo.
When the 'channel mapping family' octet has this value, the channel mapping
table MUST be omitted from the ID header packet.
-<vspace blankLines="1"/>
</t>
-<t>Family 1 (Vorbis channel order):
-<vspace blankLines="1"/>
+</section>
+
+<section anchor="channel_mapping_1" title="Channel Mapping Family 1">
+<t>
Allowed numbers of channels: 1...8.
-<vspace/>
+Vorbis channel order.
+</t>
+<t>
Each channel is assigned to a speaker location in a conventional surround
- configuration.
+ arrangement.
Specific locations depend on the number of channels, and are given below
 in order of the corresponding channel indices.
<list style="symbols">
<t>7 channels: 6.1 surround (front left, front center, front right, side left, side right, rear center, LFE).</t>
<t>8 channels: 7.1 surround (front left, front center, front right, side left, side right, rear left, rear right, LFE)</t>
</list>
-This set of surround configurations and speaker location orderings is the same
- as the one used by the Vorbis codec. <xref target="vorbis-mapping"/>
+</t>
+<t>
+This set of surround options and speaker location orderings is the same
+ as those used by the Vorbis codec <xref target="vorbis-mapping"/>.
The ordering is different from the one used by the
WAVE <xref target="wave-multichannel"/> and
FLAC <xref target="flac"/> formats,
- although the configurations match, so correct ordering requires permutation
- of the output channels when encoding from or decoding to those formats.
+ so correct ordering requires permutation of the output channels when decoding
+ to or encoding from those formats.
'LFE' here refers to a Low Frequency Effects channel, often mapped to a
 subwoofer
- with no particular spacial position.
+ with no particular spatial position.
Implementations SHOULD identify 'side' or 'rear' speaker locations with
'surround' and 'back' as appropriate when interfacing with audio formats
or systems which prefer that terminology.
-<vspace blankLines="1"/>
</t>
-<t>Family 255 (no defined channel meaning):
-<vspace blankLines="1"/>
-Allowed numbers of channels: 1...255.<vspace/>
+</section>
+
+<section anchor="channel_mapping_255"
+ title="Channel Mapping Family 255">
+<t>
+Allowed numbers of channels: 1...255.
+No defined channel meaning.
+</t>
+<t>
Channels are unidentified.
General-purpose players SHOULD NOT attempt to play these streams, and offline
decoders MAY deinterleave the output into separate PCM files, one per channel.
(pure silence) unless they have no other way to indicate the index of
non-silent channels.
</t>
-</list>
+</section>
+
+<section anchor="channel_mapping_undefined"
+ title="Undefined Channel Mappings">
+<t>
The remaining channel mapping families (2...254) are reserved.
A decoder encountering a reserved channel mapping family value SHOULD act as
though the value is 255.
-<vspace blankLines="1"/>
+</t>
+</section>
+
+<section anchor="downmix" title="Downmixing">
+<t>
An Ogg Opus player MUST play any Ogg Opus stream with a channel mapping family
of 0 or 1, even if the number of channels does not match the physically
connected audio hardware.
channels as needed.
</t>
-</section>
+<t>
+Implementations MAY use the following matrices to implement downmixing from
+ multichannel files using <xref target="channel_mapping_1">Channel Mapping
+ Family 1</xref>, which are known to give acceptable results for stereo.
+Matrices for 3 and 4 channels are normalized so each coefficient row sums
+ to 1 to avoid clipping.
+For 5 or more channels they are normalized to 2 as a compromise between
+ clipping and dynamic range reduction.
+</t>
+<t>
+In these matrices the front left and front right channels are generally
+passed through directly.
+When a surround channel is split between both the left and right stereo
+ channels, coefficients are chosen so their squares sum to 1, which
+ helps preserve the perceived intensity.
+Rear channels are mixed more diffusely or attenuated to maintain focus
+ on the front channels.
+</t>
+
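+<figure align="center">
+<preamble>
+A non-normative C sketch of applying one of the matrices below to
+ interleaved input follows; the row-major layout of 'matrix' (2 rows, one
+ coefficient per input channel) is an assumption of this example:
+</preamble>
+<artwork align="center"><![CDATA[
+static void downmix_stereo(const float *in, float *out,
+                           const float *matrix, int channels, int frames)
+{
+   int i, c;
+   for (i = 0; i < frames; i++) {
+      float l = 0.f, r = 0.f;
+      for (c = 0; c < channels; c++) {
+         l += matrix[c]*in[i*channels + c];
+         r += matrix[channels + c]*in[i*channels + c];
+      }
+      out[2*i] = l;
+      out[2*i + 1] = r;
+   }
+}
+]]></artwork>
+</figure>
+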
+<figure anchor="downmix-matrix-3"
+ title="Stereo downmix matrix for the linear surround channel mapping"
+ align="center">
+<artwork align="center"><![CDATA[
+L output = ( 0.585786 * left + 0.414214 * center )
+R output = ( 0.414214 * center + 0.585786 * right )
+]]></artwork>
+<postamble>
+Exact coefficient values are 1 and 1/sqrt(2), multiplied by
+ 1/(1 + 1/sqrt(2)) for normalization.
+</postamble>
+</figure>
+
+<figure anchor="downmix-matrix-4"
+ title="Stereo downmix matrix for the quadraphonic channel mapping"
+ align="center">
+<artwork align="center"><![CDATA[
+/ \ / \ / FL \
+| L output | | 0.422650 0.000000 0.366025 0.211325 | | FR |
+| R output | = | 0.000000 0.422650 0.211325 0.366025 | | RL |
+\ / \ / \ RR /
+]]></artwork>
+<postamble>
+Exact coefficient values are 1, sqrt(3)/2 and 1/2, multiplied by
+ 1/(1 + sqrt(3)/2 + 1/2) for normalization.
+</postamble>
+</figure>
+
+<figure anchor="downmix-matrix-5"
+ title="Stereo downmix matrix for the 5.0 surround mapping"
+ align="center">
+<artwork align="center"><![CDATA[
+ / FL \
+/ \ / \ | FC |
+| L | | 0.650802 0.460186 0.000000 0.563611 0.325401 | | FR |
+| R | = | 0.000000 0.460186 0.650802 0.325401 0.563611 | | RL |
+\ / \ / | RR |
+ \ /
+]]></artwork>
+<postamble>
+Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2 and 1/2, multiplied by
+ 2/(1 + 1/sqrt(2) + sqrt(3)/2 + 1/2)
+ for normalization.
+</postamble>
+</figure>
+
+<figure anchor="downmix-matrix-6"
+ title="Stereo downmix matrix for the 5.1 surround mapping"
+ align="center">
+<artwork align="center"><![CDATA[
+ /FL \
+/ \ / \ |FC |
+|L| | 0.529067 0.374107 0.000000 0.458186 0.264534 0.374107 | |FR |
+|R| = | 0.000000 0.374107 0.529067 0.264534 0.458186 0.374107 | |RL |
+\ / \ / |RR |
+ \LFE/
+]]></artwork>
+<postamble>
+Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2 and 1/2, multiplied by
+2/(1 + 1/sqrt(2) + sqrt(3)/2 + 1/2 + 1/sqrt(2))
+ for normalization.
+</postamble>
+</figure>
+
+<figure anchor="downmix-matrix-7"
+ title="Stereo downmix matrix for the 6.1 surround mapping"
+ align="center">
+<artwork align="center"><![CDATA[
+ / \
+ | 0.455310 0.321953 0.000000 0.394310 0.227655 0.278819 0.321953 |
+ | 0.000000 0.321953 0.455310 0.227655 0.394310 0.278819 0.321953 |
+ \ /
+]]></artwork>
+<postamble>
+Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2, 1/2 and
+ sqrt(3)/2/sqrt(2), multiplied by
+ 2/(1 + 1/sqrt(2) + sqrt(3)/2 + 1/2 +
+ sqrt(3)/2/sqrt(2) + 1/sqrt(2)) for normalization.
+The coefficients are in the same order as in <xref target="channel_mapping_1" />
+ and the matrices above.
+</postamble>
+</figure>
+
+<figure anchor="downmix-matrix-8"
+ title="Stereo downmix matrix for the 7.1 surround mapping"
+ align="center">
+<artwork align="center"><![CDATA[
+/ \
+| .388631 .274804 .000000 .336565 .194316 .336565 .194316 .274804 |
+| .000000 .274804 .388631 .194316 .336565 .194316 .336565 .274804 |
+\ /
+]]></artwork>
+<postamble>
+Exact coefficient values are 1, 1/sqrt(2), sqrt(3)/2 and 1/2, multiplied by
+ 2/(2 + 2/sqrt(2) + sqrt(3)) for normalization.
+The coefficients are in the same order as in <xref target="channel_mapping_1" />
+ and the matrices above.
+</postamble>
+</figure>
</section>
+</section> <!-- end channel_mapping_table -->
+
+</section> <!-- end id_header -->
+
<section anchor="comment_header" title="Comment Header">
<figure anchor="comment_header_packet" title="Comment Header Packet"
<t>
The comment header consists of a 64-bit magic signature, followed by data in
the same format as the <xref target="vorbis-comment"/> header used in Ogg
- Vorbis (without the final "framing bit"), Ogg Theora, and Speex.
+ Vorbis, except (like Ogg Theora and Speex) the final "framing bit" specified
+ in the Vorbis spec is not present.
<list style="numbers">
<t><spanx style="strong">Magic Signature</spanx>:
<vspace blankLines="1"/>
for these fields, or that do not contain enough data for the corresponding
vendor string or user comments they describe.
Making this check before allocating the associated memory to contain the data
- may help prevent a possible Denial-of-Service (DoS) attack from small comment
+ helps prevent a possible Denial-of-Service (DoS) attack from small comment
headers that claim to contain strings longer than the entire packet or more
 user comments than could possibly fit in the packet.
</t>
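+<figure align="center">
+<preamble>
+As a non-normative illustration, such a check might look like the following
+ C sketch, where 'p' points into the comment packet, 'remaining' counts the
+ bytes left in it, and both names are hypothetical:
+</preamble>
+<artwork align="center"><![CDATA[
+uint32_t len;
+if (remaining < 4) return -1;            /* truncated packet */
+len = (uint32_t)p[0] | ((uint32_t)p[1]<<8) |
+      ((uint32_t)p[2]<<16) | ((uint32_t)p[3]<<24);
+p += 4;
+remaining -= 4;
+if (len > remaining) return -1;   /* string cannot fit in the packet */
+str = malloc((size_t)len + 1);    /* safe: len is bounded by the packet */
+]]></artwork>
+</figure>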
<t>
The user comment strings follow the NAME=value format described by
<xref target="vorbis-comment"/> with the same recommended tag names.
-One new comment tag is introduced for Ogg Opus:
+</t>
<figure align="center">
+ <preamble>Two new comment tags are introduced for Ogg Opus:</preamble>
<artwork align="left"><![CDATA[
R128_TRACK_GAIN=-573
]]></artwork>
-</figure>
+<postamble>
representing the volume shift needed to normalize the track's volume.
The gain is a Q7.8 fixed point number in dB, as in the ID header's 'output
gain' field.
+</postamble>
+</figure>
+<t>
This tag is similar to the REPLAYGAIN_TRACK_GAIN tag in
Vorbis <xref target="replay-gain"/>, except that the normal volume
reference is the <xref target="EBU-R128"/> standard.
</t>
+<figure align="center">
+<artwork align="left"><![CDATA[
+R128_ALBUM_GAIN=111
+]]></artwork>
+<postamble>
+representing the volume shift needed to normalize the volume of a collection
+ of tracks.
+The gain is a Q7.8 fixed point number in dB, as in the ID header's 'output
+ gain' field.
+</postamble>
+</figure>
<t>
-An Ogg Opus file MUST NOT have more than one such tag, and if present its
- value MUST be an integer from -32768 to 32767, inclusive, represented in
+An Ogg Opus file MUST NOT have more than one of each of these tags, and if
+ present, their values MUST be integers from -32768 to 32767, inclusive,
+ represented in
ASCII with no whitespace.
-If present, it MUST correctly represent the R128 normalization gain relative
- to the 'output gain' field specified in the ID header.
-If a player chooses to make use of the R128_TRACK_GAIN tag, it MUST be
- applied <spanx style="emph">in addition</spanx> to the 'output gain' value.
+If present, R128_TRACK_GAIN and R128_ALBUM_GAIN MUST correctly represent the
+ R128 normalization gain relative to the 'output gain' field specified in the
+ ID header.
+If a player chooses to make use of the R128_TRACK_GAIN or R128_ALBUM_GAIN tag,
+ it MUST be applied <spanx style="emph">in addition</spanx> to the
+ 'output gain' value.
If an encoder wishes to use R128 normalization, and the output gain is not
otherwise constrained or specified, the encoder SHOULD write the R128 gain
into the 'output gain' field and store a tag containing "R128_TRACK_GAIN=0".
That is, it should assume that by default tools will respect the 'output gain'
field, and not the comment tag.
If a tool modifies the ID header's 'output gain' field, it MUST also update or
- remove the R128_TRACK_GAIN comment tag.
+ remove the R128_TRACK_GAIN and R128_ALBUM_GAIN comment tags.
</t>
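+<figure align="center">
+<preamble>
+As a non-normative illustration, a player applying both gains might combine
+ them as in the following C sketch, which assumes 'output_gain' and
+ 'r128_track_gain' have already been parsed into Q7.8 integer values:
+</preamble>
+<artwork align="center"><![CDATA[
+int i;
+int32_t gain_q8 = (int32_t)output_gain + (int32_t)r128_track_gain;
+float scale = powf(10.f, gain_q8/(20.f*256.f));
+for (i = 0; i < nsamples*channels; i++)
+   sample[i] *= scale;
+]]></artwork>
+</figure>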
<t>
To avoid confusion with multiple normalization schemes, an Opus comment header
SHOULD NOT contain any of the REPLAYGAIN_TRACK_GAIN, REPLAYGAIN_TRACK_PEAK,
REPLAYGAIN_ALBUM_GAIN, or REPLAYGAIN_ALBUM_PEAK tags.
</t>
-<t>
-There is no Opus comment tag corresponding to REPLAYGAIN_ALBUM_GAIN.
-That information should instead be stored in the ID header's 'output gain'
- field.
-</t>
</section>
</section>
The largest packet consisting of entirely useful data is
(15,326*N - 2) octets, or about 15 kB per stream.
This corresponds to 120 ms of audio encoded as 10 ms frames in either
- LP or Hybrid mode, but at a data rate of over 1 Mbps, which makes little
+ SILK or Hybrid mode, but at a data rate of over 1 Mbps, which makes little
sense for the quality achieved.
A more reasonable limit is (7,664*N - 2) octets, or about 7.5 kB
per stream.
-This corresponds to 120 ms of audio encoded as 20 ms stereo MDCT-mode
+This corresponds to 120 ms of audio encoded as 20 ms stereo CELT mode
frames, with a total bitrate just under 511 kbps (not counting the Ogg
encapsulation overhead).
With N=8, the maximum number of channels currently defined by mapping
</t>
</section>
-<section anchor="implementation" title="Implementation Status">
+<section anchor="encoder" title="Encoder Guidelines">
<t>
-What follows is a brief summary of major implementations of this
- draft, and their status.
-Note that this section should be removed before final publication
- as an RFC as per <xref target="draft-sheffer-running-code"/>.
-</t>
-
-<section anchor="impl-opus-tools" title="opus-tools">
-<t>
-The initial development implementation of this draft was in the
- opusenc, opusdec, and opusinfo command-line utilties, part of the
- opus-tools package and repository.
-While still 'development' status (pre-1.0) these utilities are
- in active public use, and have shipped with some recent Linux
- distributions and in homebrew.
-Together they implement basic read, write and playback support of
- Ogg Opus files including metadata, multichannel, start and end
- trimming, the gain field, live streams, and chained files, but currently do
- not support seeking.
-This implementation is open source.
-</t>
-<t><list style="symbols">
- <t><eref target="https://git.xiph.org/?p=opus-tools.git"/></t>
- <t><eref target="http://www.opus-codec.org/downloads/"/></t>
-</list></t>
-</section>
-
-<section anchor="impl-opusfile" title="opusfile">
-<t>
-The opusfile library is a separate implementation of this draft as a helper
- library for demuxing and decoding.
-Like opus-tools, it supports metadata, multichannel, start and end trimming,
- the gain field, live streams, and chained files.
-Its primary focus is efficient seeking, including over HTTP(S) and in chained
- streams.
-It currently does not create Ogg Opus files.
-This library is in early development and is not widely deployed, though several
- projects are currently using it, including xmms2, taglib, and cmus, and it is
- shipped in some Linux distributions and in homebrew.
-This implementation is open source.
-</t>
-<t><list style="symbols">
- <t><eref target="https://git.xiph.org/?p=opusfile.git"/></t>
- <t><eref target="http://www.opus-codec.org/downloads/"/></t>
-</list></t>
-</section>
-
-<section anchor="impl-firefox" title="Firefox">
-<t>
-The Firefox web browser is a widely deployed implementation of
- this draft.
-Basic playback support with the HTML5 <audio> element, including start
- and end trimming, the gain field, live streams, multiplexing with other
- streams (for, e.g., the <video> tag), and seeking, was added in
- Firefox 15, in production release starting August 28, 2012.
-Multichannel support was added in Firefox 17, in production release
- starting November 20, 2012.
-Metadata support was added in Firefox 18, in production release starting
- January 8, 2013.
-Chained files (as streams only, with seeking disabled) will be supported in
- Firefox 20, scheduled to enter production release in early April, 2013.
-This implementation is open source.
-</t>
-<t><list style="symbols">
- <t><eref target="https://mozilla.org/firefox/"/></t>
- <t><eref target="https://hacks.mozilla.org/2012/08/opus-support-for-webrtc/"/></t>
- <t><eref target="https://bugzilla.mozilla.org/show_bug.cgi?id=674225"/></t>
- <t><eref target="https://bugzilla.mozilla.org/show_bug.cgi?id=748144"/></t>
- <t><eref target="https://bugzilla.mozilla.org/show_bug.cgi?id=778050"/></t>
- <t><eref target="https://bugzilla.mozilla.org/show_bug.cgi?id=455165"/></t>
-</list></t>
-</section>
-
-<section anchor="impl-chrome" title="Chrome">
-<t>
-Google's Chrome web browser has support for this draft with the
- HTML5 <audio> element in M25 and M26, the dev and
- canary channels respectively as of January, 2013.
-This implementation currently does not support end trimming, the gain tag,
- chained files, or the .opus extension.
-Both M25 and M26 require passing --enable-opus-playback to the executable
- to enable support at the time of this writing.
-This implementation is based on open source code in
- Chromium and WebKit.
-</t>
-<t><list style="symbols">
- <t><eref target="https://www.google.com/intl/en/chrome/browser/"/></t>
- <t><eref target="https://www.google.com/intl/en/chrome/browser/canary.html"/></t>
- <t><eref target="http://code.google.com/p/chromium/issues/detail?id=104241"/></t>
-</list></t>
-</section>
-
-<section anchor="impl-gstreamer" title="GStreamer">
-<t>
-The GStreamer media framework includes an implementation of
- this draft.
-It supports metadata, multichannel, start and end trimming, the gain field,
- live streams, chained files, multiplexing with other streams (e.g., video),
- and seeking.
-Support was first added in early 2011, and is part of the 0.11 and 1.0.x
- releases.
-The code implementing this draft is in the gst-plugins-bad collection,
- which generally indicates unsupported and/or experimental code,
- despite its release status.
-This implementation is open source.
-</t>
-<t><list style="symbols">
- <t><eref target="http://gstreamer.net/"/></t>
- <t><eref target="http://cgit.freedesktop.org/gstreamer/gst-plugins-bad/"/></t>
-</list></t>
-</section>
-
-<section anchor="impl-ffmpeg" title="FFmpeg">
+When encoding Opus files, Ogg encoders should take into account the
+ algorithmic delay of the Opus encoder.
+</t>
+<figure align="center">
+<preamble>
+In encoders derived from the reference implementation, the number of
+ samples can be queried with:
+</preamble>
+<artwork align="center"><![CDATA[
+ opus_encoder_ctl(encoder_state, OPUS_GET_LOOKAHEAD, &delay_samples);
+]]></artwork>
+</figure>
<t>
-The popular media framework and conversion tool FFmpeg implements
- some of this draft.
-End trimming is not implemented, so file durations are not exactly
- preserved.
-This implementation is open source.
+To achieve good quality in the very first samples of a stream, the Ogg encoder
+ MAY use linear predictive coding (LPC) extrapolation
+ <xref target="linear-prediction"/> to generate at least 120 extra samples at
+ the beginning to avoid the Opus encoder having to encode a discontinuous
+ signal.
+For an input file containing 'length' samples, the Ogg encoder SHOULD set the
+ pre-skip header value to delay_samples+extra_samples, encode at least
+ length+delay_samples+extra_samples samples, and set the granulepos of the last
+ page to length+delay_samples+extra_samples.
+This ensures that the encoded file has the same duration as the original, with
+ no time offset. The best way to pad the end of the stream is to also use LPC
+ extrapolation, but zero-padding is also acceptable.
+</t>
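+<figure align="center">
+<preamble>
+A non-normative C sketch of this bookkeeping, with 'length' and
+ 'extra_samples' as described above and all values in 48 kHz samples:
+</preamble>
+<artwork align="center"><![CDATA[
+/* 'delay_samples' is obtained via OPUS_GET_LOOKAHEAD, as shown above. */
+opus_int64 pre_skip = delay_samples + extra_samples; /* ID header value */
+opus_int64 end_granulepos = length + pre_skip;  /* granulepos, last page */
+/* Encode at least length + pre_skip samples, padding the end as needed. */
+]]></artwork>
+</figure>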
+
+<section anchor="lpc" title="LPC Extrapolation">
+<t>
+The first step in LPC extrapolation is to compute linear prediction
+ coefficients <xref target="lpc-sample"/>.
+When extending the end of the signal, order-N (typically with N ranging from 8
+ to 40) LPC analysis is performed on a window near the end of the signal.
+The last N samples are used as memory to an infinite impulse response (IIR)
+ filter.
</t>
-<t><list style="symbols">
- <t><eref target="http://ffmpeg.org/"/></t>
-</list></t>
-</section>
-
-<section anchor="impl-libav" title="libav">
+<figure align="center">
+<preamble>
+The filter is then applied to a zero input to extrapolate the end of the signal.
+Let a(k) be the kth LPC coefficient and x(n) be the nth sample of the signal;
+ each new sample past the end of the signal is then computed as:
+</preamble>
+<artwork align="center"><![CDATA[
+          N
+         ---
+x(n) =    \   a(k)*x(n-k)
+          /
+         ---
+         k=1
+]]></artwork>
+</figure>
<t>
-The development repository for libav implements this draft,
- similar to FFmpeg.
-This implementation is open source.
+The process is repeated independently for each channel.
+It is possible to extend the beginning of the signal by applying the same
+ process backward in time.
+When extending the beginning of the signal, it is best to apply a "fade in" to
+ the extrapolated signal, e.g. by multiplying it by a half-Hanning window
+ <xref target="hanning"/>.
</t>
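+<figure align="center">
+<preamble>
+A non-normative C sketch of the extrapolation loop above, for one channel;
+ the caller must ensure x[] has room for n+extra samples:
+</preamble>
+<artwork align="center"><![CDATA[
+/* Extend x[0..n-1] by 'extra' samples using the order-N coefficients
+   a[0..N-1], where a[k-1] corresponds to a(k) in the formula above. */
+static void lpc_extend(float *x, int n, int extra, const float *a, int N)
+{
+   int i, k;
+   for (i = 0; i < extra; i++) {
+      float sum = 0.f;
+      for (k = 0; k < N; k++)
+         sum += a[k]*x[n + i - 1 - k];
+      x[n + i] = sum;
+   }
+}
+]]></artwork>
+</figure>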
-<t><list style="symbols">
- <t><eref target="http://libav.org/"/></t>
-</list></t>
-</section>
-<section anchor="impl-vlc" title="VLC">
-<t>
-VLC is another widely deployed implementation of demuxing, decoding, and
- playback support for this draft.
-It supports metadata, multichannel, start and end trimming, the gain field,
- live streams, seeking, chained files (though seeking does not work
- correctly with chained files), and multiplexing with other streams (e.g.,
- video).
-Opus support was added in version 2.0.4, released on October 18, 2012.
-This implementation is open source.
-</t>
-<t><list style="symbols">
- <t><eref target="http://www.videolan.org/vlc/"/></t>
- <t><eref target="http://git.videolan.org/?p=vlc.git"/></t>
- <t><eref target="http://trac.videolan.org/vlc/ticket/7185"/></t>
-</list></t>
</section>
-<section anchor="impl-foobar2k" title="foobar2000">
+<section anchor="continuous_chaining" title="Continuous Chaining">
<t>
-A popular Windows application, foobar2000 implements read, write, and playback
- support for this draft.
-It supports metadata, multichannel, start and end trimming, the gain field,
- live streams, chained files, and seeking.
-Opus support was added in version 1.1.14, released on August 17, 2012.
-Encoding support is implemented using opusenc from opus-tools.
-This implementation is closed source.
+In some applications, such as Internet radio, it is desirable to cut a long
+ stream into smaller chains, e.g. so the comment header can be updated.
+This can be done simply by separating the input streams into segments and
+ encoding each segment independently.
+The drawback of this approach is that it creates a small discontinuity
+ at the boundary due to the lossy nature of Opus.
+An encoder MAY avoid this discontinuity by using the following procedure:
+<list style="numbers">
+<t>Encode the last frame of the first segment as an independent frame by
+ turning off all forms of inter-frame prediction.
+De-emphasis is allowed.</t>
+<t>Set the granulepos of the last page to a point near the end of the last
+ frame.</t>
+<t>Begin the second segment with a copy of the last frame of the first
+ segment.</t>
+<t>Set the pre-skip value of the second stream in such a way as to properly
+ join the two streams.</t>
+<t>Continue the encoding process normally from there, without any reset to
+ the encoder.</t>
+</list>
</t>
-<t><list style="symbols">
- <t><eref target="http://www.foobar2000.org/"/></t>
-</list></t>
+<figure align="center">
+<preamble>
+In encoders derived from the reference implementation, inter-frame prediction
+ can be turned off by calling:
+</preamble>
+<artwork align="center"><![CDATA[
+ opus_encoder_ctl(encoder_state, OPUS_SET_PREDICTION_DISABLED, 1);
+]]></artwork>
+<postamble>
+Prediction should be enabled again before resuming normal encoding, even
+ after a reset.
+</postamble>
+</figure>
+
</section>
-<section anchor="impl-rockbox" title="Rockbox">
-<t>
-Rockbox is an established alternative firmware for portable music players
- (typically small, embedded devices) that implements demuxing, decoding, and
- playback support for this draft.
-It supports metadata, start and end trimming, the gain field, and seeking.
-It does not currently support multichannel or chained files.
-Opus is currently only supported in development builds, though it is scheduled
- to be included in the next stable release (3.13).
-This implementation is open source.
-</t>
-<t><list style="symbols">
- <t><eref target="http://www.rockbox.org/"/></t>
- <t><eref target="http://git.rockbox.org/?p=rockbox.git"/></t>
- <t><eref target="http://gerrit.rockbox.org/r/#/c/300/"/></t>
-</list></t>
</section>
+<section anchor="implementation" title="Implementation Status">
+<t>
+A brief summary of major implementations of this draft is available
+ at <eref target="https://wiki.xiph.org/OggOpusImplementation"/>,
+ along with their status.
+</t>
+<t>
+[Note to RFC Editor: please remove this entire section before
+ final publication per <xref target="RFC6982"/>.]
+</t>
</section>
<section anchor="security" title="Security Considerations">
&rfc6381;
&rfc6716;
-<reference anchor="EBU-R128" target="http://tech.ebu.ch/loudness">
+<reference anchor="EBU-R128" target="https://tech.ebu.ch/loudness">
<front>
-<title>"Loudness Recommendation EBU R128</title>
-<author fullname="EBU Technical Committee"/>
-<date month="August" year="2011"/>
+ <title>Loudness Recommendation EBU R128</title>
+ <author>
+ <organization>EBU Technical Committee</organization>
+ </author>
+ <date month="August" year="2011"/>
</front>
</reference>
<reference anchor="vorbis-comment"
- target="http://www.xiph.org/vorbis/doc/v-comment.html">
+ target="https://www.xiph.org/vorbis/doc/v-comment.html">
<front>
<title>Ogg Vorbis I Format Specification: Comment Field and Header
Specification</title>
</front>
</reference>
-<reference anchor="vorbis-mapping"
- target="http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-800004.3.9">
-<front>
-<title>The Vorbis I Specification, Section 4.3.9 Output Channel Order</title>
-<author initials="C." surname="Montgomery"
- fullname="Christopher "Monty" Montgomery"/>
-<date month="January" year="2010"/>
-</front>
-</reference>
-
</references>
<references title="Informative References">
<!--?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.3550.xml"?-->
&rfc4732;
-
-<reference anchor="draft-sheffer-running-code"
- target="https://tools.ietf.org/html/draft-sheffer-running-code-01#section-2">
- <front>
- <title>Improving "Rough Consensus" with Running Code</title>
- <author initials="Y." surname="Sheffer" fullname="Yaron Sheffer"/>
- <author initials="A." surname="Farrel" fullname="Adrian Farrel"/>
- <date month="December" year="2012"/>
- </front>
-</reference>
+ &rfc6982;
<reference anchor="flac"
target="https://xiph.org/flac/format.html">
</front>
</reference>
+<reference anchor="hanning"
+ target="https://en.wikipedia.org/wiki/Hamming_function#Hann_.28Hanning.29_window">
+ <front>
+ <title>Hann window</title>
+ <author>
+ <organization>Wikipedia</organization>
+ </author>
+ <date month="May" year="2013"/>
+ </front>
+</reference>
+
+<reference anchor="linear-prediction"
+ target="https://en.wikipedia.org/wiki/Linear_predictive_coding">
+ <front>
+ <title>Linear Predictive Coding</title>
+ <author>
+ <organization>Wikipedia</organization>
+ </author>
+ <date month="January" year="2014"/>
+ </front>
+</reference>
+
+<reference anchor="lpc-sample"
+ target="https://svn.xiph.org/trunk/vorbis/lib/lpc.c">
+<front>
+ <title>Autocorrelation LPC coeff generation algorithm
+ (Vorbis source code)</title>
+<author initials="J." surname="Degener" fullname="Jutta Degener"/>
+<author initials="C." surname="Bormann" fullname="Carsten Bormann"/>
+<date month="November" year="1994"/>
+</front>
+</reference>
+
+
<reference anchor="replay-gain"
- target="http://wiki.xiph.org/VorbisComment#Replay_Gain">
+ target="https://wiki.xiph.org/VorbisComment#Replay_Gain">
<front>
<title>VorbisComment: Replay Gain</title>
<author initials="C." surname="Parker" fullname="Conrad Parker"/>
</reference>
<reference anchor="seeking"
- target="http://wiki.xiph.org/Seeking">
+ target="https://wiki.xiph.org/Seeking">
<front>
<title>Granulepos Encoding and How Seeking Really Works</title>
<author initials="S." surname="Pfeiffer" fullname="Silvia Pfeiffer"/>
</front>
</reference>
-<reference anchor="vorbis-trim"
- target="http://xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-130000A.2">
+<reference anchor="vorbis-mapping"
+ target="https://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-800004.3.9">
<front>
-<title>The Vorbis I Specification, Appendix A: Embedding Vorbis into an
- Ogg stream</title>
+<title>The Vorbis I Specification, Section 4.3.9 Output Channel Order</title>
<author initials="C." surname="Montgomery"
fullname="Christopher "Monty" Montgomery"/>
-<date month="November" year="2008"/>
+<date month="January" year="2010"/>
</front>
</reference>
+<reference anchor="vorbis-trim"
+ target="https://xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-130000A.2">
+ <front>
+ <title>The Vorbis I Specification, Appendix A: Embedding Vorbis
+ into an Ogg stream</title>
+ <author initials="C." surname="Montgomery"
+ fullname="Christopher "Monty" Montgomery"/>
+ <date month="November" year="2008"/>
+ </front>
+</reference>
+
<reference anchor="wave-multichannel"
target="http://msdn.microsoft.com/en-us/windows/hardware/gg463006.aspx">
<front>
<title>Multiple Channel Audio Data and WAVE Files</title>
- <author fullname="Microsoft Corporation"/>
+ <author>
+ <organization>Microsoft Corporation</organization>
+ </author>
<date month="March" year="2007"/>
</front>
</reference>