# GStreamer 1.10 Release Notes
**GStreamer 1.10.0 was released on 1st November 2016.**
The GStreamer team is proud to announce a new major feature release in the
stable 1.x API series of your favourite cross-platform multimedia framework!
As always, this release is again packed with new features, bug fixes and other
improvements.

See [here][latest] for the latest version of this document.
*Last updated: Tuesday 1 Nov 2016, 15:00 UTC [(log)][gitlog]*
## Introduction
The GStreamer team is proud to announce a new major feature release in the
stable 1.x API series of your favourite cross-platform multimedia framework!
As always, this release is again packed with new features, bug fixes and other
improvements.
## Highlights
- Several convenience APIs have been added to make developers' lives easier
- A new `GstStream` API provides applications a more meaningful view of the
structure of streams, simplifying the process of dealing with media in
complex container formats
- Experimental `decodebin3` and `playbin3` elements which bring a number of
improvements which were hard to implement within `decodebin` and `playbin`
- A new `parsebin` element to automatically unpack and parse a stream, stopping
just short of decoding
- Experimental new `meson`-based build system, bringing faster build and much
better Windows support (including for building with Visual Studio)
- A new `gst-docs` module has been created, and we are in the process of moving
our documentation to a markdown-based format for easier maintenance and updates
- A new `gst-examples` module has been created, which contains example
GStreamer applications and is expected to grow with many more examples in
the future
- Various OpenGL and OpenGL|ES-related fixes and improvements for greater
efficiency on desktop and mobile platforms, and Vulkan support on Wayland was
also added
- Extensive improvements to the VAAPI plugins for improved robustness and
performance
- Lots of fixes and improvements across the board, spanning RTP/RTSP, V4L2,
Bluetooth, audio conversion, echo cancellation, and more!
## Major new features and changes
### Noteworthy new API, features and other changes
#### Core API additions
##### Receive property change notifications via bus messages
New API was added to receive element property change notifications via
bus messages. So far, applications had to connect a callback to an element's
`notify::property-name` signal via the GObject API, which was inconvenient for
at least two reasons: one had to implement a signal callback function, and that
callback function would usually be called from one of the streaming threads, so
one had to marshal (send) any information gathered or pending requests to the
main application thread which was tedious and error-prone.
Enter [`gst_element_add_property_notify_watch()`][notify-watch] and
[`gst_element_add_property_deep_notify_watch()`][deep-notify-watch] which will
watch for changes of a property on the specified element, either only for this
element or recursively for a whole bin or pipeline. Whenever such a
property change happens, a `GST_MESSAGE_PROPERTY_NOTIFY` message will be posted
on the pipeline bus with details of the element, the property and the new
property value, all of which can be retrieved later from the message in the
application via [`gst_message_parse_property_notify()`][parse-notify]. Unlike
the GstBus watch functions, this API does not rely on a running GLib main loop.
The above can be used to be notified asynchronously of caps changes in the
pipeline, or volume changes on an audio sink element, for example.
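A minimal sketch of how this could be used (the element and property names here
are illustrative):

```c
#include <gst/gst.h>

/* Watch an audio sink's "volume" property and pick up the change
 * notification from the pipeline bus, without needing a GLib main loop. */
static void
watch_volume (GstElement * pipeline, GstElement * audio_sink)
{
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg;

  /* TRUE: include the new property value in the bus message */
  gst_element_add_property_notify_watch (audio_sink, "volume", TRUE);

  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_PROPERTY_NOTIFY);
  if (msg != NULL) {
    GstObject *obj;
    const gchar *name;
    const GValue *val;

    gst_message_parse_property_notify (msg, &obj, &name, &val);
    g_print ("%s: property '%s' changed\n", GST_OBJECT_NAME (obj), name);
    gst_message_unref (msg);
  }
  gst_object_unref (bus);
}
```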
##### GstBin "deep" element-added and element-removed signals
GstBin has gained `"deep-element-added"` and `"deep-element-removed"` signals,
which make it easier for applications and higher-level plugins to track when
elements are added or removed from a complex pipeline with multiple sub-bins.
`playbin` makes use of this to implement the new `"element-setup"` signal which
can be used to configure elements as they are added to `playbin`, just like the
existing `"source-setup"` signal which can be used to configure the source
element created.
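As a sketch, tracking every element that appears anywhere below a top-level
bin could look like this (the callback name is made up):

```c
#include <gst/gst.h>

/* Called for every element added anywhere below the connected bin,
 * including inside nested sub-bins such as playbin's internals. */
static void
on_deep_element_added (GstBin * bin, GstBin * sub_bin, GstElement * element,
    gpointer user_data)
{
  g_print ("'%s' was added inside '%s'\n",
      GST_ELEMENT_NAME (element), GST_ELEMENT_NAME (sub_bin));
}

static void
track_elements (GstElement * pipeline)
{
  g_signal_connect (pipeline, "deep-element-added",
      G_CALLBACK (on_deep_element_added), NULL);
}
```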
##### Error messages can contain additional structured details
It is often useful to provide additional, structured information in error,
warning or info messages for applications (or higher-level elements) to make
intelligent decisions based on them. To allow this, error, warning and info
messages now have API for adding arbitrary additional information to them
using a `GstStructure`:
[`GST_ELEMENT_ERROR_WITH_DETAILS`][element-error-with-details] and
corresponding API for the other message types.
This is now used e.g. by the new [`GST_ELEMENT_FLOW_ERROR`][element-flow-error]
API to include the actual flow error in the error message, and the
[souphttpsrc element][souphttpsrc-detailed-errors] to provide the HTTP
status code, and the URL (if any) to which a redirection has happened.
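On the application side, the details can be retrieved from an error message
roughly like this (a sketch; `http-status-code` is the detail field souphttpsrc
provides, other elements use their own field names):

```c
#include <gst/gst.h>

/* Inspect the structured details attached to an ERROR bus message. */
static void
handle_error_message (GstMessage * msg)
{
  GError *err = NULL;
  gchar *debug = NULL;
  const GstStructure *details = NULL;

  gst_message_parse_error (msg, &err, &debug);
  gst_message_parse_error_details (msg, &details);

  if (details != NULL) {
    guint status = 0;

    /* souphttpsrc attaches the HTTP status code of a failed request */
    if (gst_structure_get_uint (details, "http-status-code", &status))
      g_print ("request failed with HTTP status %u\n", status);
  }

  g_error_free (err);
  g_free (debug);
}
```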
##### Redirect messages have official API now
Sometimes, elements need to redirect the current stream URL and tell the
application to proceed with this new URL, possibly using a different
protocol too (thus changing the pipeline configuration). Until now, this was
informally implemented using `ELEMENT` messages on the bus.
Now this has been formalized in the form of a new `GST_MESSAGE_REDIRECT` message.
A new redirect message can be created using [`gst_message_new_redirect()`][new-redirect].
If needed, multiple redirect locations can be specified by calling
[`gst_message_add_redirect_entry()`][add-redirect] to add further redirect
entries, all with metadata, so the application can decide which is
most suitable (e.g. depending on the bitrate tags).
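For illustration, an element could post a redirect with two alternative
locations like this (the URLs are made up):

```c
#include <gst/gst.h>

/* Post a redirect message with two alternative locations; the
 * application can inspect all entries and pick the most suitable one. */
static void
post_redirect (GstElement * element)
{
  GstMessage *msg;

  msg = gst_message_new_redirect (GST_OBJECT (element),
      "rtsp://example.com/stream-high", NULL, NULL);
  gst_message_add_redirect_entry (msg,
      "rtsp://example.com/stream-low", NULL, NULL);
  gst_element_post_message (element, msg);
}
```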
##### New pad linking convenience functions that automatically create ghost pads
New pad linking convenience functions have been exposed for general use:
[`gst_pad_link_maybe_ghosting()`][pad-maybe-ghost] and
[`gst_pad_link_maybe_ghosting_full()`][pad-maybe-ghost-full], both of which
were previously internal to GStreamer.
The existing pad link functions will refuse to link pads or elements at
different levels in the pipeline hierarchy, requiring the developer to
create ghost pads where necessary. These new utility functions will
automatically create ghostpads as needed when linking pads at different
levels of the hierarchy (e.g. from an element inside a bin to one that's at
the same level in the hierarchy as the bin, or in another bin).
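A minimal usage sketch:

```c
#include <gst/gst.h>

/* Link a source pad that lives inside a bin to a sink pad outside it;
 * any ghost pads needed on the bin boundaries are created automatically. */
static gboolean
link_across_bins (GstPad * src_pad_inside_bin, GstPad * outside_sink_pad)
{
  return gst_pad_link_maybe_ghosting (src_pad_inside_bin, outside_sink_pad);
}
```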
##### Miscellaneous
Pad probes: IDLE and BLOCK probes now work slightly differently in pull mode,
where the scenarios are the reverse of those in push mode: in push mode, a
BLOCK probe is called with some data type while an IDLE probe has no data; in
pull mode, a probe will block _before_ getting a buffer and will be IDLE once
some data has been obtained. ([commit][commit-pad-probes], [bug][bug-pad-probes])
[`gst_parse_launch_full()`][parse-launch-full] can now be made to return a
`GstBin` instead of a top-level pipeline by passing the new
`GST_PARSE_FLAG_PLACE_IN_BIN` flag.
The default GStreamer debug log handler can now be removed before
calling `gst_init()`, so that it will never get installed and won't be active
during initialization.
A new [`STREAM_GROUP_DONE` event][stream-group-done-event] was added. In some
ways it works similar to the `EOS` event in that it can be used to unblock
downstream elements which may be waiting for further data, such as for example
`input-selector`. Unlike `EOS`, further data flow may happen after the
`STREAM_GROUP_DONE` event though (and without the need to flush the pipeline).
This is used to unblock input-selector when switching between streams in
adaptive streaming scenarios (e.g. HLS).
The `gst-launch-1.0` command line tool will now print unescaped caps in verbose
mode (enabled by the `-v` switch).
[`gst_element_call_async()`][call-async] has been added as convenience API for
plugin developers. It is useful for one-shot operations that need to be done
from a thread other than the current streaming thread. It is backed by a
thread-pool that is shared by all elements.
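A short sketch of a one-shot async operation (the function names here are
illustrative):

```c
#include <gst/gst.h>

/* Runs on GStreamer's shared thread pool rather than the streaming
 * thread, so it is safe to do state changes from here. */
static void
stop_element (GstElement * element, gpointer user_data)
{
  gst_element_set_state (element, GST_STATE_NULL);
}

/* Can be called e.g. from a pad probe, where blocking on a state
 * change directly would deadlock the streaming thread. */
static void
request_stop (GstElement * element)
{
  gst_element_call_async (element, stop_element, NULL, NULL);
}
```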
Various race conditions have been fixed around the `GstPoll` API used by e.g.
`GstBus` and `GstBufferPool`. Some of these manifested themselves primarily
on Windows.
`GstAdapter` can now keep track of discontinuities signalled via the `DISCONT`
buffer flag, and has gained [new API][new-adapter-api] to track PTS, DTS and
offset at the last discont. This is useful for plugins implementing advanced
trick mode scenarios.
`GstTestClock` gained a new [`"clock-type"` property][clock-type-prop].
#### GstStream API for stream announcement and stream selection
New stream listing and stream selection API: new API has been added to
provide high-level abstractions for streams ([`GstStream`][stream-api])
and collections of streams ([`GstStreamCollections`][stream-collection-api]).
##### Stream listing
A [`GstStream`][stream-api] contains all the information pertinent to a stream,
such as stream id, caps, tags, flags and stream type(s); it can represent a
single elementary stream (e.g. audio, video, subtitles, etc.) or a container
stream, depending on the context. In a `decodebin3`/`playbin3` context it will
typically be elementary streams that can be selected and deselected.
A [`GstStreamCollection`][stream-collection-api] represents a group of streams
and is used to announce or publish all available streams. A GstStreamCollection
is immutable - once created it won't change. If the available streams change,
e.g. because a new stream appeared or some streams disappeared, a new stream
collection will be published. This new stream collection may contain streams
from the previous collection if those streams persist, or completely new ones.
Stream collections do not yet list all theoretically available streams,
e.g. other available DVD angles or alternative resolutions/bitrate of the same
stream in case of adaptive streaming.
New events and messages have been added to notify or update other elements and
the application about which streams are currently available and/or selected.
This way, we can easily and seamlessly let the application know whenever the
available streams change, as happens frequently with digital television streams
for example. The new system is also more flexible. For example, it is now also
possible for the application to select multiple streams of the same type
(e.g. in a transcoding/transmuxing scenario).
A [`STREAM_COLLECTION` message][stream-collection-msg] is posted on the bus
to inform the parent bin (e.g. `playbin3`, `decodebin3`) and/or the application
about what streams are available, so you no longer have to hunt for this
information at different places. The available information includes number of
streams of each type, caps, tags etc. Bins and/or the application can intercept
the message synchronously to select and deselect streams before any data is
produced - for the case where elements such as the demuxers support the new
stream API, not necessarily in the parsebin compatibility fallback case.
Similarly, there is also a [`STREAM_COLLECTION` event][stream-collection-event]
to inform downstream elements of the available streams. This event can be used
by elements to aggregate streams from multiple inputs into one single collection.
The `STREAM_START` event was extended so that it can also contain a `GstStream`
object with all information about the current stream, see
[`gst_event_set_stream()`][event-set-stream] and `gst_event_parse_stream()`.
[`gst_pad_get_stream()`][pad-get-stream] is a new utility function that can be
used to look up the `GstStream` from the `STREAM_START` sticky event on a pad.
##### Stream selection
Once the available streams have been published, streams can be selected via
their stream ID using the new `SELECT_STREAMS` event, which can be created
with [`gst_event_new_select_streams()`][event-select-streams]. The new API
supports selecting multiple streams per stream type. In the future, we may also
implement explicit deselection of streams that will never be used, so
elements can skip these and never expose them or output data for them in the
first place.
The application is then notified of the currently selected streams via the
new `STREAMS_SELECTED` message on the pipeline bus, containing both the current
stream collection as well as the selected streams. This might be posted in
response to the application sending a `SELECT_STREAMS` event or when
`decodebin3` or `playbin3` decide on the streams to be initially selected without
application input.
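Putting the two halves together, an application handling a
`STREAM_COLLECTION` message could request streams roughly like this (a sketch;
here we simply ask for all audio and video streams, skipping e.g. subtitles):

```c
#include <gst/gst.h>

/* On receiving a STREAM_COLLECTION message, select all audio and video
 * streams by stream id via a SELECT_STREAMS event sent to the pipeline. */
static void
select_av_streams (GstElement * pipeline, GstMessage * msg)
{
  GstStreamCollection *collection = NULL;
  GList *selected = NULL;
  guint i;

  gst_message_parse_stream_collection (msg, &collection);
  for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
    GstStream *stream = gst_stream_collection_get_stream (collection, i);

    if (gst_stream_get_stream_type (stream) &
        (GST_STREAM_TYPE_AUDIO | GST_STREAM_TYPE_VIDEO))
      selected = g_list_append (selected,
          (gchar *) gst_stream_get_stream_id (stream));
  }
  gst_element_send_event (pipeline, gst_event_new_select_streams (selected));
  g_list_free (selected);
  gst_object_unref (collection);
}
```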
##### Further reading
See further below for some notes on the new elements supporting this new
stream API, namely: `decodebin3`, `playbin3` and `parsebin`.
More information about the new API and the new elements can also be found here:
- GStreamer [stream selection design docs][streams-design]
- Edward Hervey's talk ["The new streams API: Design and usage"][streams-talk] ([slides][streams-slides])
- Edward Hervey's talk ["Decodebin3: Dealing with modern playback use cases"][db3-talk] ([slides][db3-slides])
#### Audio conversion and resampling API
The audio conversion library received a completely new and rewritten audio
resampler, complementing the audio conversion routines moved into the audio
library in the [previous release][release-notes-1.8]. Integrating the resampler
with the other audio conversion library allows us to implement generic
conversion much more efficiently, as format conversion and resampling can now
be done in the same processing loop instead of having to do it in separate
steps (our element implementations do not make use of this yet though).
The new audio resampler library combines some of the best features of other
resamplers such as those in FFmpeg, Speex and SRC. It natively supports S16, S32,
F32 and F64 formats and uses optimized x86 and neon assembly for most of its
processing. It also has support for dynamically changing sample rates by incrementally
updating the filter tables using linear or cubic interpolation. According to
some benchmarks, it's one of the fastest and most accurate resamplers around.
The `audioresample` plugin has been ported to the new audio library functions
to make use of the new resampler.
#### Support for SMPTE timecodes
Support for SMPTE timecodes was added to the GStreamer video library. This
comes with an abstraction for timecodes, [`GstVideoTimeCode`][video-timecode]
and a [`GstMeta`][video-timecode-meta] that can be placed on video buffers for
carrying the timecode information for each frame. Additionally there is
various API for making handling of timecodes easy and to do various
calculations with them.
A new plugin called [`timecode`][timecode-plugin] was added, that contains an
element called `timecodestamper` for putting the timecode meta on video frames
based on counting the frames and another element called `timecodewait` that
drops all video (and audio) until a specific timecode is reached.
Additionally support was added to the Decklink plugin for including the
timecode information when sending video out or capturing it via SDI, the
`qtmux` element is able to write timecode information into the MOV container,
and the `timeoverlay` element can overlay timecodes on top of the video.
More information can be found in the [talk about timecodes][timecode-talk] at
the GStreamer Conference 2016.
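As an illustrative (untested) pipeline sketch combining these elements:

```shell
# Stamp each frame with a timecode, burn it into the video with
# timeoverlay, and store it in a MOV file via qtmux.
gst-launch-1.0 -e videotestsrc ! video/x-raw,framerate=25/1 ! \
    timecodestamper ! timeoverlay time-mode=time-code ! \
    x264enc ! qtmux ! filesink location=timecoded.mov
```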
#### GStreamer OpenMAX IL plugin
The last gst-omx release, 1.2.0, was in July 2014. It was about time to get
a new one out with all the improvements that have happened in the meantime.
From now on, we will try to release gst-omx together with all other modules.
This release features a lot of bugfixes, improved support for the Raspberry Pi
and in general improved support for zerocopy rendering via EGL and a few minor
new features.
At this point, gst-omx is known to work best on the Raspberry Pi platform but
it is also known to work on various other platforms. Unfortunately, we are
not including configurations for any other platforms, so if you happen to use
gst-omx: please send us patches with your configuration and code changes!
### New Elements
#### decodebin3, playbin3, parsebin (experimental)
This release features new decoding and playback elements as experimental
technology previews: `decodebin3` and `playbin3` will soon supersede the
existing `decodebin` and `playbin` elements. We skipped the number 2 because
it was already used back in the 0.10 days, which might cause confusion.
Experimental technology preview means that everything should work fine already,
but we can't guarantee there won't be minor behavioural changes in the
next cycle. In any case, please test and report any problems back.
Before we go into detail about what these new elements improve, let's look at
the new [`parsebin`][parsebin] element. It works similarly to `decodebin` and
`decodebin3`, only that it stops one step short and does not plug any actual
decoder elements. It will only plug parsers, tag readers, demuxers and
depayloaders. Also note that parsebin does not contain any queueing element.
[`decodebin3`'s][decodebin3] internal architecture is slightly different from
the existing `decodebin` element and fixes many long-standing issues with our
decoding engine. For one, data is now fed into the internal `multiqueue` element
*after* it has been parsed and timestamped, which means that the `multiqueue`
element now has more knowledge and is able to calculate the interleaving of the
various streams, thus minimizing memory requirements and doing away with magic
values for buffering limits that were conceived when videos were 240p or 360p.
Anyone who has tried to play back 4k video streams with decodebin2
will have noticed the limitations of that approach. The improved timestamp
tracking also enables `multiqueue` to keep streams of the same type (audio,
video) aligned better, making sure switching between streams of the same type
is very fast.
Another major improvement in `decodebin3` is that it will no longer decode
streams that are not being used. With the old `decodebin` and `playbin`, when
there were 8 audio streams we would always decode all 8 streams even
if 7 were not actually used. This caused a lot of CPU overhead, which was
particularly problematic on embedded devices. When switching between streams
`decodebin3` will try hard to re-use existing decoders. This is useful when
switching between multiple streams of the same type if they are encoded in the
same format.
Re-using decoders is also useful when the available streams change on the fly,
as might happen with radio streams (chained Oggs), digital television
broadcasts, when adaptive streaming streams change bitrate, or when switching
gaplessly to the next title. In order to guarantee a seamless transition, the
old `decodebin2` would plug a second decoder for the new stream while finishing
up the old stream. With `decodebin3`, this is no longer needed - at least not
when the new and old format are the same. This will be particularly useful
on embedded systems where it is often not possible to run multiple decoders
at the same time, or when tearing down and setting up decoders is fairly
expensive.
`decodebin3` also allows for multiple input streams, not just a single one.
This will be useful, in the future, for gapless playback, or for feeding
multiple external subtitle streams to decodebin/playbin.
`playbin3` uses `decodebin3` internally, and will supersede `playbin`.
It was decided that it would be too risky to make the old `playbin` use the
new `decodebin3` in a backwards-compatible way. The new architecture
makes it awkward, if not impossible, to maintain perfect backwards compatibility
in some aspects, hence `playbin3` was born, and developers can migrate to the
new element and new API at their own pace.
All of these new elements make use of the new `GstStream` API for listing and
selecting streams, as described above. `parsebin` provides backwards
compatibility for demuxers and parsers which do not advertise their streams
using the new API yet (which is most of them at this point).
The new elements are not entirely feature-complete yet: `playbin3` does not
support so-called decodersinks yet where the data is not decoded inside
GStreamer but passed directly for decoding to the sink. `decodebin3` is missing
the various `autoplug-*` signals to influence which decoders get autoplugged
in which order. We're looking to add this functionality back, but probably in
a different form, perhaps with a single unified signal built around GstStream.
For more information on these new elements, check out Edward Hervey's talk
[*decodebin3 - dealing with modern playback use cases*][db3-talk]
#### LV2 ported from 0.10 and switched from slv2 to lilv
The LV2 wrapper plugin has been ported to 1.0 and moved from using the
deprecated slv2 library to its replacement lilv. Source and filter elements
are supported. LV2 is short for *LADSPA (Linux Audio Developer's Simple Plugin
API) version 2* and is an open standard for audio plugins which includes
support for audio synthesis (generation), digital signal processing of digital
audio, and MIDI. The new lv2 plugin supersedes the existing LADSPA plugin.
#### WebRTC DSP Plugin for echo-cancellation, gain control and noise suppression
A set of new elements ([webrtcdsp][webrtcdsp], [webrtcechoprobe][webrtcechoprobe])
based on the WebRTC DSP software stack can now be used to improve your audio
voice communication pipelines. They support echo cancellation, gain control,
noise suppression and more. For more details you may read
[Nicolas' blog post][webrtc-blog-post].
#### Fraunhofer FDK AAC encoder and decoder
New encoder and decoder elements wrapping the Fraunhofer FDK AAC library have
been added (`fdkaacenc`, `fdkaacdec`). The Fraunhofer FDK AAC encoder is
generally considered to be a very high-quality AAC encoder, but unfortunately
it comes under a non-free license with the option to obtain a paid, commercial
license.
### Noteworthy element features and additions
#### Major RTP and RTSP improvements
- The RTSP server and source element, as well as the RTP jitterbuffer now support
remote clock synchronization according to [RFC7273][].
- Support for application and profile specific RTCP packets was added.
- The H265/HEVC payloader/depayloader is again in sync with the final RFC.
- Seek handling in the RTSP source and server was improved a lot and now
runs stably, even when scrub-seeking.
- The RTSP server received various major bugfixes, including for regressions that
caused the IP/port address pool to not be considered, or NAT hole punching
to not work anymore. [Bugzilla #766612][]
- Various other bugfixes that improve the stability of RTP and RTSP, including
many new unit / integration tests.
#### Improvements to splitmuxsrc and splitmuxsink
- The splitmux element received reliability and error handling improvements,
removing at least one deadlock case. `splitmuxsrc` now stops cleanly at the end
of the segment when handling seeks with a stop time. We fixed a bug with large
amounts of downstream buffering causing incorrect out-of-sequence playback.
- `splitmuxsrc` now has a `"format-location"` signal to directly specify the list
of files to play from.
- `splitmuxsink` can now optionally send force-keyunit events to upstream
elements to allow splitting files more accurately instead of having to wait
for upstream to provide a new keyframe by itself.
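For example, a recording pipeline along these lines (a sketch, not a tested
command) splits into two-minute MP4 fragments while requesting keyframes
upstream at the split points:

```shell
# splitmuxsink rotates output files by duration; send-keyframe-requests
# asks the upstream encoder for a keyframe so splits land accurately.
gst-launch-1.0 -e videotestsrc is-live=true ! x264enc key-int-max=60 ! \
    h264parse ! splitmuxsink location=clip%05d.mp4 \
    max-size-time=120000000000 send-keyframe-requests=true
```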
#### OpenGL/GLES improvements
##### iOS and macOS (OS/X)
- We now create OpenGL|ES 3.x contexts on iOS by default with a fallback to
OpenGL|ES 2.x if that fails.
- Various zerocopy decoding fixes and enhancements with the
encoding/decoding/capturing elements.
- libdispatch is now used on all Apple platforms instead of GMainLoop, removing
the expensive poll()/pthread_*() overhead.
##### New API
- `GstGLFramebuffer` - for wrapping OpenGL frame buffer objects. It provides
facilities for attaching `GstGLMemory` objects to the necessary attachment
points, binding and unbinding and running a user-supplied function with the
framebuffer bound.
- `GstGLRenderbuffer` (a `GstGLBaseMemory` subclass) - for wrapping OpenGL
render buffer objects that are typically used for depth/stencil buffers or
for color buffers where we don't care about the output.
- `GstGLMemoryEGL` (a `GstGLMemory` subclass) - for combining `EGLImage`s with a GL
texture that replaces `GstEGLImageMemory` bringing the improvements made to the
other `GstGLMemory` implementations. This fixes a performance regression in
zerocopy decoding on the Raspberry Pi when used with an updated gst-omx.
##### Miscellaneous improvements
- `gltestsrc` is now usable on devices/platforms with OpenGL 3.x and OpenGL|ES,
and has completed or gained support for more of the patterns provided by
`videotestsrc`.
- `gldeinterlace` is now available on devices/platforms with OpenGL|ES
- The dispmanx backend (used on the Raspberry Pi) now supports the
`gst_video_overlay_set_window_handle()` and
`gst_video_overlay_set_render_rectangle()` functions.
- The `gltransformation` element now correctly transforms mouse coordinates (in
window space) to stream coordinates for both perspective and orthographic
projections.
- The `gltransformation` element now detects if the
`GstVideoAffineTransformationMeta` is supported downstream and will efficiently
pass its transformation downstream. This is a performance improvement as it
results in less processing being required.
- The wayland implementation now uses the multi-threaded safe event-loop API
allowing correct usage in applications that call wayland functions from
multiple threads.
- Support for native 90 degree rotations and horizontal/vertical flips
in `glimagesink`.
#### Vulkan
- The Vulkan elements now work under Wayland and have received numerous fixes
and improvements.
#### QML elements
- `qmlglsink` video sink now works on more platforms, notably, Windows, Wayland,
and Qt's eglfs (for embedded devices with an OpenGL implementation) including
the Raspberry Pi.
- New element `qmlglsrc` to record a QML scene into a GStreamer pipeline.
#### KMS video sink
- New element `kmssink` to render video using Direct Rendering Manager
(DRM) and Kernel Mode Setting (KMS) subsystems in the Linux
kernel. It is oriented to be used mostly in embedded systems.
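A minimal usage sketch:

```shell
# Render a test pattern directly to the display via DRM/KMS,
# with no windowing system involved.
gst-launch-1.0 videotestsrc ! kmssink
```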
#### Wayland video sink
- `waylandsink` now supports the wl_viewporter extension, allowing
video scaling and cropping to be delegated to the Wayland
compositor. This extension has also been made optional, so that it can
also work on current compositors that don't support it. It also now has
support for the video meta, allowing zero-copy operations in more
cases.
#### DVB improvements
- `dvbsrc` now has better delivery-system autodetection and several
new parameter sanity-checks to improve its resilience to configuration
omissions and errors. Superfluous polling continues to be trimmed down,
and the debugging output has been made more consistent and precise.
Additionally, the channel-configuration parser now supports the new dvbv5
format, enabling `dvbbasebin` to automatically play back content transmitted
on delivery systems that previously required manual description, such as ISDB-T.
#### DASH, HLS and adaptivedemux
- HLS now has support for Alternate Rendition audio and video tracks. Full
support for Alternate Rendition subtitle tracks will be in an upcoming release.
- DASH received support for keyframe-only trick modes if the
`GST_SEEK_FLAG_TRICKMODE_KEY_UNITS` flag is given when seeking. It will
only download keyframes then, which should help with high-speed playback.
Changes to skip over multiple frames based on bandwidth and other metrics
will be added in the near future.
- Lots of reliability fixes around seek handling and bitrate switching.
#### Bluetooth improvements
- The `avdtpsrc` element now supports metadata such as track title, artist
name, and more, which devices can send via AVRCP. These are published as
tags on the pipeline.
- The `a2dpsink` element received some love and was cleaned up so that it
actually works after the initial GStreamer 1.0 port.
#### GStreamer VAAPI
- All the decoders have been split, one plugin feature per codec. So
far, the available ones, depending on the driver, are:
`vaapimpeg2dec`, `vaapih264dec`, `vaapih265dec`, `vaapivc1dec`, `vaapivp8dec`,
`vaapivp9dec` and `vaapijpegdec` (which already was split).
- Improvements when mapping VA surfaces into memory. It now differentiates
between negotiation caps and allocation caps, since the allocated
memory for surfaces may be bigger than the one negotiated.
- Since several VA drivers are unmaintained, we decided to keep a whitelist
of the VA drivers we actually test, which is mostly i915 and, to a lesser
degree, gallium from the Mesa project. Exporting the environment variable
`GST_VAAPI_ALL_DRIVERS` disables the whitelist.
- Plugin features are registered at run-time, according to their support by
the loaded VA driver. So only the decoders and encoder supported by the
system are registered. Since the driver can change, some dependencies are
tracked to invalidate the GStreamer registry and reload the plugin.
- Importing `dmabuf`s from upstream has been improved, gaining performance.
- `vaapipostproc` now can negotiate buffer transformations via caps.
- Decoders can now do I-frame-only reverse playback. Only I-frames are decoded
because the surface pool is smaller than what the GOP requires to show all the
frames.
- The upload of frames onto native GL textures has been optimized too, by
keeping a cache of the internal structures for the textures offered by the
sink.
#### V4L2 changes
- More pixel formats are now supported
- Decoder is now using `G_SELECTION` instead of the deprecated `G_CROP`
- Decoder now uses the `STOP` command to handle EOS
- Transform element can now scale the pixel aspect ratio
- Colorimetry support has been improved even more
- We now support the `OUTPUT_OVERLAY` type of video node in v4l2sink
#### Miscellaneous
- `multiqueue`'s input pads gained a new `"group-id"` property which
can be used to group input streams. Typically one will assign
different id numbers to audio, video and subtitle streams for
example. This way `multiqueue` can make sure streams of the same
type advance in lockstep if some of the streams are unlinked and the
`"sync-by-running-time"` property is set. This is used in
decodebin3/playbin3 to implement almost-instantaneous stream
switching. The grouping is required because different downstream
paths (audio, video, etc.) may have different buffering/latency
etc., so they might be consuming data from multiqueue with a slightly
different phase, and if we track different stream groups separately
we minimize stream switching delays and buffering inside the multiqueue.
- `alsasrc` now supports ALSA drivers without a position for each
channel; this is common in some professional or industrial hardware.