GSTREAMER 1.14 RELEASE NOTES

GStreamer 1.14.0 has not been released yet. It is scheduled for release
in early March 2018.

There are unstable pre-releases available for testing and development
purposes. The latest pre-release is version 1.13.91 (rc2) and was
released on 12 March 2018.

See https://gstreamer.freedesktop.org/releases/1.14/ for the latest
version of this document.

_Last updated: Monday 12 March 2018, 18:00 UTC (log)_


Introduction

The GStreamer team is proud to announce a new major feature release in
the stable 1.x API series of your favourite cross-platform multimedia
framework!

As always, this release is again packed with new features, bug fixes and
other improvements.


Highlights

-   WebRTC support: real-time audio/video streaming to and from web
    browsers

-   Experimental support for the next-gen royalty-free AV1 video codec

-   Video4Linux: encoding support, stable element names and faster
    device probing

-   Support for the Secure Reliable Transport (SRT) video streaming
    protocol

-   RTP Forward Error Correction (FEC) support (ULPFEC)

-   RTSP 2.0 support in rtspsrc and gst-rtsp-server

-   ONVIF audio backchannel support in gst-rtsp-server and rtspsrc

-   playbin3 gapless playback and pre-buffering support

-   tee, our stream splitter/duplication element, now does allocation
    query aggregation which is important for efficient data handling and
    zero-copy

-   QuickTime muxer has a new prefill recording mode that allows file
    import in Adobe Premiere and FinalCut Pro while the file is still
    being written.

-   rtpjitterbuffer fast-start mode and timestamp offset adjustment
    smoothing

-   souphttpsrc connection sharing, which allows for connection reuse,
    cookie sharing, etc.

-   nvdec: new plugin for hardware-accelerated video decoding using the
    NVIDIA NVDEC API

-   Adaptive DASH trick play support

-   ipcpipeline: new plugin that allows splitting a pipeline across
    multiple processes

-   Major gobject-introspection annotation improvements for large parts
    of the library API


Major new features and changes

WebRTC support

There is now basic support for WebRTC in GStreamer in the form of a new
webrtcbin element and a webrtc support library. This allows you to build
applications that set up connections with and stream to and from other
WebRTC peers, whilst leveraging all of the usual GStreamer features such
as hardware-accelerated encoding and decoding, OpenGL integration,
zero-copy and embedded platform support. And it's easy to build and
integrate into your application too!

WebRTC enables real-time communication of audio, video and data with web
browsers and native apps, and it is supported or about to be supported by
recent versions of all major browsers and operating systems.

GStreamer's new WebRTC implementation uses libnice for Interactive
Connectivity Establishment (ICE) to figure out the best way to
communicate with other peers, punch holes into firewalls, and traverse
NATs.

The implementation is not complete, but all the basics are there, and
the code sticks fairly close to the PeerConnection API. Where
functionality is missing it should be fairly obvious where it needs to
go.

For more details, background and example code, check out Nirbheek's blog
post _GStreamer has grown a WebRTC implementation_, as well as Matthew's
_GStreamer WebRTC_ talk from last year's GStreamer Conference in Prague.
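
As a rough, hedged illustration of the new API (not taken from the
release itself, and omitting the ICE candidate and signalling wiring a
real application must provide), asking webrtcbin for an SDP offer via
the new GstPromise API looks roughly like this:

    #include <gst/gst.h>
    #include <gst/webrtc/webrtc.h>

    static void
    on_offer_created (GstPromise * promise, gpointer user_data)
    {
      GstElement *webrtc = user_data;
      GstWebRTCSessionDescription *offer = NULL;
      const GstStructure *reply = gst_promise_get_reply (promise);

      gst_structure_get (reply, "offer",
          GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &offer, NULL);
      gst_promise_unref (promise);

      /* Apply the offer locally; the SDP then has to be sent to the
       * remote peer over the application's own signalling channel. */
      g_signal_emit_by_name (webrtc, "set-local-description", offer, NULL);
      gst_webrtc_session_description_free (offer);
    }

    static void
    on_negotiation_needed (GstElement * webrtc, gpointer user_data)
    {
      GstPromise *promise =
          gst_promise_new_with_change_func (on_offer_created, webrtc, NULL);
      g_signal_emit_by_name (webrtc, "create-offer", NULL, promise);
    }

The webrtcbin element is created with gst_element_factory_make() like
any other element, and on_negotiation_needed() is connected to its
"on-negotiation-needed" signal.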

New Elements

-   webrtcbin handles the transport aspects of webrtc connections (see
    WebRTC section above for more details)

-   New srtsink and srtsrc elements for the Secure Reliable Transport
    (SRT) video streaming protocol, which aims to be easy to use whilst
    striking a new balance between reliability and latency for low
    latency video streaming use cases. More details about SRT and the
    GStreamer implementation can be found in Olivier's blog post _SRT in
    GStreamer_.

-   av1enc and av1dec elements providing experimental support for the
    next-generation royalty-free AV1 video codec, alongside Matroska
    support for it.

-   hlssink2 is a rewrite of the existing hlssink element, but unlike
    its predecessor hlssink2 takes elementary streams as input and
    handles the muxing to MPEG-TS internally. It also leverages
    splitmuxsink internally to do the splitting. This allows more
    control over the chunk splitting and sizing process and relies less
    on the co-operation of an upstream muxer. Unlike the old hlssink,
    it also works with pre-encoded streams and does not require
    close interaction with an upstream encoder element.

-   audiolatency is a new element for measuring audio latency end-to-end
    and is useful to measure roundtrip latency including both the
    GStreamer-internal latency as well as latency added by external
    components or circuits.

-   fakevideosink is basically a null sink for video data and very
    similar to fakesink, only that it will answer allocation queries and
    will advertise support for various video-specific things such as
    GstVideoMeta, GstVideoCropMeta and GstVideoOverlayCompositionMeta,
    like a normal video sink would. This is useful for throughput
    testing and testing the zero-copy path when creating a new pipeline.

-   ipcpipeline: new plugin that allows the splitting of a pipeline into
    multiple processes. Usually a GStreamer pipeline runs in a single
    process and parallelism is achieved by distributing workloads using
    multiple threads. However, this also means that all elements in the
    pipeline have access to the other elements' memory space, including
    that of any libraries used. For security reasons one might therefore
    want to put sensitive parts of a pipeline such as DRM and decryption
    handling into a separate process to isolate it from the rest of the
    pipeline. This can now be achieved with the new ipcpipeline plugin.
    Check out George's blog post _ipcpipeline: Splitting a GStreamer
    pipeline into multiple processes_ or his lightning talk from last
    year's GStreamer Conference in Prague for all the gory details.

 
-   proxysink and proxysrc are new elements to pass data from one
    pipeline to another within the same process, very similar to the
    existing inter elements, but not limited to raw audio and video
    data. These new proxy elements are very special in how they work
    under the hood, which makes them extremely powerful, but also
    dangerous if not used with care. The reason for this is that it's
    not just data that's passed from sink to src, but these elements
    basically establish a two-way wormhole that passes through queries
    and events in both directions, which means caps negotiation and
    allocation query driven zero-copy can work through this wormhole.
    There are scheduling considerations as well: proxysink forwards
    everything into the proxysrc pipeline directly from the proxysink
    streaming thread. There is a queue element inside proxysrc to
    decouple the source thread from the sink thread, but that queue is
    not unlimited, so it is entirely possible that the proxysink
    pipeline thread gets stuck in the proxysrc pipeline, e.g. when that
    pipeline is paused or stops consuming data for some other reason.
    This means that one should always shut down the proxysrc pipeline
    before shutting down the proxysink pipeline, for example, or at
    least take care when shutting down pipelines. Usually this is not a
    problem though, especially not in live pipelines. For more
    information see Nirbheek's blog post _Decoupling GStreamer
    Pipelines_, and also check out the new ipcpipeline plugin for
    sending data from one process to another process (see above). A
    short wiring sketch for the proxy elements follows at the end of
    this list.

-   lcms is a new LCMS-based ICC color profile correction element

-   openmptdec is a new OpenMPT-based decoder for module music formats,
    such as S3M, MOD, XM, IT. It is built on top of a new
    GstNonstreamAudioDecoder base class which aims to unify handling of
    files which do not follow a streaming model. The wildmidi plugin
    has also been revived and is also implemented on top of this new
    base class.

-   The curl plugin has gained a new curlhttpsrc element, which is
    useful for testing HTTP protocol version 2.0 amongst other things.
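
As a minimal wiring sketch for the new proxy elements described above
(the pipeline descriptions are illustrative only), the proxysrc
instance is pointed at the proxysink instance via its "proxysink"
property, and each pipeline then keeps its own bus and state handling:

    #include <gst/gst.h>

    static void
    start_proxy_pipelines (void)
    {
      GError *error = NULL;

      /* Producer pipeline ends in proxysink, consumer pipeline starts
       * with proxysrc; both run in the same process. */
      GstElement *pipe_a = gst_parse_launch (
          "audiotestsrc is-live=true ! proxysink name=psink", &error);
      GstElement *pipe_b = gst_parse_launch (
          "proxysrc name=psrc ! autoaudiosink", &error);

      GstElement *psink = gst_bin_get_by_name (GST_BIN (pipe_a), "psink");
      GstElement *psrc = gst_bin_get_by_name (GST_BIN (pipe_b), "psrc");

      /* Connect the two halves: proxysrc pulls its data, events and
       * queries from the given proxysink. */
      g_object_set (psrc, "proxysink", psink, NULL);
      gst_object_unref (psink);
      gst_object_unref (psrc);

      /* Each pipeline is started (and later shut down) separately; see
       * the note above about the recommended shutdown order. */
      gst_element_set_state (pipe_b, GST_STATE_PLAYING);
      gst_element_set_state (pipe_a, GST_STATE_PLAYING);
    }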

Noteworthy new API

-   GstPromise provides future/promise-like functionality. This is used
    in the GStreamer WebRTC implementation.

 
-   GstReferenceTimestampMeta is a new meta that allows you to attach
    additional reference timestamps to a buffer. These timestamps don't
    have to relate to the pipeline clock in any way. Examples of this
    could be an NTP timestamp when the media was captured, a frame
    counter on the capture side or the (local) UNIX timestamp when the
    media was captured. The decklink elements make use of this.

 
-   GstVideoRegionOfInterestMeta: it's now possible to attach generic
    free-form element-specific parameters to a region of interest meta,
    for example to tell a downstream encoder to use certain codec
    parameters for a certain region.

 
-   gst_bus_get_pollfd can be used to obtain a file descriptor for the
    bus that can be poll()-ed on for new messages. This is useful for
    integration with non-GLib event loops (see the sketch at the end of
    this section).

 
-   gst_get_main_executable_path() can be used by wrapper plugins that
    need to find things in the directory where the application
    executable is located. In the same vein,
    GST_PLUGIN_DEPENDENCY_FLAG_PATHS_ARE_RELATIVE_TO_EXE can be used to
    signal that plugin dependency paths are relative to the main
    executable.

-   pad templates can be told about the GType of the pad subclass via
    the newly-added GstPadTemplate API or the
    gst_element_class_add_static_pad_template_with_gtype() convenience
    function. gst-inspect-1.0 will use this information to print pad
    properties.

 
-   new convenience functions to iterate over element pads without using
    the GstIterator API: gst_element_foreach_pad(),
    gst_element_foreach_src_pad(), and gst_element_foreach_sink_pad().

 
-   GstBaseSrc and appsrc have gained support for buffer lists:
    GstBaseSrc subclasses can use gst_base_src_submit_buffer_list(), and
    applications can use gst_app_src_push_buffer_list() to push a buffer
    list into appsrc.

 
-   The GstHarness unit test harness has a couple of new convenience
    functions to retrieve all pending data in the harness in form of a
    single chunk of memory.

 
-   GstAudioStreamAlign is a new helper object for audio elements that
    handles discontinuity detection and sample alignment. It will align
    samples after the previous buffer's samples, but keep track of the
    divergence between buffer timestamps and sample position (jitter).
    If it exceeds a configurable threshold the alignment will be reset.
    This simply factors out code that was duplicated in a number of
    elements into a common helper API.

 
-   The GstVideoEncoder base class implements Quality of Service (QoS)
    now. This is disabled by default and must be opted in by setting the
    "qos" property, which will make the base class gather statistics
    about the real-time performance of the pipeline from downstream
    elements (usually sinks that sync the pipeline clock). Subclasses
    can then make use of this by checking whether input frames are late
    already using gst_video_encoder_get_max_encode_time(). If late, they
    can just drop them and skip encoding in the hope that the pipeline
    will catch up.

 
-   The GstVideoOverlay interface gained a few helper functions for
    installing and handling a "render-rectangle" property on elements
    that implement this interface, so that this functionality can also
    be used from the command line for testing and debugging purposes.
    The property wasn't added to the interface itself as that would
    require all implementors to provide it which would not be
    backwards-compatible.

 
-   A new base class, GstNonstreamAudioDecoder, for non-stream audio
    decoders was added to gst-plugins-bad. This base-class is meant to
    be used for audio decoders that require the whole stream to be
    loaded first before decoding can start. Examples of this are module
    formats (MOD/S3M/XM/IT/etc), C64 SID tunes, video console music
    files (GYM/VGM/etc), MIDI files and others. The new openmptdec
    element is based on this.

 
-   Full list of API new in 1.14:
-   GStreamer core API new in 1.14
-   GStreamer base library API new in 1.14
-   gst-plugins-base libraries API new in 1.14
-   gst-plugins-bad: no list, mostly GstWebRTC library and new
    non-stream audio decoder base class.
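
As a small sketch of one of the additions above (error handling
omitted), gst_bus_get_pollfd() lets an application wait for bus
messages from a plain poll() loop instead of a GMainLoop:

    #include <gst/gst.h>
    #include <poll.h>

    static void
    wait_for_bus_message (GstElement * pipeline)
    {
      GstBus *bus = gst_element_get_bus (pipeline);
      GPollFD gfd;
      struct pollfd pfd;

      /* New in 1.14: expose the bus as a file descriptor that becomes
       * readable whenever a message is pending. */
      gst_bus_get_pollfd (bus, &gfd);
      pfd.fd = gfd.fd;
      pfd.events = POLLIN;

      if (poll (&pfd, 1, -1) > 0) {
        GstMessage *msg = gst_bus_pop (bus);
        if (msg != NULL) {
          /* handle the message here ... */
          gst_message_unref (msg);
        }
      }
      gst_object_unref (bus);
    }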

New RTP features and improvements

-   rtpulpfecenc and rtpulpfecdec are new elements that implement
    Generic Forward Error Correction (FEC) using Uneven Level Protection
    (ULP) as described in RFC 5109. This can be used to protect against
    certain types of (non-bursty) packet loss, and important packets
    such as those containing codec configuration data or key frames can
    be protected with higher redundancy. Equally, packets that are not
    particularly important can be given low priority or not be protected
    at all. If packets are lost, the receiver can then hopefully restore
    the lost packet(s) from the surrounding packets which were received.
    This is an alternative to, or rather complementary to, dealing with
    packet loss using _retransmission (rtx)_. GStreamer has had
    retransmission support for a long time, but Forward Error Correction
    allows for different trade-offs: The advantage of Forward Error
    Correction is that it doesn't add latency, whereas retransmission
    requires at least one more roundtrip to request and hopefully
    receive lost packets; Forward Error Correction increases the
    required bandwidth however, even in situations where there is no
    packet loss at all, so one will typically want to fine-tune the
    overhead and mechanisms used based on the characteristics of the
    link at the time.

-   New _Redundant Audio Data (RED)_ encoders and decoders for RTP as
    per RFC 2198 are also provided (rtpredenc and rtpreddec), mostly for
    Chrome WebRTC compatibility, as Chrome will wrap ULPFEC-protected
    streams in RED packets, and such streams need to be wrapped and
    unwrapped in order to use ULPFEC with Chrome.

 
-   a few new buffer flags for FEC support:
    GST_BUFFER_FLAG_NON_DROPPABLE can be used to mark important buffers,
    e.g. to flag RTP packets carrying keyframes or codec setup data for
    RTP Forward Error Correction purposes, or to prevent still video
    frames from being dropped by elements due to QoS. There already is a
    GST_BUFFER_FLAG_DROPPABLE. GST_RTP_BUFFER_FLAG_REDUNDANT is used to
    signal internally that a packet represents a redundant RTP packet
    and used in rtpstorage to hold back the packet and use it only for
    recovery from packet loss. Further work is still needed in
    payloaders to make use of these.

-   rtpbin now has an option for increasing timestamp offsets gradually:
    Instant large changes to the internal ts_offset may cause timestamps
    to move backwards and also cause visible glitches in media playback.
    The new "max-ts-offset-adjustment" and "max-ts-offset" properties
    let the application control the rate to apply changes to ts_offset.
    There have also been some EOS/BYE handling improvements in rtpbin.

-   rtpjitterbuffer has a new fast start mode: in many scenarios the
    jitter buffer will have to wait for the full configured latency
    before it can start outputting packets. The reason for that is that
    it often can't know what the sequence number of the first expected
    RTP packet is, so it can't know whether a packet earlier than the
    earliest packet received will still arrive in future. This behaviour
    can now be bypassed by setting the "faststart-min-packets" property
    to the number of consecutive packets needed to start, and the jitter
    buffer will start outputting packets as soon as it has N consecutive
    packets queued internally. This is particularly useful to get a
    first video frame decoded and rendered as quickly as possible (see
    the sketch at the end of this list).

-   rtpL8pay and rtpL8depay provide RTP payloading and depayloading for
    8-bit raw audio
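
A hedged sketch of how an application might combine the rtpbin
timestamp offset properties and the jitter buffer fast-start mode
described above (property values are arbitrary example numbers; the
"new-jitterbuffer" signal is pre-existing rtpbin API):

    #include <gst/gst.h>

    static void
    on_new_jitterbuffer (GstElement * rtpbin, GstElement * jitterbuffer,
        guint session, guint ssrc, gpointer user_data)
    {
      /* Start pushing packets out as soon as 4 consecutive packets have
       * been received instead of waiting for the full configured latency. */
      g_object_set (jitterbuffer, "faststart-min-packets", 4, NULL);
    }

    static void
    configure_rtpbin (GstElement * rtpbin)
    {
      /* Spread out changes to the internal ts_offset instead of applying
       * them instantly (values in nanoseconds). */
      g_object_set (rtpbin,
          "max-ts-offset-adjustment", G_GUINT64_CONSTANT (10000),
          "max-ts-offset", G_GINT64_CONSTANT (1000000000), NULL);

      g_signal_connect (rtpbin, "new-jitterbuffer",
          G_CALLBACK (on_new_jitterbuffer), NULL);
    }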

New element features

-   playbin3 has gained support for gapless playback via the
    "about-to-finish" signal where users can set the uri for the next
    item to play. For non-live streams this will be emitted as soon as
    the first uri has finished downloading, so with sufficiently large
    buffers it is now possible to pre-buffer the next item well ahead of
    time (unlike playbin where there would not be a lot of time between
    "about-to-finish" emission and the end of the stream). If the stream
    format of the next stream is the same as that of the previous
    stream, the data will be concatenated via the concat element.
    Whether this will result in true gaplessness depends on the
    container format and codecs used; there might still be codec-related
    gaps between streams with some codecs. A sketch of the mechanism
    follows at the end of this section.

-   tee now does allocation query aggregation, which is important for
    zero-copy and efficient data handling, especially for video. Those
    who want to drop allocation queries on purpose can use the identity
    element's new "drop-allocation" property for that instead.

-   audioconvert now has a "mix-matrix" property, which obsoletes the
    audiomixmatrix element. There's also mix matrix support in the audio
    conversion and channel mixing API.

-   x264enc: new "insert-vui" property to disable VUI (Video Usability
    Information) parameter insertion into the stream, which allows
    creation of streams that are compatible with certain legacy hardware
    decoders that will refuse to decode in certain combinations of
    resolution and VUI parameters; the max. allowed number of B-frames
    was also increased from 4 to 16.

-   dvdlpcmdec has gained support for Blu-Ray audio LPCM.

-   appsrc has gained support for buffer lists (see above) and also seen
    some other performance improvements.

-   flvmux has been ported to the GstAggregator base class which means
    it can work in defined-latency mode with live input sources and
    continue streaming if one of the inputs stops producing data.

-   jpegenc has gained a "snapshot" property just like pngenc to make it
    easier to just output a single encoded frame.

-   jpegdec will now handle interlaced MJPEG streams properly and also
    handle frames without an End of Image marker better.

-   v4l2: There are now video encoders for VP8, VP9, MPEG4, and H263.
    The v4l2 video decoder handles dynamic resolution changes, and the
    video4linux device provider now does much faster device probing. The
    plugin also no longer uses the libv4l2 library by default, as it has
    prevented a lot of interesting use cases like CREATE_BUFS, DMABuf
    and TRY_FMT usage. As the libv4l2 library is totally inactive and not
    really maintained, we decided to disable it. This might affect a
    small number of cheap/old webcams with custom vendor formats for
    which we do not provide conversion in GStreamer. It is possible to
    re-enable support for libv4l2 at run-time however, by setting the
    environment variable GST_V4L2_USE_LIBV4L2=1.

-   rtspsrc now has support for RTSP protocol version 2.0 as well as
    ONVIF audio backchannels (see below for more details). It also
    sports a new ["accept-certificate"] signal for "manually" checking a
    TLS certificate for validity. It now also prints RTSP/SDP messages
    to the gstreamer debug log instead of stdout.

-   shout2send now uses non-blocking I/O and has a configurable network
    operations timeout.

-   splitmuxsink has gained a "split-now" action signal and new
    "alignment-threshold" and "use-robust-muxing" properties. If robust
    muxing is enabled, it will check and set the muxer's reserved space
    properties if present. This is primarily for use with mp4mux's
    robust muxing mode.

-   qtmux has a new _prefill recording mode_ which sets up a moov header
    with the correct sample positions beforehand, which then allows
    software like Adobe Premiere and FinalCut Pro to import the files
    while they are still being written to. This only works with constant
    framerate I-frame only streams, and for now only support for ProRes
    video and raw audio is implemented but adding new codecs is just a
    matter of defining appropriate maximum frame sizes.

-   qtmux also supports writing of svmi atoms with stereoscopic video
    information now. Trak timescales can be configured on a per-stream
    basis using the "trak-timescale" property on the sink pads. Various
    new formats can be muxed: MPEG layer 1 and 2, AC3 and Opus, as well
    as PNG and VP9.

-   souphttpsrc now does connection sharing by default, shares its
    SoupSession with other elements in the same pipeline via a
    GstContext if possible (session-wide settings are all the defaults).
    This allows for connection reuse, cookie sharing, etc. Applications
    can also force a context to use. In other news, HTTP headers
    received from the server are posted as element messages on the bus
    now for easier diagnostics, and it's also possible now to use other
    types of proxy servers such as SOCKS4 or SOCKS5 proxies, support for
    which is implemented directly in gio. Before only HTTP proxies were
    allowed.

-   qtmux, mp4mux and matroskamux will now refuse caps changes of input
    streams at runtime. This isn't really supported with these
    containers (or would have to be implemented differently with a
    considerable effort) and doesn't produce valid and spec-compliant
    files that will play everywhere. So if you can't guarantee that the
    input caps won't change, use a container format that does support on
    the fly caps changes for a stream such as MPEG-TS or use
    splitmuxsink which can start a new file when the caps change. What
    would happen before is that e.g. rtph264depay or rtph265depay would
    simply send new SPS/PPS inband even for AVC format, which would then
    get muxed into the container as if nothing changed. Some decoders
    will handle this just fine, but that's often more by luck than by
    design. In any case, it's not right, so we disallow it now.

-   matroskamux has gained Table of Contents (TOC) support (chapters
    etc.) and matroskademux TOC support has been improved. matroskademux
    has also seen seeking improvements when searching for the right
    cluster and position.

-   videocrop now uses GstVideoCropMeta if downstream supports it, which
    means cropping can be handled more efficiently without any copying.

-   compositor now has support for _crossfade blending_, which can be
    used via the new "crossfade-ratio" property on the sink pads.

-   The avwait element has a new "end-timecode" property and posts
    "avwait-status" element messages now whenever avwait starts or stops
    passing through data (e.g. because target-timecode and end-timecode
    respectively have been reached).

 
-   h264parse and h265parse will try harder to make upstream output the
    same caps as downstream requires or prefers, thus avoiding
    unnecessary conversion. The parsers also expose chroma format and
    bit depth in the caps now.

-   The dtls elements no longer rely on or require the application to
    run a GLib main loop that iterates the default main context
    (GStreamer plugins should never rely on the application running a
    GLib main loop).

-   openh264enc now allows changing the encoding bitrate dynamically at
    runtime

-   nvdec is a new plugin for hardware-accelerated video decoding using
    the NVIDIA NVDEC API (which replaces the old VDPAU API which is no
    longer supported by NVIDIA)

-   The NVIDIA NVENC hardware-accelerated video encoders now support
    dynamic bitrate and preset reconfiguration and support the I420
    4:2:0 video format. It's also possible to configure the gop size via
    the new "gop-size" property.

-   The MPEG-TS muxer and demuxer (tsmux, tsdemux) now have support for
    JPEG2000

-   openjpegdec and jpeg2000parse support 2-component images now (gray
    with alpha), and jpeg2000parse has gained limited support for
    conversion between JPEG2000 stream-formats (JP2, J2C, JPC) and also
    extracts more details such as colorimetry, interlace-mode,
    field-order, multiview-mode and chroma siting.

-   The decklink plugin for Blackmagic capture and playback cards has
    seen numerous improvements:

-   decklinkaudiosrc and decklinkvideosrc now put hardware reference
    timestamps on buffers in the form of GstReferenceTimestampMetas.
    This can be useful to know on multi-channel cards which frames from
    different channels were captured at the same time.

-   decklinkvideosink has gained support for Decklink hardware keying
    with two new properties ("keyer-mode" and "keyer-level") to control
    the built-in hardware keyer of Decklink cards.

-   decklinkaudiosink has been re-implemented around GstBaseSink instead
    of the GstAudioBaseSink base class, since the Decklink APIs don't
    fit very well with the GstAudioBaseSink APIs, which used to cause
    various problems due to inaccuracies in the clock calculations.
    Problems were audio drop-outs and A/V sync going wrong after
    pausing/seeking.

-   support for more than 16 devices, without any artificial limit

-   work continued on the msdk plugin for Intel's Media SDK which
    enables hardware-accelerated video encoding and decoding on Intel
    graphics hardware on Windows or Linux. More tuning options were
    added, and more pixel formats and video codecs are supported now.
    The encoder now also handles force-key-unit events and can insert
    frame-packing SEIs for side-by-side and top-bottom stereoscopic 3D
    video.

-   dashdemux can now do adaptive trick play of certain types of DASH
    streams, meaning it can do fast-forward/fast-rewind of normal (non-I
    frame only) streams even at high speeds without saturating network
    bandwidth or exceeding decoder capabilities. It will keep statistics
    and skip keyframes or fragments as needed. See Sebastian's blog post
    _DASH trick-mode playback in GStreamer_ for more details. It also
    supports webvtt subtitle streams now and has seen improvements when
    seeking in live streams.

 
-   kmssink has seen lots of fixes and improvements in this cycle,
    including:

-   Raspberry Pi (vc4) and Xilinx DRM driver support

-   new "render-rectangle" property that can be used from the command
    line as well as "display-width" and "display-height", and
    "can-scale" properties

-   GstVideoCropMeta support
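
To illustrate the playbin3 gapless mechanism mentioned at the top of
this section (the URI and playlist handling here are only a sketch,
not part of the release):

    #include <gst/gst.h>

    /* Placeholder for whatever playlist logic the application uses. */
    static const gchar *next_uri = "file:///path/to/next-song.ogg";

    /* Called, possibly well before the current item ends, when playbin3
     * wants to know what to play next; setting "uri" here enables the
     * gapless transition and pre-buffering of the next item. */
    static void
    on_about_to_finish (GstElement * playbin, gpointer user_data)
    {
      if (next_uri != NULL) {
        g_object_set (playbin, "uri", next_uri, NULL);
        next_uri = NULL;
      }
    }

    static GstElement *
    make_player (const gchar * first_uri)
    {
      GstElement *playbin = gst_element_factory_make ("playbin3", NULL);

      g_object_set (playbin, "uri", first_uri, NULL);
      g_signal_connect (playbin, "about-to-finish",
          G_CALLBACK (on_about_to_finish), NULL);
      return playbin;
    }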

Plugin and library moves

MPEG-1 audio (mp1, mp2, mp3) decoders and encoders moved to -good

Following the expiration of the last remaining mp3 patents in most
jurisdictions, and the termination of the mp3 licensing program, as well
as the decision by certain distros to officially start shipping full mp3
decoding and encoding support, these plugins should now no longer be
problematic for most distributors and have therefore been moved from
-ugly and -bad to gst-plugins-good. Distributors can still disable these
plugins if desired.

In particular these are:

-   mpg123audiodec: an mp1/mp2/mp3 audio decoder using libmpg123
-   lamemp3enc: an mp3 encoder using LAME
-   twolamemp2enc: an mp2 encoder using TwoLAME

GstAggregator moved from -bad to core

GstAggregator has been moved from gst-plugins-bad to the base library in
GStreamer and is now stable API.

GstAggregator is a new base class for mixers and muxers that have to
handle multiple input pads and aggregate streams into one output stream.
It improves upon the existing GstCollectPads API in that it is a proper
base class which was also designed with live streaming in mind.
GstAggregator subclasses will operate in a mode with defined latency if
any of the inputs are live streams. This ensures that the pipeline won't
stall if any of the inputs stop producing data, and that the configured
maximum latency is never exceeded.

GstAudioAggregator, audiomixer and audiointerleave moved from -bad to -base

GstAudioAggregator is a new base class for raw audio mixers and muxers
and is based on GstAggregator (see above). It provides defined-latency
mixing of raw audio inputs and ensures that the pipeline won't stall
even if one of the input streams stops producing data.

As part of the move to stabilise the API there were some last-minute API
changes and clean-ups, but those should mostly affect internal elements.

It is used by the audiomixer element, which is a replacement for
'adder', which did not handle live inputs very well and did not align
input streams according to running time. audiomixer should behave much
better in that respect and generally behave as one would expect in
most scenarios.

Similarly, audiointerleave replaces the 'interleave' element which did
not handle live inputs or non-aligned inputs very robustly.

GstAudioAggregator and its subclasses have gained support for input
format conversion, which does not include sample rate conversion though
as that would add additional latency. Furthermore, GAP events are now
handled correctly.

We hope to move the video equivalents (GstVideoAggregator and
compositor) to -base in the next cycle, i.e. for 1.16.

GStreamer OpenGL integration library and plugin moved from -bad to -base

The GStreamer OpenGL integration library and opengl plugin have moved
from gst-plugins-bad to -base and are now part of the stable API canon.
Not all OpenGL elements have been moved; a few had to be left behind in
gst-plugins-bad in the new openglmixers plugin, because they depend on
the GstVideoAggregator base class which we were not able to move in this
cycle. We hope to reunite these elements with the rest of their family
for 1.16 though.

This is quite a milestone, thanks to everyone who worked to make this
happen!

Qt QML and GTK plugins moved from -bad to -good

The Qt QML-based qmlgl plugin has moved to -good and provides a
qmlglsink video sink element as well as a qmlglsrc element. qmlglsink
renders video into a QQuickItem, and qmlglsrc captures a window from a
QML view and feeds it as video into a pipeline for further processing.
Both elements leverage GStreamer's OpenGL integration. In addition to
the move to -good the following features were added:

-   A proxy object is now used for thread-safe access to the QML widget
    which prevents crashes in corner case scenarios: QML can destroy the
    video widget at any time, so without this we might be left with a
    dangling pointer.

-   EGL is now supported with the X11 backend, which works e.g. on
    Freescale imx6

The GTK+ plugin has also moved from -bad to -good. It includes gtksink
and gtkglsink which both render video into a GtkWidget. gtksink uses
Cairo for rendering the video, which will work everywhere in all
scenarios but involves an extra memory copy, whereas gtkglsink fully
leverages GStreamer's OpenGL integration, but might not work properly in
all scenarios, e.g. where the OpenGL driver does not properly support
multiple sharing contexts in different threads (on Linux Nouveau is
known to be broken in this respect, whilst NVIDIA's proprietary drivers
and most other drivers generally work fine, and the experience with
Intel's driver seems to be mixed; some proprietary embedded Linux
drivers don't work; macOS works).

GstPhysMemoryAllocator interface moved from -bad to -base

GstPhysMemoryAllocator is a marker interface for allocators with
physical address backed memory.

Plugin removals

-   the sunaudio plugin was removed, since it couldn't ever have been
    built or used with GStreamer 1.0, but no one even noticed in all
    these years.

-   the schroedinger-based Dirac encoder/decoder plugin has been
    removed, as there is no longer any upstream or anyone else
    maintaining it. Seeing that it's quite a fringe codec it seemed best
    to simply remove it.

API removals

-   some MPEG video parser API in the API-unstable codecutils library in
    gst-plugins-bad was removed after having been deprecated for 5
    years.


Miscellaneous changes

-   The video support library has gained support for a few new pixel
    formats:
-   NV16_10LE32: 10-bit variant of NV16, packed into 32bit words (plus 2
    bits padding)
-   NV12_10LE32: 10-bit variant of NV12, packed into 32bit words (plus 2
    bits padding)
-   GRAY10_LE32: 10-bit grayscale, packed in 32bit words (plus 2 bits
    padding)

-   decodebin, playbin and GstDiscoverer have seen stability
    improvements in corner cases such as shutdown while still starting
    up or shutdown in error cases (hat tip to the oss-fuzz project).

-   floating reference handling was inconsistent and has been cleaned up
    across the board, including annotations. This solves various
    long-standing memory leaks in language bindings, which e.g. often
    caused elements and pads to be leaked.

-   major gobject-introspection annotation improvements for large parts
    of the library API, including nullability of return types and
    function parameters, correct types (e.g. strings vs. filenames),
    ownership transfer, array length parameters, etc. This allows
    bigger parts of the GStreamer API to be used safely from dynamic
    language bindings (e.g. Python, Javascript) and allows static
    bindings (e.g. C#, Rust, Vala) to autogenerate more API bindings
    without manual intervention.

OpenGL integration

-   The GStreamer OpenGL integration library has moved to
    gst-plugins-base and is now part of our stable API.

-   new MESA3D GBM BACKEND. On devices with working libdrm support, it
    is possible to use Mesa3D's GBM library to set up an EGL context
    directly on top of KMS. This makes it possible to use the GStreamer
    OpenGL elements without a windowing system if a libdrm- and
    Mesa3D-supported GPU is present.

-   The Wayland display is now preferred over X11: as most Wayland
    compositors support XWayland, the X11 backend would previously get
    selected even when running on Wayland.

-   gldownload can export dmabufs now, and glupload will advertise
    dmabuf as caps feature.


Tracing framework and debugging improvements

-   NEW MEMORY RINGBUFFER BASED DEBUG LOGGER, useful for long-running
    applications or to retrieve diagnostics when encountering an error.
    The GStreamer debug logging system provides in-depth debug logging
    about what is going on inside a pipeline. When enabled, debug logs
    are usually written into a file, printed to the terminal, or handed
    off to a log handler installed by the application. However, at
    higher debug levels the volume of debug output quickly becomes
    unmanageable, which poses a problem in disk-space or bandwidth
    restricted environments or with long-running pipelines where a
    problem might only manifest itself after multiple days. In those
    situations, developers are usually only interested in the most
    recent debug log output. The new in-memory ringbuffer logger makes
    this easy: just install it with gst_debug_add_ring_buffer_logger()
    and retrieve logs with gst_debug_ring_buffer_logger_get_logs() when
    needed. It is possible to limit the memory usage per thread and set
    a timeout to determine how long messages are kept around. It was
    always possible to implement this in the application with a custom
    log handler of course; this just provides the functionality as part
    of GStreamer. A small usage sketch follows at the end of this
    section.

 
-   fakevideosink is a null sink for video data that advertises
    video-specific metas and behaves like a video sink. See above for
    more details.

-   gst_util_dump_buffer() prints the content of a buffer to stdout.

-   gst_pad_link_get_name() and gst_state_change_get_name() print pad
    link return values and state change transition values as strings.

-   The LATENCY TRACER has seen a few improvements: trace records now
    contain timestamps which is useful to plot things over time, and
    downstream synchronisation time is now excluded from the measured
    values.

-   Miniobject refcount tracing and logging was not entirely
    thread-safe; there were duplicates or missing entries at times. This
    has now been made reliable.

-   The netsim element, which can be used to simulate network jitter,
    packet reordering and packet loss, received new features and
    improvements: it can now also simulate network congestion using a
    token bucket algorithm. This can be enabled via the "max-kbps"
    property. Packet reordering can be disabled now via the
    "allow-reordering" property: Reordering of packets is not very
    common in networks, and the delay functions will always introduce
    reordering if delay > packet-spacing, so by setting
    "allow-reordering" to FALSE you guarantee that the packets are in
    order, while at the same time introducing delay/jitter to them. By
    using the new "delay-distribution" property the use can control how
    the delay applied to delayed packets is distributed: This is either
    the uniform distribution (as before) or the normal distribution; in
    addition there is also the gamma distribution which simulates the
    delay on wifi networks better.
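
The ring buffer logger usage sketch referenced above (the size and
timeout are arbitrary example values):

    #include <gst/gst.h>

    static void
    dump_recent_logs (void)
    {
      gchar **logs, **l;

      /* Retrieve the buffered log output, one string per thread. */
      logs = gst_debug_ring_buffer_logger_get_logs ();
      for (l = logs; l != NULL && *l != NULL; l++)
        g_printerr ("%s\n", *l);
      g_strfreev (logs);
    }

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      /* Keep at most 64 kB of log output per thread, and drop entries
       * from threads that have been idle for more than 60 seconds. */
      gst_debug_add_ring_buffer_logger (64 * 1024, 60);
      gst_debug_set_default_threshold (GST_LEVEL_DEBUG);

      /* ... run the application, and call dump_recent_logs() when
       * something goes wrong ... */
      dump_recent_logs ();
      return 0;
    }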


Tools

-   gst-inspect-1.0 now prints pad properties for elements that have pad
    subclasses with special properties, such as compositor or
    audiomixer. This only works for elements that use the newly-added
    GstPadTemplate API or the
    gst_element_class_add_static_pad_template_with_gtype() convenience
    function to tell GStreamer about the special pad subclass (see the
    sketch at the end of this section).

-   gst-launch-1.0 now generates a GStreamer pipeline diagram (.dot
    file) whenever SIGHUP is sent to it on Linux/*nix systems.

-   gst-discoverer-1.0 can now analyse live streams such as rtsp:// URIs
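
For plugin authors, a hedged sketch of how an element would register a
pad template together with the GType of its pad subclass (the
my_sink_pad_get_type() function here is a hypothetical pad subclass
providing extra per-pad properties):

    #include <gst/gst.h>

    /* Hypothetical GstPad subclass with additional properties. */
    GType my_sink_pad_get_type (void);

    static GstStaticPadTemplate sink_template =
        GST_STATIC_PAD_TEMPLATE ("sink_%u", GST_PAD_SINK, GST_PAD_REQUEST,
            GST_STATIC_CAPS_ANY);

    static void
    my_element_class_init (GstElementClass * element_class)
    {
      /* New in 1.14: also tell GStreamer which GstPad subclass is used
       * for pads created from this template, so that gst-inspect-1.0
       * can introspect and print the pad properties. */
      gst_element_class_add_static_pad_template_with_gtype (element_class,
          &sink_template, my_sink_pad_get_type ());
    }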


GStreamer RTSP server

-   Initial support for RTSP protocol version 2.0 was added, which is
    to the best of our knowledge the first RTSP 2.0 implementation ever!

-   ONVIF audio backchannel support. This is an extension specified by
    ONVIF that allows RTSP clients (e.g. a control room operator) to
    send audio back to the RTSP server (e.g. an IP camera).
    Theoretically this could have been done also by using the RECORD
    method of the RTSP protocol, but ONVIF chose not to do that, so the
    backchannel is set up alongside the other streams. Format
    negotiation needs to be done out of band, if needed. Use the new
    ONVIF-specific subclasses GstRTSPOnvifServer and
    GstRTSPOnvifMediaFactory to enable this functionality.

 
-   The internal server streaming pipeline is now dynamically
    reconfigured on PLAY based on the transports needed. This means that
    the server no longer adds the pipeline plumbing for all possible
    transports from the start, but only as needed. This
    improves performance and memory footprint.

-   rtspclientsink has gained an "accept-certificate" signal for
    manually checking a TLS certificate for validity (see the sketch at
    the end of this section).

-   Fix keep-alive/timeout issue for certain clients using TCP
    interleave as transport who don't do keep-alive via some other
    method such as periodic RTSP OPTIONS requests. We now put netaddress
    metas on the packets from the TCP interleaved stream, so we can map
    RTCP packets to the right stream in the server and can handle them
    properly.

-   Language bindings improvements: in general there were quite a few
    improvements in the gobject-introspection annotations, but we also
    extended the permissions API which was not usable from bindings
    before.

-   Fix corner case issue where the wrong mount point was found when
    there were multiple mount points with a common prefix.
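
A sketch of the new "accept-certificate" signal usage on rtspclientsink
(the same pattern applies to rtspsrc; the validation policy shown here,
comparing against a pinned certificate, is just an example):

    #include <gst/gst.h>
    #include <gio/gio.h>

    /* Return TRUE to accept the server's TLS certificate, FALSE to
     * reject it. */
    static gboolean
    on_accept_certificate (GstElement * sink, GTlsConnection * conn,
        GTlsCertificate * peer_cert, GTlsCertificateFlags errors,
        gpointer user_data)
    {
      GTlsCertificate *trusted = user_data;

      return g_tls_certificate_is_same (peer_cert, trusted);
    }

    static void
    setup_client_sink (GstElement * rtspclientsink, GTlsCertificate * trusted)
    {
      g_signal_connect (rtspclientsink, "accept-certificate",
          G_CALLBACK (on_accept_certificate), trusted);
    }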


GStreamer VAAPI

-   this section will be filled in shortly {FIXME!}


GStreamer Editing Services and NLE

-   this section will be filled in shortly {FIXME!}


GStreamer validate

-   this section will be filled in shortly {FIXME!}


GStreamer Python Bindings

-   this section will be filled in shortly {FIXME!}


Build and Dependencies

-   the new WebRTC support in gst-plugins-bad depends on the GStreamer
    elements that ship as part of libnice, and libnice version 0.1.14 or
    newer is required. The dtls and srtp plugins are also required.

-   gst-plugins-bad no longer depends on the libschroedinger Dirac codec
    library.

-   The srtp plugin can now also be built against libsrtp2.

-   some plugins and libraries have moved between modules, see the
    _Plugin and library moves_ section above, and their respective
    dependencies have moved with them of course, e.g. the GStreamer
    OpenGL integration support library and plugin is now in
    gst-plugins-base, and mpg123, LAME and twoLAME based audio decoder
    and encoder plugins are now in gst-plugins-good.

-   Unify static and dynamic plugin interface and remove plugin specific
    static build option: Static and dynamic plugins now have the same
    interface. The standard --enable-static/--enable-shared toggle is
    sufficient. This allows building static and shared plugins from the
    same object files, instead of having to build everything twice.

-   The default plugin entry point has changed. This will only affect
    plugins that are recompiled against new GStreamer headers. Binary
    plugins using the old entry point will continue to work. However,
    plugins that are recompiled must have matching plugin names in
    GST_PLUGIN_DEFINE and filenames, as the plugin entry point for
    shared plugins is now deduced from the plugin filename. This means
    you can no longer have a plugin called foo living in a file called
    libfoobar.so or such; the plugin filename needs to match. This might
    cause problems with some external third party plugin modules when
    they get rebuilt against GStreamer 1.14.
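
To illustrate the naming requirement with a hypothetical plugin called
foo (the strings below are placeholders): the name token passed to
GST_PLUGIN_DEFINE has to match the installed file name, e.g.
libgstfoo.so (or gstfoo.dll on Windows).

    #include <gst/gst.h>

    static gboolean
    plugin_init (GstPlugin * plugin)
    {
      /* register elements here, e.g. with gst_element_register () */
      return TRUE;
    }

    /* The plugin name "foo" must match the file name the plugin is
     * shipped as (libgstfoo.so / gstfoo.dll), since the entry point for
     * shared plugins is now derived from the file name. */
    GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
        foo, "example foo plugin", plugin_init,
        "1.0.0", "LGPL", "example-package", "https://example.org/")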


Note to packagers and distributors

A number of libraries, APIs and plugins moved between modules and/or
libraries in different modules between version 1.12.x and 1.14.x, see
the _Plugin and library moves_ section above. Some APIs have seen
minor ABI changes in the course of moving them into the stable APIs
section.

This means that you should try to ensure that all major GStreamer
modules are synced to the same major version (1.12 or 1.13/1.14) and can
only be upgraded in lockstep, so that your users never end up with a mix
of major versions on their system at the same time, as this may cause
breakages.

Also, plugins compiled against >= 1.14 headers will not load with
GStreamer <= 1.12 owing to a new plugin entry point (but plugin binaries
built against older GStreamer versions will continue to load with newer
versions of GStreamer of course).

There is also a small structure size related ABI breakage introduced in
the gst-plugins-bad codecparsers library between version 1.13.90 and
1.13.91. This should "only" affect gstreamer-vaapi, so anyone who ships
the release candidates is advised to upgrade those two modules at the
same time.


Platform-specific improvements

Android

-   ahcsrc (Android camera source) does autofocus now

macOS and iOS

-   this section will be filled in shortly {FIXME!}

Windows

-   The GStreamer wasapi plugin was rewritten and should not only be
    usable now, but in top shape and suitable for low-latency use cases.
    The Windows Audio Session API (WASAPI) is Microsoft's most modern
    method for talking with audio devices, and now that the wasapi
    plugin is up to scratch it is preferred over the directsound plugin.
    The ranks of the wasapisink and wasapisrc elements have been updated
    to reflect this (a short usage sketch follows at the end of this
    section). Further improvements include:

-   support for more than 2 channels

-   a new "low-latency" property to enable low-latency operation (which
    should always be safe to enable)

-   support for the AudioClient3 API which is only available on Windows
    10: in wasapisink this will be used automatically if available; in
    wasapisrc it will have to be enabled explicitly via the
    "use-audioclient3" property, as capturing audio with low latency and
    without glitches seems to require setting the realtime priority of
    the entire pipeline to "critical", which cannot be done from inside
    the element, but has to be done in the application.

-   set realtime thread priority to avoid glitches

-   allow opening devices in exclusive mode, which provides much lower
    latency compared to shared mode where WASAPI's engine period is
    10ms. This can be activated via the "exclusive" property.

-   There are now GstDeviceProvider implementations for the wasapi and
    directsound plugins, so it's now possible to discover both audio
    sources and audio sinks on Windows via the GstDeviceMonitor API

-   debug log timestamps now have higher granularity owing to
    g_get_monotonic_time() now being used as fallback in
    gst_utils_get_timestamp(). Before that, there would sometimes be
    10-20 lines of debug log output sporting the same timestamp.
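
The wasapi usage sketch referenced above (the pipeline string is only
an illustration and assumes the Windows binaries with the wasapi
plugin are installed):

    #include <gst/gst.h>

    static GstElement *
    make_wasapi_test_pipeline (void)
    {
      GError *error = NULL;
      GstElement *pipeline;

      /* Low-latency shared-mode playback; setting exclusive=true instead
       * opens the device in exclusive mode for even lower latency. */
      pipeline = gst_parse_launch (
          "audiotestsrc is-live=true ! audioconvert ! audioresample ! "
          "wasapisink low-latency=true", &error);

      if (pipeline == NULL) {
        g_printerr ("Failed to create pipeline: %s\n", error->message);
        g_clear_error (&error);
      }
      return pipeline;
    }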


Contributors

Aaron Boxer, Adrián Pardini, Adrien SCH, Akinobu Mita, Alban Bedel,
Alessandro Decina, Alex Ashley, Alicia Boya García, Alistair Buxton,
Alvaro Margulis, Anders Jonsson, Andreas Frisch, Andrejs Vasiljevs,
Andrew Bott, Antoine Jacoutot, Antonio Ospite, Antoni Silvestre, Anton
Obzhirov, Anuj Jaiswal, Arjen Veenhuizen, Arnaud Bonatti, Arun Raghavan,
Ashish Kumar, Aurélien Zanelli, Ayaka, Branislav Katreniak, Branko
Subasic, Brion Vibber, Carlos Rafael Giani, Cassandra Rommel, Chris
Bass, Chris Paulson-Ellis, Christoph Reiter, Claudio Saavedra, Clemens
Lang, Cyril Lashkevich, Daniel van Vugt, Dave Craig, Dave Johnstone,
David Evans, David Schleef, Deepak Srivastava, Dimitrios Katsaros,
Dmitry Zhadinets, Dongil Park, Dustin Spicuzza, Eduard Sinelnikov,
Edward Hervey, Enrico Jorns, Eunhae Choi, Ezequiel Garcia, fengalin,
Filippo Argiolas, Florent Thiéry, Florian Zwoch, Francisco Velazquez,
François Laignel, fvanzile, George Kiagiadakis, Georg Lippitsch, Graham
Leggett, Guillaume Desmottes, Gurkirpal Singh, Gwang Yoon Hwang, Gwenole
Beauchesne, Haakon Sporsheim, Haihua Hu, Håvard Graff, Heekyoung Seo,
Heinrich Fink, Holger Kaelberer, Hoonhee Lee, Hosang Lee, Hyunjun Ko,
Ian Jamison, James Stevenson, Jan Alexander Steffens (heftig), Jan
Schmidt, Jason Lin, Jens Georg, Jeremy Hiatt, Jérôme Laheurte, Jimmy
Ohn, Jochen Henneberg, John Ludwig, John Nikolaides, Jonathan Karlsson,
Josep Torra, Juan Navarro, Juan Pablo Ugarte, Julien Isorce, Jun Xie,
Jussi Kukkonen, Justin Kim, Lasse Laursen, Lubosz Sarnecki, Luc
Deschenaux, Luis de Bethencourt, Marcin Lewandowski, Mario Alfredo
Carrillo Arevalo, Mark Nauwelaerts, Martin Kelly, Matej Knopp, Mathieu
Duponchelle, Matteo Valdina, Matt Fischer, Matthew Waters, Matthieu
Bouron, Matthieu Crapet, Matt Staples, Michael Catanzaro, Michael
Olbrich, Michael Shigorin, Michael Tretter, Michał Dębski, Michał Górny,
Michele Dionisio, Miguel París, Mikhail Fludkov, Munez, Nael Ouedraogo,
Neos3452, Nicholas Panayis, Nick Kallen, Nicola Murino, Nicolas
Dechesne, Nicolas Dufresne, Nirbheek Chauhan, Ognyan Tonchev, Ole André
Vadla Ravnås, Oleksij Rempel, Olivier Crête, Omar Akkila, Orestis
Floros, Patricia Muscalu, Patrick Radizi, Paul Kim, Per-Erik Brodin,
Peter Seiderer, Philip Craig, Philippe Normand, Philippe Renon, Philipp
Zabel, Pierre Pouzol, Piotr Drąg, Ponnam Srinivas, Pratheesh Gangadhar,
Raimo Järvi, Ramprakash Jelari, Ravi Kiran K N, Reynaldo H. Verdejo
Pinochet, Rico Tzschichholz, Robert Rosengren, Roland Peffer, Руслан
Ижбулатов, Sam Hurst, Sam Thursfield, Sangkyu Park, Sanjay NM, Satya
Prakash Gupta, Scott D Phillips, Sean DuBois, Sebastian Cote, Sebastian
Dröge, Sebastian Rasmussen, Sejun Park, Sergey Borovkov, Seungha Yang,
Shakin Chou, Shinya Saito, Simon Himmelbauer, Sky Juan, Song Bing,
Sreerenj Balachandran, Stefan Kost, Stefan Popa, Stefan Sauer, Stian
Selnes, Thiago Santos, Thibault Saunier, Thijs Vermeir, Tim Allen,
Tim-Philipp Müller, Ting-Wei Lan, Tomas Rataj, Tom Bailey, Tonu Jaansoo,
U. Artie Eoff, Umang Jain, Ursula Maplehurst, VaL Doroshchuk, Vasilis
Liaskovitis, Víctor Manuel Jáquez Leal, vijay, Vincent Penquerc'h,
Vineeth T M, Vivia Nikolaidou, Wang Xin-yu (王昕宇), Wei Feng, Wim
Taymans, Wonchul Lee, Xabier Rodriguez Calvar, Xavier Claessens,
XuGuangxin, Yasushi SHOJI, Yi A Wang, Youness Alaoui,

... and many others who have contributed bug reports, translations, sent
suggestions or helped testing.


Bugs fixed in 1.14

More than 800 bugs have been fixed during the development of 1.14.

This list does not include issues that have been cherry-picked into the
stable 1.12 branch and fixed there as well; all fixes that ended up in
the 1.12 branch are also included in 1.14.

This list also does not include issues that have been fixed without a
bug report in bugzilla, so the actual number of fixes is much higher.


Stable 1.14 branch

After the 1.14.0 release there will be several 1.14.x bug-fix releases
which will contain bug fixes which have been deemed suitable for a
stable branch; usually no new features or intrusive changes will be
added to a bug-fix release. The 1.14.x bug-fix releases will be made from
the git 1.14 branch, which is a stable branch.

1.14.0

1.14.0 is scheduled to be released in early March 2018.


Known Issues

-   The webrtcdsp element (which is unrelated to the newly-landed
    GStreamer webrtc support) is currently not shipped as part of the
    Windows binary packages due to a build system issue.


Schedule for 1.16

Our next major feature release will be 1.16, and 1.15 will be the
unstable development version leading up to the stable 1.16 release. The
development of 1.15/1.16 will happen in the git master branch.

The plan for the 1.16 development cycle is yet to be confirmed, but it
is expected that feature freeze will be around August 2018 followed by
several 1.15 pre-releases and the new 1.16 stable release in September.

1.16 will be backwards-compatible to the stable 1.14, 1.12, 1.10, 1.8,
1.6, 1.4, 1.2 and 1.0 release series.

------------------------------------------------------------------------

_These release notes have been prepared by Tim-Philipp Müller with_
_contributions from Sebastian Dröge._

_License: CC BY-SA 4.0_