1. 21 May, 2019 1 commit
  2. 13 May, 2019 2 commits
  3. 30 Apr, 2019 1 commit
    • video: Add new APIs for HDR information representation · 74d909bc
      Seungha Yang authored
      Introduce HDR signalling methods
      * GstVideoMasteringDisplayInfo: represents display color volume info,
        as defined by SMPTE ST 2086
      * GstVideoContentLightLevel: represents the content light level specified
        in CEA-861.3, Appendix A
      Closes #400
  4. 10 Apr, 2019 1 commit
  5. 07 Apr, 2019 1 commit
  6. 05 Mar, 2019 1 commit
    • audiodecoder: add _finish_subframe() method · 8d112201
      Tim-Philipp Müller authored
      This allows us to output audio samples without discarding
      any input frames, which is useful for some formats/codecs
      (e.g. the MonkeysAudio decoder implementation in ffmpeg,
      which might return e.g. 16 output buffers for a single
      input buffer for certain files).
      In the past, decoder implementations just concatenated
      the returned audio buffers until a full frame had been
      decoded, but that's no longer possible to do efficiently
      when the decoder returns audio samples in non-interleaved
      layout.
      Allowing subframes to be output before the entire input
      frame is decoded can also be useful to decrease startup
      latency.
  7. 28 Feb, 2019 1 commit
  8. 29 Jan, 2019 2 commits
  9. 06 Jan, 2019 1 commit
  10. 28 Dec, 2018 1 commit
  11. 19 Dec, 2018 1 commit
  12. 17 Dec, 2018 1 commit
    • audio-converter: add API to determine passthrough mode · 1edb2c42
      Mathieu Duponchelle authored
      audioconvert's passthrough status can no longer be determined
      strictly from input / output caps equality, as a mix-matrix can
      now be specified.
      We now call gst_base_transform_set_passthrough dynamically, based
      on the return value of the new gst_audio_converter_is_passthrough()
      API, which takes the mix matrix into account.
  13. 15 Dec, 2018 1 commit
  14. 13 Dec, 2018 1 commit
    • rtcpbuffer: add support for XR packet parsing · 5303e2c3
      Justin Kim authored
      According to RFC 3611, the extended report blocks in an XR packet can
      have variable length. To visit each block, the iterator should look
      into the block header. Once the XR block type is extracted, users can
      parse the detailed information using the provided functions.
      Loss/Duplicate RLE
      The Loss RLE and the Duplicate RLE have the same format, so
      they can share parsers. For the unit test, a randomly generated
      pseudo packet is used.
      Packet Receipt Times
      The packet receipt times report block has a list of receipt
      times which are in [begin_seq, end_seq).
      Receiver Reference Time
      The receiver reference time report block carries an NTP timestamp,
      which is a 64-bit value.
      DLRR
      The DLRR report block consists of sub-blocks, each of which has an
      SSRC, last RR, and delay since last RR. The number of sub-blocks must
      be calculated from the block length.
      Statistics Summary
      The Statistics Summary report block provides fixed-length information.
      VoIP Metrics
      The VoIP Metrics report block contains several individual metrics in
      a single report block. Data-retrieval functions are added per metric.
  15. 21 Nov, 2018 1 commit
    • audiodecoder: add API for setting caps on the source pad · e0268c02
      Tomasz Andrzejak authored
      This patch adds API to the audio decoder base class for setting arbitrary
      caps on the source pad.  Previously only caps converted from the audio
      info were possible.  This is particularly useful when a subclass wants to
      set caps features, e.g. for an audio decoder producing metadata.
  16. 12 Nov, 2018 1 commit
  17. 10 Oct, 2018 1 commit
    • rtpbasepayload: rtpbasedepayload: Add source-info property · f766b85b
      Stian Selnes authored
      Add a source-info property that will read/write meta on the buffers
      carrying RTP source information. The GstRTPSourceMeta can be used to
      transport information about the origin of a buffer, e.g. the sources
      that are included in a mixed audio buffer.
      A new function gst_rtp_base_payload_allocate_output_buffer() is added
      for payloaders to use to allocate the output RTP buffer with the correct
      number of CSRCs according to the meta and fill it.
      The GstRTPSourceMeta does not make sense on RTP buffers, since the
      information is already present in the RTP header, so the payloader
      will strip the meta from the output buffer.
  18. 03 Oct, 2018 1 commit
  19. 18 Sep, 2018 1 commit
  20. 27 Jul, 2018 1 commit
  21. 16 Jul, 2018 1 commit
  22. 21 Jun, 2018 1 commit
  23. 11 Jun, 2018 1 commit
  24. 05 May, 2018 1 commit
  25. 26 Apr, 2018 1 commit
  26. 09 Apr, 2018 1 commit
    • video: Add support for VANC and Closed Caption · 9dceb6ca
      Edward Hervey authored
      This commit adds common elements for Ancillary Data and Closed
      Caption support in GStreamer:
      * A VBI (Video Blanking Interval) parser that supports detection
        and extraction of Ancillary data according to the SMPTE S291M
        specification. Currently supports the v210 and UYVY video formats.
      * A new GstMeta for Closed Caption: GstVideoCaptionMeta. This
        supports the two types of CC, CEA-608 and CEA-708, along with
        the 4 different ways they can be transported (other systems
        are supersets of those).
  27. 02 Apr, 2018 1 commit
    • docs/libs: The big spring cleanup · 10c161c7
      Edward Hervey authored
      * Explicitly specify which headers aren't to be included in gtkdoc-scan.
        This is essentially all the headers that are not installed and only
        for internal/local usage. This also includes the orc-generated headers.
      * Remove all symbols/sections that are no longer present (due to accurately
        scanning only the headers we need).
      * Add or expose sections which weren't previously exposed
      * Make sure the "unified" library headers (ex: gst/video/video.h) are used
        everywhere applicable. Only use the specific headers where applicable
        (such as the GL-implementation-specific objects)
      * Add all documentation which was not previously exposed in the right sections
      * Update 'types' file to get as much runtime information as possible
      This brings down the number of unused symbols to 15 (from over 300).
  28. 13 Mar, 2018 1 commit
  29. 11 Mar, 2018 2 commits
  30. 08 Mar, 2018 1 commit
  31. 15 Feb, 2018 2 commits
  32. 14 Feb, 2018 2 commits
  33. 13 Feb, 2018 2 commits
  34. 20 Dec, 2017 1 commit