  1. Jan 26, 2021
    • d3d11: Suppress some warning debug messages · 0b0bf1b0
      Seungha Yang authored
      * Don't warn about live objects, since ID3D11Debug itself seems to be
        holding a refcount of ID3D11Device at the moment we call
        ID3D11Debug::ReportLiveDeviceObjects(), so a live object would
        always be reported (see the sketch below).
      * The device might not be able to support some formats (e.g., P010),
        especially in the case of a WARP device. We don't need to warn
        about that.
      * gst_d3d11_device_new() can be used for device enumeration, so don't
        warn even if we cannot create a D3D11 device with the given adapter
        index.
      * Don't warn about HLSL compiler warnings. They are just noise and
        not critical at all.
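
      For context, a minimal sketch (not the plugin's actual code) of the call in
      question, assuming a device created with the D3D11_CREATE_DEVICE_DEBUG flag;
      the debug interface itself still references the device at this point, which
      is why a live device object is always reported:

      #define COBJMACROS
      #include <d3d11.h>
      #include <d3d11sdklayers.h>

      static void
      report_live_objects (ID3D11Device * device)
      {
        ID3D11Debug *debug = NULL;

        /* Only succeeds when the device was created with
         * D3D11_CREATE_DEVICE_DEBUG */
        if (FAILED (ID3D11Device_QueryInterface (device, &IID_ID3D11Debug,
                (void **) &debug)))
          return;

        /* ID3D11Debug holds its own ref to the device here, so the device
         * is always reported as a live object */
        ID3D11Debug_ReportLiveDeviceObjects (debug, D3D11_RLDO_DETAIL);
        ID3D11Debug_Release (debug);
      }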
      
      Part-of: <gstreamer/gst-plugins-bad!1986>
    • examples: Add d3d11videosink examples for shared-texture use cases · 657370a9
      Seungha Yang authored and GStreamer Marge Bot committed
      Add two examples to demonstrate "draw-on-shared-texture" use cases.

      d3d11videosink will draw on the application's own texture without a copy
      when the application:
      - enables the "draw-on-shared-texture" property
      - makes use of the "begin-draw" and "draw" signals

      The application then renders the shared texture to the swapchain's
      backbuffer by using either
      1) Direct3D11 APIs (a sketch of this path follows below), or
      2) Direct3D9Ex + interop APIs
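
      For illustration only, a minimal sketch of option 1 (plain Direct3D11):
      copy the shared texture into the swapchain's backbuffer and present it.
      The swapchain, context and texture variables are assumed to be created
      by the application and to have matching size and format:

      #define COBJMACROS
      #include <d3d11.h>

      static void
      present_shared_texture (IDXGISwapChain * swapchain,
          ID3D11DeviceContext * context, ID3D11Texture2D * shared_texture)
      {
        ID3D11Texture2D *backbuffer = NULL;

        if (FAILED (IDXGISwapChain_GetBuffer (swapchain, 0,
                &IID_ID3D11Texture2D, (void **) &backbuffer)))
          return;

        /* Assumes the backbuffer and the shared texture have the same
         * size and format, so a plain GPU copy is enough */
        ID3D11DeviceContext_CopyResource (context,
            (ID3D11Resource *) backbuffer, (ID3D11Resource *) shared_texture);
        ID3D11Texture2D_Release (backbuffer);

        IDXGISwapChain_Present (swapchain, 0, 0);
      }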
      
      Part-of: <gstreamer/gst-plugins-bad!1873>
    • d3d11videosink: Add support for drawing on application's own texture · 60e223f4
      Seungha Yang authored and GStreamer Marge Bot committed
      Add a way to support drawing on the application's texture instead of
      the usual window handle.
      To make use of this new feature, the application should follow the
      steps below (see also the sketch after this list).
      1) Enable this feature via the "draw-on-shared-texture" property
      2) Watch for the "begin-draw" signal
      3) In the "begin-draw" signal handler, request drawing via the "draw"
         signal action. Note that the "draw" action must be emitted before
         the "begin-draw" signal handler returns.
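
      A rough sketch of those steps on the application side. The argument list
      of the "draw" action signal below (shared handle, misc flags, keyed-mutex
      acquire/release keys) is an assumption for illustration; see the added
      examples for the real signature:

      #include <gst/gst.h>

      typedef struct {            /* hypothetical application state */
        gpointer shared_handle;   /* handle of the application's shared texture */
        guint misc_flags;
        guint64 acquire_key;
        guint64 release_key;
      } AppData;

      static void
      on_begin_draw (GstElement * sink, AppData * app)
      {
        gboolean ret = FALSE;

        /* 3) must be emitted before this handler returns; the argument
         * list here is an assumed illustration */
        g_signal_emit_by_name (sink, "draw", app->shared_handle,
            app->misc_flags, app->acquire_key, app->release_key, &ret);
      }

      static void
      setup_sink (GstElement * sink, AppData * app)
      {
        /* 1) enable the feature and 2) watch the signal */
        g_object_set (sink, "draw-on-shared-texture", TRUE, NULL);
        g_signal_connect (sink, "begin-draw", G_CALLBACK (on_begin_draw), app);
      }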
      
      NOTE 1) For texture sharing, creating the texture with the
      D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag is strongly recommended
      if possible, because we cannot ensure synchronization for a texture
      created with D3D11_RESOURCE_MISC_SHARED only, and that would cause
      glitches in the ID3D11VideoProcessor use case (a texture-creation
      sketch follows below).
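
      A minimal sketch of creating such a shareable texture on the application
      side; size, format and bind flags are placeholders:

      #define COBJMACROS
      #include <d3d11.h>

      static ID3D11Texture2D *
      create_shared_texture (ID3D11Device * device, UINT width, UINT height)
      {
        D3D11_TEXTURE2D_DESC desc = { 0, };
        ID3D11Texture2D *texture = NULL;

        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;     /* placeholder format */
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
        /* keyed-mutex sharing, as recommended in NOTE 1 */
        desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

        if (FAILED (ID3D11Device_CreateTexture2D (device, &desc, NULL, &texture)))
          return NULL;

        return texture;
      }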
      
      NOTE 2) Direct3D9Ex doesn't support sharing of textures created with
      D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX. In other words,
      D3D11_RESOURCE_MISC_SHARED is the only option for Direct3D11/Direct3D9Ex
      interop.
      
      NOTE 3) Because of the missing synchronization around ID3D11VideoProcessor,
      if the shared texture was created with D3D11_RESOURCE_MISC_SHARED,
      d3d11videosink might use a fallback texture to convert the DXVA texture
      to a normal Direct3D texture. The converted texture will then be
      copied to the user-provided shared texture.
      
      * Why not use the generic appsink approach?
      For the application to be able to store video data produced by
      GStreamer in its own texture, there are two possible approaches:
      one is copying our texture into the application's own texture,
      and the other is drawing on the application's own texture directly.
      The former (the appsink way) cannot be zero-copy by nature.
      In order to support zero-copy processing, we need to draw on the
      application's own texture directly.
      
      For example, assume that the application wants an RGBA texture.
      Then we can imagine the following case:

      "d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory),format=RGBA ! appsink"
                                     ^
                                     |_ allocate new Direct3D texture for RGBA format

      In the above case, d3d11convert will allocate new texture(s) for the RGBA
      format and then the application will copy our RGBA texture again into the
      application's own texture. One extra texture allocation plus a per-frame
      GPU copy therefore happen in that case.
      Moreover, in order for the application to be able to access our texture,
      we need to allocate the texture with additional flags so that the
      application's Direct3D11 device can read the texture data.
      That would be another implementation burden on our side.
      
      But with this MR, we can configure the pipeline simply as
      "d3d11h264dec ! d3d11videosink".

      In that way, we save at least one texture allocation and a per-frame
      texture copy, since d3d11videosink will convert the incoming texture
      into the application's texture format directly, without a copy.
      
      * What if we expose the texture without conversion and the application
        does the conversion by itself?
      As mentioned above, for the application to be able to access our texture
      from its own Direct3D11 device, we need to allocate the texture in a
      special form, but in some cases that might not be possible.
      Also, if a texture belongs to the decoder DPB, exposing such a texture
      to the application is unsafe, and a usual Direct3D11 shader cannot handle
      such a texture. To convert the format, the ID3D11VideoProcessor API needs
      to be used, but that would be an implementation burden for the application.
      
      Part-of: <!1873>
    • dashsink: add h265 codec support · 66788366
      Haihua Hu authored and GStreamer Marge Bot committed
      Return "hvc1" for the video/x-h265 MIME type in the MPD helper function.
      
      Part-of: <gstreamer/gst-plugins-bad!1966>
    • av1parse: set the default alignment for input and output. · db134d27
      He Junyan authored and Víctor Manuel Jáquez Leal committed
      1. Set the default output alignment to frame, rather than the current
         alignment of obu. This makes the behaviour the same as h264parse/
         h265parse, which align to AU by default (see the caps sketch below).
      2. Set the default input alignment to byte. It can handle the "not
         enough data" error while OBU alignment cannot. This also makes it
         conform to the comments.
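
      For illustration, a hedged sketch of overriding the new default by
      requesting OBU-aligned output through a capsfilter; the surrounding
      elements and the sample file name are assumptions, not something this
      commit prescribes:

      #include <gst/gst.h>

      static GstElement *
      build_obu_aligned_pipeline (void)
      {
        /* Without the capsfilter, av1parse now defaults to frame-aligned
         * output (matching h264parse/h265parse, which align to AU). */
        return gst_parse_launch (
            "filesrc location=sample.ivf ! ivfparse ! av1parse "
            "! video/x-av1,alignment=obu ! fakesink", NULL);
      }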
      
      Part-of: <gstreamer/gst-plugins-bad!1979>
    • av1parse: Reset the annex_b state when a TU is met inside a buffer. · 5abf4ad4
      He Junyan authored and Víctor Manuel Jáquez Leal committed
      Part-of: <!1979>
    • av1parse: Output each OBU when output is aligned to obu. · d83f2532
      He Junyan authored and Víctor Manuel Jáquez Leal committed
      The current behaviour for OBU-aligned output is not very precise:
      several OBUs are output together within one gst buffer. We should
      output each gst buffer containing just one OBU. This is the same
      way h264parse/h265parse behave when NAL aligned.
      
      Part-of: <!1979>
    • av1parse: Always copy the OBU to cache. · ee1f6017
      He Junyan authored and Víctor Manuel Jáquez Leal committed
      The current optimization when the input alignment and the output
      alignment are the same is not quite correct. We simply copy the data
      from the input buffer to the output buffer, but we fail to consider
      the dropping of OBUs. When we need to drop some OBUs (such as
      filtering out the OBUs of some temporal ID), we cannot do a simple
      copy. So we need to always copy the input OBUs into a cache.
      
      Part-of: <!1979>
    • av1parse: Improve the logic when to drop the OBU. · a9c8aa47
      He Junyan authored and Víctor Manuel Jáquez Leal committed
      When we drop some OBU, we need to continue with the remaining data.
      The current manner can make the data access go out of the range of
      the buffer mapping.
      
      Part-of: <!1979>
    • ext/ldac: Move duplicate sampling rates into #define · e8bb0fa0
      Marijn Suijten authored
      Because there was already a typo in one of the duplicates (see the
      previous commit), it is much safer to specify these rates once and
      only once (see the sketch below).
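
      A hedged sketch of the idea rather than the actual patch: keep the
      supported rates in a single macro that every pad template reuses, so a
      typo can only happen in one place (the macro name and caps details here
      are illustrative):

      #include <gst/gst.h>

      /* Illustrative names; the real macro and caps in ext/ldac may differ */
      #define LDAC_ENC_SAMPLE_RATES "{ 44100, 48000, 88200, 96000 }"

      static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
          GST_PAD_SINK, GST_PAD_ALWAYS,
          GST_STATIC_CAPS ("audio/x-raw, "
              "rate = (int) " LDAC_ENC_SAMPLE_RATES ", "
              "channels = (int) [ 1, 2 ]"));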
      
      Part-of: <!1985>
    • ext/ldac: Fix typo in 88200(0) stereo encoder sampling rate · 3747fdb1
      Marijn Suijten authored
      Fixes: a5768145 ("ext: Add LDAC encoder")
      Part-of: <!1985>
  2. Jan 15, 2021
    • va: Make the caps pointer operation atomic in vadecoder. · 82c0f901
      He Junyan authored
      The vadecoder's srcpad_caps and sinkpad_caps pointers are accessed
      outside of the mutex protection, so just make all operations on them
      atomic (see the sketch below).
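
      A minimal sketch of that kind of atomic caps handling; the field names
      come from the description above and whether this matches the commit's
      exact code is an assumption:

      /* Writer side (a fragment, assuming a self->srcpad_caps field):
       * gst_caps_replace() swaps the stored pointer atomically and handles
       * the ref/unref of the old and new caps. */
      gst_caps_replace (&self->srcpad_caps, new_caps);

      /* Reader side: take a referenced copy into a local before use. */
      GstCaps *caps = NULL;
      gst_caps_replace (&caps, self->srcpad_caps);
      if (caps) {
        /* ... inspect caps ... */
        gst_caps_unref (caps);
      }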
      
      Part-of: <!1957>
    • va: Fix a latent race condition in vabasedec. · 6b1e1924
      He Junyan authored
      The vabasedec's display and decoder are created/destroyed between the
      gst_va_base_dec_open/close pair. All the data and event handling
      functions run between this pair, so accessing these pointers there is
      safe. But the query function can be called at any time, so we need to:
      1. Make the operations on these pointers in open/close and query atomic.
      2. Hold an extra ref during the query function so the object cannot be
         destroyed underneath it (see the sketch below).
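
      A minimal sketch of points 1 and 2, assuming the element keeps its
      display in a GstObject * field (names and structure are illustrative,
      not the actual commit code):

      #include <gst/gst.h>

      static gboolean
      sketch_handle_query (GstObject ** display_field, GstQuery * query)
      {
        /* point 1: read the shared pointer atomically */
        GstObject *display = g_atomic_pointer_get (display_field);
        gboolean ret = FALSE;

        if (!display)
          return FALSE;

        /* point 2: hold an extra ref so a concurrent close() cannot
         * destroy the display while the query is being handled */
        gst_object_ref (display);

        /* ... handle the query using the referenced display ... */
        ret = TRUE;

        gst_object_unref (display);
        return ret;
      }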
      
      Part-of: <!1957>
  3. Jan 13, 2021
    • mfvideoenc: Add support for Direct3D11 texture · 84db4b68
      Seungha Yang authored and GStreamer Marge Bot committed
      Initial support for d3d11 textures, so that the encoder can copy an
      upstream d3d11 texture into the encoder's own texture pool without
      downloading memory.

      This implementation requires the MFTEnum2() API to create an
      MFT (Media Foundation Transform) object for a specific GPU, but
      the API is Windows 10 desktop only, so UWP is not a target
      of this change.
      See also https://docs.microsoft.com/en-us/windows/win32/api/mfapi/nf-mfapi-mftenum2

      Note that, for the MF plugin to be able to support old OS versions
      without breakage, this commit loads the MFTEnum2() symbol
      via g_module_open() (see the sketch below).
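
      A minimal sketch of that runtime symbol lookup, assuming MFTEnum2() is
      exported from mfplat.dll; the prototype follows the linked documentation
      and error handling is reduced:

      #include <windows.h>
      #include <mfapi.h>
      #include <mftransform.h>
      #include <gmodule.h>

      typedef HRESULT (WINAPI * MFTEnum2Func) (GUID category, UINT32 flags,
          const MFT_REGISTER_TYPE_INFO * input_type,
          const MFT_REGISTER_TYPE_INFO * output_type,
          IMFAttributes * attributes, IMFActivate *** activate,
          UINT32 * num_activate);

      static MFTEnum2Func mft_enum2_func = NULL;

      static gboolean
      load_mft_enum2 (void)
      {
        /* Resolve MFTEnum2() at runtime so the plugin still loads on OS
         * versions that don't export it */
        GModule *module = g_module_open ("mfplat.dll", G_MODULE_BIND_LAZY);

        if (!module)
          return FALSE;

        return g_module_symbol (module, "MFTEnum2",
            (gpointer *) & mft_enum2_func) && mft_enum2_func != NULL;
      }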
      
      Summary of required system environment:
      - Needs Windows 10 (probably at least RS 1 update)
      - GPU should support ExtendedNV12SharedTextureSupported feature
      - Desktop application only (UWP is not supported yet)
      
      Part-of: <!1903>
    • webrtc: expose transport property on sender and receiver · 86c009e7
      Mathieu Duponchelle authored and GStreamer Marge Bot committed
      As advised in !1366#note_629558, the nice transport should be
      accessed through:

      > transceiver->sender/receiver->transport/rtcp_transport->icetransport

      All the objects on the path can be accessed through properties
      except sender/receiver->transport. This patch addresses that (see the
      sketch below).
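
      A minimal sketch of walking that path with the newly exposed property,
      assuming a transceiver obtained from webrtcbin; the final "transport"
      property on the DTLS transport yielding the ICE transport is written
      from the description above:

      #include <gst/gst.h>

      static GObject *
      get_ice_transport (GObject * transceiver)
      {
        GObject *sender = NULL, *dtls_transport = NULL, *ice_transport = NULL;

        /* transceiver -> sender -> transport (new property) -> ice transport */
        g_object_get (transceiver, "sender", &sender, NULL);
        if (sender)
          g_object_get (sender, "transport", &dtls_transport, NULL);
        if (dtls_transport)
          g_object_get (dtls_transport, "transport", &ice_transport, NULL);

        g_clear_object (&dtls_transport);
        g_clear_object (&sender);

        return ice_transport;   /* caller owns a ref, may be NULL */
      }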
      
      Part-of: <gstreamer/gst-plugins-bad!1952>
    • d3d11: Move core methods to gst-libs · 0f7af4b1
      Seungha Yang authored and GStreamer Marge Bot committed
      Move the d3d11 device, memory, buffer pool and a minimal set of
      methods to gst-libs so that other plugins can access d3d11 resources.
      Since Direct3D is the primary graphics API on Windows, we need this
      infrastructure so that various plugins can share GPU resources
      without downloading GPU memory.
      Note that this implementation is public only within the -bad scope
      for now.
      
      Part-of: <!464>