- 06 Jan, 2020 2 commits
-
-
Haihao Xiang authored
The frame width and height are rounded up to 128 and 32 since commit 8daac1c0, so the width and height used for initialization should be rounded up to 128 and 32 too, because the MSDK VP9 encoder performs some checks on width and height. Sample pipeline:

gst-launch-1.0 videotestsrc ! video/x-raw,width=320,height=240,format=NV12 ! msdkvp9enc ! fakesink
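A minimal sketch of the rounding involved, assuming GStreamer's GST_ROUND_UP_128 / GST_ROUND_UP_32 macros from gst/gstutils.h (the actual msdkvp9enc code may differ):

    #include <gst/gst.h>

    /* Align the negotiated size to what the MSDK VP9 encoder expects.
     * For a 320x240 input this yields 384x256. */
    static void
    align_vp9_enc_size (guint width, guint height,
        guint * aligned_width, guint * aligned_height)
    {
      *aligned_width = GST_ROUND_UP_128 (width);   /* 320 -> 384 */
      *aligned_height = GST_ROUND_UP_32 (height);  /* 240 -> 256 */
    }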
-
Renegotiation was implemented for bitrate changes. We can re-use the same sequence when the video info changes, except that it can be executed right away when receiving the new input format, i.e. there is no need to wait for the next call to handle_frame.
-
- 05 Jan, 2020 1 commit
-
-
Philippe Normand authored
If there is no decklink hardware/driver, the devices list is empty (NULL), so this needs to be checked before iterating over the list.
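A minimal sketch of the kind of guard described, assuming the devices are held in a GList (names here are illustrative, not the actual decklink code):

    #include <gst/gst.h>

    static void
    enumerate_devices (GList * devices)  /* may be NULL if no hardware/driver */
    {
      GList *l;

      if (devices == NULL)
        return;  /* nothing to iterate over */

      for (l = devices; l != NULL; l = l->next) {
        GstDevice *device = GST_DEVICE (l->data);
        /* ... probe/announce the device ... */
        (void) device;
      }
    }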
-
- 03 Jan, 2020 7 commits
-
-
Add static or dynamic mpd with:
- baseURL
- period
- adaptation_set
- representation
- SegmentList
- SegmentURL
- SegmentTemplate

Support multiple audio and video streams. Passes conformance test with DashIF.org
-
fix CID #1456553
-
Add a way to set/get properties for given nodes:
- root
- baseurl
- representation
-
Add mpd node base class to provide xml generation facilities for child objects.
-
Julien Isorce authored
Useful when the framerate changes. Previously it only checked for resolution changes, but renegotiation should happen if any part of the video info changes.
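A minimal sketch of the broader check, assuming GStreamer's gst_video_info_is_equal() (the actual element code may compare individual fields instead):

    #include <gst/video/video.h>

    /* Trigger renegotiation when anything in the video info changed,
     * not just the resolution. */
    static gboolean
    needs_renegotiation (const GstVideoInfo * old_info,
        const GstVideoInfo * new_info)
    {
      return !gst_video_info_is_equal (old_info, new_info);
    }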
-
The context query might happen before the swapper is created.
-
The block that sets the use_video_memory flag is after the condition `if gst_msdk_context_prepare`, but that condition always returns false when there are no other msdk elements, so the decoder ends up with use_video_memory as FALSE. Note that msdkvpp always sets use_video_memory to TRUE.

When use_video_memory is FALSE, msdkdec allocates the output frames with posix_memalign (see gstmsdksystemmemory.c). The result is then copied back to the GstVideoPool's buffers (or to the downstream pool's buffers, if any).

When use_video_memory is TRUE, msdkdec uses vaCreateSurfaces to create vaapi surfaces for the hw decoder to decode into (see gstmsdkvideomemory.c). The result is then copied to either the internal GstVideoPool or to the downstream pool, if any. (vaDeriveImage/vaMapBuffer is used in order to read the surfaces.)
-
- 02 Jan, 2020 2 commits
-
-
Seungha Yang authored
Use boolean instead of GstFlowReturn, as declared. Note that since the base class does not check the return value of GstVideoDecoder::flush(), this does not cause any change of behavior.
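A minimal sketch of the declared vfunc signature being matched here (gst_my_decoder_flush is an illustrative name, not the element's actual function):

    #include <gst/video/gstvideodecoder.h>

    /* GstVideoDecoderClass::flush is declared to return gboolean,
     * not GstFlowReturn. */
    static gboolean
    gst_my_decoder_flush (GstVideoDecoder * decoder)
    {
      /* ... drop any queued/internal frames ... */
      return TRUE;
    }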
-
Haihao Xiang authored
gstreamer/gst-plugins-bad!924 is trying to use video memory for decoding on Linux, which reveals a hidden bug in msdkdec. For video memory, it is possible that a locked mfx surface is not actually in use and will be un-locked later by MSDK, so we have to check the associated MSDK surface to find out and free un-used surfaces, otherwise it is easy to exhaust all pre-allocated mfx surfaces and get errors like:

0:00:00.777324879 27290 0x564b65a510a0 ERROR default gstmsdkvideomemory.c:77:gst_msdk_video_allocator_get_surface: failed to get surface available
0:00:00.777429079 27290 0x564b65a510a0 ERROR msdkbufferpool gstmsdkbufferpool.c:260:gst_msdk_buffer_pool_alloc_buffer:<msdkbufferpool0> failed to create new MSDK memory

Note that the sample code in MSDK does a similar thing in CBuffering::SyncFrameSurfaces()
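A minimal sketch of the kind of check described, assuming the Intel Media SDK mfxFrameSurface1 type whose Data.Locked counter indicates a surface still held by MSDK (the surrounding pool bookkeeping is not shown):

    #include <mfxvideo.h>

    /* Return non-zero if the surface can be recycled, i.e. MSDK no longer
     * holds a lock on it. */
    static int
    surface_is_free (const mfxFrameSurface1 * surface)
    {
      return surface->Data.Locked == 0;
    }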
-
- 31 Dec, 2019 7 commits
-
-
Instead of always going through the file system API, we allow the application to modify the behaviour. For the playlist itself and the fragments, the application can provide a GOutputStream. In addition, the sink notifies the application whenever a fragment can be deleted.
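A minimal sketch of how an application might provide its own streams, assuming a signal such as "get-fragment-stream" that returns a GOutputStream for each new fragment (the exact signal names and signatures are assumptions, not confirmed by this log):

    #include <gio/gio.h>
    #include <gst/gst.h>

    /* Assumed handler: hand the sink a writable stream for each fragment. */
    static GOutputStream *
    on_get_fragment_stream (GstElement * sink, const gchar * location,
        gpointer user_data)
    {
      GFile *file = g_file_new_for_path (location);
      GOutputStream *stream =
          G_OUTPUT_STREAM (g_file_replace (file, NULL, FALSE,
              G_FILE_CREATE_NONE, NULL, NULL));

      g_object_unref (file);
      return stream;
    }

    /* g_signal_connect (sink, "get-fragment-stream",
     *     G_CALLBACK (on_get_fragment_stream), NULL); */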
-
Mark Nauwelaerts authored
... by seeking to the target offset determined by the new seek segment, rather than that of the previous segment. The latter would typically seek back to the start for a non-accurate seek, and lead to a lot of skipping in the case of an accurate seek.
-
Some of the DPB management implementation is taken from gstreamer-vaapi.
-
Based on the gstreamer-vaapi and Chromium implementations.
-
New decoder implementation based on dxva2 on d3d11 APIs. The DPB management implementation is taken from Chromium.
-
An ID3D11Texture2D memory can consist of multiple planes and can be a texture array. For array-typed memory, GstD3D11Allocator will allocate a new GstD3D11Memory holding an additional reference to the ID3D11Texture2D but with a different array index.
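A minimal sketch of an array-typed texture description using the standard D3D11_TEXTURE2D_DESC (only the underlying resource is shown; how GstD3D11Allocator maps array slices to memories is described above):

    #include <d3d11.h>
    #include <string.h>

    /* One ID3D11Texture2D backing several GstD3D11Memory objects: each
     * memory refers to the same texture but a different array slice. */
    static void
    fill_array_texture_desc (D3D11_TEXTURE2D_DESC * desc,
        UINT width, UINT height, UINT array_size)
    {
      memset (desc, 0, sizeof (*desc));
      desc->Width = width;
      desc->Height = height;
      desc->MipLevels = 1;
      desc->ArraySize = array_size;   /* > 1 for array-typed memory */
      desc->Format = DXGI_FORMAT_NV12;
      desc->SampleDesc.Count = 1;
      desc->Usage = D3D11_USAGE_DEFAULT;
      desc->BindFlags = D3D11_BIND_DECODER;
    }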
-
- 30 Dec, 2019 3 commits
-
-
Use helper method to get string from GValue.
-
Only warn if pushing a buffer returns an actual error to not pollute logs with confusing warnings.
-
If one of the inputs is live, add a latency of 2 frames to the video stream and wait on the clock for that much time to pass, to allow for the LTC audio to be ahead. In the case of live LTC, don't do any waiting but only ensure that we don't overflow the LTC queue. Also, in non-live LTC audio mode, flush too-old items from the LTC queue if the video is actually ahead, instead of potentially waiting forever. This could have happened if there was a bigger gap in the video stream.
-
- 28 Dec, 2019 5 commits
-
-
Seungha Yang authored
-
Seungha Yang authored
Even if one of the downstream d3d11 elements can support dynamic-usage memory, another one might not support it. Also, to support dynamic-usage, the upstream and downstream d3d11device must be the same object.
-
Seungha Yang authored
If the d3d11colorconvert element is configured, do color space conversion regardless of whether the device is S/W emulation or real H/W. Since d3d11colorconvert is no longer a child of d3d11videosinkbin, we don't need this behavior. Note that the previous code was added to avoid color space conversion from d3d11videosink if no hardware device is available (S/W emulation of d3d11 is too slow).
-
Seungha Yang authored
-
Seungha Yang authored
d3d11upload should be able to support upstream d3d11 memory, not only system memory. Fix for the following pipeline:

d3d11upload ! "video/x-raw(memory:D3D11Memory)" ! d3d11videosink
-
- 26 Dec, 2019 1 commit
-
-
Nicola Murino authored
-
- 24 Dec, 2019 4 commits
-
-
Seungha Yang authored
Since we might draw on only a partial area of the backbuffer in the case of force-aspect-ratio, presenting only the updated area is more efficient. See also https://docs.microsoft.com/ko-kr/windows/win32/direct3ddxgi/dxgi-1-2-presentation-improvements
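A minimal sketch of partial presentation with DXGI 1.2, using IDXGISwapChain1::Present1 and DXGI_PRESENT_PARAMETERS (the rectangle stands in for whatever area was actually redrawn):

    #define COBJMACROS
    #include <dxgi1_2.h>

    /* Present only the region that was redrawn this frame. */
    static HRESULT
    present_updated_area (IDXGISwapChain1 * swapchain, const RECT * updated)
    {
      DXGI_PRESENT_PARAMETERS params = { 0, };

      params.DirtyRectsCount = 1;
      params.pDirtyRects = (RECT *) updated;

      return IDXGISwapChain1_Present1 (swapchain, 0, 0, &params);
    }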
-
Seungha Yang authored
Add a d3d11overlaycompositor object to draw overlay images onto the render target using blending.
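A minimal sketch of the kind of alpha-blend state such a compositor would typically use, via the standard D3D11 API (the exact configuration used by d3d11overlaycompositor is not shown in this log):

    #define COBJMACROS
    #include <d3d11.h>

    /* Standard source-over alpha blending for overlay quads. */
    static HRESULT
    create_overlay_blend_state (ID3D11Device * device,
        ID3D11BlendState ** state)
    {
      D3D11_BLEND_DESC desc = { 0, };

      desc.RenderTarget[0].BlendEnable = TRUE;
      desc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
      desc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
      desc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
      desc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
      desc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
      desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
      desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

      return ID3D11Device_CreateBlendState (device, &desc, state);
    }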
-
Seungha Yang authored
Note that the dxgi and d3d11 sdk debug layers will be enabled on debug builds
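A minimal sketch of enabling the D3D11 debug layer at device creation time, using the standard flag (the NDEBUG check is illustrative; how the plugin selects debug builds is not shown here):

    #include <d3d11.h>

    /* On debug builds, request the SDK debug layer when creating the device. */
    static UINT
    get_device_creation_flags (void)
    {
      UINT flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;

    #ifndef NDEBUG
      flags |= D3D11_CREATE_DEVICE_DEBUG;
    #endif

      return flags;
    }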
-
Seungha Yang authored
Create a CUDA context per device, instead of per codec and encoder/decoder. Allocating a CUDA context is a heavy operation, so we should reuse it as much as possible. Fixes: gstreamer/gst-plugins-bad#1130
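A minimal sketch of per-device context reuse with the CUDA driver API (the caching structure is illustrative, not the plugin's actual code; assumes cuInit() was already called):

    #include <cuda.h>

    #define MAX_CUDA_DEVICES 16

    static CUcontext cached_ctx[MAX_CUDA_DEVICES];

    /* Create the context for a device once, then hand out the cached one. */
    static CUresult
    get_cuda_context (int device_index, CUcontext * ctx)
    {
      CUdevice dev;
      CUresult ret;

      if (device_index < 0 || device_index >= MAX_CUDA_DEVICES)
        return CUDA_ERROR_INVALID_DEVICE;

      if (cached_ctx[device_index] != NULL) {
        *ctx = cached_ctx[device_index];
        return CUDA_SUCCESS;
      }

      ret = cuDeviceGet (&dev, device_index);
      if (ret != CUDA_SUCCESS)
        return ret;

      ret = cuCtxCreate (&cached_ctx[device_index], 0, dev);
      if (ret == CUDA_SUCCESS)
        *ctx = cached_ctx[device_index];
      return ret;
    }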
-
- 22 Dec, 2019 2 commits
-
-
Posting messages to the parent window seems to be pointless and might break the parent window. Regardless of the posting, the parent window can catch mouse events, and any keyboard events will be handled by the parent window by default.
-
Following the standard for low-latency JPEG 2000 encoding https://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-H.222.0-200701-S!Amd1!PDF-E&type=items we divide the image into stripes of a specified height and encode each stripe as a self-contained JPEG 2000 image. This MR is based on gstreamer/gst-plugins-base!429
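A minimal sketch of the stripe split described, assuming a configurable stripe height (names are illustrative):

    /* Divide a frame of `height` rows into self-contained stripes of
     * `stripe_height` rows each; the last stripe may be shorter. */
    static int
    num_stripes (int height, int stripe_height)
    {
      return (height + stripe_height - 1) / stripe_height;
    }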
-
- 20 Dec, 2019 6 commits
-
-
The SVT-HEVC (Scalable Video Technology [0] for HEVC) encoder is an open source video coding technology [1] that is highly optimized for Intel Xeon Scalable and Intel Xeon D processors.

[0] https://01.org/svt
[1] https://github.com/OpenVisualCloud/SVT-HEVC
-
It simplifies the conversion between GstH265Profile and string.
-
Seungha Yang authored
Upload CPU memory to the texture directly by using a dynamic-usage texture. This avoids at least one staging copy per frame.
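A minimal sketch of a dynamic-usage upload path with the standard D3D11 API (plane/stride handling is simplified; this is not the element's actual code):

    #define COBJMACROS
    #include <d3d11.h>
    #include <string.h>

    /* Write CPU-side pixels straight into a D3D11_USAGE_DYNAMIC texture,
     * skipping the intermediate staging texture + CopyResource step. */
    static HRESULT
    upload_dynamic (ID3D11DeviceContext * context, ID3D11Texture2D * texture,
        const unsigned char * src, UINT src_stride, UINT height)
    {
      D3D11_MAPPED_SUBRESOURCE map;
      HRESULT hr;
      UINT row;

      hr = ID3D11DeviceContext_Map (context, (ID3D11Resource *) texture, 0,
          D3D11_MAP_WRITE_DISCARD, 0, &map);
      if (FAILED (hr))
        return hr;

      for (row = 0; row < height; row++)
        memcpy ((unsigned char *) map.pData + row * map.RowPitch,
            src + row * src_stride, src_stride);

      ID3D11DeviceContext_Unmap (context, (ID3D11Resource *) texture, 0);
      return S_OK;
    }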
-
Seungha Yang authored
Otherwise the CPU cannot access the texture via gst_memory_map()
-
Seungha Yang authored
The output of d3d11colorconvert would be used for rendering (i.e., shader resource)
-
Seungha Yang authored
Call DXGI API from window thread as much as possible
-