- Jan 26, 2021
-
-
Seungha Yang authored
Don't warn in the following cases:
* Live objects: ID3D11Debug itself seems to be holding a refcount on the ID3D11Device at the moment ID3D11Debug::ReportLiveDeviceObjects() is called, so it would always report live objects.
* Unsupported formats: a device might not support some formats (e.g., P010), especially in the case of a WARP device. We don't need to warn about that.
* Device creation failure: gst_d3d11_device_new() can be used for device enumeration, so don't warn even if we cannot create a D3D11 device with the given adapter index.
* HLSL compiler warnings: they are just noise and not critical at all.
Part-of: <gstreamer/gst-plugins-bad!1986>
-
Add two examples to demonstrate the "draw-on-shared-texture" use case. d3d11videosink will draw on the application's own texture without a copy by:
- enabling the "draw-on-shared-texture" property
- making use of the "begin-draw" and "draw" signals
The application then renders the shared texture to the swapchain's backbuffer using 1) Direct3D11 APIs, or 2) Direct3D9Ex + interop APIs. Part-of: <gstreamer/gst-plugins-bad!1873>
-
Add a way to support drawing on the application's texture instead of the usual window handle. To make use of this new feature, the application should follow the steps below:
1) Enable this feature via the "draw-on-shared-texture" property
2) Watch the "begin-draw" signal
3) In the "begin-draw" signal handler, request drawing via the "draw" action signal. Note that the "draw" action signal should happen before the "begin-draw" signal handler returns.

NOTE 1) For texture sharing, creating a texture with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag is strongly recommended if possible, because we cannot ensure synchronization of a texture created with D3D11_RESOURCE_MISC_SHARED, and that would cause glitches in the ID3D11VideoProcessor use case.
NOTE 2) Direct3D9Ex doesn't support sharing textures created with D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX. In other words, D3D11_RESOURCE_MISC_SHARED is the only option for Direct3D11/Direct3D9Ex interop.
NOTE 3) Because of the missing synchronization around ID3D11VideoProcessor, if the shared texture was created with D3D11_RESOURCE_MISC_SHARED, d3d11videosink might use a fallback texture to convert the DXVA texture to a normal Direct3D texture. The converted texture is then copied into the user-provided shared texture.

* Why not use the generic appsink approach? For the application to store video data produced by GStreamer in its own texture, there are two possible approaches: copying our texture into the application's own texture, or drawing on the application's own texture directly. The former (the appsink way) cannot be zero-copy by nature. To support zero-copy processing, we need to draw on the application's texture directly. For example, assume the application wants an RGBA texture. Then we can imagine the following pipeline:

"d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory),format=RGBA ! appsink"

In that case, d3d11convert will allocate new texture(s) for the RGBA format, and the application will then copy our RGBA texture into its own texture, so one extra texture allocation plus a per-frame GPU copy happens. Moreover, for the application to be able to access our texture, we would need to allocate the texture with additional flags so that the application's Direct3D11 device can read the texture data. That would be another implementation burden on our side. But with this MR, we can configure the pipeline as "d3d11h264dec ! d3d11videosink". That way, we save at least one texture allocation and a per-frame texture copy, since d3d11videosink will convert the incoming texture into the application's texture format directly, without a copy.

* What if we expose the texture without conversion and the application does the conversion itself? As mentioned above, for the application to access our texture from its Direct3D11 device, we need to allocate the texture in a special form, and in some cases that might not be possible. Also, if a texture belongs to the decoder DPB, exposing it to the application is unsafe, and a usual Direct3D11 shader cannot handle such a texture. To convert the format, the ID3D11VideoProcessor API needs to be used, but that would be an implementation burden for the application. Part-of: <!1873>
-
Return hvc1 for video/x-h265 mime type in mpd helper function Part-of: <gstreamer/gst-plugins-bad!1966>
-
1. Set the default output alignment to frame, rather than the current OBU alignment. This makes the behaviour the same as the h264/h265 parsers, which align to AU by default. 2. Set the default input alignment to byte. It can handle the "not enough data" error while OBU alignment cannot, and it also makes the code conform to the comments. Part-of: <gstreamer/gst-plugins-bad!1979>
-
Part-of: <gstreamer/gst-plugins-bad!1979>
-
Part-of: <!1979>
-
The current behaviour for OBU-aligned output is not very precise: several OBUs are output together within one gst buffer. We should output each gst buffer containing exactly one OBU, the same way the h264/h265 parsers do when NAL-aligned. Part-of: <!1979>
-
The current optimization when the input alignment and the output alignment are the same is not correct. We simply copy the data from the input buffer to the output buffer, but that fails to consider the dropping of OBUs. When we need to drop some OBUs (such as filtering out OBUs of some temporal ID), we cannot do a simple copy, so we need to always copy the input OBUs into a cache. Part-of: <!1979>
-
When dropping an OBU, we need to continue parsing the remaining ones. The current code makes the data access go out of the range of the buffer mapping. Part-of: <!1979>
-
Marijn Suijten authored
Because there was already a typo in one of the duplicates (see the previous commit), it is much safer to specify these once and only once. Part-of: <!1985>
-
Marijn Suijten authored
Fixes: a5768145 ("ext: Add LDAC encoder") Part-of: <!1985>
-
- Jan 25, 2021
-
-
Seungha Yang authored
Add DXVA/Direct3D11 API based MPEG-2 decoder element Part-of: <!1969>
-
The problem is that unreffing the EGLImage/SHM buffer while holding the images_mutex lock may deadlock: when a new buffer is advertised, an attempt is made to lock images_mutex there. The advertisement of the new image/buffer is performed in the WPEContextThread, and the blocking dispatch when unreffing wants to run something on the WPEContextThread, but images_mutex has already been locked by the destructor. Delay unreffing images/buffers until outside of images_mutex, and just clear the relevant fields within the lock. Part-of: <gstreamer/gst-plugins-bad!1843>
-
- Jan 23, 2021
-
-
1. Add the mono_chrome field to identify the 4:0:0 chroma format. 2. Correct the mapping between subsampling_x/y and the chroma format: there is no 4:4:0 format in AV1, and 4:4:4 requires both subsampling_x and subsampling_y to be 0. 3. Only send the chroma format when the color space is not RGB. Fixes: #1502 Part-of: <!1974>
-
The 4:4:4 chroma format needs both subsampling_x and subsampling_y to be 0. Fixes: #1502 Part-of: <gstreamer/gst-plugins-bad!1974>
- Jan 22, 2021
-
-
Checking the same value twice is pointless Fixes gstreamer/gst-plugins-bad#1504 Part-of: <gstreamer/gst-plugins-bad!1977>
-
Otherwise there is a scenario where the library can be found but not the header, resulting in a compilation error. Part-of: <gstreamer/gst-plugins-bad!1975>
-
Víctor Manuel Jáquez Leal authored
Fix the result of a wrong copy&paste Fixes: #1501 Part-of: <gstreamer/gst-plugins-bad!1976>
-
- Jan 21, 2021
-
-
Seungha Yang authored
Add P010 Direct3D11 texture format support Part-of: <gstreamer/gst-plugins-bad!1970>
-
- Jan 20, 2021
-
-
Seungha Yang authored
The maximum supported texture dimension is pre-defined based on the feature level, and it can never be INT_MAX in any case. See also https://docs.microsoft.com/en-us/windows/win32/direct3d11/overviews-direct3d-11-devices-downlevel-intro Part-of: <!1964>
-
- Jan 19, 2021
-
-
Part-of: <gstreamer/gst-plugins-bad!1614>
-
obu->obu_size does not include the bytes of the obu_size field itself; we need to exclude them when doing the sanity check. Part-of: <!1614>
-
Part-of: <gstreamer/gst-plugins-bad!1614>
-
This AV1 parser implements conversion between the obu, tu and frame alignments, and between the obu-stream and annexb stream-formats. TODO: 1. May need an operating_point property to filter the OBUs. 2. May add a property to disable deep parsing. Part-of: <gstreamer/gst-plugins-bad!1614>
-
Don't need to put Win32 twice Part-of: <!1962>
-
gstd3d11window_corewindow.cpp(408): warning C4189: 'storage': local variable is initialized but not referenced
gstd3d11window_corewindow.cpp(490): warning C4189: 'self': local variable is initialized but not referenced
gstd3d11window_swapchainpanel.cpp(481): warning C4189: 'self': local variable is initialized but not referenced
Part-of: <!1962>
-
WINAPI_PARTITION_DESKTOP and WINAPI_PARTITION_APP can coexist. Although UWP-only binaries should be used for the production stage, this change is useful during development. Part-of: <!1962>
-
- Jan 18, 2021
-
-
Some GPUs (especially NVIDIA) complain that the GPU is still busy even after 50 retries with a 1 ms sleep per failure. Because DXVA/D3D11 doesn't provide a "GPU is ready to decode" signal, there still seems to be no better solution than sleeping. Part-of: <!1913>
-
Seungha Yang authored
gstd3d11videosink.c(662): error C2065: 'sink': undeclared identifier Part-of: <!1961>
-
- Jan 17, 2021
-
- Jan 15, 2021
-
-
He Junyan authored
The vabasedec's display and decoder are created/destroyed between the gst_va_base_dec_open/close pair. All the data and event handling functions run between this pair, so accessing these pointers there is safe. But the query function can be called at any time, so we need to: 1. Make the operations on these pointers atomic in open/close and query. 2. Hold an extra ref during the query function to avoid the object being destroyed while in use. Part-of: <!1957>
- Jan 14, 2021
-
-
Sebastian Dröge authored
decklinkaudiosrc: Allow disabling audio sample alignment code by setting the alignment-threshold to 0 And handle setting it to GST_CLOCK_TIME_NONE as always aligning without ever detecting a discont. Part-of: <gstreamer/gst-plugins-bad!1956>
-
- Jan 13, 2021
-
-
Initial support for d3d11 textures, so that the encoder can copy an upstream d3d11 texture into the encoder's own texture pool without downloading memory. This implementation requires the MFTEnum2() API to create an MFT (Media Foundation Transform) object for a specific GPU, but that API is Windows 10 desktop only, so UWP is not a target of this change. See also https://docs.microsoft.com/en-us/windows/win32/api/mfapi/nf-mfapi-mftenum2 Note that, for the MF plugin to support old OS versions without breakage, this commit loads the MFTEnum2() symbol via g_module_open(). Summary of the required system environment: - Windows 10 (probably at least the RS1 update) - A GPU supporting the ExtendedNV12SharedTextureSupported feature - Desktop applications only (UWP is not supported yet) Part-of: <!1903>
-
As advised by !1366#note_629558 , the nice transport should be accessed through: > transceiver->sender/receiver->transport/rtcp_transport->icetransport All the objects on the path can be accessed through properties except sender/receiver->transport. This patch addresses that. Part-of: <gstreamer/gst-plugins-bad!1952>
-
Move the d3d11 device, memory, buffer pool and minimal methods to gst-libs so that other plugins can access d3d11 resources. Since Direct3D is the primary graphics API on Windows, we need this infrastructure so that various plugins can share GPU resources without downloading GPU memory. Note that this implementation is public only within the -bad scope for now. Part-of: <!464>
-