Commits on Source (76)
-
Olivier Crête authored
Part-of: <!1924>
-
Haihao Xiang authored
Otherwise we will get a double-free issue because the mfx session is closed in finalize. See !1867 (comment 739346) for the double-free issue. Part-of: <!1916>
-
Haihao Xiang authored
Since SDK API 1.26, TransformSkip has been available to control the transform_skip_enabled_flag setting in the PPS [1]. [1] https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#mfxextcodingoption3 Part-of: <!1908>
-
Haihao Xiang authored
As with msdkh264enc, add b-pyramid to enable the B-Pyramid reference structure for H265 encoding. Part-of: <!1908>
-
Haihao Xiang authored
The SDK supports a P-Pyramid reference structure [1], so add a new property to enable this feature in msdkenc{h264,h265}. [1] https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#preftype Part-of: <!1908>
-
Haihao Xiang authored
The SDK allows the user to set a QP range [1], so add min-qp and max-qp to specify the QP range. By default, there are no limitations on QP. [1] https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#mfxextcodingoption2 Part-of: <!1908>
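For illustration, a minimal sketch of setting the properties added in the commits above; it assumes the msdkh265enc element is available, that min-qp/max-qp are integer properties and b-pyramid is a boolean, and the values are purely illustrative:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *enc;

      gst_init (&argc, &argv);

      /* msdkh265enc is only registered when the msdk plugin is available */
      enc = gst_element_factory_make ("msdkh265enc", NULL);
      if (enc == NULL)
        return 1;

      /* Clamp the per-frame QP range and enable the B-Pyramid reference
       * structure; property types and values here are assumptions for
       * illustration only. */
      g_object_set (enc, "min-qp", 20, "max-qp", 40, "b-pyramid", TRUE, NULL);

      gst_object_unref (enc);
      return 0;
    }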
-
Raghavendra Rao authored
Part-of: <!1725>
-
Part-of: <!1929>
-
We need to store buffer flags such as GST_VIDEO_BUFFER_FLAG_INTERLACED and GST_VIDEO_BUFFER_FLAG_TFF for interlaced video. Without these flags, the VPP and display elements cannot apply their filters correctly. Part-of: <!1929>
-
Part-of: <!1929>
-
The spec says: In a frame picture, top_field_first being set to '1' indicates that the top field of the reconstructed frame is the first field output by the decoding process, and top_field_first being set to '0' indicates that the bottom field of the reconstructed frame is the first field output by the decoding process. Here, "output" should be interpreted only as the output order, not the decoding order. The fields should be decoded in the order they come in the stream. Namely, no matter whether top_field_first is 0 or 1, the first field to arrive is the first one to be decoded. Part-of: <!1929>
-
When the reference frames are missing, we should not just discard the current frame. Some streams have a group of pictures header. It is an optional header that can be used immediately before a coded I-frame to indicate to the decoder whether the first consecutive B-pictures immediately following the coded I-frame can be reconstructed properly in the case of a random access. In that case, the B frames may miss the previous reference and can still be decoded correctly. We also notice that the second field of an I frame may be set to P type, and it only references its first field. We should not skip all of those frames; even when a frame really does miss its reference frame, some mechanism such as inserting a grey picture should be used to handle these cases. The driver crashes when it needs to access the reference picture while we set forward_reference_picture or backward_reference_picture to VA_INVALID_ID. We now set it to the current picture to avoid this. This is just a temporary measure. Part-of: <!1929>
-
So that the application gets notified and may react to it. Part-of: <!1935>
-
Part-of: <!1938>
-
This enum can be used for quirk handling. It is not exposed as a property because the driver list might change (it is not static), which would require updating the GType declaration. Part-of: <!1938>
-
Víctor Manuel Jáquez Leal authored
Since prev_picture and next_picture are plain pointers, not pointers to pointers, it is misleading to name them with a _ptr suffix. Part-of: <!1939>
-
Víctor Manuel Jáquez Leal authored
Mark as decode-only if the picture type is B, there is no previous picture in the DPB, and closed_gop is 0, as might be understood from "6.3.8 Group of pictures header". Part-of: <!1939>
-
Víctor Manuel Jáquez Leal authored
Add a helper function _is_frame_start() which checks whether the picture has a frame structure or does not yet have an interlaced first field. This function is used when filling the is_first_field parameter. Part-of: <!1939>
-
Víctor Manuel Jáquez Leal authored
Add the helper function _get_surface_id() which extracts the VASurfaceID from the passed picture. This function gets the surface of the next and previous reference pictures. Instead of if-statements, this refactor uses a switch statement with a fall-through for P-type pictures, making the code a bit more readable. It also adds a quirk for the gallium driver, which cannot handle invalid surfaces as forward or backward references, so the function fails there. iHD cannot handle them either, but to avoid failing, the current picture is used as a self-reference. Part-of: <!1939>
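A minimal sketch of the switch-with-fall-through idea described above, not the upstream code: VASurfaceID and VA_INVALID_ID come from libva, while the picture types, struct and function names below are hypothetical stand-ins.

    #include <va/va.h>

    typedef enum { PIC_I, PIC_P, PIC_B } PicType;      /* hypothetical */
    typedef struct { VASurfaceID surface; } Picture;   /* hypothetical */

    /* hypothetical stand-in for _get_surface_id() */
    static VASurfaceID
    get_surface_id (Picture * pic)
    {
      return pic ? pic->surface : VA_INVALID_ID;
    }

    static void
    fill_references (PicType type, Picture * prev, Picture * next,
        VASurfaceID * forward_ref, VASurfaceID * backward_ref)
    {
      switch (type) {
        case PIC_B:
          *backward_ref = get_surface_id (next);
          /* fall through: B pictures also need the forward reference */
        case PIC_P:
          *forward_ref = get_surface_id (prev);
          break;
        default:
          /* I pictures carry no references */
          break;
      }
    }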
-
The pipeline now gets stuck in gst_srt_object_write_one() until the receiver comes online, which may or may not be desired based on the use case. Part-of: <!1836>
-
Part-of: <!1541>
-
Part-of: <!1541>
-
Any socket option that can be passed to libsrt's srt-live-transmit through the SRT URI query string is now recognized. Also make the code that applies options to SRT sockets more generic. Part-of: <!1842>
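For illustration, a minimal sketch of passing socket options through the URI query string; the receiver address is hypothetical and the option names used here (latency, passphrase, pbkeylen) are common libsrt options, while the exact set that is recognized depends on the libsrt version:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GError *error = NULL;

      gst_init (&argc, &argv);

      /* Socket options are appended to the srt:// URI as query parameters. */
      pipeline = gst_parse_launch (
          "srtsrc uri=\"srt://127.0.0.1:7001?latency=125"
          "&passphrase=0123456789&pbkeylen=16\" ! fakesink",
          &error);
      if (pipeline == NULL) {
        g_printerr ("failed to build pipeline: %s\n", error->message);
        return 1;
      }

      gst_object_unref (pipeline);
      return 0;
    }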
-
There was some code left that wasn't used anymore. Part-of: <!1930>
-
On an error event, epoll wait puts the failed socket in both readfds and writefds. We can take advantage of this and avoid explicitly checking socket state before every read or write attempt. In addition, srt_getrejectreason() will give us a more detailed description of the connection failure. Part-of: <!1943>
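A minimal sketch of querying the rejection reason; srt_getrejectreason() is named in the commit above, and srt_rejectreason_str() is assumed to be the libsrt helper that maps the code to text (available in recent libsrt versions):

    #include <stdio.h>
    #include <srt/srt.h>

    /* Log why a connection attempt was rejected. */
    static void
    log_reject_reason (SRTSOCKET sock)
    {
      int reason = srt_getrejectreason (sock);
      fprintf (stderr, "connection rejected: %s\n",
          srt_rejectreason_str (reason));
    }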
-
Use GST_RESOURCE_ERROR_NOT_AUTHORIZED code in posted error messages related to SRT authentication (e.g. incorrect or missing password) so that the application can recognize them more easily. Part-of: <!1943>
-
Allows compiling the plugin against old headers. For SRTO_BINDTODEVICE there's nothing we can do, since the value depends on configuration options of the library. Nice. Fixes build with libsrt < 1.4.2 Part-of: <!1945>
-
Part-of: <!1944>
-
Seungha Yang authored
The VA plugin is a Linux-only plugin, so we can skip it earlier. Note that this plugin makes use of the libdrm meson fallback, which is unusable on other platforms such as Windows. Part-of: <!1946>
-
On renegotiation, or when the user has specified a mid for a transceiver, we need to avoid picking a duplicate mid for a transceiver that doesn't yet have one. Also assign the mid we created to the transceiver; that doesn't fix a specific bug but seems to make sense to me. Part-of: <!1902>
-
Seungha Yang authored
... and add a reset() method to clear internal status in one place Part-of: <!1947>
-
Seungha Yang authored
In case upstream pushes buffers as frame units rather than picture units for an interlaced stream, the base class should be able to detect the AU boundary (i.e., a complementary field pair). Part-of: <!1947>
-
Seungha Yang authored
Our DPB implementation is designed such that temporary DPB overflow is allowed in the middle of field-picture decoding, and an incomplete field pair should not trigger DPB bumping. Part-of: <!1947>
-
Using the object lock is problematic for anything that can dispatch to another thread, which is what createWPEView() does inside gst_wpe_src_start(). Using the object lock there can cause a deadlock. One example of such a deadlock is when createWPEView is called, but another (or the same) wpesrc is on the WPEContextThread and e.g. posts a bus message. This message propagation takes and releases the object lock of numerous elements in quick succession to determine various information about the elements in the bin. If the object lock is already held, then the message propagation will block and stall bin processing (state changes, other messages) and WPE servicing any events. Fixes #1490 Part-of: <!1934>
-
Seungha Yang authored
Trivial bug fix for deadlock Part-of: <!1949>
-
Move the d3d11 device, memory, buffer pool and minimal methods to gst-libs so that other plugins can access d3d11 resources. Since Direct3D is the primary graphics API on Windows, we need this infrastructure so that various plugins can share GPU resources without downloading GPU memory. Note that this implementation is public only within the -bad scope for now. Part-of: <!464>
-
Initial support for d3d11 textures, so that the encoder can copy an upstream d3d11 texture into the encoder's own texture pool without downloading memory. This implementation requires the MFTEnum2() API for creating an MFT (Media Foundation Transform) object for a specific GPU, but the API is Windows 10 desktop only, so UWP is not a target of this change. See also https://docs.microsoft.com/en-us/windows/win32/api/mfapi/nf-mfapi-mftenum2
Note that, for the MF plugin to be able to support old OS versions without breakage, this commit loads the MFTEnum2() symbol using g_module_open().
Summary of the required system environment:
- Windows 10 (probably at least the RS1 update)
- The GPU should support the ExtendedNV12SharedTextureSupported feature
- Desktop applications only (UWP is not supported yet)
Part-of: <!1903>
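A minimal sketch of the runtime symbol lookup described above, assuming MFTEnum2 is exported from mfplat.dll; the function-pointer type is a placeholder, not the real Media Foundation signature from mfapi.h:

    #include <gmodule.h>

    /* Placeholder signature; the real MFTEnum2() prototype lives in mfapi.h. */
    typedef gpointer (*MFTEnum2Func) (void);

    static MFTEnum2Func
    load_mft_enum2 (void)
    {
      GModule *module;
      gpointer symbol = NULL;

      /* On older systems the lookup simply fails and the plugin can fall
       * back to the non-d3d11 code path. */
      module = g_module_open ("mfplat.dll", G_MODULE_BIND_LAZY);
      if (module == NULL)
        return NULL;

      if (!g_module_symbol (module, "MFTEnum2", &symbol) || symbol == NULL)
        return NULL;

      return (MFTEnum2Func) symbol;
    }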
-
Sebastian Dröge authored
decklinkaudiosrc: Allow disabling the audio sample alignment code by setting alignment-threshold to 0, and handle setting it to GST_CLOCK_TIME_NONE as always aligning without ever detecting a discont. Part-of: <!1956>
-
He Junyan authored
The vabasedec's display and decoder are created/destroyed between the gst_va_base_dec_open/close pair. All the data and event handling functions run between this pair, so accessing these pointers there is safe. But the query function can be called at any time. So we need to: 1. Make the operations on these pointers in open/close and query atomic. 2. Hold an extra ref during the query function to avoid them being destroyed. Part-of: <!1957>
-
Seungha Yang authored
gstd3d11videosink.c(662): error C2065: 'sink': undeclared identifier Part-of: <!1961>
-
Some GPUs (especially NVIDIA) keep reporting that the GPU is still busy even after 50 retries with a 1 ms sleep per failure. Because DXVA/D3D11 doesn't provide an API for a "GPU is ready to decode" signal, there still seems to be no better solution than sleeping. Part-of: <!1913>
-
WINAPI_PARTITION_DESKTOP and WINAPI_PARTITION_APP can coexist. Although UWP-only binaries should be used for the production stage, this change will be useful for the development stage. Part-of: <!1962>
-
gstd3d11window_corewindow.cpp(408): warning C4189: 'storage': local variable is initialized but not referenced gstd3d11window_corewindow.cpp(490): warning C4189: 'self': local variable is initialized but not referenced gstd3d11window_swapchainpanel.cpp(481): warning C4189: 'self': local variable is initialized but not referenced Part-of: <!1962>
-
Don't need to put Win32 twice Part-of: <!1962>
-
This AV1 parser implements conversion between the obu, tu and frame alignments, and conversion between the obu-stream and annexb stream-formats. TODO: 1. May need an operating_point property to filter the OBUs. 2. May add a property to disable deep parsing. Part-of: <!1614>
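For illustration, a minimal sketch of re-aligning a stream with the parser; the caps field names (alignment, stream-format) follow the conversions described above, while the file name and the surrounding elements are assumptions:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GError *error = NULL;

      gst_init (&argc, &argv);

      /* Re-align a raw OBU stream to one temporal unit (tu) per buffer. */
      pipeline = gst_parse_launch (
          "filesrc location=sample.av1 ! av1parse ! "
          "video/x-av1,alignment=tu,stream-format=obu-stream ! fakesink",
          &error);
      if (pipeline == NULL) {
        g_printerr ("failed to build pipeline: %s\n", error->message);
        return 1;
      }

      gst_object_unref (pipeline);
      return 0;
    }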
-
Part-of: <!1614>
-
obu->obu_size does not include the bytes of obu_size itself; we need to exclude it when doing the sanity check. Part-of: <!1614>
-
Part-of: <!1614>
-
Seungha Yang authored
The maximum supported texture dimension is pre-defined based on the feature level and cannot be INT_MAX in any case. See also https://docs.microsoft.com/en-us/windows/win32/direct3d11/overviews-direct3d-11-devices-downlevel-intro Part-of: <!1964>
-
Seungha Yang authored
Add P010 Direct3D11 texture format support Part-of: <!1970>
-
Víctor Manuel Jáquez Leal authored
Fix the result of a wrong copy&paste Fixes: #1501 Part-of: <!1976>
-
Otherwise there is a scenario where the library can be found but not the header, resulting in a compilation error. Part-of: <!1975>
-
1. Add mono_chrome to identify the 4:0:0 chroma format. 2. Correct the mapping between subsampling_x/y and the chroma format. There is no 4:4:0 format definition in AV1, and 4:4:4 requires both subsampling_x/y to be 0. 3. Send the chroma format when the color space is not RGB. Fixes: #1502 Part-of: <!1974>
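A sketch of the mapping described above, using the AV1 field names from the commit (mono_chrome, subsampling_x, subsampling_y); the function name is hypothetical:

    /* Map AV1 subsampling flags to a chroma-format label. */
    static const char *
    av1_chroma_format (int mono_chrome, int subsampling_x, int subsampling_y)
    {
      if (mono_chrome)
        return "4:0:0";
      if (subsampling_x && subsampling_y)
        return "4:2:0";
      if (subsampling_x && !subsampling_y)
        return "4:2:2";
      /* there is no 4:4:0 in AV1; both flags at 0 means 4:4:4 */
      return "4:4:4";
    }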
-
The problem is that unreffing the EGLImage/SHM buffer while holding the images_mutex lock may deadlock when a new buffer is advertised and an attempt is made to lock images_mutex there. The advertisement of the new image/buffer is performed in the WPEContextThread, and the blocking dispatch when unreffing wants to run something on the WPEContextThread, but images_mutex has already been locked by the destructor. Delay unreffing images/buffers until outside of images_mutex and instead just clear the relevant fields within the lock. Part-of: <!1843>
-
Seungha Yang authored
Add DXVA/Direct3D11 API based MPEG-2 decoder element Part-of: <!1969>
-
Marijn Suijten authored
Fixes: a5768145 ("ext: Add LDAC encoder") Part-of: <!1985>
-
Marijn Suijten authored
Because there was a typo in one of the duplicates already (see the previous commit), it is much safer to specify these once and only once. Part-of: <!1985>
-
When dropping some OBU, we need to continue processing. The current approach makes the data access go out of the range of the buffer mapping. Part-of: <!1979>
-
The current optimization when the input alignment and output alignment are the same is not quite correct. We simply copy the data from the input buffer to the output buffer, but fail to consider the dropping of OBUs. When we need to drop some OBUs (such as filtering out the OBUs of some temporal ID), we cannot do a simple copy. So we need to always copy the input OBUs into a cache. Part-of: <!1979>
-
The current behaviour for OBU-aligned output is not very precise. Several OBUs may be output together within one gst buffer. Each output gst buffer should contain just one OBU, the same way the h264/h265 parsers behave when NAL aligned. Part-of: <!1979>
-
Part-of: <!1979>
-
Part-of: <!1979>
-
1. Set the default output alignment to frame, rather than the current obu alignment. This makes the behaviour the same as the h264/h265 parsers, which default to AU alignment. 2. Set the default input alignment to byte. It can handle the "not enough data" error while OBU alignment cannot. Also make it conform to the comments. Part-of: <!1979>
-
Return hvc1 for video/x-h265 mime type in mpd helper function Part-of: <!1966>
-
Add a way to support drawing on an application's texture instead of the usual window handle.
To make use of this new feature, the application should follow these steps:
1) Enable this feature via the "draw-on-shared-texture" property
2) Watch the "begin-draw" signal
3) In the "begin-draw" signal handler, the application can request drawing by using the "draw" action signal. Note that the "draw" action signal should be emitted before the "begin-draw" signal handler returns.
NOTE 1) For texture sharing, creating a texture with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag is strongly recommended if possible, because we cannot ensure synchronization of a texture created with D3D11_RESOURCE_MISC_SHARED, and that would cause glitches in the ID3D11VideoProcessor use case.
NOTE 2) Direct3D9Ex doesn't support sharing of a texture created with D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX. In other words, D3D11_RESOURCE_MISC_SHARED is the only option for Direct3D11/Direct3D9Ex interop.
NOTE 3) Because of the missing synchronization around ID3D11VideoProcessor, if the shared texture was created with D3D11_RESOURCE_MISC_SHARED, d3d11videosink might use a fallback texture to convert the DXVA texture to a normal Direct3D texture. The converted texture is then copied to the user-provided shared texture.
* Why not use the generic appsink approach?
In order for an application to store video data produced by GStreamer in the application's own texture, there are two possible approaches: copying our texture into the application's own texture, or drawing on the application's own texture directly. The former (the appsink way) cannot be zero-copy by nature. In order to support zero-copy processing, we need to draw on the application's own texture directly.
For example, assume that the application wants an RGBA texture. Then we can imagine the following case:
"d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory),format=RGBA ! appsink"
(here d3d11convert allocates a new Direct3D texture for the RGBA format)
In the above case, d3d11convert will allocate new texture(s) for the RGBA format and then the application will copy our RGBA texture into the application's own texture again, so one texture allocation plus a per-frame GPU copy happens. Moreover, in order for the application to be able to access our texture, we would need to allocate the texture with additional flags so that the application's Direct3D11 device can read the texture data. That would be another implementation burden on our side.
But with this MR, we can configure the pipeline as "d3d11h264dec ! d3d11videosink". In that way, we can save at least one texture allocation and one per-frame texture copy, since d3d11videosink will convert the incoming texture into the application's texture format directly, without a copy.
* What if we expose the texture without conversion and the application does the conversion by itself?
As mentioned above, for the application to be able to access our texture from the application's Direct3D11 device, we need to allocate the texture in a special form. But in some cases that might not be possible. Also, if a texture belongs to the decoder DPB, exposing such a texture to the application is unsafe, and a usual Direct3D11 shader cannot handle such a texture. To convert the format, the ID3D11VideoProcessor API needs to be used, but that would be an implementation burden for the application.
Part-of: <!1873>
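A minimal sketch of that signal flow, using the property and signal names above; the argument list passed to the "draw" action signal (shared texture handle, misc flags, keyed-mutex acquire/release keys) is an assumption here and should be checked against the d3d11videosink documentation:

    #include <gst/gst.h>

    static gpointer app_shared_handle;   /* handle to the application texture */

    static void
    on_begin_draw (GstElement * sink, gpointer user_data)
    {
      gboolean ret = FALSE;

      /* The "draw" action signal must be emitted before this handler returns;
       * the parameters after the handle are assumptions for illustration. */
      g_signal_emit_by_name (sink, "draw", app_shared_handle,
          (guint) 0, (guint64) 0, (guint64) 0, &ret);
    }

    static void
    setup_sink (GstElement * sink)
    {
      g_object_set (sink, "draw-on-shared-texture", TRUE, NULL);
      g_signal_connect (sink, "begin-draw", G_CALLBACK (on_begin_draw), NULL);
    }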
-
Add two examples to demonstrate "draw-on-shared-texture" use cases. d3d11videosink will draw on the application's own texture without a copy by:
- enabling the "draw-on-shared-texture" property
- making use of the "begin-draw" and "draw" signals
The application then renders the shared texture to the swapchain's backbuffer by using 1) Direct3D11 APIs or 2) Direct3D9Ex + interop APIs. Part-of: <!1873>
-
Seungha Yang authored
* Don't warn about live objects, since ID3D11Debug itself seems to be holding a refcount on the ID3D11Device at the moment we call ID3D11Debug::ReportLiveDeviceObjects(), so it would always report live objects.
* The device might not be able to support some formats (e.g., P010), especially in the case of a WARP device. We don't need to warn about that.
* gst_d3d11_device_new() can be used for device enumeration, so don't warn even if we cannot create a D3D11 device with the given adapter index.
* Don't warn about HLSL compiler warnings. They are just noise and should not be critical at all.
Part-of: <!1986>
Showing
- docs/plugins/gst_plugins_cache.json 27 additions, 0 deletions
- ext/dash/gstdashsink.c 1 addition, 0 deletions
- ext/dash/gstmpdhelper.c 2 additions, 2 deletions
- ext/dash/gstmpdrootnode.c 3 additions, 3 deletions
- ext/ldac/gstldacenc.c 6 additions, 5 deletions
- ext/ldac/meson.build 2 additions, 1 deletion
- ext/srt/gstsrtobject.c 334 additions, 107 deletions
- ext/srt/gstsrtobject.h 2 additions, 0 deletions
- ext/srt/gstsrtsink.c 61 additions, 5 deletions
- ext/srt/gstsrtsink.h 4 additions, 0 deletions
- ext/srt/gstsrtsrc.c 66 additions, 6 deletions
- ext/srt/gstsrtsrc.h 4 additions, 0 deletions
- ext/webrtc/gstwebrtcbin.c 94 additions, 41 deletions
- ext/webrtc/gstwebrtcice.c 0 additions, 39 deletions
- ext/webrtc/gstwebrtcstats.c 6 additions, 6 deletions
- ext/webrtc/transportsendbin.c 0 additions, 1 deletion
- ext/webrtc/transportsendbin.h 0 additions, 1 deletion
- ext/webrtc/transportstream.c 0 additions, 13 deletions
- ext/webrtc/transportstream.h 0 additions, 3 deletions
- ext/webrtc/webrtctransceiver.c 7 additions, 2 deletions