# gst-plugins-bad issues
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues

# webrtcbin does not signal when ICE candidate gathering is done
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/676

## Submitted by Daniel F
**[Link to original bug (#794798)](https://bugzilla.gnome.org/show_bug.cgi?id=794798)**
## Description
The new webrtcbin element has an on-ice-candidate signal, which is emitted for every discovered ICE candidate. However, there is currently no way to tell when all of them have been gathered. I need this to send all candidates appended to the end of the SDP (in my case they are for the local IP address only).
Please either emit the on-ice-candidate signal one more time with a NULL argument (as described at https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/onicecandidate), or add a new signal, e.g. candidate-gathering-done, as in NiceAgent (https://nice.freedesktop.org/libnice/NiceAgent.html).
It would also be beneficial to fix the add-ice-candidate signal so that GStreamer can fail faster if none of the provided ICE candidates can be used. Without this it can only use a timer to decide when to fail.
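A minimal sketch (my addition, not from the report) of how an application can detect end-of-gathering with the `ice-gathering-state` property that webrtcbin eventually gained; property and enum names assume a GStreamer version that ships them:

```c
#include <gst/gst.h>
#define GST_USE_UNSTABLE_API
#include <gst/webrtc/webrtc.h>

static void
on_gathering_state_notify (GstElement * webrtcbin, GParamSpec * pspec,
    gpointer user_data)
{
  GstWebRTCICEGatheringState state;

  g_object_get (webrtcbin, "ice-gathering-state", &state, NULL);
  if (state == GST_WEBRTC_ICE_GATHERING_STATE_COMPLETE) {
    /* All local candidates are known: safe to append them to the SDP
     * and send it to the peer now. */
    g_print ("ICE gathering complete\n");
  }
}

/* Usage:
 *   g_signal_connect (webrtcbin, "notify::ice-gathering-state",
 *       G_CALLBACK (on_gathering_state_notify), NULL);
 */
```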
Version: 1.14.0

# webrtcbin connection-state always new because dtls-transport states don't change
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/758

## Submitted by Jan Schmidt `@thaytan`
**[Link to original bug (#796896)](https://bugzilla.gnome.org/show_bug.cgi?id=796896)**
## Description
GstWebRTCDTLSTransport doesn't ever change the "state" property from NEW, so the overall connection-state of webrtcbin never changes.
Part of the problem is that dtlssrtpbin doesn't expose enough API to report the state properly yet, but for now it can just switch to CONNECTED when on-key-set is fired.

# ksvideosrc: Device Monitor shows "video/x-raw,format=(string)H264" caps instead of "video/x-h264" for Logitech C920
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/666

## Submitted by Marcos Kintschner
**[Link to original bug (#793939)](https://bugzilla.gnome.org/show_bug.cgi?id=793939)**
## Description
I'm using a webcam (Logitech C920) on Windows 10. The device monitor shows some caps containing "video/x-raw, format=(string)H264", which AFAIK is not valid (it should be "video/x-h264").
Here are the full caps I got from device monitor:
___
gst-device-monitor-1.0.exe
Probing devices...
Device found:
name : HD Pro Webcam C920
class : Video/Source
caps : video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)160, height=(int)90, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)12/11;
video/x-raw, format=(string)YUY2, width=(int)320, height=(int)180, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)12/11;
video/x-raw, format=(string)YUY2, width=(int)432, height=(int)240, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)640, height=(int)360, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)800, height=(int)448, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)800, height=(int)600, framerate=(fraction)[ 5/1, 24/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)864, height=(int)480, framerate=(fraction)[ 5/1, 24/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)960, height=(int)720, framerate=(fraction)[ 5/1, 15/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)1024, height=(int)576, framerate=(fraction)[ 5/1, 15/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, framerate=(fraction)[ 5/1, 10/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)1600, height=(int)896, framerate=(fraction)[ 5/1, 15/2 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)1920, height=(int)1080, framerate=(fraction)5/1, pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)2304, height=(int)1296, framerate=(fraction)2/1, pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)YUY2, width=(int)2304, height=(int)1536, framerate=(fraction)2/1, pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)640, height=(int)480, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)160, height=(int)90, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)160, height=(int)120, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)176, height=(int)144, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)12/11;
video/x-raw, format=(string)H264, width=(int)320, height=(int)180, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)320, height=(int)240, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)352, height=(int)288, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)12/11;
video/x-raw, format=(string)H264, width=(int)432, height=(int)240, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)640, height=(int)360, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)800, height=(int)448, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)800, height=(int)600, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)864, height=(int)480, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)960, height=(int)720, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)1024, height=(int)576, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)1280, height=(int)720, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)1600, height=(int)896, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
video/x-raw, format=(string)H264, width=(int)1920, height=(int)1080, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)640, height=(int)480, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)160, height=(int)90, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)160, height=(int)120, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)176, height=(int)144, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)12/11;
image/jpeg, width=(int)320, height=(int)180, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)320, height=(int)240, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)352, height=(int)288, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)12/11;
image/jpeg, width=(int)432, height=(int)240, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)640, height=(int)360, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)800, height=(int)448, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)800, height=(int)600, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)864, height=(int)480, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)960, height=(int)720, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)1024, height=(int)576, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)1280, height=(int)720, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)1600, height=(int)896, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
image/jpeg, width=(int)1920, height=(int)1080, framerate=(fraction)[ 5/1, 30/1 ], pixel-aspect-ratio=(fraction)1/1;
gst-launch-1.0 ksvideosrc device-path="\\\\\?\\usb\#vid_046d\&pid_082d\&mi_00\#7\&38a25b45\&0\&0000\#\{6994ad05-93ef-11d0-a3cc-00a0c9223196\}\\global" ! ...
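For comparison, a minimal sketch (my addition, with assumed resolutions) of how the reported and expected caps differ when built programmatically; encoded H.264 gets its own media type instead of a raw format field:

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* What ksvideosrc currently reports: H264 crammed into a raw format field. */
  GstCaps *reported = gst_caps_from_string (
      "video/x-raw, format=(string)H264, width=(int)640, height=(int)480");
  /* What an encoded H.264 stream should look like. */
  GstCaps *expected = gst_caps_from_string (
      "video/x-h264, width=(int)640, height=(int)480");

  gchar *s1 = gst_caps_to_string (reported);
  gchar *s2 = gst_caps_to_string (expected);
  g_print ("reported: %s\nexpected: %s\n", s1, s2);

  g_free (s1);
  g_free (s2);
  gst_caps_unref (reported);
  gst_caps_unref (expected);
  return 0;
}
```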
Version: 1.x

# adaptivedemux: Provide API for being able to set properties on internal HTTP (and other) sources
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/385

## Submitted by pot..@..ty.com
**[Link to original bug (#765986)](https://bugzilla.gnome.org/show_bug.cgi?id=765986)**
## Description
Please forgive me; I'm not good at English.
I have been using version 1.8.0 of GStreamer.
I am building a pipeline to play HTTP Live Streaming (HLS) video.
The HTTP version plays fine with this pipeline:
gst-launch-1.0 souphttpsrc location=http://path/to/hls.m3u8 ! decodebin ! videoconvert ! autovideosink
But the HTTPS version can't play:
gst-launch-1.0 souphttpsrc ssl-strict=false location=https://path/to/hls.m3u8 ! decodebin ! videoconvert ! autovideosink
By the way, an MP4 file can be played over HTTPS:
gst-launch-1.0 souphttpsrc ssl-strict=false location=https://path/to/movie.mp4 ! decodebin ! videoconvert ! autovideosink
Please point out if there is a mistake in how I am building the pipeline.
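A possible workaround sketch (my addition, not from the report): with playbin, the source-setup signal lets an application set properties such as ssl-strict on the internal souphttpsrc; this issue asks for an equivalent mechanism for the sources that adaptivedemux spawns internally. Signal and property names are from playbin/souphttpsrc as I recall them:

```c
#include <gst/gst.h>

static void
on_source_setup (GstElement * playbin, GstElement * source, gpointer user_data)
{
  /* Only touch the property if this source actually has it
   * (souphttpsrc does; other sources may not). */
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (source), "ssl-strict"))
    g_object_set (source, "ssl-strict", FALSE, NULL);
}

/* Usage:
 *   GstElement *playbin = gst_element_factory_make ("playbin", NULL);
 *   g_object_set (playbin, "uri", "https://path/to/hls.m3u8", NULL);
 *   g_signal_connect (playbin, "source-setup",
 *       G_CALLBACK (on_source_setup), NULL);
 */
```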
Version: 1.8.0

# dvdspu: figure out how to make it work with hardware decoders and subpicture overlays
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/78

## Submitted by Jan Schmidt `@thaytan`
Assigned to **Jan Schmidt `@thaytan`**
**[Link to original bug (#685282)](https://bugzilla.gnome.org/show_bug.cgi?id=685282)**
## Description
Getting the DVD SPU to paint generically is a requirement for allowing the DVD elements to plug / output hardware decoder caps.
Here's a conversation we had about it on IRC:
Sep 26 01:37:04 * thaytan wonders how SPU works
Sep 26 01:37:19 `<bilboed>` thaytan, with vdpau ?
Sep 26 01:37:24 `<thaytan>` *nod*
Sep 26 01:37:35 `<thaytan>` how it should work, that is
Sep 26 01:37:36 `<bilboed>` they have also VdpBitmapSurface
Sep 26 01:38:01 `<bilboed>` so VdpVideoSurface => YUV stuff, VdpBitmapSurface => RGB stuff
Sep 26 01:38:11 `<bilboed>` VdpOutputSurface => output of the compositor for display
Sep 26 01:38:40 `<bilboed>` you can create VdpBitmapSurface and map(write) stuff on it
Sep 26 01:39:01 `<bilboed>` then you give it to the compositor with coordinates (and whatever else) and that's basically it
Sep 26 01:39:09 `<thaytan>` hmmm
Sep 26 01:39:26 `<bilboed>` so as a result ... I'm also gonna have to figure out how to solve hw-compositing
Sep 26 01:39:28 `<thaytan>` not sure I see how that fits into a GStreamer/rsndvdbin context
Sep 26 01:39:31 `<bilboed>` in a generic fashion that is
Sep 26 01:39:55 `<bilboed>` thaytan, was thinking you could slap GstOverlayMeta (or sth like that) with attached GstBuffers
Sep 26 01:40:06 `<bilboed>` thaytan, maybe videomixer or some generic element could do that
Sep 26 01:40:09 `<thaytan>` I guess it's a vdpspu element with video and subpicture inputs as currently with dvdspu
Sep 26 01:40:25 `<thaytan>` except the video pad accepts vdp output surface caps
Sep 26 01:40:38 `<bilboed>` I'd prefer to avoid creating yet-another-custom-element
Sep 26 01:40:45 `<thaytan>` but, I don't know what it outputs
Sep 26 01:41:12 `<thaytan>` bilboed: I don't know how to build it generically
Sep 26 01:41:48 `<thaytan>` the dvdspu element uses the video stream passing through to a) paint onto b) uses the timestamps to crank the SPU state machine
Sep 26 01:42:44 `<bilboed>` what do you need more apart from "put this RGB bitmap at these coordinates for this timestamp and this duration"
Sep 26 01:43:48 `<thaytan>` it needs the timestamps and segment info on the video stream so it knows which pixels to generate
Sep 26 01:44:04 `<thaytan>` it's more "here's a video frame, what's the overlay?"
Sep 26 01:44:13 `<thaytan>` also, dvdspu works in YUV too
Sep 26 01:44:13 `<bilboed>` sure, but it doesn't care about the *content* of that frame
Sep 26 01:44:28 `<thaytan>` bilboed: not if it's not doing the compositing, no
Sep 26 01:44:34 `<bilboed>` right
Sep 26 01:45:11 `<bilboed>` so it could see the stream go through, watch/collect segment/timestamps and decide what subpicture to attach to it (without *actually* doing any processing and letting downstream handle that)
Sep 26 01:45:13 `<thaytan>` but the model is still to pass the video buffer stream through the spu element so it can see the timing info it needs, right?
Sep 26 01:45:22 `<thaytan>` oh, of course
Sep 26 01:45:28 `<thaytan>` that's what I was suggesting, I guess I wasn't clear
Sep 26 01:45:35 `<__tim>` thaytan, dvdspu should support GstVideoOverlayComposition imho
Sep 26 01:45:38 `<bilboed>` I'm not *that* familiar with SPU
Sep 26 01:45:42 `<bilboed>` also, what __tim said :)
Sep 26 01:45:54 `<bilboed>` like that I don't have to solve it in 500 different elements
Sep 26 01:45:56 `<thaytan>` ok, I guess I'll have to look at the GstVideoOverlayComposition API
Sep 26 01:46:54 * bilboed is not looking forward "at all" to fixing this for cluster-gst
Sep 26 01:47:12 `<__tim>` it's very dumb, you can provide one or more ARGB or AYUV rectangles and either use helper API to put them onto the raw video, or attach them to the buffer; the sink or whatever can then take over the overlaying using that
Sep 26 01:47:40 `<__tim>` and it will do a bunch of conversions for you and cache them if the sink or whatever does the overlaying doesn't support what you supplied
Sep 26 01:48:54 `<thaytan>` well, that sounds feasible - although less efficient than dvdspu painting natively if the composite is in software
Sep 26 01:49:28 `<thaytan>` maybe it can be extended to add RLE AYUV rectangles as a format though?
Sep 26 01:49:44 `<__tim>` thaytan, how sure are you of that? because basically you have to parse the RLE data for every single frame, right? is that really so much faster than blitting some ready-made rectangle using orc?
Sep 26 01:50:04 `<thaytan>` dvdspu gets to skip a lot of transparent pixels
Sep 26 01:52:21 `<__tim>` yeah, but it's if else and loops etc. You might be right, I'm just curious how much difference it actually makes. Also, you don't have to use the API to blit your pixels, you can still do that as you do now and only attach the AYUV rectangle if downstream supports that
Sep 26 01:54:13 `<__tim>` it's just convenient because you only have one code path
Sep 26 01:55:01 `<thaytan>` it sounds like a structural improvement
### Blocking
* [Bug 663750](https://bugzilla.gnome.org/show_bug.cgi?id=663750)
* [Bug 725900](https://bugzilla.gnome.org/show_bug.cgi?id=725900)

# dtlsdec: Critical warnings in gst-inspect
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/811

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#797364)](https://bugzilla.gnome.org/show_bug.cgi?id=797364)**
## Description
** (gst-inspect-1.0:23460): CRITICAL **: 23:38:49.968: file gstdtlsagent.c: line 188 (gst_dtls_agent_init): should not be reached
** (gst-inspect-1.0:23460): CRITICAL **: 23:38:49.968: gst_dtls_agent_set_property: assertion 'self->priv->ssl_context' failed
[...]
** (gst-inspect-1.0:23460): CRITICAL **: 23:38:49.968: gst_dtls_agent_get_certificate_pem: assertion 'GST_IS_DTLS_CERTIFICATE (self->priv->certificate)' failed

# meson: Unknown variables while disabling msdk plugin
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/809

## Submitted by Jordan Petridis
**[Link to original bug (#797354)](https://bugzilla.gnome.org/show_bug.cgi?id=797354)**
## Description
While building with `auto_features=enabled -Dmsdk=disabled` I encountered the following error:
'tests/check/meson.build:18:0: ERROR: Unknown variable "have_msdk".'
The same error was later thrown for "msdk_dep".
This happened right after 55134df54c99b09556d3d0f60b9b4f029123af0e, which to my understanding added an early exit before the variables were declared.
I am not sure the following patch is the proper way to fix it; using the auto-feature magic in the tests too would probably be cleaner, but I did not manage to figure out whether that is possible.
Tested the following patch in gnome-build-meta: https://gitlab.gnome.org/GNOME/gnome-build-meta/commit/8915cb51a25559163e2f1e6736742d8c242acf5f

# ivfparse: Add vp9 header parsing to support dynamic resolution change notification
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/736

## Submitted by Sreerenj Balachandran `@sree`
**[Link to original bug (#796599)](https://bugzilla.gnome.org/show_bug.cgi?id=796599)**
## Description
The IVF parser can detect VP9 streams, but it only extracts the IVF header to announce the resolution. This won't work for dynamic resolution changes, especially since VP9 allows inter prediction from frames of varying resolution.
For VP8, ivfparse parses the uncompressed frame header and announces any resolution change.
Adding support for VP9 frame header parsing in ivfparse is more complicated than for VP8.
The easy option would be to add a dependency on the codecparsers library so that we can use the VP9 codec parsing APIs directly.
The second option is to add a dependency on GStreamer's bitreader API and implement the parsing support in ivfparse itself.
I would like to know what everybody thinks about this.

# wasapi: add a new volume property and implement volume/mute using ISimpleAudioVolume
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/715

## Submitted by Christoph Reiter (lazka)
**[Link to original bug (#796386)](https://bugzilla.gnome.org/show_bug.cgi?id=796386)**
## Description
Created attachment 372385
wasapi: add a new volume property and implement volume/mute using ISimpleAudioVolume
Implement the mute/volume getters and setters using ISimpleAudioVolume. This allows setting those properties without delay, and volume/mute changes will show up in sndvol.exe.
ISimpleAudioVolume only works in shared mode, so keep the old way of muting around (setting the buffer to silence) to not break any existing code. Volume changes in exclusive mode have no effect at the moment. Also missing is event handling of external volume/mute changes through IAudioSessionEvents.
~~**Patch 372385**~~, "wasapi: add a new volume property and implement volume/mute using ISimpleAudioVolume":
[0001-wasapi-add-a-new-volume-property-and-implement-volum.patch](/uploads/94e6e41b4315bf2e21dd3c671e29fcc9/0001-wasapi-add-a-new-volume-property-and-implement-volum.patch)
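For context, a minimal sketch (my addition, not the attached patch) of the ISimpleAudioVolume calls involved, assuming an already-initialized shared-mode IAudioClient and the C COM macros (COBJMACROS):

```c
#define COBJMACROS
#include <audioclient.h>

/* `client` is an already-initialized shared-mode IAudioClient. */
static HRESULT
set_volume_and_mute (IAudioClient * client, float volume, BOOL mute)
{
  ISimpleAudioVolume *sav = NULL;
  HRESULT hr;

  hr = IAudioClient_GetService (client, &IID_ISimpleAudioVolume,
      (void **) &sav);
  if (FAILED (hr))
    return hr;

  /* Both calls take effect immediately and are reflected in sndvol.exe. */
  ISimpleAudioVolume_SetMasterVolume (sav, volume, NULL);
  ISimpleAudioVolume_SetMute (sav, mute, NULL);
  ISimpleAudioVolume_Release (sav);
  return S_OK;
}
```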
Version: 1.14.0

# webrtcbin: signal "no-more-pads" is not emitted
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/671

## Submitted by Andreas Frisch `@fraxinas`
**[Link to original bug (#794624)](https://bugzilla.gnome.org/show_bug.cgi?id=794624)**
## Description
The summary should be obvious enough.

# dashdemux: segmentbase type with 'sidx' is not working as expected
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/619

## Submitted by Jun Xie
**[Link to original bug (#788763)](https://bugzilla.gnome.org/show_bug.cgi?id=788763)**
## Description
e.g.
http://dash.edgesuite.net/dash264/TestCases/1a/netflix/exMPD_BIP_TC1.mpd
Currently, the whole file is downloaded without using range downloads, and bitrate switching is not available.
The expected behaviour is that the 'sidx' box is parsed, segments are retrieved with range downloads, and the bitrate can be switched.

# HLS stream's video stops after discontinuity
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/585

## Submitted by Charlie Turner
**[Link to original bug (#785129)](https://bugzilla.gnome.org/show_bug.cgi?id=785129)**
## Description
Created attachment 355950
Minimal testcase showing the error
The attachment is a minimal master playlist that demonstrates the problem. It references media from TED.com, which may or may not still exist by the time someone sees this ticket, but I can find a new test case, so let me know.
The problem has something to do with switching variant streams combined with discontinuities. The video/audio media playlists have TED's opening credits as a separate stream, then a discontinuity tag, and then the actual content. The video freezes close to the end of the opening credits.
The other interesting things to note about this example:
1) If you remove the BANDWIDTH keywords, it works fine. Without them there is no variant switching, so the problem has something to do with variant switching.
2) If you comment out the audio rendition, the video doesn't stall, so the presence of both audio and video is required to trigger this bug.
I play it like this:
$ ./gstreamer/tools/gst-launch-1.0 playbin uri='file:///path/to/manifest-reduced.m3u8'
**Attachment 355950**, "Minimal testcase showing the error":
[manifest-reduced.m3u8](/uploads/ce1fecb48140f5ce33d50b446c1f676a/manifest-reduced.m3u8)

# h264parse: H264 that has an SPS without a PPS following
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/571

## Submitted by Aurelien BOUIN
**[Link to original bug (#783861)](https://bugzilla.gnome.org/show_bug.cgi?id=783861)**
## Description
h264parse drops frames when there were no previous picture headers (GST_H264_PARSE_STATE_VALID_PICTURE_HEADERS).
But some H264 encoders generate an SPS header with no PPS header following it...
The code in plugins-bad/gst/videoparsers/gsth264parse.c, on receiving an SPS (GST_H264_NAL_SUBSET_SPS), resets h264parse->state to 0 (meaning that previous SPS and PPS headers are ignored) instead of clearing only the SPS state.
So the change would be something like
`h264parse->state &= GST_H264_PARSE_STATE_GOT_PPS;`
instead of
`h264parse->state = 0;`
Attached is an H264 video with the PPS missing.

# vtenc: should support GLMemory
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/522

## Submitted by Nick Kallen
**[Link to original bug (#778496)](https://bugzilla.gnome.org/show_bug.cgi?id=778496)**
## Description
Currently the vtenc caps disallow GLMemory. However, having tested this on iOS and reviewed the code, it actually works with GLMemory. Since the underlying pixel buffer is always a Core Media buffer,
`pbuf = gst_core_media_buffer_get_pixel_buffer (frame->input_buffer);`
this statement is valid whether the input buffer's pixel buffer is a GL buffer or not.
Therefore, just changing the static caps works.
Version: 1.11.1

# hlssink: include the BANDWIDTH tag
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/509

## Submitted by Andy S
**[Link to original bug (#777229)](https://bugzilla.gnome.org/show_bug.cgi?id=777229)**
## Description
Would it be possible to change the hlssink to allow adaptive bitrate streaming?
This would mean that the #EXT-X-STREAM-INF:BANDWIDTH=1280000 type tag is written to the playlist file. Multiple bandwidth tags would be written to the playlist file to allow the client to choose which bandwidth is best for the current connection.
See here for more info:
https://tools.ietf.org/html/draft-pantos-http-live-streaming-20#section-4.3.4.2
It says:
4.3.4.2. EXT-X-STREAM-INF
The EXT-X-STREAM-INF tag specifies a Variant Stream, which is a set
of Renditions which can be combined to play the presentation. The
attributes of the tag provide information about the Variant Stream.
The URI line that follows the EXT-X-STREAM-INF tag specifies a Media
Playlist that carries a Rendition of the Variant Stream. The URI
line is REQUIRED. Clients that do not support multiple video
renditions SHOULD play this Rendition
Its format is:
#EXT-X-STREAM-INF:`<attribute-list>`
`<URI>`
The following attributes are defined:
BANDWIDTH
The value is a decimal-integer of bits per second. It represents
the peak segment bit rate of the Variant Stream.
If all the Media Segments in a Variant Stream have already been
created, the BANDWIDTH value MUST be the largest sum of peak
segment bit rates that is produced by any playable combination of
Renditions. (For a Variant Stream with a single Media Playlist,
this is just the peak segment bit rate of that Media Playlist.)
An inaccurate value can cause playback stalls or prevent clients
from playing the variant.
If the Master Playlist is to be made available before all Media
Segments in the presentation have been encoded, the BANDWIDTH
value SHOULD be the BANDWIDTH value of a representative period of
similar content, encoded using the same settings.
Every EXT-X-STREAM-INF tag MUST include the BANDWIDTH attribute.
AVERAGE-BANDWIDTH
The value is a decimal-integer of bits per second. It represents
the average segment bit rate of the Variant Stream.
If all the Media Segments in a Variant Stream have already been
created, the AVERAGE-BANDWIDTH value MUST be the largest sum of
average segment bit rates that is produced by any playable
combination of Renditions. (For a Variant Stream with a single
Media Playlist, this is just the average segment bit rate of that
Media Playlist.) An inaccurate value can cause playback stalls or
prevent clients from playing the variant.
If the Master Playlist is to be made available before all Media
Segments in the presentation have been encoded, the AVERAGE-
BANDWIDTH value SHOULD be the AVERAGE-BANDWIDTH value of a
representative period of similar content, encoded using the same
settings.
The AVERAGE-BANDWIDTH attribute is OPTIONAL.
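For illustration, a minimal hypothetical master playlist of the kind hlssink would need to emit (URIs and bitrates are made up):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,AVERAGE-BANDWIDTH=1000000
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000,AVERAGE-BANDWIDTH=2000000
mid/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=7680000,AVERAGE-BANDWIDTH=6000000
high/playlist.m3u8
```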
Version: 1.10.2

# [PLUGIN-MOVE] Move fdkaac to gst-plugins-ugly
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/469

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#774570)](https://bugzilla.gnome.org/show_bug.cgi?id=774570)**
## Description
.

# [PLUGIN-MOVE] Move dts to gst-plugins-ugly
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/468

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#774569)](https://bugzilla.gnome.org/show_bug.cgi?id=774569)**
## Description
.

# waylandsink: add support for the wayland presentation time interface
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/402

## Submitted by Wonchul Lee
**[Link to original bug (#768079)](https://bugzilla.gnome.org/show_bug.cgi?id=768079)**
## Description
I'm bringing over comments about this task and wrapping each writer's name in angle brackets; sorry for the poor readability.
Waylandsink was handled by George Kiagiadakis, who had written presentation time interface code for the demo, but the interface has since changed and settled down as a stable protocol.
I started from George's work (http://cgit.collabora.com/git/user/gkiagia/gst-plugins-bad.git/log/?h=demo), removing the presentation queue and accounting for display stack delay.
That work predicted the display stack latency from wl_commit/damage/attach to frame presentation, and Pekka Paalanen (pq) advised that this would not accurately estimate the delay from wl_surface_commit() to display.
(This is part of the comments:)
`<pq>` wonchul, if you are trying to estimate the delay from wl_surface_commit() to display, and you don't sync the time you call commit() to the incoming events, that's going to be a lot less accurate.
`<pq>` 11:11:07> no, I literally meant replacing the queueing protocol calls with a queue implementation in the sink, so you don't use the queueing protocol anymore, but rely only on the feedback protocol to trigger attach+commits from the queue.
`<pq>` 11:12:27> the queue being a timestamp-ordered list of frame, just like in the weston implementation.
So estimating the delay that way from Wayland is not very accurate.
I turned to adding a queue that holds buffers before render() is performed in waylandsink.
`<Olivier Crête>`
I'm a bit concerned about adding a queue in the sink that would increase the latency unnecessarily. I wonder if this could be done while queueing only around one buffer during normal streaming. Are we talking about queuing the actual frames or just information about the frames?
`<Wonchul Lee>`
I've queued references to frames and tried to render based on the wayland presentation clock.
Adding a queue in the sink could bring some delay depending on the content. It's not yet clear to me which specific factor causes the delay, but yes, it would increase the latency at the moment.
The idea was to disable clock synchronization in gstbasesink and render (wayland commit/damage/attach) frames based on the calibrated wayland clock. I pushed references to gstbuffers onto the queue and set an async clock callback to request rendering at the right time, and then rendered or dropped each frame depending on the adjusted timestamp.
These changes have the issue that the adjusted timestamp at which rendering is requested keeps getting later than expected, which in some cases can cause most frames to be dropped, since the adjusted timestamp was always late.
So I'm now looking at audiobasesink as a reference for adjusting clock synchronization of frames against the wayland clock.
`<Olivier Crête>`
This work has two separate goals:
1. When the video has a different framerate than the display, frames should be dropped more or less evenly: if you need to display 4 out of 5 frames, it should be something like 1,2,3,4,6,7,8,9,11,...; if you need to display 30 out of 60 frames, it should display 1,3,5,7,9, etc. Currently GstBaseSink is not very clever about that.
And we have to be careful, as this can also be caused by the compositor not being able to keep up. Just because the display can do 60 fps does not mean the compositor is actually able to produce 60 new frames; it could be limited to a lower number, so we also have to make sure we're protected against that.
2. We want to guess the latency added by the display stack. The current GStreamer video sinks more or less assume that a buffer is rendered immediately when the render() vmethod returns, but this is not really how current display hardware works, especially with double or triple buffering. So we want to know how much in advance to submit the buffer, but not so early that it is displayed one interval too soon.
I just asked @nicolas a quick question about how he thought we should do this; we then spent two hours whiteboarding ideas about it and have barely been able to define the problem.
Here are some ideas we bounced around:
After submitting one frame (the first frame? the preroll frame?), we can have an idea of the upper bound of the latency for the live pipeline case. It should be the time between the moment a frame was submitted and when it was finally rendered + the "refresh". We can probably delay sending the async-done until the presented event of the first frame has arrived.
For the non-live case, we can probably find a way to submit the frame as early as possible before the next refresh. Finding that time is the tricky part, I think.
@wonchul: could you summarize the different things you tried, what the hypotheses were, and what the results were? It's important to keep these kinds of records for the R&D tax filings (and so we can keep up with your work).
@pq or @daniels:
what is the logic behind the seq field, how do you expect it can be used? Do you know any example where it is used?
I'm also not sure how we can detect the case where the compositor cannot keep up, or where the compositor is gnome-shell and has a GC that makes it miss a couple of frames for no good reason.
From the info in the presented event (or any other way), is there a way we can evaluate the latest point at which we can submit a buffer and have it arrive in time for a specific refresh? Or do we have to try, and then do some kind of search to find what those deadlines are in practice?
`<Pekka Paalanen>`
seq field of wp_presentation_feedback.presented event:
No examples of use, I don't think. I didn't originally consider it as needed, but it was added to allow implementing GLX_OML_sync_control on top of it. I do not think we should generally depend on seq unless you specifically care about the refresh count instead of timings. My intention with the design was that new code can work better with timestamps, while old code you don't want to port to timestamps could use seq as it has always done. Timestamps are "accurate", while seq may have been estimated from a clock in the kernel and may change its rate or may not have a constant rate at all.
seq comes from a time, when display refresh was a known guaranteed constant frequency, and you could use it as a clock by simply counting cycles. I believe all timing-sensitive X11 apps have been written with this assumption. But it is no longer exactly true, it has caveats (hard to maintain across video mode switches or display suspends, lacking hardware support, etc.), and with new display tech it will become even less true (variable refresh rate, self-refresh panels, ...).
seq is not guaranteed to be provided, it may be zero depending on the graphics stack used by the compositor. I'm also not sure what it means if you don't have both VSYNC and HW_COMPLETION in flags
The timestamp OTOH is always provided, but it may have some caveats which should be indicated by unset bits in flags.
Compositor not keeping up:
Maybe you could use the tv + refresh from presented event to guess when the compositor should be presenting your frame, and compare afterwards with what actually happened?
I can't really think of a good way to know whether the compositor cannot keep up or why it cannot keep up. Hiccups can happen, and the compositor probably won't know why either. All I can say is: collect statistics and analyze them over time. This might be a topic for further investigation, but to get more information about which steps take too much time we need some kernel support (explicit fencing) that is being developed, and we need to make the compositor use that information.
Only hand-waving, sorry.
Finding the deadline:
I don't think there is a way to know really, and also the compositor might be adjusting its own schedules, so it might be variable.
The way I imagined it is that from the presented event you compute the time of the next possible presentation, and if you want to hit that, you submit a frame ASAP. This should get you just below one display-frame-cycle of latency in any case, if your rendering is already complete.
If we really need the deadline, that would call for extending the protocol, so that the compositor could tell you when the deadline is. The compositor chooses the deadline based on how fast it thinks it can do a composition and hit the right vblank.
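To make the feedback flow above concrete, a minimal client-side sketch (my addition, assuming a header generated by wayland-scanner from the stable presentation-time protocol XML):

```c
#include <stdint.h>
#include <stdio.h>
/* Generated by wayland-scanner from the stable presentation-time XML
 * (assumption: this header is available in the build). */
#include "presentation-time-client-protocol.h"

static void
feedback_sync_output (void *data, struct wp_presentation_feedback *fb,
    struct wl_output *output)
{
  /* Presentation will happen on this output. */
}

static void
feedback_presented (void *data, struct wp_presentation_feedback *fb,
    uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec,
    uint32_t refresh, uint32_t seq_hi, uint32_t seq_lo, uint32_t flags)
{
  /* The timestamp is in the clock advertised by wp_presentation.clock_id;
   * `refresh` is the predicted delta (ns) to the next possible
   * presentation, which is what pq suggests using for scheduling the
   * next attach+commit. */
  uint64_t sec = ((uint64_t) tv_sec_hi << 32) | tv_sec_lo;
  printf ("presented at %llu.%09u, next refresh in %u ns\n",
      (unsigned long long) sec, tv_nsec, refresh);
  wp_presentation_feedback_destroy (fb);
}

static void
feedback_discarded (void *data, struct wp_presentation_feedback *fb)
{
  /* The frame was never shown. */
  wp_presentation_feedback_destroy (fb);
}

static const struct wp_presentation_feedback_listener feedback_listener = {
  feedback_sync_output,
  feedback_presented,
  feedback_discarded,
};

/* Before committing a frame (`presentation` and `surface` obtained elsewhere):
 *   struct wp_presentation_feedback *fb =
 *       wp_presentation_feedback (presentation, surface);
 *   wp_presentation_feedback_add_listener (fb, &feedback_listener, NULL);
 *   wl_surface_commit (surface);
 */
```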
`<Wonchul Lee>`
About the latency: I tried to measure the latency added by the display stack, from the wl commit/damage/attach to frame presentation. It's a variable delay depending on the situation, as pq mentioned before, and it can disturb targeting the next presentation. We could assume an optimal latency by accumulating it and observing the gap via presentation feedback, but that may not always be reliable.
I tried to synchronize the GStreamer clock with the presentation feedback to render frames on time, and added a queue in GstWaylandSink to request rendering on each presentation feedback event if a frame was due, similar to what George did. It doesn't fit well with GstBaseSink though; GstWaylandSink needs to disable BaseSink time synchronization and do the computation itself. I ran into unexpected underflow (a consistently increasing delay) when playing an mpegts stream, so it also needs proper QoS handling to prevent underflow.
It would be good to get a reliable latency figure from the display stack to use when synchronizing presentation time, whether GstWaylandSink computes it itself or not; there is a latency we're missing either way, though I'm not sure that is feasible.
`<Pekka Paalanen>`
@wonchul btw. what do you mean when you say "synchronize GStreamer clock time with presentation feedback"?
Does it mean something else than looking at what clock is advertised by wp_presentation.clock_id and then synchronizing GStreamer clock with clock_gettime() using the given clock id? Or does synchronizing mean something else than being able to convert a timestamp from one clock domain to the other domain?
`<Nicolas Dufresne>`
@pq I would need some clarification about submitting frames ASAP. If we blindly do that, frames will get displayed too soon on screen (in playback, decoders are much faster than the expected render speed). In GStreamer, we have infrastructure to wait until the moment is right. The logic (simplified) is to wait for the right moment minus the "currently expected" render latency, and then submit. This is for the playback case, of course, and is meant to ensure the best possible A/V sync. In that case we expect the presentation information to help by constantly correcting that moment. What we miss is some semantics: blindly obeying the computed render delay of the last frames does not seem like the best idea. We expected to be able to calculate, or estimate, a submission window that will (most of the time) hit the screen at an estimated time.
For the live case, we're still quite screwed; nothing seems to improve our situation. We need to pick a latency at start, and if we later find that latency was too small (the latency is the window in which we are able to adapt), we end up glitching the audio in order to increase that latency window. So again, some semantics we could use to calculate a pessimistic latency from the first presentation report would be nice.
`<Olivier Crête>`
I think that in the live case you can probably keep a one-frame queue at the sink, so when a new frame arrives you can decide whether to present the queued one at the next refresh or replace it with the new one. Then the thread that talks to the compositor (and gets the events, etc.) can pick buffers from the "queue" to send to the compositor.
`<Nicolas Dufresne>`
Ok, that makes sense for non-live. It would be nice to document the intended use; it was far from obvious. We kept thinking we needed to look at the numbers, not understanding at first that the moment we get called back is what matters. You seem to assume that we can "pick" a frame, as if the sink were pulling whatever it wants at random; unfortunately that's not how things work. We could, though, introduce a small (late) queue so we only start blocking upstream when that queue is full, and it would help with making decisions.
For live it's much more complex. The entire story about declared latency exists because if we don't declare any latency, that queue will always be empty. Worst case, the report will always tell us that we displayed the frame late. I'm quite sure you told me that the render pipeline can have multiple steps, where submitting frames 1, 2, 3 at one vblank distance will render on vblanks 3, 4, 5, with effectively three vblanks of latency. That latency is what we need to report for proper A/V sync in a live pipeline, and changing it is to be done with care, as it breaks the audio. There we need some ideas, because right now we have no clue.

# directsoundsrc: device-name doesn't accept Windows-given names if they contain special characters
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/246

## Submitted by Thomas Roos
**[Link to original bug (#748681)](https://bugzilla.gnome.org/show_bug.cgi?id=748681)**
## Description
device-name doesn't accept Windows-given names if they contain special characters, e.g. the German "(High Definition Audio-Gerät)". It works if you change the name in the Windows registry, which is only feasible for testing purposes. Maybe a device-ID (GUID) based solution is needed?
$ GST_DEBUG=3,directsoundsrc:5 gst-launch-1.0.exe directsoundsrc buffer-time=10000 device-name="mic (High Definition Audio-Gerät)" ! directsoundsink buffer-time=200000
0:00:00.043914339 3444 02D27260 WARN audioresample gstaudioresample.c:1537:plugin_init: Orc disabled, can't benchmark int vs. float resampler
0:00:00.048490103 3444 02D27260 WARN GST_PERFORMANCE gstaudioresample.c:1540:plugin_init: orc disabled, no benchmarking done
0:00:00.064831675 3444 02D27260 DEBUG directsoundsrc gstdirectsoundsrc.c:164:gst_directsound_src_class_init: initializing directsoundsrc class
0:00:00.139332696 3444 02D27260 DEBUG directsoundsrc gstdirectsoundsrc.c:259:gst_directsound_src_init:<GstDirectSoundSrc@00513220> initializing directsoundsrc
0:00:00.144179343 3444 02D27260 ERROR GST_PIPELINE grammar.y:453:gst_parse_element_set: could not set property "device-name" in element "directsoundsrc0" to "mic (High Definition Audio-Gerät)"
0:00:00.148407097 3444 02D27260 DEBUG directsoundsrc gstdirectsoundsrc.c:202:gst_directsound_src_getcaps:`<directsoundsrc0>` get caps
0:00:00.151341707 3444 02D27260 DEBUG directsoundsrc gstdirectsoundsrc.c:202:gst_directsound_src_getcaps:`<directsoundsrc0>` get caps
[Invalid UTF-8] WARNING: erroneous pipeline: could not set property "device-name" in element "directsoundsrc0" to "mic (High Definition Audio-Ger\xe4t)"

# h264parse, h265parse: disable passthrough if upstream doesn't provide profile/level info
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/181

## Submitted by Tim Müller `@tpm`
**[Link to original bug (#738281)](https://bugzilla.gnome.org/show_bug.cgi?id=738281)**
## Description
+++ This bug was initially created as a clone of [Bug 732239](https://bugzilla.gnome.org/show_bug.cgi?id=732239) +++
> Now we only need to disable passthrough mode in h26*parse if
> not all information is available in the upstream caps yet :)
The info could be set on the caps directly, or be in codec_data for AVC, of course.
The question is what to do about byte-stream input: do we re-check passthrough mode once we have acquired the initial caps? Do we wait for SPS/PPS before outputting anything? (We might do that already.)