# GStreamer issues

# [gst-plugins-bad#420](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/420): TS plays a few seconds, then playback is stuck (dropped frames)
## Submitted by Roland Jon `@rland`
**[Link to original bug (#770375)](https://bugzilla.gnome.org/show_bug.cgi?id=770375)**
## Description
Created attachment 334120
mpegtspacketizer_basesink_level_6_log
There is a TS file(https://pan.baidu.com/s/1gfgA9Kv)
Playback with ffplay and VLC is OK, but with GStreamer it only plays a few seconds and then gets stuck.
I tried digging into the code of mpegts_packetizer_pts_to_ts() in mpegtspacketizer.c, shown below:
```c
GstClockTime
mpegts_packetizer_pts_to_ts (MpegTSPacketizer2 * packetizer,
    GstClockTime pts, guint16 pcr_pid)
{
  GstClockTime res = GST_CLOCK_TIME_NONE;
  MpegTSPCR *pcrtable;
  .....
    if (refpcr != G_MAXINT64) {
      /* !!! Modified here !!! */
      res = pts;
      /* res =
       *     pts - PCRTIME_TO_GSTTIME (refpcr) + PCRTIME_TO_GSTTIME (refpcroffset); */
    } else
      GST_WARNING ("No groups, can't calculate timestamp");
  } else
    GST_WARNING ("Not enough information to calculate proper timestamp");

  PACKETIZER_GROUP_UNLOCK (packetizer);

  GST_DEBUG ("Returning timestamp %" GST_TIME_FORMAT " for pts %"
      GST_TIME_FORMAT " pcr_pid:0x%04x", GST_TIME_ARGS (res),
      GST_TIME_ARGS (pts), pcr_pid);
  return res;
}
```
The above modification simply skips the PCR `<=>` PTS conversion and uses the original PTS directly; the result is noticeably smoother playback than before. But I don't know what causes this behaviour, so I'm filing a new bug to track the issue.
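For reference, a minimal sketch of the arithmetic in the commented-out line, assuming the standard 27 MHz PCR clock and nanosecond GstClockTime (the exact body of the PCRTIME_TO_GSTTIME macro may differ between GStreamer versions):

```c
#include <stdint.h>

/* PCR runs at 27 MHz; GstClockTime is in nanoseconds. This mirrors
 * GStreamer's PCRTIME_TO_GSTTIME conversion (assumption: the macro
 * body in your version may be written differently). */
static uint64_t
pcrtime_to_gsttime (uint64_t pcr)
{
  return (pcr * 1000) / 27;
}

/* Sketch of the conversion the bug report disables: anchor the PTS
 * against the first PCR of the current group (refpcr) and add the
 * running offset of that group (refpcroffset). */
static uint64_t
pts_to_running_time (uint64_t pts_ns, uint64_t refpcr,
    uint64_t refpcroffset)
{
  return pts_ns - pcrtime_to_gsttime (refpcr)
      + pcrtime_to_gsttime (refpcroffset);
}
```

If the PCR groups or offsets are computed wrongly, this subtraction can produce timestamps that jump, which would match the stalling behaviour described above.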
**Attachment 334120**, "mpegtspacketizer_basesink_level_6_log":
[gst.log.zip](/uploads/5bb74c1280b1b8d9ec7b256c5221e6bb/gst.log.zip)
Version: 1.8.2

# [gst-plugins-bad#419](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/419): player: Implement a playlist API and next/previous commands
## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#770331)](https://bugzilla.gnome.org/show_bug.cgi?id=770331)**
## Description
See attached patch, and also following patch for gtk-play to make use of this.
Please review the new API and let me know if this makes any sense or could be
simplified somehow.
### Depends on
* [Bug 770368](https://bugzilla.gnome.org/show_bug.cgi?id=770368)

# [gst-plugins-bad#418](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/418): compositor: Does not support reverse playback
## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#770047)](https://bugzilla.gnome.org/show_bug.cgi?id=770047)**
## Description
+++ This bug was initially created as a clone of [Bug 769624](https://bugzilla.gnome.org/show_bug.cgi?id=769624) +++
This is a problem for GES when doing reverse playback if mixing is enabled (the default).
### Blocking
* [Bug 764788](https://bugzilla.gnome.org/show_bug.cgi?id=764788)

# [gst-plugins-bad#417](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/417): audioaggregator: Does not support reverse playback / mixed input rates
## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#770046)](https://bugzilla.gnome.org/show_bug.cgi?id=770046)**
## Description
+++ This bug was initially created as a clone of [Bug 769624](https://bugzilla.gnome.org/show_bug.cgi?id=769624) +++
This is a problem for GES when doing reverse playback or handling mixed-rate input if mixing is enabled (the default).
### Blocking
* [Bug 764788](https://bugzilla.gnome.org/show_bug.cgi?id=764788)

# [gst-plugins-bad#416](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/416): h265parser: refactor some methods to be more like h264parser
## Submitted by m][sko
**[Link to original bug (#769717)](https://bugzilla.gnome.org/show_bug.cgi?id=769717)**
## Description
I think we should refactor h265parser's methods to be more like h264parser's, for example gst_h265_parse_process_nal and gst_h265_parse_handle_frame.
In gst_h265_parse_handle_frame there is a code path that logs "no SPS/PPS yet, nal Type: %d %s, Size: %u will be dropped", which is really not good.
h264parser doesn't have anything like this.

# [gst-plugins-bad#415](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/415): tsdemux: playback is faster than normal speed for certain files
## Submitted by Nirmal Palanisamy
**[Link to original bug (#769565)](https://bugzilla.gnome.org/show_bug.cgi?id=769565)**
## Description
There are some streams in which the PCR appears, at a few positions, at intervals of more than 500 ms (say 1 s or 1.5 s). When there is a PCR gap of more than 500 ms between the last PCR and a new PCR, a new PCR group is formed. While forming the new group, a PCR offset is computed with respect to the previous group and stored, but this stored PCR offset is not computed properly. When the group is later used to compute the timestamp of a frame, the computation is based on the group's first PCR and its PCR offset; since the offset was stored incorrectly when the group was formed, the timestamp computation also goes wrong.

The stored PCR offset is slightly less than the correct one. Because of the smaller offset, the computed timestamps are also smaller than they should be, so playback finishes faster than the actual duration. In addition, because of the wrong PCR offset of the last group (set during the initial scan), the computed file duration is less than the actual duration.
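A purely illustrative sketch of the group/offset bookkeeping described above (names and structure are assumptions based on the description, not the actual tsdemux code):

```c
#include <stdint.h>

typedef struct {
  uint64_t first_pcr;   /* first PCR seen in this group (27 MHz units) */
  uint64_t pcr_offset;  /* position of first_pcr on the continuous
                           timeline (27 MHz units) */
} PcrGroup;

/* When a gap larger than the threshold opens a new group, its offset
 * should advance by the full distance between the two groups' first
 * PCRs, so timestamps computed as (pcr - first_pcr + pcr_offset)
 * stay continuous across the gap. The bug described above is this
 * offset coming out slightly too small. */
static PcrGroup
new_group (const PcrGroup * prev, uint64_t last_pcr, uint64_t new_pcr)
{
  PcrGroup g;
  g.first_pcr = new_pcr;
  g.pcr_offset = prev->pcr_offset + (new_pcr - prev->first_pcr);
  (void) last_pcr;              /* gap detection (new_pcr - last_pcr) not shown */
  return g;
}
```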
Version: 1.8.0

# [gst-plugins-bad#414](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/414): nvenc segfault after setting wrong width or height
## Submitted by kap..@..nan.pl
**[Link to original bug (#769268)](https://bugzilla.gnome.org/show_bug.cgi?id=769268)**
## Description
After setting an odd width or height, nvenc will crash. For example:
- set caps to width 1855, height 1060
- set the pipeline to GST_STATE_PLAYING
- nvenc crashes
Change the width to an even value (e.g. 1854) and nvenc works.
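Until this is fixed, a minimal workaround sketch (assuming, per the report, that this nvenc version requires even dimensions) is to round the caps down to the nearest even value before configuring the encoder:

```c
/* Round a dimension down to the nearest even value by clearing the
 * lowest bit, e.g. 1855 -> 1854, 1060 -> 1060. Applying this to the
 * caps width/height before they reach nvenc avoids the crash
 * described above (at the cost of dropping one row/column). */
static int
round_down_even (int v)
{
  return v & ~1;
}
```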
Version: 1.8.2

# [gst-plugins-bad#413](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/413): gstfaceoverlay: Detect multiple faces and allow overlaying multiple images with different types
## Submitted by César Fabián Orccón Chipana
**[Link to original bug (#769176)](https://bugzilla.gnome.org/show_bug.cgi?id=769176)**
## Description
Created attachment 332126
faceoverlay: Detect multiple images and allow overlaying multiple images with different types.
Problems with the current element:
- gstfaceoverlay can only overlay an image over a single face.
- gstfaceoverlay only supports SVG files.
Solutions in the proposed patch:
- gstfaceoverlay used the gstrsvgoverlay element. I have replaced it with gstcairooverlay so I can draw whatever I want there, for example multiple images over all the detected faces.
- Since I can draw whatever I want, I can render a gdkpixbuf through cairo, which allows the plugin to support multiple faces.
This patch depends on https://bugzilla.gnome.org/show_bug.cgi?id=764011
**Patch 332126**, "faceoverlay: Detect multiple images and allow overlaying multiple images with different types.":
[0001-faceoverlay-Detect-multiple-images-and-allow-to-over.patch](/uploads/bd004fb8ff5dca73d933e1e40775989d/0001-faceoverlay-Detect-multiple-images-and-allow-to-over.patch)
### Depends on
* [Bug 764011](https://bugzilla.gnome.org/show_bug.cgi?id=764011)

# [gst-plugins-bad#412](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/412): ahc: fix strict aliasing warnings
## Submitted by Martin Kelly
**[Link to original bug (#769167)](https://bugzilla.gnome.org/show_bug.cgi?id=769167)**
## Description
Created attachment 332119
0001-ahc-fix-strict-aliasing-warnings.patch
gcc strict aliasing can break code that performs type punning, unless said type punning goes through unions. See
http://www.cocoawithlove.com/2008/04/using-pointers-to-recast-in-c-is-bad.html
for more details about the UNION_CAST macro that I used.
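The idea behind the patch can be sketched as follows (illustrative only; the actual UNION_CAST macro in the patch may be shaped differently):

```c
#include <stdint.h>

/* Reinterpret the bytes of a float through a union rather than with
 * a pointer cast. A *(uint32_t *)&f style cast violates the strict-
 * aliasing rules and is exactly what gcc warns about with
 * -fstrict-aliasing; reading back through a union member is the
 * C-idiomatic, gcc-supported way to type-pun. */
static uint32_t
float_bits (float f)
{
  union { float f; uint32_t u; } pun;
  pun.f = f;
  return pun.u;
}
```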
**Patch 332119**, "0001-ahc-fix-strict-aliasing-warnings.patch":
[0001-ahc-fix-strict-aliasing-warnings.patch](/uploads/c56f48f1ad04b2931cb2f204de81b723/0001-ahc-fix-strict-aliasing-warnings.patch)

# [gst-plugins-bad#411](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/411): mpegtsdemux: Fix definition of ATSC for AC-3 audio
## Submitted by Gabby Park
**[Link to original bug (#768939)](https://bugzilla.gnome.org/show_bug.cgi?id=768939)**
## Description
Created attachment 331751
mpegtsdemux: Fix definition of ATSC for AC-3
The ATSC specification for the MPEG-2 transport stream is described in ATSC A/65:2013.
There is a mismatch between the specification document and the gstmpegtsdescriptor code.
So I moved the descriptor_tag (0x81) defined in GstMpegtsMiscDescriptorType to the ATSC section and removed the unused descriptor_tag (0x83) defined in GstMpegtsATSCDescriptorType.
Additionally, I added a descriptor_tag (0xCC) for E-AC-3 audio streams.
Thanks.
**Patch 331751**, "mpegtsdemux: Fix definition of ATSC for AC-3":
[0001-mpegtsdemux-Fix-definition-of-ATSC-for-AC-3-audio.patch](/uploads/90153e82e95a7e287c8bd6649433f101/0001-mpegtsdemux-Fix-definition-of-ATSC-for-AC-3-audio.patch)

# [gst-plugins-bad#410](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/410): ion: DMA Buf allocator based on ion
## Submitted by kevin
**[Link to original bug (#768794)](https://bugzilla.gnome.org/show_bug.cgi?id=768794)**
## Description
DMA Buf allocator based on ion
### Blocking
* [Bug 770585](https://bugzilla.gnome.org/show_bug.cgi?id=770585)

# [gst-plugins-bad#409](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/409): kmssink: Add tile format support
## Submitted by Nicolas Dufresne `@ndufresne`
**[Link to original bug (#768703)](https://bugzilla.gnome.org/show_bug.cgi?id=768703)**
## Description
This is a bit tricky. Tiled formats in DRM are set with two values: the first is the format family (like NV12), and then you add a modifier that specifies the transformation. The main issue right now is that I don't see anything in DRM to probe the availability of each modifier for each format.

# [gst-plugins-bad#408](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/408): h265parse: do not forget VPS/SPS/PPS after pushing codec header
## Submitted by Thijs Vermeir `@tvermeir`
**[Link to original bug (#768532)](https://bugzilla.gnome.org/show_bug.cgi?id=768532)**
## Description
Created attachment 331030
0001-h265parse-do-not-forget-VPS-SPS-PPS-after-pushing-co.patch
have_vps, have_sps and have_pps are used to record whether
VPS/SPS/PPS headers have been found. These flags should not be cleared
when the codec header is pushed, only when the stream is reset.
In h264parse, these variables record whether *new* codec headers
have been found, and a separate flag indicates the presence of the codec
headers.
This fixes streaming with config-interval>0.
**Patch 331030**, "0001-h265parse-do-not-forget-VPS-SPS-PPS-after-pushing-co.patch":
[0001-h265parse-do-not-forget-VPS-SPS-PPS-after-pushing-co.patch](/uploads/6d4cb1905cb7df8df38281af7fd8451e/0001-h265parse-do-not-forget-VPS-SPS-PPS-after-pushing-co.patch)

# [gst-plugins-bad#406](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/406): dashdemux: Problems playing YouTube streams
## Submitted by cod..@..il.com
**[Link to original bug (#768460)](https://bugzilla.gnome.org/show_bug.cgi?id=768460)**
## Description
Created attachment 330919
MPD
Using source from 1.9.0.1.
Youtube uses mpeg-dash as one method to distribute its live streams, and I am trying to use gstreamer to play a stream directly from youtube servers. Attached is part of the MPD for one of these streams. I am assuming the MPD is valid. It is from the live stream here: https://www.youtube.com/watch?v=njCDZWTI-xg. Here is a link to an MPD for this file: https://manifest.googlevideo.com/api/manifest/dash/playlist_type/DVR/key/yt6/upn/OlXrWwK-70c/source/yt_live_broadcast/sparams/as,gcr,hfr,id,ip,ipbits,itag,playlist_type,requiressl,source,expire/gcr/us/signature/80C14B2024BE29F75C1DBF68CDBDEF411A632D5E.893DDA5444737864A0068A89270AB08F26ACF9D3/fexp/9407060,9416126,9416891,9422596,9428398,9431012,9433096,9433223,9433946,9435526,9435876,9437066,9437553,9437742,9439652,9440376/ip/216.228.112.21/requiressl/yes/itag/0/as/fmp4_audio_clear,webm_audio_clear,webm2_audio_clear,fmp4_sd_hd_clear,webm2_sd_hd_clear/expire/1467351997/id/njCDZWTI-xg.54/sver/3/ipbits/0/hfr/1
I was hoping to get your help with several issues relating to dashdemux when dealing with these streams:
1) YouTube puts the `<SegmentTimeline>` in the `<Period>` but outside of any `<AdaptationSet>`. I assume this means the timeline should be valid for all future `<SegmentList>`s. The current parser doesn't seem to like this: after the SegmentList is parsed, I get the error "segment has neither duration nor timeline" printed from gst_mpdparser_parse_mult_seg_base_type_ext().
To get around this for the time being I put a huge hack in gst_mpdparser_parse_segment_timeline_node that will store any timelines that have been parsed (in these files - only one) and add them to any node that did not contain a timeline in the method gst_mpdparser_parse_mult_seg_base_type_ext(). This gets me past this problem for the time being, but then I ran into the next issue (which may be caused by doing this, I'm not sure).
2) gst_mpd_client_get_period_index_at_time() iterates through the periods to find one that is valid for the current time of day. The one period in the file is checked and has a correct start time, but for some reason has a duration of -1. This causes the calculation to fail and return G_MAXUINT, making dashdemux think there are no streams valid for the current time. I'm not sure whether -1 is an error or means "plays forever", but I worked around the issue by checking if the duration was -1 and, if so, treating the period as valid.
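The workaround can be sketched like this (illustrative only; the function name and the -1 sentinel are assumptions based on the description above, not the upstream mpdparser code):

```c
#include <stdint.h>

#define DURATION_NONE ((int64_t) -1)

/* Treat a period with unknown duration (-1) as open-ended, and
 * therefore valid for any time at or after its start, instead of
 * letting the index lookup fail with G_MAXUINT as described above. */
static int
period_covers (int64_t start, int64_t duration, int64_t now)
{
  if (now < start)
    return 0;
  if (duration == DURATION_NONE)
    return 1;  /* live/DVR period: runs until the manifest says otherwise */
  return now < start + duration;
}
```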
This seemed to get me to the point where the stream ought to play using gst-play-1.0, but I never see any video or hear any audio. With GST_DEBUG=6 enabled I can see that dashdemux downloads a single fragment from the server, but it never seems to send it on to the next element. It continually re-downloads the manifest after a small delay, re-parses it, and continues. The pipeline never exits the prerolling phase.
Thanks for any help you can provide. I am happy to help implement a real solution to these issues if you want to provide suggestions.
**Attachment 330919**, "MPD":
[earth.xml](/uploads/cc1351d4247d8963cbfaa72f49bd844b/earth.xml)

# [gst-plugins-bad#405](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/405): amcvideodec: Expose non-GL formats on Lollipop+
## Submitted by Olivier Crête `@ocrete`
**[Link to original bug (#768264)](https://bugzilla.gnome.org/show_bug.cgi?id=768264)**
## Description
Before Lollipop, the only way to know which format a decoder would decode into was to start decoding; just before getting the first frame, we could query the format. Since Lollipop, you can get the output format by just doing a configure(), which only requires the mime type of the codec and the resolution. I've tried just using 640x480, as I assume all devices should handle that. The second assumption is that the output format is the same regardless of the resolution. This also means that although a decoder can in theory output many formats, we can know exactly which one it will output for each input format.
Just as I'm attaching the patches, I'm thinking that maybe we should also move the gl_output_only flag to be per input format.

# [gst-plugins-bad#403](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/403): applemedia: Reproducible segfault with h264
## Submitted by David Rajchenbach-Teller (please use "needinfo")
**[Link to original bug (#768137)](https://bugzilla.gnome.org/show_bug.cgi?id=768137)**
## Description
I'm currently attempting to export an h264 stream from the local network.
I'm using a D-Link 5020L IP camera streaming h264, from Mac OS X 10.11.
Steps to reproduce:
- gst-launch-1.0 --verbose uridecodebin uri=http://USER:PASS@10.243.30.114/h264.cgi ! udpsink host=127.0.0.1 port=8080
- wait 1 or 2 seconds.
Output:
$ ./stream.sh
Setting pipeline to PAUSED ...
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0: source = "\(GstSoupHTTPSrc\)\ source"
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstTypeFindElement:typefindelement0.GstPad:src: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind: force-caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0: sink-caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstQueue2:queue2-0.GstPad:src: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstQueue2:queue2-0.GstPad:src: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstH264Parse:h264parse0.GstPad:sink: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:sink: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0.GstGhostPad:sink: caps = "video/x-h264\,\ stream-format\=\(string\)byte-stream"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstH264Parse:h264parse0.GstPad:src: caps = "video/x-h264\,\ stream-format\=\(string\)avc\,\ width\=\(int\)640\,\ height\=\(int\)480\,\ framerate\=\(fraction\)0/1\,\ parsed\=\(boolean\)true\,\ alignment\=\(string\)au\,\ profile\=\(string\)baseline\,\ level\=\(string\)3\,\ codec_data\=\(buffer\)0142001effe100136742001ea9501407b42000007d00001d4c008001000468ce3c80"
Got context from element 'vtdechw0': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"\(GstGLDisplayCocoa\)\ gldisplaycocoa0";
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstVtdecHw:vtdechw0.GstPad:src: caps = "video/x-raw\(memory:GLMemory\)\,\ format\=\(string\)NV12\,\ width\=\(int\)640\,\ height\=\(int\)480\,\ interlace-mode\=\(string\)progressive\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ chroma-site\=\(string\)jpeg\,\ colorimetry\=\(string\)bt601\,\ framerate\=\(fraction\)0/1\,\ texture-target\=\(string\)rectangle"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstVtdecHw:vtdechw0.GstPad:sink: caps = "video/x-h264\,\ stream-format\=\(string\)avc\,\ width\=\(int\)640\,\ height\=\(int\)480\,\ framerate\=\(fraction\)0/1\,\ parsed\=\(boolean\)true\,\ alignment\=\(string\)au\,\ profile\=\(string\)baseline\,\ level\=\(string\)3\,\ codec_data\=\(buffer\)0142001effe100136742001ea9501407b42000007d00001d4c008001000468ce3c80"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstCapsFilter:capsfilter0.GstPad:src: caps = "video/x-h264\,\ stream-format\=\(string\)avc\,\ width\=\(int\)640\,\ height\=\(int\)480\,\ framerate\=\(fraction\)0/1\,\ parsed\=\(boolean\)true\,\ alignment\=\(string\)au\,\ profile\=\(string\)baseline\,\ level\=\(string\)3\,\ codec_data\=\(buffer\)0142001effe100136742001ea9501407b42000007d00001d4c008001000468ce3c80"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstCapsFilter:capsfilter0.GstPad:sink: caps = "video/x-h264\,\ stream-format\=\(string\)avc\,\ width\=\(int\)640\,\ height\=\(int\)480\,\ framerate\=\(fraction\)0/1\,\ parsed\=\(boolean\)true\,\ alignment\=\(string\)au\,\ profile\=\(string\)baseline\,\ level\=\(string\)3\,\ codec_data\=\(buffer\)0142001effe100136742001ea9501407b42000007d00001d4c008001000468ce3c80"
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = "video/x-raw\(memory:GLMemory\)\,\ format\=\(string\)NV12\,\ width\=\(int\)640\,\ height\=\(int\)480\,\ interlace-mode\=\(string\)progressive\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ chroma-site\=\(string\)jpeg\,\ colorimetry\=\(string\)bt601\,\ framerate\=\(fraction\)0/1\,\ texture-target\=\(string\)rectangle"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0.GstGhostPad:src_0.GstProxyPad:proxypad2: caps = "video/x-raw\(memory:GLMemory\)\,\ format\=\(string\)NV12\,\ width\=\(int\)640\,\ height\=\(int\)480\,\ interlace-mode\=\(string\)progressive\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ chroma-site\=\(string\)jpeg\,\ colorimetry\=\(string\)bt601\,\ framerate\=\(fraction\)0/1\,\ texture-target\=\(string\)rectangle"
/GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstQueue2:queue2-0: max-size-bytes = 1479060
Prerolled, waiting for buffering to finish...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Caught SIGSEGV
exec gdb failed: No such file or directory
Spinning. Please run 'gdb gst-launch-1.0 79952' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
LLDB stack:
bt
```
* thread #1: tid = 0x391d13, 0x00007fff94ce1f72 libsystem_kernel.dylib`mach_msg_trap + 10, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
* frame #0: 0x00007fff94ce1f72 libsystem_kernel.dylib`mach_msg_trap + 10
frame #1: 0x00007fff94ce13b3 libsystem_kernel.dylib`mach_msg + 55
frame #2: 0x00007fff986c21c4 CoreFoundation`__CFRunLoopServiceMachPort + 212
frame #3: 0x00007fff986c168c CoreFoundation`__CFRunLoopRun + 1356
frame #4: 0x00007fff986c0ed8 CoreFoundation`CFRunLoopRunSpecific + 296
frame #5: 0x00007fff8b1d6935 HIToolbox`RunCurrentEventLoopInMode + 235
frame #6: 0x00007fff8b1d676f HIToolbox`ReceiveNextEventCommon + 432
frame #7: 0x00007fff8b1d65af HIToolbox`_BlockUntilNextEventMatchingListInModeWithFilter + 71
frame #8: 0x00007fff97124df6 AppKit`_DPSNextEvent + 1067
frame #9: 0x00007fff97124226 AppKit`-[NSApplication _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 454
frame #10: 0x0000000106200161 libgstgl-1.0.0.dylib`gst_gl_display_cocoa_nsapp_iteration + 204
frame #11: 0x0000000101951c8e libglib-2.0.0.dylib`g_timeout_dispatch + 23
frame #12: 0x00000001019547ae libglib-2.0.0.dylib`g_main_context_dispatch + 276
frame #13: 0x0000000101954a98 libglib-2.0.0.dylib`g_main_context_iterate + 413
frame #14: 0x0000000101954cee libglib-2.0.0.dylib`g_main_loop_run + 207
frame #15: 0x00000001017a38ee libgstreamer-1.0.0.dylib`gst_bus_poll + 286
frame #16: 0x000000010178420a gst-launch-1.0`event_loop + 3271
frame #17: 0x0000000101782f96 gst-launch-1.0`main + 2010
frame #18: 0x00007fff98ab25ad libdyld.dylib`start + 1
frame #19: 0x00007fff98ab25ad libdyld.dylib`start + 1
```
Unfortunately, Ctrl+\ doesn't seem to produce a core dump, so I can't attach more details at the moment.

# [gst-plugins-bad#402](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/402): waylandsink: add support for the Wayland presentation time interface
## Submitted by Wonchul Lee
**[Link to original bug (#768079)](https://bugzilla.gnome.org/show_bug.cgi?id=768079)**
## Description
I'm bringing over comments about this task and wrapping each writer's name in angle brackets; sorry for the poor readability.
Waylandsink was handled by George Kiagiadakis, who had written presentation-time interface code for the demo, but the interface has since changed and settled down as a stable protocol.
I started from George's work (http://cgit.collabora.com/git/user/gkiagia/gst-plugins-bad.git/log/?h=demo), removing the presentation queue and taking display-stack delay into account.
That approach predicted the display-stack latency from wl_commit/damage/attach to frame presence, and Pekka Paalanen (pq) advised that it would not accurately estimate the delay from wl_surface_commit() to display.
(it's part of comments)
`<pq>` wonchul, if you are trying to estimate the delay from wl_surface_commit() to display, and you don't sync the time you call commit() to the incoming events, that's going to be a lot less accurate.
`<pq>` 11:11:07> no, I literally meant replacing the queueing protocol calls with a queue implementation in the sink, so you don't use the queueing protocol anymore, but rely only on the feedback protocol to trigger attach+commits from the queue.
`<pq>` 11:12:27> the queue being a timestamp-ordered list of frame, just like in the weston implementation.
So estimating the delay from Wayland this way is not very accurate.
I turned instead to adding a queue in waylandsink that holds buffers before render() is called.
`<Olivier Crête>`
I'm a bit concerned about adding a queue in the sink that would increase the latency unnecessarily. I wonder if this could be done while queueing around 1 buffer there in normal streaming. Are we talking about queuing the actual frames or just information about the frames?
`<Wonchul Lee>`
I've queued reference of frames and tried to render based on the wayland presentation clock.
It could introduce some delay depending on the content. It's not yet clear to me which specific factor causes the delay, but yes, it would increase the latency at the moment.
The idea was to disable clock synchronization in gstbasesink and render (wayland commit/damage/attach) frames based on the calibrated Wayland clock. I pushed a reference to the gstbuffer onto the queue and set an async clock callback to request rendering at the right time, then rendered or dropped the frame depending on the adjusted timestamp.
These changes have an issue: the adjusted timestamp at which rendering is requested comes later than expected, and since the adjusted timestamp was always late, this could cause most of the frames to be dropped in some cases.
So I'm now looking at audiobasesink as a reference for adjusting clock synchronization of the frames against the Wayland clock.
`<Olivier Crête>`
This work has two separate goals:
When the video has a different framerate than the display, it should drop frames more or less evenly: if you need to display 4 out of 5 frames, it should be something like 1,2,3,4,6,7,8,9,11,...; or if you need to display 30 of 60 frames, it should display 1,3,5,7,9, etc. Currently, GstBaseSink is not very clever about that.
And we have to be careful as this can be also caused by the compositor not being able to keep up. It's not because the display can do 60fps that the compositor is actually able to produce 60 new frames, it could be limited to a lower number, so we'll also have to make sure we're protected against that.
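The "evenly spread" dropping described above can be sketched with a Bresenham-style accumulator (purely illustrative; GstBaseSink does not currently work this way):

```c
/* Keep `keep` frames out of every `total` with drops interleaved
 * rather than clustered: the accumulator gains `keep` per frame and
 * pays `total` per rendered frame, so for keep=4, total=5 every 5th
 * frame is dropped, matching the 1,2,3,4,6,... pattern above. */
static int
keep_frame (int *acc, int keep, int total)
{
  *acc += keep;
  if (*acc >= total) {
    *acc -= total;
    return 1;   /* render this frame */
  }
  return 0;     /* drop this frame */
}
```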
We want to estimate the latency added by the display stack. The current GStreamer video sinks more or less assume that a buffer is rendered immediately when the render() vmethod returns, but this is not really how current display hardware works, especially with double or triple buffering. So we want to know how far in advance to submit the buffer, but not so early that it is displayed one interval too soon.
I just asked @nicolas a quick question about how he thought we should do this, then we spent two hours whiteboarding ideas about it, and we've barely been able to define the problem.
Here are some ideas we bounced around:
After submitting one frame (the first frame? the preroll frame?), we can have an idea of the upper bound of the latency for the live pipeline case. It should be the time between the moment a frame was submitted and when it was finally rendered + the "refresh". We can probably delay sending the async-done until the presented event of the first frame has arrived.
For the non-live case, we can probably find a way to submit the frame as early as possible before the next refresh. Finding that time is the tricky part, I think.
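The first-frame bound described above reduces to simple arithmetic (a sketch; the function name is hypothetical): the latency upper bound is the submit-to-presented interval plus one refresh cycle of slack.

```python
def initial_latency_bound_ns(submit_time_ns, presented_time_ns, refresh_ns):
    """Upper bound on display-stack latency for the live case: time from
    buffer submission to actual presentation, plus one refresh interval."""
    return (presented_time_ns - submit_time_ns) + refresh_ns
```

The sink could compute this from the first frame's wp_presentation feedback and delay posting async-done until that feedback arrives, as suggested above.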
@wonchul: could you summarize the different things you tried, what the hypotheses were, and what the results were? It's important to keep these kinds of records for the Tax R&D filings (and so we can keep up with your work).
@pq or @daniels:
what is the logic behind the seq field, how do you expect it can be used? Do you know any example where it is used?
I'm also not sure how we can detect the case where the compositor cannot keep up. Or the case where the compositor is gnome-shell and has a GC that makes it miss a couple of frames for no good reason?
From the info in the presented event (or any other way), is there a way we can evaluate the latest time at which we can submit a buffer and still have it arrive in time for a specific refresh? Or do we have to try, and then do some kind of search to find what those deadlines are in practice?
`<Pekka Paalanen>`
seq field of wp_presentation_feedback.presented event:
No examples of use, I don't think. I didn't originally consider it necessary, but it was added to allow implementing GLX_OML_sync_control on top of it. I do not think we should generally depend on seq unless you specifically care about the refresh count instead of timings. My intention with the design was that new code can work better with timestamps, while old code you don't want to port to timestamps can use seq as it always has. Timestamps are "accurate", while seq may have been estimated from a clock in the kernel and may change its rate or may not have a constant rate at all.
seq comes from a time when display refresh was a known, guaranteed-constant frequency, and you could use it as a clock by simply counting cycles. I believe all timing-sensitive X11 apps have been written with this assumption. But it is no longer exactly true, it has caveats (hard to maintain across video mode switches or display suspends, lacking hardware support, etc.), and with new display tech it will become even less true (variable refresh rate, self-refresh panels, ...).
seq is not guaranteed to be provided; it may be zero depending on the graphics stack used by the compositor. I'm also not sure what it means if you don't have both VSYNC and HW_COMPLETION in flags.
The timestamp OTOH is always provided, but it may have some caveats which should be indicated by unset bits in flags.
Compositor not keeping up:
Maybe you could use the tv + refresh from presented event to guess when the compositor should be presenting your frame, and compare afterwards with what actually happened?
I can't really think of a good way to know if the compositor cannot keep up or why it cannot keep up. Hiccups can happen and the compositor probably won't know why either. All I can say is collect statistics and analyze them over time. This might be a topic for further investigation, but to get more information about which steps take too much time we need some kernel support (explicit fencing) that is being developed, and we need to make the compositor use that information.
Only hand-waving, sorry.
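Pekka's "guess and compare" idea above can be sketched like this (hypothetical, names illustrative): extrapolate each expected presentation time as the previous tv plus the refresh interval, and count frames whose actual tv slipped past that by more than some slack.

```python
def count_missed_cycles(presented_times_ns, refresh_ns, slack_ns=1_000_000):
    """Count frames whose actual presentation slipped at least one refresh
    cycle past the time extrapolated from the previous presented event."""
    misses = 0
    for prev, cur in zip(presented_times_ns, presented_times_ns[1:]):
        expected = prev + refresh_ns  # where the compositor "should" land
        if cur > expected + slack_ns:
            misses += 1
    return misses
```

As noted above this only tells you *that* something slipped, not *why*; it is just the statistics-collection half of the problem.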
Finding the deadline:
I don't think there is a way to know really, and also the compositor might be adjusting its own schedules, so it might be variable.
The way I imagined it is that from the presented event you compute the time of the next possible presentation, and if you want to hit that, you submit a frame ASAP. This should get you just below one display-frame-cycle of latency in any case, if your rendering is already complete.
If we really need the deadline, that would call for extending the protocol, so that the compositor could tell you when the deadline is. The compositor chooses the deadline based on how fast it thinks it can do a composition and hit the right vblank.
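Computing "the next possible presentation" from the last presented event, as described above, could look like this sketch (hypothetical names; real code would also handle refresh-rate changes):

```python
def next_presentation_target_ns(last_tv_ns, refresh_ns, now_ns):
    """Earliest vblank strictly after `now`, extrapolated from the last
    presented timestamp. A vblank happening exactly now is treated as
    missed, since the compositor's deadline for it has already passed."""
    if now_ns <= last_tv_ns:
        return last_tv_ns + refresh_ns
    cycles = (now_ns - last_tv_ns) // refresh_ns + 1
    return last_tv_ns + cycles * refresh_ns
```

Submitting ASAP and aiming at this target is what should give the sub-one-cycle latency mentioned above; whether the compositor's own internal deadline lets the frame actually land there is exactly the part the protocol does not expose.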
`<Wonchul Lee>`
About the latency: I tried to measure the latency added by the display stack, from the wl commit/damage/attach to the presented frame. It's a variable delay depending on the situation, as pq mentioned before, and it could disturb targeting the next present. We could estimate an optimal latency by accumulating these measurements and observing the gap via presentation feedback, but that's maybe not always reliable.
I tried to synchronize GStreamer clock time with presentation feedback to render frames on time, and added a queue in GstWaylandSink to request a render on each presentation feedback if there's a frame due, similar to what George did. It doesn't fit well with GstBaseSink, though: GstWaylandSink needs to disable BaseSink time synchronization and do the computation itself. I also faced unexpected underflow (consistently increasing delay) when playing an mpegts stream, so it also needs proper QoS handling to prevent underflow.
It would be good to get a reliable latency figure from the display stack to use when synchronizing presentation time, whether GstWaylandSink computes it itself or not; there's a latency we're missing anyway, though I'm not sure that's feasible.
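One way to implement the "accumulate latency and observe the gap" idea above (a hypothetical sketch, not GstWaylandSink code) is a running exponentially weighted estimate of commit-to-present latency, updated from each presentation feedback:

```python
class LatencyEstimator:
    """EWMA of commit-to-present latency from presentation feedback."""

    def __init__(self, alpha=0.1):
        self._alpha = alpha          # weight of each new sample
        self._estimate_ns = None

    def observe(self, commit_time_ns, presented_tv_ns):
        sample = presented_tv_ns - commit_time_ns
        if self._estimate_ns is None:
            self._estimate_ns = sample
        else:
            # Move the estimate a fraction `alpha` toward the new sample,
            # smoothing out one-off compositor hiccups.
            self._estimate_ns += self._alpha * (sample - self._estimate_ns)
        return self._estimate_ns
```

As the comment above notes, a smoothed estimate like this is exactly the kind of value that is "maybe not always reliable": it lags behind real changes (e.g. a mode switch) by design.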
`<Pekka Paalanen>`
@wonchul btw. what do you mean when you say "synchronize GStreamer clock time with presentation feedback"?
Does it mean something else than looking at what clock is advertised by wp_presentation.clock_id and then synchronizing GStreamer clock with clock_gettime() using the given clock id? Or does synchronizing mean something else than being able to convert a timestamp from one clock domain to the other domain?
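One concrete interpretation of the conversion Pekka describes (a sketch with hypothetical names; GStreamer's real mechanism for refining such an offset over time is `gst_clock_add_observation()`): sample both clock domains back-to-back with `clock_gettime()` and apply the measured offset to translate timestamps.

```python
import time

def cross_clock_offset_ns(clk_a, clk_b):
    """Estimate (b - a) by bracketing one read of clk_b between two reads
    of clk_a, attributing clk_b's sample to the midpoint of the bracket."""
    a1 = time.clock_gettime_ns(clk_a)
    b = time.clock_gettime_ns(clk_b)
    a2 = time.clock_gettime_ns(clk_a)
    return b - (a1 + a2) // 2

def to_domain_b(ts_a_ns, offset_ns):
    """Translate a timestamp from clock domain A into domain B."""
    return ts_a_ns + offset_ns
```

Here clk_a would be the GStreamer pipeline clock's base and clk_b the clock advertised by wp_presentation.clock_id; a single sample ignores drift, which is why repeated observations are needed in practice.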
`<Nicolas Dufresne>`
@pq I would need some clarification about submitting frames ASAP. If we blindly do that, frames will get displayed too soon on screen (in playback, decoders are much faster than the expected render speed). In GStreamer, we have infrastructure to wait until the moment is right. The logic (simplified) is to wait for the right moment minus the "currently expected" render latency, and then submit. This is in the playback case of course, and is to ensure the best possible A/V sync. In that case we expect the presentation information to be helpful in constantly correcting that moment. What we miss is some semantics, as just blindly obeying the computed render delay of the last frames does not seem like the best idea. We expected to be able to calculate, or estimate, a submission window that will (most of the time) hit the screen at an estimated time.
For the live case, we're still quite screwed. Nothing seems to improve our situation. We need to pick a latency at start, and if we later find that this latency was too small (the latency is the window in which we are able to adapt), we end up screwing up the audio (a glitch) to increase that latency window. So again, some semantics we could use to calculate a pessimistic latency from the first presentation report would be nice.
`<Olivier Crête>`
I think that in the live case you can probably keep a 1 frame queue at the sink, so when a new frame arrives, you can decide if you want to present the queued one at the next refresh or replace it with a new one. Then the thread that talks to the compositor (and gets the events, etc), can pick the buffers from the "queue" to send to the compositor.
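Olivier's 1-frame queue could be sketched like this (hypothetical, single-threaded illustration): the streaming thread replaces any unsubmitted frame with the newer one, and the compositor-facing thread takes whatever is current at each refresh.

```python
class LastFrameQueue:
    """Keep only the newest undisplayed frame; older ones are replaced."""

    def __init__(self):
        self._frame = None

    def push(self, frame):
        """Streaming thread: queue `frame`, dropping any unsubmitted one.
        Returns the dropped frame so the caller can recycle its buffer."""
        dropped = self._frame
        self._frame = frame
        return dropped

    def take(self):
        """Compositor thread: take the frame for the upcoming refresh,
        or None if nothing new arrived since the last refresh."""
        frame, self._frame = self._frame, None
        return frame
```

A real implementation would need a mutex (or GstDataQueue-style machinery) around push/take, since the streaming and compositor threads race on the slot.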
`<Nicolas Dufresne>`
Ok, that makes sense for non-live. It would be nice to document the intended use, as that was far from obvious. We keep thinking we need to look at the numbers, but at first we didn't understand that the moment we get called back is important. You seem to assume that we can "pick" a frame, as if the sink were pulling whatever it wants at random; that's unfortunately not how things work. We can, though, introduce a small queue (some late queue) so we only start blocking upstream when that queue is full, and that would help in making decisions.
For live it's much more complex. The entire story about declared latency exists because if we don't declare any latency, that queue will always be empty. Worst case, the report will always tell us that we displayed the frame late. I'm quite sure you told me that the render pipeline can have multiple steps, where submitting frames 1, 2, 3 at one-blank distance will render on blanks 3, 4, 5, with effectively 3 blanks of latency. That latency is what we need to report for proper A/V sync in a live pipeline, and changing it is to be done with care, as it breaks the audio. There we need some ideas, because right now we have no clue.

https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/401
h265parse: Add support for stereoscopic / multiview video (2021-09-24T14:34:27Z, Bugzilla Migration User)
## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#767939)](https://bugzilla.gnome.org/show_bug.cgi?id=767939)**
## Description
See summary

https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/400
mxfdemux/mux: Add support for the sampling field in JPEG2000 caps (2021-09-24T14:34:27Z, Bugzilla Migration User)
## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#767904)](https://bugzilla.gnome.org/show_bug.cgi?id=767904)**
## Description
The MXF metadata contains all we need to know, we just need to expose/use it.

https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/399
decklinkvideosink: Add support for mixed interlace-mode (2021-09-24T14:34:26Z, Bugzilla Migration User)
## Submitted by Dave
**[Link to original bug (#767779)](https://bugzilla.gnome.org/show_bug.cgi?id=767779)**
## Description
Decklinkvideosink will reject interlace-mode=mixed when set to an interlaced mode, even if the actual source file is interleaved.
Version: 1.8.0