# gst-plugins-bad issues
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues

# Issue 437: aggregator, videoaggregator: force waiting even in live mode and other features
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/437 (updated 2021-09-24)

## Submitted by Marianna S. Buschle
**[Link to original bug (#774049)](https://bugzilla.gnome.org/show_bug.cgi?id=774049)**
## Description
At the recent GStreamer Conference (2016) I presented a talk about using GStreamer pipelines for image processing in industrial applications.
In this talk I mentioned that I needed to mux some metadata generated by processing (and thus modifying) image frames into the original (unmodified) images.
For this purpose I created my own muxing element (using the Compositor as inspiration) which uses VideoAggregator as its base class.
However, VideoAggregator and its base class Aggregator didn't quite cover my needs, so I ended up modifying my own local copies of these 2 classes.
During the conference there was interest expressed in hearing what my different use cases were, because maybe they could be added upstream.
Since I want to mux what is actually the same original frame (just divided and processed differently by 2 separate tee branches), I want to do timestamp matching of the buffers (I don't care about maintaining the fps and don't want to duplicate frames):
- Therefore I want to produce output frames only when I have received input frames on all pads (in contrast to the Aggregator producing output as soon as 1 pad has data in the case of a live source, which is what I have). My easy solution for this was adding a new property to the Aggregator that forces it to work as with non-live sources (so that it requires data on all pads).
- Then I added a property to VideoAggregator that defines whether it should work "normally" (producing frames on a deadline, fps) or whether it should check the buffer timestamps and only produce frames when they match. I have created a function like fill_queues() which does the timestamp checking.
- In case I'm only producing frames based on matched timestamps, I have problems with QoS because the segment times might not match anymore (since frames are no longer necessarily produced regularly). To solve this I created a function that adjusts the segment times based on the produced frames.
- Lastly, in my "Compositor-like" muxer element I needed the option to drop frames in some cases, so I added a new GstFlowReturn to VideoAggregator (GST_FLOW_VIDEO_AGGREGATOR_DROP_FRAME == GST_FLOW_CUSTOM_SUCCESS), just like the one that exists for BaseTransform.
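The timestamp-matching and drop-frame ideas above can be sketched in plain C. The helper names and the half-frame tolerance are assumptions for illustration, not the actual patch; the numeric value mirrors GStreamer, where GST_FLOW_CUSTOM_SUCCESS (100) is the first code reserved for element-specific success returns, which is also how GstBaseTransform defines its "dropped" return.

```c
#include <stdint.h>
#include <stddef.h>

/* GStreamer reserves codes >= 100 for element-specific successes
 * (GST_FLOW_CUSTOM_SUCCESS == 100); a subclass-private "drop" code
 * can alias it, as GstBaseTransform does with its DROPPED return. */
#define FLOW_OK              0
#define FLOW_CUSTOM_SUCCESS  100
#define FLOW_AGGREGATOR_DROP FLOW_CUSTOM_SUCCESS

/* Hypothetical helper: buffers queued on the pads "match" when every
 * timestamp lies within half a frame duration of the first pad's. */
static int
timestamps_match (const uint64_t *ts, size_t n_pads, uint64_t frame_dur)
{
  for (size_t i = 1; i < n_pads; i++) {
    uint64_t diff = ts[i] > ts[0] ? ts[i] - ts[0] : ts[0] - ts[i];
    if (diff > frame_dur / 2)
      return 0;
  }
  return 1;
}

/* Produce a frame only when all pads agree; otherwise signal a drop. */
static int
aggregate_matched (const uint64_t *ts, size_t n_pads, uint64_t frame_dur)
{
  return timestamps_match (ts, n_pads, frame_dur)
      ? FLOW_OK : FLOW_AGGREGATOR_DROP;
}
```

Returning the custom code (instead of unreffing silently) lets the base class distinguish "nothing to output this cycle" from a real error.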
Version: 1.x

# Issue 436: ogg: gzipped files won't play
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/436 (updated 2021-09-24)

## Submitted by Philippe Normand `@philn`
**[Link to original bug (#774041)](https://bugzilla.gnome.org/show_bug.cgi?id=774041)**
## Description
Some people have the strange idea of compressing ogg files with gzip and serving them from their HTTP server:
https://d1x2efl61akomv.cloudfront.net/assets/knock.ogg
This asset is from http://appear.in
Sebastian suggested a gzip demuxer/decoder could be added to support this use case.

# Issue 435: Add a fisheye to equirectangular transform plugin
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/435 (updated 2021-09-24)

## Submitted by Ian McKellar
**[Link to original bug (#773680)](https://bugzilla.gnome.org/show_bug.cgi?id=773680)**
## Description
Created attachment 338780
Patch
Panoramic videos on the web (YouTube, Facebook) use an equirectangular projection. My cheap panoramic camera uses a fish-eye lens to capture a fairly wide angle. I needed a tool to transform a fish-eye video to an equirectangular video so I extended the gst-plugins-bad geometrictransform to do that.
It's sort of the opposite of geometrictransform's circle element.
I developed it against 1.8 since that's what's on my Debian machine. This patch is against master but I haven't actually tested it since I don't have a full gstreamer master build.
**Patch 338780**, "Patch":
[gstfisheyetoequirectangular.diff](/uploads/08a96d9bd9ba3a28d3b429968adc3b0b/gstfisheyetoequirectangular.diff)

# Issue 434: tsdemux: detect synchronous KLV streams
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/434 (updated 2021-11-05)

## Submitted by nic..@..il.com
**[Link to original bug (#773186)](https://bugzilla.gnome.org/show_bug.cgi?id=773186)**
## Description
Created attachment 337975
proposed patch to manage synchronous KLV streams in tsdemux element
KLV metadata streams can be injected into MPEG-TS in a synchronous or an asynchronous way.
-> This patch gives the tsdemux element the ability to detect this kind of metadata stream.
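The usual distinction is that synchronous KLV is carried in PES packets that carry a PTS, while asynchronous KLV is not timestamped. A hedged sketch of that check on a raw PES header follows; the function name and buffer handling are illustrative, not the patch's code.

```c
#include <stddef.h>
#include <stdint.h>

/* Return 1 if a PES packet header signals a PTS (PTS_DTS_flags is
 * '10' or '11'), which is how synchronous metadata streams are
 * timestamped; return 0 for no PTS or a malformed/short header. */
static int
pes_has_pts (const uint8_t *data, size_t len)
{
  if (len < 9)
    return 0;
  /* packet_start_code_prefix must be 0x000001. */
  if (data[0] != 0x00 || data[1] != 0x00 || data[2] != 0x01)
    return 0;
  /* data[6] starts with the fixed '10' marker bits;
   * data[7] carries PTS_DTS_flags in its two top bits. */
  if ((data[6] & 0xC0) != 0x80)
    return 0;
  return (data[7] & 0x80) ? 1 : 0;
}
```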
**Patch 337975**, "proposed patch to manage synchronous KLV streams in tsdemux element":
[tsdemux_synchronous_klv.patch](/uploads/362abde020de5a87502dd1fa53ca5050/tsdemux_synchronous_klv.patch)
Version: 1.8.3

# Issue 433: bayer: add unit tests
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/433 (updated 2021-09-24)

## Submitted by Nicolas Dufresne `@ndufresne`
**[Link to original bug (#772876)](https://bugzilla.gnome.org/show_bug.cgi?id=772876)**
## Description
As of now there is not even a template unit test for the bayer converter, and yet there are a lot of bug reports. It's a clear indication that we need to at least create a minimal test, which we can then improve when fixing bugs.

# Issue 429: videoparsers: Proxy / parse colorimetry, chroma-site and interlaced-mode/field-order
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/429 (updated 2021-09-24)

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#772179)](https://bugzilla.gnome.org/show_bug.cgi?id=772179)**
## Description
See summary, also [bug 771376](https://bugzilla.gnome.org/show_bug.cgi?id=771376)
# Issue 426: GstPlayer: add equalizer support [API]
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/426 (updated 2021-09-24)

## Submitted by Sachin Kumar Chauhan
**[Link to original bug (#771761)](https://bugzilla.gnome.org/show_bug.cgi?id=771761)**
## Description
Hi,
Below is a draft of the proposed equalizer APIs to be added to GstPlayer.
I would like to open a discussion to finalize this. Please provide input on it.
```c
typedef enum
{
  GST_PLAYER_EQUALIZER_NONE,
  GST_PLAYER_EQUALIZER_3BANDS,
  GST_PLAYER_EQUALIZER_10BANDS,
  GST_PLAYER_EQUALIZER_NBANDS
} GstPlayerEqualizerType;

struct _GstPlayer
{
  ...
  gboolean is_equalizer_enabled;
  GstPlayerEqualizerType equalizer_type;
};

gboolean gst_player_is_equalizer_enabled (GstPlayer * player);

/* Passing GST_PLAYER_EQUALIZER_NONE will remove the existing equalizer element. */
void gst_player_enable_equalizer (GstPlayer * player, GstPlayerEqualizerType type);

int gst_player_get_equalizer_band_count (GstPlayer * player);

/* For the n-band equalizer; in the case of 3 or 10 bands the call is ignored. */
void gst_player_set_equalizer_band_count (GstPlayer * player, int band_count);

double gst_player_get_band_value (GstPlayer * player, int band_num);
void gst_player_set_band_value (GstPlayer * player, int band_num, double value);
```
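The band-count semantics proposed in the draft (fixed counts for the 3- and 10-band presets, user-settable only for the n-band type) can be sketched in plain C; the enum and helper below mirror the draft but are illustrative only, not part of any existing GstPlayer API.

```c
typedef enum {
  EQUALIZER_NONE,
  EQUALIZER_3BANDS,
  EQUALIZER_10BANDS,
  EQUALIZER_NBANDS
} EqualizerType;

/* Resolve the effective band count: the 3- and 10-band presets are
 * fixed, and only the n-band type honours the requested count. */
static int
effective_band_count (EqualizerType type, int requested)
{
  switch (type) {
    case EQUALIZER_3BANDS:  return 3;
    case EQUALIZER_10BANDS: return 10;
    case EQUALIZER_NBANDS:  return requested > 0 ? requested : 0;
    default:                return 0;   /* EQUALIZER_NONE: no equalizer */
  }
}
```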
Thanks,
Sachin k Chauhan

# Issue 423: videoparse support for bayer missing
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/423 (updated 2021-09-24)

## Submitted by cJ-..@..oub.eu
**[Link to original bug (#771079)](https://bugzilla.gnome.org/show_bug.cgi?id=771079)**
## Description
Hi,
According to http://gstreamer-devel.966125.n4.nabble.com/Error-in-bayer2rgb-pipeline-td4678094.html, there's currently no way to reinterpret a buffer as bayer data.
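For context, bayer data is a single-plane mosaic in which each 2x2 cell carries one red, two green, and one blue sample, which is why a generic raw-video parser cannot describe it with a normal format string. A minimal sketch of decoding one BGGR cell follows; it is illustrative only, not the actual bayer2rgb algorithm (which interpolates across neighbouring cells).

```c
#include <stdint.h>
#include <stddef.h>

/* Convert one 2x2 BGGR bayer cell into a single RGB pixel by taking
 * the red and blue samples as-is and averaging the two greens.
 *
 * Layout of a BGGR cell:  B G
 *                         G R   */
static void
bggr_cell_to_rgb (const uint8_t *bayer, size_t stride,
                  uint8_t *r, uint8_t *g, uint8_t *b)
{
  *b = bayer[0];
  *g = (uint8_t) ((bayer[1] + bayer[stride]) / 2);
  *r = bayer[stride + 1];
}
```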
Regards,
Version: 1.8.2

# Issue 421: coloreffects: add "enhance" effect
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/421 (updated 2021-09-24)

## Submitted by DEEPAK SRIVASTAVA
**[Link to original bug (#770530)](https://bugzilla.gnome.org/show_bug.cgi?id=770530)**
## Description
Created attachment 334327
Added enhance effect in coloreffects element
Adding color enhancement effect in coloreffects element.
**Patch 334327**, "Added enhance effect in coloreffects element":
[Added-enhance-color-effect-in-coloreffects-element.patch](/uploads/1a60003f3b418956b8b13cd5ca7b0395/Added-enhance-color-effect-in-coloreffects-element.patch)

# Issue 419: player: Implement a playlist API and next/previous commands
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/419 (updated 2022-05-06)

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#770331)](https://bugzilla.gnome.org/show_bug.cgi?id=770331)**
## Description
See attached patch, and also following patch for gtk-play to make use of this.
Please review the new API and let me know if this makes any sense or could be
simplified somehow.
### Depends on
* [Bug 770368](https://bugzilla.gnome.org/show_bug.cgi?id=770368)

# Issue 416: h265parser: refactoring some methods to be more like h264parser
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/416 (updated 2021-09-24)

## Submitted by m][sko
**[Link to original bug (#769717)](https://bugzilla.gnome.org/show_bug.cgi?id=769717)**
## Description
I think we should refactor h265parser's methods to be more like h264parser
for example gst_h265_parse_process_nal and gst_h265_parse_handle_frame.
In gst_h265_parse_handle_frame there is a part with "no SPS/PPS yet, nal Type: %d %s, Size: %u will be dropped"; that is really not good.
h264parser doesn't have anything like this.

# Issue 413: gstfaceoverlay: Detect multiple faces and allow to overlay multiple images with different types
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/413 (updated 2021-09-24)

## Submitted by César Fabián Orccón Chipana
**[Link to original bug (#769176)](https://bugzilla.gnome.org/show_bug.cgi?id=769176)**
## Description
Created attachment 332126
faceoverlay: Detect multiple images and allow overlaying multiple images with different types.
Problems with the current element:
- gstfaceoverlay can only overlay an image over a single face.
- gstfaceoverlay only supports SVG files.
Solutions in the proposed patch:
- gstfaceoverlay used the gstrsvgoverlay element. I have replaced it with gstcairooverlay so I can draw whatever I want there, for example multiple images over all the detected faces.
- Since I can draw whatever I want, I can put a gdkpixbuf in cairo, which allows the plugin to support multiple faces.
This patch depends on https://bugzilla.gnome.org/show_bug.cgi?id=764011
**Patch 332126**, "faceoverlay: Detect multiple images and allow to overlay multiple images with different types.":
[0001-faceoverlay-Detect-multiple-images-and-allow-to-over.patch](/uploads/bd004fb8ff5dca73d933e1e40775989d/0001-faceoverlay-Detect-multiple-images-and-allow-to-over.patch)
### Depends on
* [Bug 764011](https://bugzilla.gnome.org/show_bug.cgi?id=764011)

# Issue 410: ion: DMA Buf allocator based on ion
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/410 (updated 2021-09-24)

## Submitted by kevin
**[Link to original bug (#768794)](https://bugzilla.gnome.org/show_bug.cgi?id=768794)**
## Description
DMA Buf allocator based on ion
### Blocking
* [Bug 770585](https://bugzilla.gnome.org/show_bug.cgi?id=770585)

# Issue 409: kmssink: Add tile format support
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/409 (updated 2021-09-24)

## Submitted by Nicolas Dufresne `@ndufresne`
**[Link to original bug (#768703)](https://bugzilla.gnome.org/show_bug.cgi?id=768703)**
## Description
This is a bit tricky. Tiled formats in DRM are set with two values. The first is the format family (like NV12), and then you add a modifier that specifies the transformation. The main issue right now is that I don't see anything in DRM to probe the availability of each modifier for each format.

# Issue 402: waylandsink: add support wayland presentation time interface
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/402 (updated 2023-01-13)

## Submitted by Wonchul Lee
**[Link to original bug (#768079)](https://bugzilla.gnome.org/show_bug.cgi?id=768079)**
## Description
I bring in comments about this task and wrap each writer's name in angle brackets; sorry for the poor readability.
Waylandsink was handled by George Kiagiadakis, who had written presentation time interface code for the demo, but the interface has since been changed and settled down as a stable protocol.
I started from George's work (http://cgit.collabora.com/git/user/gkiagia/gst-plugins-bad.git/log/?h=demo), removing the presentation queue and considering display stack delay.
It was predicting the latency at the display stack from wl_commit/damage/attach to frame presence, and Pekka Paalanen (pq) advised that this would not accurately estimate the delay from wl_surface_commit() to display.
(it's part of comments)
`<pq>` wonchul, if you are trying to estimate the delay from wl_surface_commit() to display, and you don't sync the time you call commit() to the incoming events, that's going to be a lot less accurate.
`<pq>` 11:11:07> no, I literally meant replacing the queueing protocol calls with a queue implementation in the sink, so you don't use the queueing protocol anymore, but rely only on the feedback protocol to trigger attach+commits from the queue.
`<pq>` 11:12:27> the queue being a timestamp-ordered list of frame, just like in the weston implementation.
So, estimating the delay that way from wayland is not very accurate.
I turned to adding a queue holding buffers before doing render() in the waylandsink.
`<Olivier Crête>`
I'm a bit concerned about adding a queue in the sink that would increase the latency unnecessarily. I wonder if this could be done while queueing around 1 buffer there in normal streaming. Are we talking about queuing the actual frames or just information about the frames?
`<Wonchul Lee>`
I've queued references to frames and tried to render based on the wayland presentation clock.
Adding a queue in the sink could bring some delay depending on the specific content. It's not clear to me yet which specific factor causes the delay, but yes, it would increase the latency at the moment.
The idea was disabling clock synchronization in gstbasesink and rendering (wayland commit/damage/attach) frames based on the calibrated wayland clock. I pushed references to the gstbuffers onto the queue and set an async clock callback to request a render at the right time, and then rendered or dropped depending on the adjusted timestamp.
These changes have the issue that the adjusted timestamp at which a render is requested gets later than expected, and in some cases that could cause most of the frames to be dropped, since the adjusted timestamp was always late.
So I'm looking at audiobasesink as a reference for adjusting clock synchronization of the frames with the wayland clock.
`<Olivier Crête>`
This work has two separate goals:
When the video has a different framerate than the display framerate, it should drop frames more or less evenly, so if you need to display 4 out of 5 frames, it should be something like 1,2,3,4,6,7,8,9,11,... Or if you need to display 30 out of 60 frames it should display 1,3,5,7,9, etc. Currently, GstBaseSink is not very clever about that.
And we have to be careful as this can be also caused by the compositor not being able to keep up. It's not because the display can do 60fps that the compositor is actually able to produce 60 new frames, it could be limited to a lower number, so we'll also have to make sure we're protected against that.
We want to guess the latency added by the display stack. The current GStreamer video sinks more or less assume that a buffer is rendered immediately when the render() vmethod returns, but this is not really how current display hardware work. Especially when you have double or triple buffering. So we want to know how much in advance to submit the buffer, but not too early to not display it one interval too early.
I just asked @nicolas a quick question about how he thought we should do this, then we spent two hours whiteboarding ideas about this and we've barely been able to define the problem.
Here are some ideas we bounced around:
After submitting one frame (the first frame? the preroll frame?), we can have an idea of the upper bound of the latency for the live pipeline case. It should be the time between the moment a frame was submitted and when it was finally rendered + the "refresh". We can probably delay sending the async-done until the presented event of the first frame has arrived.
For the non-live case, we can probably find a way to submit the frame as early as possible before the next. Finding that time is the tricky part I think
@wonchul: could you summarize the different things you tried, what the hypotheses were and what the results were? It's important to keep these kinds of records for the Tax R&D filings (and so we can keep up with your work).
@pq or @daniels:
what is the logic behind the seq field, how do you expect it can be used? Do you know any example where it is used?
I'm also not sure how we can detect the case where the compositor cannot keep up? Or if the compositor is gnome-shell and has a GC that makes it miss a couple of frames for no good reason?
From the info in the presented event (or any other way), is there a way we can evaluate the latest moment we can submit a buffer and still have it arrive in time for a specific refresh? Or do we have to try and then do some kind of search to find what those deadlines are in practice?
`<Pekka Paalanen>`
seq field of wp_presentation_feedback.presented event:
No examples of use, I don't think. I didn't originally consider it needed, but it was added to allow implementing GLX_OML_sync_control on top of it. I do not think we should generally depend on seq unless you specifically care about the refresh count instead of timings. My intention with the design was that new code can work better with timestamps, while old code you don't want to port to timestamps could use seq as it has always done. Timestamps are "accurate", while seq may have been estimated from a clock in the kernel and may change its rate or may not have a constant rate at all.
seq comes from a time, when display refresh was a known guaranteed constant frequency, and you could use it as a clock by simply counting cycles. I believe all timing-sensitive X11 apps have been written with this assumption. But it is no longer exactly true, it has caveats (hard to maintain across video mode switches or display suspends, lacking hardware support, etc.), and with new display tech it will become even less true (variable refresh rate, self-refresh panels, ...).
seq is not guaranteed to be provided, it may be zero depending on the graphics stack used by the compositor. I'm also not sure what it means if you don't have both VSYNC and HW_COMPLETION in flags
The timestamp OTOH is always provided, but it may have some caveats which should be indicated by unset bits in flags.
Compositor not keeping up:
Maybe you could use the tv + refresh from presented event to guess when the compositor should be presenting your frame, and compare afterwards with what actually happened?
I can't really think of a good way to know if the compositor cannot keep up or why it cannot keep up. Hiccups can happen and the compositor probably won't know why either. All I can say is collect statistics and analyze them over time. This might be a topic for further investigation, but to get more information about which steps take too much time we need some kernel support (explicit fencing) that is being developed, and we need to make the compositor use that information.
Only hand-waving, sorry.
Finding the deadline:
I don't think there is a way to know really, and also the compositor might be adjusting its own schedules, so it might be variable.
The way I imagined it is that from the presented event you compute the time of the next possible presentation, and if you want to hit that, submit a frame ASAP. This should get you just below one display-frame-cycle of latency in any case, if your rendering is already complete.
If we really need the deadline, that would call for extending the protocol, so that the compositor could tell you when the deadline is. The compositor chooses the deadline based on how fast it thinks it can do a composition and hit the right vblank.
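Pekka's suggestion of computing the next possible presentation from the feedback event can be sketched in plain C. The names are illustrative: `last_present` and `refresh` stand in for the timestamp and refresh fields reported by the presentation-feedback event, all in nanoseconds.

```c
#include <stdint.h>

/* Given the timestamp of the last presented frame and the reported
 * refresh interval, compute the earliest presentation time at or
 * after `now`.  Submitting a frame ASAP once this is known should
 * give just under one refresh cycle of latency, as described above. */
static uint64_t
next_presentation_time (uint64_t last_present, uint64_t refresh, uint64_t now)
{
  if (refresh == 0 || now <= last_present)
    return last_present + refresh;

  uint64_t elapsed = now - last_present;
  uint64_t cycles = (elapsed + refresh - 1) / refresh;  /* ceil */
  return last_present + cycles * refresh;
}
```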
`<Wonchul Lee>`
About the latency, I tried to get the latency added by the display stack from the wl commit/damage/attach to the presented frame. It's a variable delay depending on the situation, as pq mentioned before, and it could disturb targeting the next present. We could assume an optimal latency by accumulating it and observing the gap via the presentation feedback, but that may not always be reliable.
I tried to synchronize GStreamer clock time with the presentation feedback to render a frame on time, and added a queue in GstWaylandSink to request a render on each presentation feedback if there was a frame due, similar to what George did. It doesn't fit well with GstBaseSink though, and GstWaylandSink needs to disable BaseSink time synchronization and do the computation itself. I faced unexpected underflow (consistently increasing delay) when playing an mpegts stream, so it also needs proper QoS handling to prevent underflow.
It would be good to get a reliable latency from the display stack to use when synchronizing the presentation time, whether GstWaylandSink computes it itself or not; there's latency that we're missing anyway, though I'm not sure it's feasible.
`<Pekka Paalanen>`
@wonchul btw. what do you mean when you say "synchronize GStreamer clock time with presentation feedback"?
Does it mean something else than looking at what clock is advertised by wp_presentation.clock_id and then synchronizing GStreamer clock with clock_gettime() using the given clock id? Or does synchronizing mean something else than being able to convert a timestamp from one clock domain to the other domain?
`<Nicolas Dufresne>`
@pq I would need some clarification about submitting frames ASAP. If we blindly do that, frames will get displayed too soon on screen (in playback, decoders are much faster than the expected render speed). In GStreamer, we have infrastructure to wait until the moment is right. The logic (simplified) is to wait for the right moment minus the "currently expected" render latency, and submit. This is in the playback case of course, and is to ensure the best possible A/V sync. In that case we expect the presentation information to be helpful in constantly correcting that moment. What we miss is some semantics; just blindly obeying the computed render delay of the last frames does not seem like the best idea. We expected to be able to calculate, or estimate, a submission window that will (most of the time) hit the screen at an estimated time.
For the live case, we're still quite screwed. Nothing seems to improve our situation. We need to pick a latency at the start, and if we later find that the latency was too small (the latency is the window in which we are able to adapt), we end up screwing up the audio (a glitch) to increase that latency window. So again, some semantics that we could use to calculate a pessimistic latency from the first presentation report would be nice.
`<Olivier Crête>`
I think that in the live case you can probably keep a 1 frame queue at the sink, so when a new frame arrives, you can decide if you want to present the queued one at the next refresh or replace it with a new one. Then the thread that talks to the compositor (and gets the events, etc), can pick the buffers from the "queue" to send to the compositor.
`<Nicolas Dufresne>`
Ok, that makes sense for non-live. It would be nice to document the intended use; that was far from obvious. We keep thinking we need to look at the numbers, but we don't understand at first that the moment we get called back is important. You seem to assume that we can "pick" a frame, as if the sink was pulling whatever it wants randomly; that's unfortunately not how things work. We can though introduce a small queue (some late queue) so we only start blocking upstream when that queue is full, and it would help with making decisions.
For live it's much more complex. The entire story about declared latency exists because if we don't declare any latency, that queue will always be empty. Worst case, the report will always tell us that we have displayed the frame late. I'm quite sure you told me that the render pipeline can have multiple steps, where submitting frames 1, 2, 3 at 1 vblank distance will render on vblanks 3, 4, 5, with effectively 3 vblanks of latency. That latency is what we need to report for proper A/V sync in a live pipeline, and changing it is to be done with care as it breaks the audio. There we need some ideas, because right now we have no clue.

# Issue 401: h265parse: Add support for stereoscopic / multiview video
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/401 (updated 2021-09-24)

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#767939)](https://bugzilla.gnome.org/show_bug.cgi?id=767939)**
## Description
See summary

# Issue 399: decklinkvideosink: Add support for mixed interlace-mode
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/399 (updated 2021-09-24)

## Submitted by Dave
**[Link to original bug (#767779)](https://bugzilla.gnome.org/show_bug.cgi?id=767779)**
## Description
decklinkvideosink will reject interlace-mode=mixed when set to an interlaced mode, even if the actual source file is interleaved.
Version: 1.8.0

# Issue 393: opencv: add cvclahe element to apply adaptive histogram equalization
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/393 (updated 2021-09-24)

## Submitted by Joshua M. Doe
**[Link to original bug (#766865)](https://bugzilla.gnome.org/show_bug.cgi?id=766865)**
## Description
Created attachment 328507
patch adding cvclahe element
Patch attached which adds cvclahe, an OpenCV element which applies contrast limited adaptive histogram equalization (https://en.wikipedia.org/wiki/Adaptive_histogram_equalization) to RGB, GRAY8, and GRAY16_LE video.
This is useful to bring out detail in high dynamic range scenes. While it provides interesting results in RGB and GRAY8, it is most useful in GRAY16_LE video.
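As a rough illustration of what the element computes: the non-adaptive baseline of CLAHE is plain histogram equalization, which builds a histogram, turns its cumulative distribution into a lookup table, and remaps each pixel. CLAHE additionally clips the histogram and applies the remapping per tile with interpolation. A minimal sketch of the baseline for GRAY8 data follows (illustrative only, not the OpenCV implementation):

```c
#include <stddef.h>
#include <stdint.h>

/* In-place global histogram equalization of 8-bit grayscale pixels:
 * map each value through the normalized cumulative histogram. */
static void
equalize_gray8 (uint8_t *pix, size_t n)
{
  size_t hist[256] = { 0 };
  uint8_t lut[256];

  if (n == 0)
    return;

  for (size_t i = 0; i < n; i++)
    hist[pix[i]]++;

  size_t cdf = 0;
  for (int v = 0; v < 256; v++) {
    cdf += hist[v];
    lut[v] = (uint8_t) ((cdf * 255) / n);
  }

  for (size_t i = 0; i < n; i++)
    pix[i] = lut[pix[i]];
}
```

For GRAY16_LE the same idea applies with a larger (or binned) histogram, which is where the extra dynamic range pays off.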
**Patch 328507**, "patch adding cvclahe element":
[0001-opencv-add-new-element-cvclahe-to-apply-CLAHE-to-16-.patch](/uploads/7638c1babee8919a4f75b0757e5765d8/0001-opencv-add-new-element-cvclahe-to-apply-CLAHE-to-16-.patch)

# Issue 389: kmssink: add libdrm in gst-libs/ext
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/389 (updated 2021-09-24)

## Submitted by Víctor Manuel Jáquez Leal `@vjaquez`
**[Link to original bug (#766468)](https://bugzilla.gnome.org/show_bug.cgi?id=766468)**
## Description
It would be nice to add libdrm/libkms in gst-libs/ext so users won't need to have it installed on their systems (small embedded systems); we could also add particular hacks there for specific SoCs.

# Issue 387: android: Implement a video sink that provides an android.view.View
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/387 (updated 2021-09-24)

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#766044)](https://bugzilla.gnome.org/show_bug.cgi?id=766044)**
## Description
Similar to gtk(gl)sink, caopengllayersink, qmlvideosink. This could provide a simple subclass of android.opengl.GLSurfaceView (which can create the GL context and everything for us), and then share its own GL context with the pipeline context.
As it requires some Java/JNI code, this should be in the androidmedia plugin.