# GStreamer issues
https://gitlab.freedesktop.org/groups/gstreamer/-/issues

# [Implement a VobSub decoder](https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/21) (2018-11-29)

## Submitted by Sebastian Dröge
Around https://docs.rs/vobsub/0.2.3/vobsub/, probably best as a GstVideoDecoder subclass once we have it... or a GstElement.
Needs some changes to expose the `&[u8]`-based API that is currently internal, and to accept elementary stream packets.
The repository also has some SRT code; it might be worth adding something around that too.
CC @emk, who brought this to my attention on reddit.

---

# [vtenc: vtenc_h264 causing too many Redistribute latency...](https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2750) (2023-08-04)

## Submitted by Miki Grof-Tisza
**[Link to original bug (#789415)](https://bugzilla.gnome.org/show_bug.cgi?id=789415)**
## Description
I'm having some trouble with the pipeline:
gst-launch-1.0 --gst-debug=*:2,vtenc:4 videotestsrc ! videorate ! video/x-raw, format=UYVY, width=1920, height=1080, framerate=30/1 ! queue ! vtenc_h264 ! fakesink
I’m running gstreamer version 1.12.3 built from git source, on a 2017 15” Macbook Pro, w/macOS 10.12.6.
The relevant output:
```
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.154950000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 5 fps 30/1 time 0:00:00.166666665
Redistribute latency...
0:00:00.235169000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 6 fps 30/1 time 0:00:00.199999998
Redistribute latency...
0:00:00.241439000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 5 fps 30/1 time 0:00:00.166666665
Redistribute latency...
0:00:00.253913000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 6 fps 30/1 time 0:00:00.199999998
Redistribute latency...
0:00:00.278467000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 7 fps 30/1 time 0:00:00.233333331
Redistribute latency...
0:00:00.288046000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 6 fps 30/1 time 0:00:00.199999998
Redistribute latency...
0:00:00.371569000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 7 fps 30/1 time 0:00:00.233333331
Redistribute latency...
0:00:03.043466000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 6 fps 30/1 time 0:00:00.199999998
Redistribute latency...
0:00:03.065692000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 5 fps 30/1 time 0:00:00.166666665
Redistribute latency...
0:00:03.090887000 72088 0x7ff6d7010370 INFO vtenc vtenc.c:1070:gst_vtenc_update_latency:<vtenc_h264-0> latency status 0 frames 4 fps 30/1 time 0:00:00.133333332
Redistribute latency...
```
The vtenc_h264 element is calling gst_video_encoder_set_latency() seemingly far too often, which results in gst-launch printing "Redistribute latency..." very frequently (sometimes several times per second).
What seems to be happening is that the element tracks the underlying encoder's pending frame count; whenever that count changes (checked every frame), it calls gst_video_encoder_set_latency().
Instead of tracking the exact latency every frame and forcing a pipeline latency redistribution every time it changes at all, shouldn't the element just check whether the latency is outside the currently configured range (via gst_video_encoder_get_latency()) and only call gst_video_encoder_set_latency() if it is?
I will attach a patch that works for me shortly.
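The suggested check can be sketched in plain Python (`FakeEncoder` and `update_latency` are illustrative stand-ins, not the real element; the actual code would use gst_video_encoder_get_latency()/gst_video_encoder_set_latency() in C):

```python
class FakeEncoder:
    """Stand-in for GstVideoEncoder latency bookkeeping (illustration only)."""

    def __init__(self):
        self.min_latency = 0.0
        self.max_latency = 0.0
        self.redistributions = 0

    def get_latency(self):
        return self.min_latency, self.max_latency

    def set_latency(self, min_lat, max_lat):
        # models gst_video_encoder_set_latency(), which triggers the
        # pipeline-wide "Redistribute latency..." message
        self.min_latency, self.max_latency = min_lat, max_lat
        self.redistributions += 1


def update_latency(enc, pending_frames, fps_n, fps_d):
    """Proposed behaviour: reconfigure only when the new latency falls
    outside the currently configured range."""
    latency = pending_frames * fps_d / fps_n  # seconds queued in the encoder
    lo, hi = enc.get_latency()
    if lo <= latency <= hi:
        return  # within the advertised range: no redistribution needed
    enc.set_latency(0.0, latency)
```

With the pending-frame sequence from the log above (5, 6, 5, 6, 7, ...), this reconfigures only when a new maximum is seen, instead of on every one-frame fluctuation.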
Version: 1.12.3
Assignee: Piotr Brzeziński

---

# [rtpmanager: improve performance in NACK/RTX handling](https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/411) (2021-09-24)

## Submitted by Miguel París Díaz `@mparisdiaz`
**[Link to original bug (#789338)](https://bugzilla.gnome.org/show_bug.cgi?id=789338)**
## Description
Hello,
I am providing several changes to improve NACK/RTX handling.
The general remarks are:
- Add GstRTPRetransmissionListRequest, which allows sending a list of seqnums to ask for NACKs, avoiding a flood of events during packet-loss bursts.
- GstRtpRtxQueue: several changes to avoid unnecessary operations.
- Add an "rtt" field to both GstRTPRetransmissionRequest and GstRTPRetransmissionListRequest, which allows upstream elements like GstRtpRtxQueue to send retransmissions within the RTT.
Anyway, each commit has more detailed info.
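The first and third remarks above can be sketched in plain Python (the names here are hypothetical stand-ins, not the actual GStreamer event API): one list-request event replaces one event per lost seqnum, and carries the measured RTT alongside.

```python
from dataclasses import dataclass


@dataclass
class RtxListRequest:
    """Toy model of the proposed GstRTPRetransmissionListRequest event:
    one event carrying all the seqnums to NACK, plus the measured RTT."""
    seqnums: list
    rtt: float  # seconds


def batch_nacks(lost_seqnums, rtt):
    """One list-request instead of len(lost_seqnums) individual events,
    which matters during packet-loss bursts."""
    return RtxListRequest(seqnums=sorted(set(lost_seqnums)), rtt=rtt)
```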
Version: 1.8.3

---

# [\[hlsdemux\] Use of ID3 PTS for buffer timestamping for live radio streaming](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/625) (2021-09-24)

## Submitted by mki..@..il.com
**[Link to original bug (#789253)](https://bugzilla.gnome.org/show_bug.cgi?id=789253)**
## Description
In the live radio streaming use case, buffers pushed by hlsdemux don't have time information, and the base parser stamps them based on properties such as the ADTS frame duration. In case of network problems, when not all manifests are downloaded, gaps in playback are not set correctly.

---

# [Add Multi-Frame Context support for Parallel Encode use cases](https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/issues/71) (2021-09-24)

## Submitted by Sreerenj Balachandran `@sree`
**[Link to original bug (#789240)](https://bugzilla.gnome.org/show_bug.cgi?id=789240)**
## Description
VA-API is going to bring some new APIs to support Multi-Frame processing.
Currently, multi-frame processing is an optimization to make better use of hardware resources in multi-stream encoding/transcoding applications.
It allows combining several jobs from different parallel pipelines inside one GPU task execution, to better reuse engines that can't be fully loaded in the single-frame case, as well as to decrease the CPU overhead of task submission.
Here is the pull request that landed in libva: https://github.com/01org/libva/pull/112
I am not sure how we could implement this in GStreamer.
These new APIs bring the idea of having a single "super-encoder" element which can accept multiple input streams and encode them to different formats, e.g. one to h264 and another to hevc, or to multiple h264 encoded streams.
From the VA-API perspective there could be 3-4 new APIs:
- `vaCreateMFContext(VADisplay dpy, VAMFContextID *mf_context)`: create a new multi-frame context
- `vaMFAddContext(VADisplay dpy, VAMFContextID mf_context, VAContextID context)`: add an already created VAContext to the MF context
- `vaMFReleaseContext(VADisplay dpy, VAMFContextID mf_context, VAContextID context)`: release the VAContext from the global context
- `vaMFSubmit(VADisplay dpy, VAMFContextID mf_context, VAContextID *contexts, int num_contexts)`: mark the end of rendering for the pictures in the contexts passed with the submission
- `vaQueryXXX`: needed for checking feature support for specific entrypoint combinations; this is not included in the current pull request, though
From the intel-vaapi-driver perspective: if a context is attached to a multi-frame context, the underlying driver won't begin the actual batchbuffer submission in vaEndPicture(); instead it waits for vaMFSubmit from the middleware to start the actual processing.

---

# [assrender: doesn't properly render multiple subtitles](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/623) (2021-09-24)

## Submitted by Andreas Frisch `@fraxinas`
**[Link to original bug (#789239)](https://bugzilla.gnome.org/show_bug.cgi?id=789239)**
## Description
GStreamer only displays the first (ASS) subtitle rectangle and ignores all others that are supposed to be displayed simultaneously.
check out:
gst-play-1.0 http://dreambox.guru/multiple-subtitles-ass.mkv
versus the same video played with VLC.

---

# [vaapisink shows black and white output](https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/issues/70) (2021-09-24)

## Submitted by Mats
**[Link to original bug (#789236)](https://bugzilla.gnome.org/show_bug.cgi?id=789236)**
## Description
Created attachment 361925
Screen shot of Black and White Output
Hi,
I am using the pipeline
gst-launch -v videotestsrc ! vaapisink
and see black and white output on one hardware system (Config 1). The same hard disk on a different hardware system (with a different processor than Config 1) shows colored output.
I used GST_DEBUG=vaapisink:5 gst-launch-1.0 videotestsrc ! vaapisink to compare the debug logs on both systems, and the debug traces are the same.
Are there any further debug approaches that you could suggest?
Thank you
Mats
**Attachment 361925**, "Screen shot of Black and White Output":
![Screenshot_from_2017-10-20_16-08-43](/uploads/982768c9ff0bab09b4f70f9b18eeec34/Screenshot_from_2017-10-20_16-08-43.png)

---

# [Gst.Bin __init__ changed behavior between 1.10 and 1.12 GStreamer version](https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2781) (2023-07-06)

## Submitted by Andreu N
**[Link to original bug (#789055)](https://bugzilla.gnome.org/show_bug.cgi?id=789055)**
## Description
In version 1.10, the Gst.Bin component behaved like a regular GObject class used in Python: you could define a new component using Gst.Bin as the parent, define properties with GObject.ParamFlags.CONSTRUCT_ONLY, and pass values to the __init__ method in order to set them.
```
class MyBin(Gst.Bin):
    foo = GObject.Property(type=str,
                           flags=GObject.ParamFlags.CONSTRUCT_ONLY |
                                 GObject.ParamFlags.READWRITE)

    def __init__(self, *args, **kwargs):
        Gst.Bin.__init__(self, *args, **kwargs)
        # your stuff

my_bin = MyBin(foo='bar')
```
But currently, in Gst version 1.12, it raises an error because the override does not pass properties on to GObject.Object.__init__:
TypeError: __init__() got an unexpected keyword argument 'foo'
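The regression can be modelled in plain Python, without PyGObject (these classes are illustrative stand-ins, not the real override code):

```python
class GObjectLike:
    """Stand-in for GObject.Object: accepts construct properties as kwargs."""

    def __init__(self, **props):
        for name, value in props.items():
            setattr(self, name, value)


class Bin110(GObjectLike):
    """Models the 1.10 behaviour: kwargs are forwarded to the parent,
    so construct-only properties can be set at construction time."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


class Bin112(GObjectLike):
    """Models the 1.12 behaviour: the override only accepts a fixed
    signature, so unexpected keywords raise TypeError."""

    def __init__(self, name=None):
        super().__init__()
```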
Version: 1.12.x

---

# [textoverlay fails to render very long text with halignment=absolute and wrap-mode=none](https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/393) (2021-09-24)

## Submitted by David
**[Link to original bug (#788968)](https://bugzilla.gnome.org/show_bug.cgi?id=788968)**
## Description
I'm trying to scroll some text horizontally like in the news. This text must not wrap.
So I tried something like this:
gst-launch-1.0 videotestsrc ! video/x-raw,width=300,height=300 ! textoverlay text="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" wrap-mode=none halignment=absolute x-absolute=0.5 ! autovideosink
which fails with
(gst-launch-1.0:27547): Pango-WARNING **: pango-layout.c:3209: broken PangoLayout
by "fails" I mean that the text does not render at all.
If I remove "halignment=absolute" or "wrap-mode=none" this works.
Tested by me on debian with gst 1.10.4-1 and by thiagoss on master with the same result.
For now, I'd gladly accept a workaround if you have any.

---

# [rtspconnection: tests sometimes get stuck while connecting to host](https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/392) (2021-09-24)

## Submitted by Jonathan Karlsson
**[Link to original bug (#788866)](https://bugzilla.gnome.org/show_bug.cgi?id=788866)**
## Description
Sometimes the tests in tests/check/libs/rtspconnection.c get stuck in create_connection on g_socket_client_connect_to_host.
It can happen from any of the test cases.
Easily reproduced when running the tests "forever", seen within seconds.
It can also happen when running the suite once.

---

# [mxfmux: Mxfmux is not functional with splitmuxsink](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/620) (2021-09-24)

## Submitted by Baby octopus
**[Link to original bug (#788827)](https://bugzilla.gnome.org/show_bug.cgi?id=788827)**
## Description
There are two issues:
1. The pad templates of mxfmux are not generic like those of mp4. splitmuxsink doesn't have support for such pad templates (e.g. mpeg_audio_sink_%u). It would be good to have generic pad templates such as audio_%d, video_%d, etc.
2. With a workaround for the above issue (hardcoding in splitmuxsink), I tried to run a pipeline to fragment files and mux them into MXF. The issue happens during the creation of the second fragment. EOS isn't handled properly, I guess, which leads to a crash.
Here is the pipeline
gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,format=I420 ! x264enc ! splitmuxsink muxer=mxfmux location=/root/out_%d.mxf max-size-time=4000000000
ERROR:mxfmux.c:1571:gst_mxf_mux_handle_eos: assertion failed: (mux->offset == body_partition)
Version: 1.12.1

---

# [rtpjitterbuffer/h264parse timestamp issue (regression)](https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2509) (2023-04-20)

## Submitted by Nicola `@drakkan`
**[Link to original bug (#788777)](https://bugzilla.gnome.org/show_bug.cgi?id=788777)**
## Description
Created attachment 361248
logs that show the issue
In a pipeline like this:
rtspsrc ! rtph264depay ! h264parse ! ...
When using RTSP over TCP, it can happen that rtspsrc outputs buffers with the same timestamp; when this happens, h264parse outputs buffers with invalid timestamps.
Please take a look at the attached logs; you can see:
rtpjitterbuffer.c:916:rtp_jitter_buffer_calculate_pts: backwards timestamps, using previous time
so different buffers with pts 0:15:23.020362975 are sent.
I'm not sure how to handle this case; do we need to change rtpjitterbuffer or h264parse?
This problem seems to happen only when using RTSP over TCP; I'm unable to reproduce it with RTSP over UDP.
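The fallback behind the log line above can be sketched like this (a heavy simplification of rtp_jitter_buffer_calculate_pts, not the actual code):

```python
def clamp_backwards(pts_estimates):
    """When an estimated PTS would go backwards, reuse the previous one
    ("backwards timestamps, using previous time"), which is how several
    consecutive buffers can end up sharing the same PTS."""
    out, prev = [], float("-inf")
    for pts in pts_estimates:
        prev = max(prev, pts)
        out.append(prev)
    return out
```

The duplicated values model the repeated pts 0:15:23.020362975 in the attached log; it is these duplicates that h264parse then appears to mishandle.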
**Attachment 361248**, "logs that show the issue":
[log.txt](/uploads/beac41b97709f74dfb724f589e3b11e4/log.txt)
Version: 1.x

---

# [dashdemux: segmentbase type with 'sidx' is not working as expected](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/619) (2021-09-24)

## Submitted by Jun Xie
**[Link to original bug (#788763)](https://bugzilla.gnome.org/show_bug.cgi?id=788763)**
## Description
e.g.
http://dash.edgesuite.net/dash264/TestCases/1a/netflix/exMPD_BIP_TC1.mpd
Currently, the whole file is downloaded without using range downloads, and bitrate switching is not available.
The expected behaviour is that the 'sidx' is parsed, segments are retrieved by range download, and the bitrate can be switched.

---

# [ahc2src: Introduce a new source for Android Camera 2 NDK API](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/618) (2021-09-24)

## Submitted by Justin Kim `@joykim`
**[Link to original bug (#788696)](https://bugzilla.gnome.org/show_bug.cgi?id=788696)**
## Description
Created attachment 361159
ahc2src: new source element for camera2ndk on Android
Since Android Nougat, Android provides Camera 2 NDK APIs, so no JNI wrappers are required to implement ahc2src. Therefore, cerbero's 'target_distro_version' should be 'DistroVersion.ANDROID_NOUGAT' or greater to build this element.
One notable result of this element is its quick response when discarding internal buffers, so bug 763308 will not happen.
~~**Patch 361159**~~, "ahc2src: new source element for camera2ndk on Android":
[0001-ahc2src-Add-support-android-camera2ndk.patch](/uploads/ebeaad27b3bce661f77d5b36a6ecc894/0001-ahc2src-Add-support-android-camera2ndk.patch)

---

# [h264parse: May break stream further rather than fixing it](https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/617) (2021-09-24)

## Submitted by Nicolas Dufresne `@ndufresne`
**[Link to original bug (#788595)](https://bugzilla.gnome.org/show_bug.cgi?id=788595)**
## Description
As found with this file in [bug 787795](https://bugzilla.gnome.org/show_bug.cgi?id=787795), with the following FLV file:
https://bugzilla.gnome.org/attachment.cgi?id=360317
h264parse turns a slightly broken stream into an unplayable stream. For example, the displayed video will be gray with the parser:
filesrc location=test.flv ! flvdemux ! h264parse ! avdec_h264 ! autovideosink
But looks fine without:
filesrc location=/tmp/test.flv ! flvdemux ! video/x-h264,alignment=au ! avdec_h264 ! autovideosink

---

# [libav: add avsrc](https://gitlab.freedesktop.org/gstreamer/gst-libav/-/issues/34) (2021-09-24)

## Submitted by Nicola `@drakkan`
**[Link to original bug (#788583)](https://bugzilla.gnome.org/show_bug.cgi?id=788583)**
## Description
Created attachment 361025
add avsrc
This patch adds an avsrc element. I use it to receive RTMP streams.
I think the patch should be improved in several ways to be accepted upstream; anyway, it works fine for my use case as is, and for now I have no more time to work on it.
I think this patch can help other people who have to deal with RTMP in GStreamer save some time.
RTMP support in ffmpeg seems quite good and well maintained; it works perfectly with the streams I have to receive, which, for example, the rtmp2 element is unable to handle.
To enable the avsrc element you need to enable network protocols in libav, for example by configuring it this way:
./configure --with-libav-extra-configure="--enable-network --enable-openssl --enable-protocol=rtmp --enable-protocol=rtmpe --enable-protocol=rtmps --enable-protocol=rtmpt --enable-protocol=rtmpte --enable-protocol=rtmpts --enable-protocol=tls_openssl"
**Patch 361025**, "add avsrc":
[0001-add-avsrc-element.patch](/uploads/9048ccc9e2ce69ecc742ebefc9f78e59/0001-add-avsrc-element.patch)
Version: 1.x

---

# [audiobasesink: get_time is racy](https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2892) (2023-08-08)

## Submitted by Philippe Renon
**[Link to original bug (#788562)](https://bugzilla.gnome.org/show_bug.cgi?id=788562)**
## Description
Here is a simplified version of the get_time method:
```
/* we call this function without holding the lock on sink for performance
 * reasons. Try hard to not deal with an invalid ringbuffer and rate. */
static GstClockTime
gst_audio_base_sink_get_time (GstClock * clock, GstAudioBaseSink * sink)
{
  [SNIP]

  /* our processed samples are always increasing */
  samples = gst_audio_ring_buffer_samples_done (ringbuffer);  /* (1) */

  /* racy if a new sample is written while we are here... */

  /* the number of samples not yet processed; this is still queued in the
   * device (not played, for playback) */
  delay = gst_audio_ring_buffer_delay (ringbuffer);  /* (2) */

  samples -= delay;
  result = gst_util_uint64_scale_int (samples, GST_SECOND, rate);
  return result;
}
```
It computes the time as time = samples - delay.
Where samples is the number of samples that have been written to the audio device and delay represents how much remains to be played by the device.
The race condition happens like this:
- the samples value is read at line (1)
- a new sample is written to the device (but the samples variable does not account for it)
- the delay value is read at line (2) and is bigger than expected because of the new sample that was written
If this happens, the delay is too big, and when it gets subtracted from samples the resulting time goes backwards (by 10 ms in my case, which I believe corresponds to the duration of a sample).
I don't know how to fix this issue because there are three classes involved: the audiobasesink, the audioringbuffer, and the specific sink (a directsoundsink in my case). The specific sink is involved because the get_delay method is implemented there.
There are solutions to mitigate the issue:
1. Get the delay before getting the samples.
   This will not solve the race condition, but will make it less likely to happen. If it does happen, the clock will jump forward (and might jump backwards on a subsequent get_time invocation).
2. Remember the last time value returned by get_time.
   If the new time is before the last time, then return the last time.
On a side note, I believe that directsoundsink delay method is also racy in a similar way.
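Mitigation 2 can be sketched as a wrapper around any clock source (illustrative Python, not the C implementation):

```python
def make_monotonic(get_time):
    """Remember the last value returned and never go backwards, masking
    the samples/delay race at the cost of a flat clock for the duration
    of the glitch."""
    last = [float("-inf")]

    def clock():
        last[0] = max(last[0], get_time())
        return last[0]

    return clock
```

A backwards step in the raw readings (e.g. 10, 20, 19, 30) comes out as 10, 20, 20, 30: the clock stalls briefly instead of jumping back.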
Version: 1.12.x

---

# [rtspconnection: needs locking mechanisms?](https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/390) (2021-09-24)

## Submitted by Jonathan Karlsson
**[Link to original bug (#788549)](https://bugzilla.gnome.org/show_bug.cgi?id=788549)**
## Description
Created attachment 360949
Test case to show the issue when closing the connection while writing to it
When looking into
https://bugzilla.gnome.org/show_bug.cgi?id=785684 and
https://bugzilla.gnome.org/show_bug.cgi?id=771525,
we noticed that if the rtspconnection gets closed by one thread while another thread is writing to it, there will be errors. We also noticed that no members of rtspconnection are protected by any locks. Is this intentional?
I have added a test case to show the situation where one thread is sending when the other one is closing.
Is this something we should handle when providing the patch for https://bugzilla.gnome.org/show_bug.cgi?id=785684? Maybe by adding locks, or by handling gst_rtsp_connection_close in some other way in the new write_vectors method?
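One shape such locking could take (a Python sketch with hypothetical names; the real change would live in gstrtspconnection.c):

```python
import threading


class LockedConnection:
    """Sketch of one possible fix: close() and send() take the same lock,
    so a close can never race an in-flight write, and a write after close
    fails cleanly instead of erroring out."""

    def __init__(self):
        self._lock = threading.Lock()
        self._closed = False
        self.sent = []

    def send(self, data):
        with self._lock:
            if self._closed:
                return False  # connection gone: report failure, don't crash
            self.sent.append(data)
            return True

    def close(self):
        with self._lock:
            self._closed = True
```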
**Patch 360949**, "Test case to show the issue when closing the connection while writing to it":
[0001-rtspconnection-Send-while-closing-connection.patch](/uploads/2ead80d4bd91dce297aa423441aa5a30/0001-rtspconnection-Send-while-closing-connection.patch)

---

# [Opusenc fails with exception](https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/389) (2021-09-24)

## Submitted by Steve Chapman
**[Link to original bug (#788537)](https://bugzilla.gnome.org/show_bug.cgi?id=788537)**
## Description
Created attachment 360934
Console log of gst-launch with gst_debug=4
When I use the opusenc documented example pipeline from gst-launch-1.0 it fails with an exception in libopus-0.dll.
i.e. gst-launch-1.0 -v audiotestsrc wave=sine num-buffers=100 ! audioconvert ! opusenc ! oggmux ! filesink location=sine.ogg
Results in exception:
Unhandled exception at 0x6252D01B (libopus-0.dll) in gst-launch-1.0.exe: 0xC0000005: Access violation reading location 0xFFFFFFFF
I am testing this on Windows 10 with GStreamer 1.12.0.
A GST_DEBUG=4 log of the console activity is attached.
I have also tried a variation on the pipeline that calls out most of the opusenc parameters but that also fails with the same exception, i.e.
gst-launch-1.0 audiotestsrc num-buffers=100 ! "audio/x-raw,channels=1,rate=48000" ! opusenc audio-type=2048 bandwidth=1103 bitrate=20000 bitrate-type=1 inband-fec=FALSE packet-loss-percentage=0 dtx=FALSE ! fakesink
I then tried updating to GStreamer 1.12.3 but the result was the same.
Finally, I extracted libopus-0.dll from the 1.10.2 GStreamer MSI, put it in place of the 1.12.3 version, and found that the problem goes away.
**Attachment 360934**, "Console log of gst-launch with gst_debug=4":
[opusfail1.txt](/uploads/3f90ae4b4162391b1898469ee54023a1/opusfail1.txt)
Version: 1.12.3

---

# [pulsesink: playing after EOS causes a gap in playback after seek](https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/409) (2021-09-24)

## Submitted by mki..@..il.com
**[Link to original bug (#788529)](https://bugzilla.gnome.org/show_bug.cgi?id=788529)**
## Description
Created attachment 360920
Scenario file for gst-validate
If the pipeline's PLAYING state is maintained after EOS, PulseAudio still wants to render the client's data and calls the request callback. pulsesink doesn't have data to write, so PulseAudio renders "silence". If, after some waiting time (e.g. 30 seconds), the GStreamer client seeks (e.g. to 0), the user will hear proper playback for a while (e.g. 12 seconds), then "silence" will be rendered by PulseAudio until the stream time reaches the waiting time (the 30 seconds mentioned earlier).
To reproduce, please run command with attached scenario file:
gst-validate-1.0 --set-scenario=brainson.scenario uridecodebin uri=https://play.podtrac.com/APM-BrainsOn/play.publicradio.org/itunes/d/podcast/minnesota/podcasts/brains_on/2017/02/brainson_20170214_69_20170214_64.mp3 ! autoaudiosink
It seems that:
- pulsesink calculates the absolute offset in PulseAudio's client buffer based on the running (?) time,
- PulseAudio doesn't increase the write index when silence is rendered (samples for silence are stored in a separate memory block), i.e. an underrun has happened,
- PulseAudio doesn't correct the write index if data is provided after the silence,
- PulseAudio increases the read index during the silence.
IMHO pulsesink expects that PulseAudio's read index should not be increased, or that the write index will be rewound to the position of the read index after an underrun.
**Attachment 360920**, "Scenario file for gst-validate":
[brainson.scenario](/uploads/173691d685eb9b2c8d949dd96fa9321e/brainson.scenario)