# gst-rtsp-server issues
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues

# Issue #50: rtspclientsink sometimes doesn't start sending packets
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/50

## Submitted by Sergey
**[Link to original bug (#797311)](https://bugzilla.gnome.org/show_bug.cgi?id=797311)**

## Description
From time to time rtspclientsink doesn't start sending packets; a minimal example is at https://github.com/RSATom/gstrtspclientsink-bug. It simply connects to the RTSP server repeatedly, one connection at a time. The problem can appear on the very first connect or only some connects later.
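For reference, a minimal sketch of the shape of that reproduction, assuming a server is already listening at a placeholder URL (this is not the code from the linked repository):

```c
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  for (int i = 0; i < 10; i++) {
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch (
        "videotestsrc is-live=true ! x264enc ! "
        "rtspclientsink location=rtsp://127.0.0.1:8554/test", &error);

    if (pipeline == NULL) {
      g_printerr ("parse error: %s\n", error->message);
      g_clear_error (&error);
      return 1;
    }

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    /* With the bug, some iterations never actually start sending RTP. */
    g_usleep (3 * G_USEC_PER_SEC);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
  }
  return 0;
}
```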
Version: 1.14.x

# Issue #49: gst_rtsp_media_seek fails after pipeline is playing
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/49

## Submitted by Benjamin Kleine
**[Link to original bug (#797195)](https://bugzilla.gnome.org/show_bug.cgi?id=797195)**

## Description
There seems to be a seeking issue in gstreamer rtsp-server versions 1.13+.
I want to play an existing H.264 video file using the rtsp-server and seek to a specific timestamp after the pipeline has been completely set up.
The initial connection is established correctly and the video plays correctly. When calling gst_rtsp_media_seek on the stream media after a few (10) seconds, the video gets stuck and stops playing.
With older GStreamer versions (1.12.5) the media_seek call works fine.
Checking the commits, the fix for [Bug 788340](https://bugzilla.gnome.org/show_bug.cgi?id=788340) might be related, but I'm not deep enough into the code to fix it myself.
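A minimal sketch of the failing call pattern, assuming `media` is a prepared GstRTSPMedia (for instance obtained from the factory's "media-configure" signal); this is an illustration, not the reporter's code:

```c
#include <gst/rtsp-server/rtsp-media.h>
#include <gst/rtsp/gstrtsprange.h>

static void
seek_media_to_10s (GstRTSPMedia *media)
{
  GstRTSPTimeRange *range = NULL;

  if (gst_rtsp_range_parse ("npt=10-", &range) == GST_RTSP_OK) {
    /* On 1.13+ this is reportedly where the stream gets stuck. */
    if (!gst_rtsp_media_seek (media, range))
      g_warning ("gst_rtsp_media_seek failed");
    gst_rtsp_range_free (range);
  }
}
```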
Version: 1.14.x

# Issue #48: rtsp-stream: RTCP doesn't work correctly in the client-settings case
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/48

## Submitted by Patricia Muscalu
**[Link to original bug (#796917)](https://bugzilla.gnome.org/show_bug.cgi?id=796917)**

## Description
If multiple multicast destinations are selected, RTCP will only ever work for the first multicast client. We need multiple sockets to make it work, which implies that for each new client that requests a specific multicast destination, a new udpsrc and a multiudpsink have to be added to the stream.
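A rough sketch of the shape of that fix, using a hypothetical helper invoked per requested destination (illustration only, not the actual rtsp-stream code; linking the RTCP pads to rtpbin is omitted):

```c
#include <gst/gst.h>

/* Hypothetical helper: give each client-requested multicast destination
 * its own RTCP socket pair. */
static void
add_rtcp_for_destination (GstBin *bin, const gchar *dest, gint rtcp_port)
{
  GstElement *rtcp_src = gst_element_factory_make ("udpsrc", NULL);
  GstElement *rtcp_sink = gst_element_factory_make ("multiudpsink", NULL);

  /* Receive RTCP sent to this destination... */
  g_object_set (rtcp_src, "address", dest, "port", rtcp_port, NULL);
  /* ...and send server RTCP to it. */
  g_signal_emit_by_name (rtcp_sink, "add", dest, rtcp_port);

  gst_bin_add_many (bin, rtcp_src, rtcp_sink, NULL);
  gst_element_sync_state_with_parent (rtcp_src);
  gst_element_sync_state_with_parent (rtcp_sink);
}
```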
# Issue #47: Add API to allow/disallow specific multicast destinations
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/47

## Submitted by Patricia Muscalu
**[Link to original bug (#796916)](https://bugzilla.gnome.org/show_bug.cgi?id=796916)**

## Description
The current implementation relies on the address pool: the server checks whether the suggested destination/port is present in the pool and reserves the address (as a result, no other client is allowed to request the same multicast group, which is not correct; see the discussion in https://bugzilla.gnome.org/show_bug.cgi?id=793441).
Probably the pre-configured address pool should only be involved in choosing server-selected address/port pairs. We would then need an additional security step that checks whether the destination suggested by the client is allowed.

# Issue #46: Leaks various objects and asserts on shutdown
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/46

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#796881)](https://bugzilla.gnome.org/show_bug.cgi?id=796881)**

## Description
See the attached patch to test-launch. Run it with "videotestsrc ! x264enc ! rtph264pay name=pay0" and make sure to attach a client before the 10 seconds are over.
First observation: it only actually quits once the client disconnects; gst_deinit() hangs:
```
Thread 1 (Thread 0x7f2420859680 (LWP 19897)):
#0 0x00007f242265da79 in syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1 0x00007f2422c4ee0f in g_cond_wait (cond=cond@entry=0x55f87474ef40, mutex=0x55f87474f850) at ../../../../glib/gthread-posix.c:1402
#2 0x00007f2422c31f3c in g_thread_pool_free (pool=0x55f87474ef20, immediate=0, wait_=<optimized out>)
at ../../../../glib/gthreadpool.c:776
#3 0x00007f24235b977a in default_cleanup (pool=0x55f87474a910 [GstTaskPool]) at gsttaskpool.c:88
#4 0x00007f24235b890d in init_klass_pool (klass=<optimized out>) at gsttask.c:161
#5 0x00007f24235b8b52 in gst_task_cleanup_all () at gsttask.c:381
#6 0x00007f242353c6c4 in gst_deinit () at gst.c:1095
#7 0x000055f8728292f4 in main (argc=<optimized out>, argv=<optimized out>) at test-launch.c:102
```

# Issue #43: Answer of RTSP server is (very) often not sent out
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/43

## Submitted by Marie Maurer
**[Link to original bug (#796361)](https://bugzilla.gnome.org/show_bug.cgi?id=796361)**

## Description
There seems to be some error with the RTSP server on our i.MX6 platform.
I receive an RTSP OPTIONS and/or RTSP DESCRIBE message (sent by VLC from my PC) and see at least a partial answer in the GStreamer logfile, like:
```
0:00:45.223877672  1023 0x2cf7e60  INFO   rtspclient rtsp-client.c:3460:handle_request: client 0x2a098a0: received a request OPTIONS rtsp://10.5.122.41:8554/live 1.0
0:00:45.224764339  1023 0x2cf7e60  LOG    rtspclient rtsp-client.c:1147:default_pre_signal_handler:<GstRTSPClient@0x2a098a0> returning GST_RTSP_STS_OK
0:00:45.226936672  1023 0x3afbc980 LOG    v4l2bufferpool gstv4l2bufferpool.c:1143:gst_v4l2_buffer_pool_dqbuf:<RecorderStreamerH264Encoder:pool:src> dequeueing a buffer
0:00:45.227309005  1023 0x3afbc980 LOG    v4l2allocator gstv4l2allocator.c:1312:gst_v4l2_allocator_dqbuf:<RecorderStreamerH264Encoder:pool:src:allocator> dequeued buffer 2 (flags 0x4011)
0:00:45.227481672  1023 0x3afbc980 LOG    v4l2allocator gstv4l2allocator.c:1341:gst_v4l2_allocator_dqbuf:<RecorderStreamerH264Encoder:pool:src:allocator> Dequeued capture buffer, length: 2097152 bytesused: 54167 data_offset: 0
0:00:45.227712339  1023 0x3afbc980 LOG    v4l2bufferpool gstv4l2bufferpool.c:1176:gst_v4l2_buffer_pool_dqbuf:<RecorderStreamerH264Encoder:pool:src> dequeued buffer 0x6e1a02b0 seq:418 (ix=2), mem 0x6e38b068 used 54167, plane=0, flags 00004011, ts 0:00:20.081505000, pool-queued=3, buffer=0x6e1a02b0
0:00:45.228077672  1023 0x3afbc980 LOG    v4l2bufferpool gstv4l2bufferpool.c:116:gst_v4l2_buffer_pool_copy_buffer:<RecorderStreamerH264Encoder:pool:src> copying buffer
0:00:45.228248005  1023 0x3afbc980 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:141:gst_v4l2_buffer_pool_copy_buffer:<RecorderStreamerH264Encoder:pool:src> copy raw bytes
0:00:45.230225672  1023 0x3afbc980 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1380:gst_v4l2_buffer_pool_release_buffer:<RecorderStreamerH264Encoder:pool:src> release buffer 0x6e1a02b0
0:00:45.230486672  1023 0x3afbc980 LOG    v4l2bufferpool gstv4l2bufferpool.c:1069:gst_v4l2_buffer_pool_qbuf:<RecorderStreamerH264Encoder:pool:src> queuing buffer 2
0:00:45.230670005  1023 0x3afbc980 LOG    v4l2allocator gstv4l2allocator.c:1265:gst_v4l2_allocator_qbuf:<RecorderStreamerH264Encoder:pool:src:allocator> queued buffer 2 (flags 0x4003)
0:00:45.230801672  1023 0x6e123d50 LOG    v4l2allocator gstv4l2allocator.c:1265:gst_v4l2_allocator_qbuf:<LiveVideov4l2sink:pool:sink:allocator> queued buffer 1 (flags 0x2002)
0:00:45.230809339  1023 0x3afbc980 LOG    v4l2videoenc gstv4l2videoenc.c:628:gst_v4l2_video_enc_get_oldest_frame:<RecorderStreamerH264Encoder> Oldest frame is 418 0:00:20.081505530 and 0 frames left
0:00:45.230990005  1023 0x6e123d50 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1380:gst_v4l2_buffer_pool_release_buffer:<LiveVideov4l2sink:pool:sink> release buffer 0x6c528200
0:00:45.231132672  1023 0x6e123d50 LOG    v4l2bufferpool gstv4l2bufferpool.c:1463:gst_v4l2_buffer_pool_release_buffer:<LiveVideov4l2sink:pool:sink> buffer 1 is queued
0:00:45.231212005  1023 0x6e123d50 LOG    v4l2bufferpool gstv4l2bufferpool.c:1009:gst_v4l2_buffer_pool_poll:<LiveVideov4l2sink:pool:sink> polling device
0:00:45.231294005  1023 0x6e123d50 LOG    v4l2bufferpool gstv4l2bufferpool.c:1143:gst_v4l2_buffer_pool_dqbuf:<LiveVideov4l2sink:pool:sink> dequeueing a buffer
0:00:45.232014339  1023 0x3afbc980 LOG    v4l2videoenc gstv4l2videoenc.c:645:gst_v4l2_video_enc_loop:<RecorderStreamerH264Encoder> Allocate output buffer
0:00:45.232141005  1023 0x2cf7e60  INFO   rtspclient rtsp-client.c:3460:handle_request: client 0x2a098a0: received a request DESCRIBE rtsp://10.5.122.41:8554/live 1.0
0:00:45.232184672  1023 0x3afbc980 LOG    v4l2videoenc gstv4l2videoenc.c:658:gst_v4l2_video_enc_loop:<RecorderStreamerH264Encoder> Process output buffer
0:00:45.232306005  1023 0x2cf7e60  LOG    rtspclient rtsp-client.c:1147:default_pre_signal_handler:<GstRTSPClient@0x2a098a0> returning GST_RTSP_STS_OK
0:00:45.232329339  1023 0x3afbc980 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1705:gst_v4l2_buffer_pool_process:<RecorderStreamerH264Encoder:pool:src> process buffer 0x2b7fcce4
0:00:45.232444005  1023 0x3afbc980 LOG    v4l2bufferpool gstv4l2bufferpool.c:1009:gst_v4l2_buffer_pool_poll:<RecorderStreamerH264Encoder:pool:src> polling device
0:00:45.232458339  1023 0x2cf7e60  LOG    rtspmountpoints rtsp-mount-points.c:251:gst_rtsp_mount_points_match: Looking for mount point path /live
0:00:45.232784672  1023 0x2cf7e60  INFO   rtspmountpoints rtsp-mount-points.c:300:gst_rtsp_mount_points_match: found media factory 0x53b02e68 for path /live
0:00:45.232978672  1023 0x2cf7e60  INFO   GST_PIPELINE gstparse.c:337:gst_parse_launch_full: parsing pipeline description ' ( appsrc name=StreamingSrc min-latency=200000000 is-live=true do-timestamp=true format=3 ! queue ! rtph264pay name=pay0 config-interval=0 pt=96 ) '
0:00:45.233100005  1023 0x2cf7e60  INFO   GST_ELEMENT_FACTORY gstelementfactory.c:361:gst_element_factory_create: creating element "appsrc"
0:00:45.234180005  1023 0x2cf7e60  INFO   GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstBaseSrc@0x2da321a0> adding pad 'src'
0:00:45.234934672  1023 0x2cf7e60  INFO   GST_ELEMENT_FACTORY gstelementfactory.c:361:gst_element_factory_create: creating element "queue"
0:00:45.235195672  1023 0x2cf7e60  INFO   GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x6e3aefa0> adding pad 'sink'
0:00:45.235397005  1023 0x2cf7e60  INFO   GST_ELEMENT_PADS gstelement.c:670:gst_element_add_pad:<GstQueue@0x6e3aefa0> adding pad 'src'
0:00:45.245540339  1023 0x6e123d50 LOG    v4l2allocator gstv4l2allocator.c:1312:gst_v4l2_allocator_dqbuf:<LiveVideov4l2sink:pool:sink:allocator> dequeued buffer 2 (flags 0x2000)
0:00:45.245762339  1023 0x6e123d50 LOG    v4l2bufferpool gstv4l2bufferpool.c:1176:gst_v4l2_buffer_pool_dqbuf:<LiveVideov4l2sink:pool:sink> dequeued buffer 0x6c5280c0 seq:0 (ix=2), mem 0x55def1e8 used 2073600, plane=0, flags 00002000, ts 0:00:00.000000000, pool-queued=2, buffer=0x6c5280c0
0:00:45.245886005  1023 0x6e123d50 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1380:gst_v4l2_buffer_pool_release_buffer:<LiveVideov4l2sink:pool:sink> release buffer 0x6c5280c0
0:00:45.245960339  1023 0x6e123d50 LOG    v4l2bufferpool gstv4l2bufferpool.c:1448:gst_v4l2_buffer_pool_release_buffer:<LiveVideov4l2sink:pool:sink> buffer 2 not queued, putting on free list
0:00:45.246031672  1023 0x6e123d50 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1380:gst_v4l2_buffer_pool_release_buffer:<LeftCamerav4l2src:pool:src> release buffer 0x6e174b50
0:00:45.246101672  1023 0x6e123d50 LOG    v4l2bufferpool gstv4l2bufferpool.c:1069:gst_v4l2_buffer_pool_qbuf:<LeftCamerav4l2src:pool:src> queuing buffer 5
0:00:45.246196672  1023 0x6e123d50 LOG    v4l2allocator gstv4l2allocator.c:1265:gst_v4l2_allocator_qbuf:<LeftCamerav4l2src:pool:src:allocator> queued buffer 5 (flags 0x2003)
0:00:45.246278672  1023 0x6e123d50 LOG    v4l2allocator gstv4l2allocator.c:937:gst_v4l2_allocator_clear_dmabufin:<LiveVideov4l2sink:pool:sink:allocator> clearing DMABUF import, fd 134 plane 0
0:00:45.246351672  1023 0x6e123d50 LOG    bufferpool gstbufferpool.c:1284:default_release_buffer:<LiveVideov4l2sink:pool:sink> released buffer 0x6c5280c0 0
0:00:45.246420672  1023 0x6e123d50 LOG    bufferpool gstbufferpool.c:388:do_free_buffer:<LiveVideov4l2sink:pool:sink> freeing buffer 0x6c5280c0 (2 left)
0:00:45.246497339  1023 0x6e123d50 LOG    v4l2allocator gstv4l2allocator.c:356:gst_v4l2_allocator_release:<LiveVideov4l2sink:pool:sink:allocator> plane 0 of buffer 2 released
0:00:45.246562339  1023 0x6e123d50 LOG    v4l2allocator gstv4l2allocator.c:372:gst_v4l2_allocator_release:<LiveVideov4l2sink:pool:sink:allocator> buffer 2 released
0:00:45.246950339  1023 0x6e123d50 DEBUG  v4l2sink gstv4l2sink.c:492:gst_v4l2sink_show_frame:<LiveVideov4l2sink> render buffer: 0x6e174c90
0:00:45.247050005  1023 0x6e123c90 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1302:gst_v4l2_buffer_pool_acquire_buffer:<LeftCamerav4l2src:pool:src> acquire
0:00:45.247056672  1023 0x6e123d50 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1705:gst_v4l2_buffer_pool_process:<LiveVideov4l2sink:pool:sink> process buffer 0x2ebfc96c
0:00:45.247152005  1023 0x6e123d50 LOG    v4l2bufferpool gstv4l2bufferpool.c:1883:gst_v4l2_buffer_pool_process:<LiveVideov4l2sink:pool:sink> alloc buffer from our pool
0:00:45.247150339  1023 0x55debef0 DEBUG  v4l2videoenc gstv4l2videoenc.c:714:gst_v4l2_video_enc_handle_frame:<RecorderStreamerH264Encoder> Handling frame 419
0:00:45.247219672  1023 0x6e123d50 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1302:gst_v4l2_buffer_pool_acquire_buffer:<LiveVideov4l2sink:pool:sink> acquire
0:00:45.247301672  1023 0x55debef0 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1705:gst_v4l2_buffer_pool_process:<RecorderStreamerH264Encoder:pool:sink> process buffer 0x6e3a776c
0:00:45.247168672  1023 0x6e123c90 LOG    v4l2bufferpool gstv4l2bufferpool.c:1009:gst_v4l2_buffer_pool_poll:<LeftCamerav4l2src:pool:src> polling device
0:00:45.247425672  1023 0x6e123d50 LOG    bufferpool gstbufferpool.c:1133:default_acquire_buffer:<LiveVideov4l2sink:pool:sink> no buffer, trying to allocate
0:00:45.247433339  1023 0x55debef0 LOG    v4l2bufferpool gstv4l2bufferpool.c:1883:gst_v4l2_buffer_pool_process:<RecorderStreamerH264Encoder:pool:sink> alloc buffer from our pool
0:00:45.247592339  1023 0x6e123d50 LOG    v4l2allocator gstv4l2allocator.c:975:gst_v4l2_allocator_alloc_dmabufin:<LiveVideov4l2sink:pool:sink:allocator> allocating empty DMABUF import group
0:00:45.247660339  1023 0x55debef0 DEBUG  v4l2bufferpool gstv4l2bufferpool.c:1302:gst_v4l2_buffer_pool_acquire_buffer:<RecorderStreamerH264Encoder:pool:sink> acquire
```
On the console I see the content of the packets:
```
2018-05-23 18:19:42.302 Streamer.cpp(167): client_connected_cb client_connected_cb 1846863448
2018-05-23 18:19:42.302 Streamer.cpp(816): clientHasConnected user_data 1846863448
2018-05-23 18:19:42.303 Streamer.cpp(825): clientHasConnected setting m_connectedClientsCount to 1
```
-> One packet is received:
```
RTSP request message 0x2a0bc40
 request line:
   method: 'OPTIONS'
   uri: 'rtsp://10.5.122.41:8554/live'
   version: '1.0'
 headers:
   key: 'CSeq', value: '2'
   key: 'User-Agent', value: 'LibVLC/3.0.2 (LIVE555 Streaming Media v2016.11.28)'
 body:
```
-> The answer is sent via send_message (MMMM is my own debug output):
```
MMMM I am inside RTSP: send_message loglevel=9
RTSP response message 0x29e08c0c
 status line:
   code: '200'
   reason: 'OK'
   version: '1.0'
 headers:
   key: 'CSeq', value: '2'
   key: 'Public', value: 'OPTIONS, DESCRIBE, ANNOUNCE, GET_PARAMETER, PAUSE, PLAY, RECORD, SETUP, SET_PARAMETER, TEARDOWN'
   key: 'Server', value: 'GStreamer RTSP server'
 body: length 0
```
-> Second incoming message, which is a DESCRIBE:
```
RTSP request message 0x2a0bc40
 request line:
   method: 'DESCRIBE'
2018-05-23 18:19:42.319 Streamer.cpp(241): sampleCallback uri: 'rtsp://10.5.122.41:8554/live'
appSrc == 0
   version: '1.0'
 headers:
   key: 'CSeq', value: '3'
   key: 'User-Agent', value: 'LibVLC/3.0.2 (LIVE555 Streaming Media v2016.11.28)'
   key: 'Accept', value: 'application/sdp'
 body:
```
-> Here the answer is missing! send_message is not called!
```
2018-05-23 18:19:42.374 Streamer.cpp(180): media_configure_cb client conrequest received 1846863448
```
-> After 5 seconds VLC tries to reconnect because the previous connection was unsuccessful, but this is prohibited by our implementation. So for 5 seconds there is no call to send_message.
```
2018-05-23 18:19:47.302 Streamer.cpp(167): client_connected_cb client_connected_cb 1846863448
2018-05-23 18:19:47.303 Streamer.cpp(834): clientHasConnected second client attempted to connect
```
-> What is this? It is not the RTSP SETUP, but it seems to be an answer. Our last answer? But that one was OK. An answer for the second connection? But its request was never traced...
```
MMMM I am inside RTSP: send_generic_response
MMMM I am inside RTSP: send_message loglevel=9
RTSP response message 0x29e08c0c
 status line:
   code: '503'
   reason: 'Service Unavailable'
   version: '1.0'
 headers:
   key: 'CSeq', value: '3'
   key: 'Server', value: 'GStreamer RTSP server'
 body: length 0
2018-05-23 18:20:02.449 Streamer.cpp(154): client_closed_cb client_closed_cb 1846863448
2018-05-23 18:20:02.450 Streamer.cpp(846): clientHasDisconnected setting m_connectedClientsCount to 0
```
-> Now the client has disconnected, and only now is the RTSP SETUP dumped?
```
RTSP request message 0x2d85678
 request line:
   method: 'SETUP'
   uri: 'rtsp://10.5.122.41:8554/live'
   version: '1.0'
 headers:
   key: 'CSeq', value: '0'
   key: 'Transport', value: 'RTP/AVP;unicast;client_port=9416-9417'
 body:
2018-05-23 18:20:02.461 Streamer.cpp(180): media_configure_cb client conrequest received 1846863448
MMMM I am inside RTSP: send_generic_response
MMMM I am inside RTSP: send_message loglevel=9
RTSP response message 0x29e08c0c
 status line:
   code: '503'
   reason: 'Service Unavailable'
   version: '1.0'
 headers:
   key: 'CSeq', value: '0'
   key: 'Server', value: 'GStreamer RTSP server'
 body: length 0
```
Very mysterious...
Version: 1.14.0

# Issue #37: RTSP server for 1.4 doesn't allow new appsrc pipeline connections
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/37

## Submitted by tan..@..acq.eu
**[Link to original bug (#793264)](https://bugzilla.gnome.org/show_bug.cgi?id=793264)**

## Description
Hello,
After a few play/stop connections from one unique client, the rtsp server doesn't allow any new ones.
Another observation: with multiple clients connected to the server, new client connections always work as long as at least one client is still connected; but once all clients have disconnected, no new connection is possible.
RTSP logs: https://gist.github.com/Mezzano/d18f7ab278e2174adc9c045d5bdea307
Code is adapted from test-appsrc.c: https://gist.github.com/Mezzano/43789624d4983d70bcd9610ecaddbb30
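For context, a sketch of the test-appsrc pattern the reporter adapted (shape only; the element name "mysrc" and the empty need-data stub are illustrative, the linked gist is authoritative):

```c
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

static void
need_data_cb (GstElement *appsrc, guint length, gpointer user_data)
{
  /* Push a buffer into appsrc here (see examples/test-appsrc.c). */
}

/* Connected to the factory's "media-configure" signal. */
static void
media_configure_cb (GstRTSPMediaFactory *factory, GstRTSPMedia *media,
    gpointer user_data)
{
  GstElement *pipeline = gst_rtsp_media_get_element (media);
  GstElement *appsrc =
      gst_bin_get_by_name_recurse_up (GST_BIN (pipeline), "mysrc");

  g_signal_connect (appsrc, "need-data", G_CALLBACK (need_data_cb), NULL);

  gst_object_unref (appsrc);
  gst_object_unref (pipeline);
}
```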
Thanks for helping
Version: 1.x

# Issue #36: DESCRIBE and ANNOUNCE fail for mount points that are a prefix, causing VLC to conclude that the URL is wrong
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/36

## Submitted by gnu..@..og.com
**[Link to original bug (#792804)](https://bugzilla.gnome.org/show_bug.cgi?id=792804)**

## Description
I was trying to create an RTSP/HTTP proxy using the gst-rtsp-server framework. I created a factory and tried to mount it on /proxy, in the hope that I could serve up media when a client connected to rtsp://localhost:8554/proxy/bigbuckbunny or rtsp://localhost:8554/proxy/caminandes.
Unfortunately it appears that gst_rtsp_mount_points_match() will not return the factory for a partial match if the `gint *matched` parameter is NULL.
This happens to be the case when handle_describe_request() calls find_media(), and likewise for handle_announce_request(). So while the factory might be usable in a SETUP call, it causes a 404 for the media in a DESCRIBE RTSP request.
If it is important for gst_rtsp_mount_points_match() to return NULL for prefix matches when matched == NULL (which appears to be the behaviour described in the API documentation), then I recommend adjusting handle_announce_request() and handle_describe_request() to pass in a non-NULL matched pointer.
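A minimal sketch of the suggested fix, assuming a hypothetical lookup helper (illustration only, not the actual rtsp-client code):

```c
#include <gst/rtsp-server/rtsp-mount-points.h>

/* Hypothetical helper: look up a factory for a path such as
 * /proxy/bigbuckbunny where /proxy is the mounted prefix. Passing a
 * non-NULL `matched` pointer lets prefix matches succeed; on return it
 * holds how many characters of `path` the mount point consumed. */
static GstRTSPMediaFactory *
lookup_factory (GstRTSPMountPoints *mounts, const gchar *path)
{
  gint matched = 0;

  return gst_rtsp_mount_points_match (mounts, path, &matched);
}
```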
Version: 1.12.x

# Issue #35: Race between gst_rtsp_client_close() and client_watch_notify() leads to undefined behaviour
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/35

## Submitted by Kseniya Vasilchuk
**[Link to original bug (#790909)](https://bugzilla.gnome.org/show_bug.cgi?id=790909)**

## Description
Created attachment 364515
prevent undefined behaviour
A failed preroll triggers client_watch_notify(), which calls the rtsp_ctrl_timeout_remove() function.
That function calls g_main_context_find_source_by_id(), which uses priv->watch_context to find the source so it can destroy it.
But if gst_rtsp_client_close() is called at the same time (e.g. by gst_rtsp_server_client_filter()), it can unref priv->watch_context and set it to NULL before rtsp_ctrl_timeout_remove() runs.
In that case the documentation for g_main_context_find_source_by_id() says that if the GMainContext is NULL, the default context will be used. But the rtsp client doesn't use the default context, so we end up trying to look up a non-existent source. From https://developer.gnome.org/glib/stable/glib-The-Main-Event-Loop.html:
> It is a programmer error to attempt to lookup a non-existent source.
> More specifically: source IDs can be reissued after a source has been destroyed and therefore it is never valid to use this function with a source ID which may have already been removed. An example is when scheduling an idle to run in another thread with g_idle_add(): the idle may already have run and been removed by the time this function is called on its (now invalid) source ID. This source ID may have been reissued, leading to the operation being performed against the wrong source.

So we have undefined behaviour here.
P.S.
If we are lucky and the default context does not have a source with that id, this merely causes "g_source_destroy: assertion 'source != NULL' failed".
I've attached a patch that prevents calling g_main_context_find_source_by_id() with a NULL GMainContext. Please review it.
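A sketch of the guard's shape, assuming hypothetical parameter names (the attached patch is the authoritative fix):

```c
#include <glib.h>

/* Hypothetical helper mirroring the patch's intent: only destroy the
 * timeout source if the client's watch context still exists, so the
 * lookup never falls back to the default main context. */
static void
ctrl_timeout_remove_safe (GMainContext *watch_context, guint timeout_id)
{
  GSource *source;

  if (watch_context == NULL || timeout_id == 0)
    return;

  source = g_main_context_find_source_by_id (watch_context, timeout_id);
  if (source != NULL)
    g_source_destroy (source);
}
```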
~~**Patch 364515**~~, "prevent undefined behaviour":
[prevent_undefined_behaviour.patch](/uploads/7b609ca870f2c09ea6824b463b6d4c0c/prevent_undefined_behaviour.patch)

# Issue #33: rtsp: Allow pipelines to be shared between media
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/33

## Submitted by Nick Kallen
**[Link to original bug (#779484)](https://bugzilla.gnome.org/show_bug.cgi?id=779484)**

## Description
In order to build a video compositor/mosaic RTSP server, I subclassed RTSPMediaFactory to create my own elements directly. Each RTSPMedia created by the factory lives in the same pipeline (so they can be muxed together).

But RTSPMedia does not allow you to share a pipeline between media objects, because as part of the preparation process it prerolls the pipeline: it sets the pipeline state to PAUSED and then listens for confirmation that it transitioned from READY to PAUSED before activating everything. If you share a pipeline between RTSPMedia objects, it will already be in state PLAYING most of the time, and new medias that share the pipeline will not get prepared correctly.
My solution is to make the preroll method overridable.
I wanted some feedback on this before I continue...
Version: 1.11.1

# Issue #32: limit max-sessions to 1 leads to service unavailable after client crashes
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/32

## Submitted by Keith Thornton `@Keith`
**[Link to original bug (#778742)](https://bugzilla.gnome.org/show_bug.cgi?id=778742)**

## Description
Created attachment 345922
vlc attached successfully, followed after crash by failed attempt
If max-sessions is limited to 1 by calling gst_rtsp_session_pool_set_max_sessions() and the client terminates without sending an RTSP TEARDOWN message, no further sessions can be established.
The attached log was produced by changing test-launch.exe on Windows to set max-sessions to 1, starting the server with the example pipeline, attaching VLC as a client, and then killing it using Process Explorer.
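A minimal sketch of that test-launch modification, assuming `server` is the GstRTSPServer the example creates:

```c
#include <gst/rtsp-server/rtsp-server.h>

/* Limit the server to a single concurrent session, as in the report. */
static void
limit_to_one_session (GstRTSPServer *server)
{
  GstRTSPSessionPool *pool = gst_rtsp_server_get_session_pool (server);

  gst_rtsp_session_pool_set_max_sessions (pool, 1);
  g_object_unref (pool);
}
```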
**Attachment 345922**, "vlc attached successfully, followed after crash by failed attempt":
[test.log](/uploads/073c96828ba5161f135fe8c81133fe4f/test.log)
Version: 1.x

# Issue #31: Segfault when many clients use the rtsp server with the GST_RTSP_LOWER_TRANS_TCP option on
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/31

## Submitted by Kseniya Vasilchuk
**[Link to original bug (#773532)](https://bugzilla.gnome.org/show_bug.cgi?id=773532)**

## Description
Created attachment 338506
stacktrace
I've attached a stacktrace of the segfault.
As you can see, the segfault happens inside the mutex lock in the do_send_data() function, which is called from gst_rtsp_stream_transport_send_rtp().
So the reason for the segfault is that the GstRTSPClient is finalized before or while do_send_data() is running, and do_send_data() then uses the already-destroyed client mutex.
I've made a patch to fix that problem. Please review it.
P.S. To reproduce this bug I took the rtsp server from the examples (test-readme.c) and added the following line to it:
`gst_rtsp_media_factory_set_protocols (factory, GST_RTSP_LOWER_TRANS_TCP);`
Then I used many clients that connected/disconnected simultaneously. I also added a sleep(1) inside gst_rtsp_stream_transport_send_rtp() to reproduce it more often.
**Attachment 338506**, "stacktrace":
[segfault_stacktrace.txt](/uploads/0d7f0d80cb0e194eb31292af2d4e3fb8/segfault_stacktrace.txt)
Version: 1.8.2

# Issue #30: test-multicast: Does not seem to work
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/30

## Submitted by Nicolas Dufresne `@ndufresne`
**[Link to original bug (#773182)](https://bugzilla.gnome.org/show_bug.cgi?id=773182)**

## Description
I can't play back the stream hosted by test-multicast on master; it does seem to work with test-multicast2. The client is gst-play-1.0.
Some warnings I see:
```
0:00:04.784753182  6718 0x1291540 WARN rtspmedia rtsp-media.c:3606:gst_rtsp_media_suspend: media 0x7f80c403e180 was not prepared
0:00:04.801068498  6718 0x1291540 WARN rtspmedia rtsp-media.c:3867:gst_rtsp_media_set_state: media 0x7f80c403e180 was not prepared
```

# Issue #28: Reserve local port in the pool in multicast
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/28

## Submitted by Xavier Claessens `@xclaesse`
**[Link to original bug (#770969)](https://bugzilla.gnome.org/show_bug.cgi?id=770969)**

## Description
In the multicast case, we bind the socket on ANY address, so I guess the port should be taken from the unicast pool as well as from the multicast pool.
I think alloc_ports_one_family() should reserve a multicast address from the pool, then try to reserve the same port in the unicast pool, then loop until it finds a port that is available in both pools.
I added FIXME comments about this in gst_rtsp_stream_get_multicast_address() and gst_rtsp_stream_reserve_address().

# Issue #27: Client cannot decide destination/port in SETUP
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/27

## Submitted by Xavier Claessens `@xclaesse`
**[Link to original bug (#770954)](https://bugzilla.gnome.org/show_bug.cgi?id=770954)**

## Description
Currently the mcast address is allocated from the address pool on DESCRIBE, and only one mcast group is supported. So if a client asks for a specific destination/port in SETUP, default_configure_client_transport() will call gst_rtsp_stream_reserve_address() and return an error.

# Issue #26: RTSPMediaFactory is missing the gobject (introspection) wrapper function for create_pipeline
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/26

## Submitted by gzp
**[Link to original bug (#770448)](https://bugzilla.gnome.org/show_bug.cgi?id=770448)**

## Description
I've tried to set the pipeline for a media externally from Python, but a required virtual function cannot be overridden, as it is not introspectable in the gir.
From GstRtspServer-1.0.gir:
```
...
<virtual-method name="create_pipeline" introspectable="0">
  <return-value>
    <type name="Gst.Element" c:type="GstElement*"/>
  </return-value>
...
```
I'm not sure if it is intentional or accidental.
Thanks, Gzp
Version: 1.x

# Issue #25: Out of segment data sent to clients when not wanted
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/25

## Submitted by Linus Svensson
**[Link to original bug (#770133)](https://bugzilla.gnome.org/show_bug.cgi?id=770133)**

## Description
gst-rtsp-server sometimes sends buffers that are outside the current segment (before the segment in this case), when it performs a seek.
The problem appears if a client issues an RTSP PLAY (without specifying a new start position) while the server is PAUSED at a delta unit in the stream, and the PLAY method requires the server to perform a seek. A seek is required if, for example, the stop position is updated (Range: npt=135.42), scale or speed is changed (currently not supported, though), or something else that is updated via a seek changes.

The server always performs a FLUSHING seek, which is unfortunate since we want to keep the current position when the start position isn't changed. GstRTSPMedia will actually seek to the current position (obtained with a gst_query_position on the pipeline) and set GST_SEEK_FLAG_ACCURATE in that case. In a use case with an H.264 stream in a Matroska file, the result of such a seek is that the segment gets a start time matching the seeked position, while the first buffer is the previous non-delta unit in the stream (that's of course good :)). This yields an RTP stream starting at position X, while the Range in the RTSP PLAY response is position X plus the delta between the seeked position and the non-delta unit. By then, the connected client has already received all frames up to the delta frame at the seeked position.
Would it make sense to drop all buffers outside of the segment in GstRtpBasePayload, or make it configurable to do so?

# Issue #24: Possible race condition when fetching rtp info
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/24

## Submitted by Joakim Johansson
**[Link to original bug (#769979)](https://bugzilla.gnome.org/show_bug.cgi?id=769979)**

## Description
If the application does not send a PAUSE before a new PLAY request, there is a possibility of a race condition when fetching the RTP-Info.
This is easily triggered by adding a short sleep in rtsp-client's handle_play_request, just before:
```
/* grab RTPInfo from the media now */
rtpinfo = gst_rtsp_session_media_get_rtpinfo (sessmedia);
```
and then executing two PLAY requests without a PAUSE in between.
Running a test case that executes 10 PLAY requests with a 1 s delay between them triggers this problem 80% of the time (even without the short sleep). Since the test case checks the D-bit in the extension header, it is mandatory that the RTP-Info identifies the very first RTP packet after the PLAY command.
The solution is to perform a forced pause if the application has not sent a PAUSE command before the PLAY command.

# Issue #23: Uses the same thread for multiple clients and blocks them on media preparation, leads to deadlocks
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/23

## Submitted by Sebastian Dröge `@slomo`
**[Link to original bug (#767333)](https://bugzilla.gnome.org/show_bug.cgi?id=767333)**

## Description
Currently, when a client connects, a thread is assigned to it and everything that happens with this client happens on that thread. By itself this is not a problem, but unfortunately the rtsp-media also waits synchronously on this very thread to be prepared.
This means that while the media is being prepared, no other client that uses the same thread can send requests or get responses. In particular, this can lead to various deadlock situations, for example:
- two clients are connecting and creating a media (one for each) that is fed from an appsrc
- the appsrcs are both fed from the same, other pipeline
- the first client finishes preparing the media (-> the pipeline is blocked)
- the second client starts preparing the media
- the other pipeline is blocking because the first appsrc is blocked
- the second appsrc never gets enough data to preroll, and the first appsrc is never unblocked because the client has no way to send its PLAY request to unblock the pipeline
We should wait asynchronously for the media to be prepared, which unfortunately requires a few changes in the way rtsp-client.c works. Currently, request-response handling is a completely synchronous process.
Version: 1.8.1

# Issue #22: RTSP server won't be destroyed until media is pre-rolled (or 20s timeout)
https://gitlab.freedesktop.org/gstreamer/gst-rtsp-server/-/issues/22

## Submitted by Xavier Claessens `@xclaesse`
**[Link to original bug (#767021)](https://bugzilla.gnome.org/show_bug.cgi?id=767021)**

## Description
Created attachment 328727
test case
My application has an RTSP server that can be started/stopped by the user. When shutting down the server, I disconnect all clients, unref the server, and call gst_rtsp_thread_pool_cleanup() to be sure all resources have been cleaned up.
If a client connects just before I stop the server, it will wait for pre-roll in gst_rtsp_media_get_status(). If pre-roll never happens, it only unblocks after a 20 s timeout.
I think that when the client disconnects, that condition should be unblocked.
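For reference, a sketch of the shutdown sequence described above (shape only; the attached test case is authoritative):

```c
#include <gst/rtsp-server/rtsp-server.h>

/* Drop every connected client when shutting the server down. */
static GstRTSPFilterResult
remove_client_cb (GstRTSPServer *server, GstRTSPClient *client,
    gpointer user_data)
{
  return GST_RTSP_FILTER_REMOVE;
}

static void
stop_server (GstRTSPServer *server)
{
  gst_rtsp_server_client_filter (server, remove_client_cb, NULL);
  g_object_unref (server);
  /* Make sure the server's thread pool resources are released too. */
  gst_rtsp_thread_pool_cleanup ();
}
```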
**Attachment 328727**, "test case":
[test-shutdown.c](/uploads/32f8ef90019129a6e68623eb40ac3694/test-shutdown.c)
### Depends on
* [Bug 750111](https://bugzilla.gnome.org/show_bug.cgi?id=750111)