1. 05 Jul, 2021 1 commit
    • Protection against early RTCP packets. · 43572a89
      Göran Jönsson authored and GStreamer Marge Bot committed
      When RTCP packets are received early, the funnel is not ready yet and
      GST_FLOW_FLUSHING will be returned when pushing data to its srcpad.
      This causes the thread that handles RTCP packets to go into pause
      mode. Since this thread is paused, there will be no further callbacks
      to handle keep-alive for incoming RTCP packets, and the session will
      time out if the client is not using another keep-alive mechanism.
      
      Change-Id: Idb29db05f59c06423fa693a2aeeacbe3a1883fc5
      Part-of: <!211>
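
      A minimal sketch of the pattern the fix relies on (not the actual
      patch; handle_early_rtcp() is a hypothetical helper): treat
      GST_FLOW_FLUSHING from the not-yet-ready funnel as non-fatal so the
      RTCP thread keeps running and keep-alive handling stays alive.

        #include <gst/gst.h>

        /* Push an early RTCP buffer downstream, but do not let
         * GST_FLOW_FLUSHING pause the RTCP thread. */
        static GstFlowReturn
        handle_early_rtcp (GstPad * srcpad, GstBuffer * buffer)
        {
          GstFlowReturn ret = gst_pad_push (srcpad, buffer);

          if (ret == GST_FLOW_FLUSHING) {
            /* Funnel not ready yet: drop the packet instead of propagating
             * FLUSHING, which would pause the thread and stop keep-alive
             * handling. */
            ret = GST_FLOW_OK;
          }
          return ret;
        }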
  2. 24 May, 2021 1 commit
  3. 05 May, 2021 1 commit
  4. 01 Feb, 2021 1 commit
    • rtsp-stream: avoid deadlock in send_func · 6fc8b963
      Branko Subasic authored
      Currently the send_func() runs in a thread of its own which is started
      the first time we enter handle_new_sample(). It runs in an outer loop
      until priv->continue_sending is FALSE, which happens when a TEARDOWN
      request is received. We use a local variable, cont, which is initialized
      to TRUE, meaning that we will always enter the outer loop, and at the
      end of the outer loop we assign it the value of priv->continue_sending.
      
      Within the outer loop there is an inner loop, where we wait to be
      signaled when there is more data to send. The inner loop is exited when
      priv->send_cookie has changed value, which it does when more data is
      available or when a TEARDOWN has been received.
      
      But if we get a TEARDOWN before send_func() is entered we will get stuck
      in the inner loop because no one will increase priv->session_cookie
      anymore.
      
      By not entering the outer loop in send_func() if priv->continue_sending
      is FALSE we make sure that we do not get stuck in send_func()'s inner
      loop should we receive a TEARDOWN before the send thread has started.
      
      Change-Id: I7338a0ea60ea435bb685f875965f5165839afa20
      Part-of: <!187>
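
      A condensed sketch of the loop structure described above. The
      continue_sending and send_cookie names come from the commit message;
      the SendState struct, send_lock and send_cond are stand-ins for the
      real private data.

        #include <glib.h>

        typedef struct {
          GMutex send_lock;
          GCond send_cond;
          guint send_cookie;        /* bumped on new data or TEARDOWN */
          gboolean continue_sending;
        } SendState;

        static void
        send_func (SendState * priv)
        {
          gboolean cont;

          g_mutex_lock (&priv->send_lock);

          /* The fix: initialize 'cont' from continue_sending instead of
           * TRUE, so a TEARDOWN received before this thread starts keeps
           * us out of the loops entirely. */
          cont = priv->continue_sending;

          while (cont) {
            guint cookie = priv->send_cookie;

            /* Inner loop: wait until more data arrives or a TEARDOWN bumps
             * the cookie. */
            while (cookie == priv->send_cookie)
              g_cond_wait (&priv->send_cond, &priv->send_lock);

            /* ... send the pending data to the client here ... */

            cont = priv->continue_sending;
          }

          g_mutex_unlock (&priv->send_lock);
        }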
  5. 18 Nov, 2020 1 commit
  6. 11 Nov, 2020 1 commit
    • rtsp-media: Ignore GstRTSPStreamBlocking from incomplete streams · 4f673af4
      David Phung authored
      To prevent the prerolling case where an inactive stream prerolls first
      and the server proceeds without waiting for the active stream, we will
      ignore GstRTSPStreamBlocking messages from incomplete streams. When
      there are no complete streams (during DESCRIBE), we will listen to all
      streams.
      
      Part-of: <!167>
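
      A hedged sketch of the message filtering this describes;
      find_stream_for_message() is a hypothetical helper and the exact
      placement in rtsp-media differs.

        #include <gst/gst.h>
        #include <gst/rtsp-server/rtsp-stream.h>

        /* Hypothetical: map a bus message back to the GstRTSPStream that
         * posted it (implementation omitted). */
        static GstRTSPStream *find_stream_for_message (GstMessage * msg);

        static gboolean
        should_handle_blocking_message (GstMessage * msg,
            gboolean have_complete_streams)
        {
          GstRTSPStream *stream;

          if (!gst_message_has_name (msg, "GstRTSPStreamBlocking"))
            return FALSE;

          stream = find_stream_for_message (msg);

          /* Once at least one stream is complete, only blocking messages
           * from complete streams count; with none complete (DESCRIBE),
           * listen to all streams. */
          if (have_complete_streams && !gst_rtsp_stream_is_complete (stream))
            return FALSE;

          return TRUE;
        }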
  7. 10 Oct, 2020 1 commit
  8. 08 Oct, 2020 2 commits
  9. 09 Sep, 2020 2 commits
  10. 06 Jul, 2020 1 commit
  11. 03 May, 2020 1 commit
  12. 01 May, 2020 1 commit
  13. 30 Mar, 2020 1 commit
  14. 24 Feb, 2020 4 commits
  15. 11 Jan, 2020 1 commit
  16. 09 Jan, 2020 1 commit
    • rtsp-stream: fix checking of TCP backpressure · e0a4355d
      Mathieu Duponchelle authored
      The internal index of our appsinks, while it can be used to
      determine whether a message is RTP or RTCP, is not necessarily
      the same as the interleaved channel. Let the stream-transport
      determine the channel to check backpressure for, the same way
      it determines the channel according to whether it is sending
      RTP or RTCP.
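
      A sketch of the channel selection described above; the
      client_has_backpressure() helper and the function signature are
      assumptions, only the interleaved-channel lookup reflects the commit.

        #include <gst/rtsp-server/rtsp-stream-transport.h>

        /* Hypothetical client-side check (implementation omitted). */
        static gboolean client_has_backpressure (gpointer client, guint channel);

        static gboolean
        transport_has_backpressure (gpointer client,
            GstRTSPStreamTransport * trans, gboolean is_rtp)
        {
          const GstRTSPTransport *tr =
              gst_rtsp_stream_transport_get_transport (trans);

          /* Use the interleaved channel negotiated for this transport, not
           * the appsink's internal index. */
          guint channel = is_rtp ? tr->interleaved.min : tr->interleaved.max;

          return client_has_backpressure (client, channel);
        }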
  17. 25 Nov, 2019 1 commit
    • rtsp-stream: Removing invalid transports returns false · 9c5ca231
      Adam x Nilsson authored and johanadamnilsson committed
      When removing transports, the code assumed that the transports passed
      in for removal are present in the list, but that cannot be assumed.
      For example, if a transport was removed by a thread running
      send_tcp_message, the main thread can try to remove the same transport
      again when it gets a handle_pause_request. This does not affect the
      transport list, but it does affect n_tcp_transports, which gets
      decremented and ends up with the wrong value.
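
      A minimal sketch of the guarded removal (illustrative state, not the
      actual rtsp-stream code):

        #include <glib.h>

        typedef struct {
          GList *transports;
          guint n_tcp_transports;
        } StreamState;

        /* Only touch the bookkeeping when the transport really is in the
         * list, and report failure otherwise, so a second removal attempt
         * (e.g. from handle_pause_request) cannot corrupt
         * n_tcp_transports. */
        static gboolean
        remove_transport (StreamState * priv, gpointer transport,
            gboolean is_tcp)
        {
          GList *link = g_list_find (priv->transports, transport);

          if (link == NULL)
            return FALSE;               /* not present: nothing to do */

          priv->transports = g_list_delete_link (priv->transports, link);
          if (is_tcp)
            priv->n_tcp_transports--;

          return TRUE;
        }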
  18. 04 Nov, 2019 1 commit
    • Don't pass default GLib marshallers for signals · 45e77ecd
      Niels De Graef authored and GStreamer Marge Bot committed
      By passing NULL to `g_signal_new` instead of a marshaller, GLib will
      actually internally optimize the signal (if the marshaller is available
      in GLib itself) by also setting the valist marshaller. This makes the
      signal emission a bit more performant than the regular marshalling,
      which still needs to box into `GValue` and call libffi in case of a
      generic marshaller.
      
      Note that for custom marshallers, one would use
      `g_signal_set_va_marshaller()` with the valist marshaller instead.
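
      A short example of the pattern (the signal name and parameters are
      illustrative, not the server's actual signals):

        #include <glib-object.h>

        static guint
        install_example_signal (GObjectClass * klass)
        {
          /* Passing NULL as the C marshaller lets GLib pick
           * g_cclosure_marshal_VOID__OBJECT internally and also install
           * the faster valist marshaller, avoiding GValue boxing and
           * libffi on emission. A custom marshaller would instead be
           * paired with g_signal_set_va_marshaller(). */
          return g_signal_new ("new-session",
              G_TYPE_FROM_CLASS (klass), G_SIGNAL_RUN_LAST, 0,
              NULL, NULL, NULL,
              G_TYPE_NONE, 1, G_TYPE_OBJECT);
        }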
  19. 21 Oct, 2019 1 commit
    • stream: refactor TCP backpressure handling · dd32924e
      Mathieu Duponchelle authored
      The previous implementation stopped sending TCP messages to
      all clients when a single one stopped consuming them, which
      obviously created problems for shared media.
      
      Instead, we now manage a backlog in stream-transport, and slow
      clients are removed once this backlog exceeds a maximum duration,
      currently hardcoded.
      
      Fixes #80
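
      One way such a per-transport backlog can be tracked (illustrative
      only; the limit, the duration accounting and the field names are
      assumptions, not the committed implementation):

        #include <gst/gst.h>

        #define MAX_BACKLOG_DURATION (10 * GST_SECOND)  /* assumed limit */

        typedef struct {
          GQueue backlog;               /* queued GstBuffer *, not yet sent */
          GstClockTime backlog_duration;
          gboolean dropped;             /* TRUE once the client is too slow */
        } TransportBacklog;

        /* Queue a message for one slow client and flag the transport for
         * removal when its private backlog exceeds the maximum duration,
         * instead of stalling every client sharing the media. */
        static void
        backlog_push (TransportBacklog * t, GstBuffer * buffer)
        {
          g_queue_push_tail (&t->backlog, gst_buffer_ref (buffer));

          if (GST_BUFFER_DURATION_IS_VALID (buffer))
            t->backlog_duration += GST_BUFFER_DURATION (buffer);

          if (t->backlog_duration > MAX_BACKLOG_DURATION)
            t->dropped = TRUE;          /* caller removes this transport */
        }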
  20. 14 Oct, 2019 1 commit
    • rtsp-stream: fix race condition in send_tcp_message · 0b1b6670
      Adam x Nilsson authored and Mathieu Duponchelle committed
      Suppose one thread inside send_tcp_message has finished sending its
      RTP or RTCP messages, so the n_outstanding variable is zero, but has
      not yet exited the loop that sends the messages. While it was sending,
      transports may have been added to or removed from the transport list,
      so the cache should be updated. If another thread now enters
      send_tcp_message and tries to send RTP messages, it will first destroy
      the RTP cache that is still being iterated through by the first
      thread.
      
      Fixes #81
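
      One possible guard against this race (not the actual patch; the state
      struct and field names are illustrative):

        #include <glib.h>

        typedef struct {
          GMutex lock;
          guint n_outstanding;     /* messages queued but not yet sent */
          gboolean sending;        /* a thread is inside the send loop */
          GPtrArray *tr_cache;     /* cached transports being iterated */
          gboolean dirty;          /* transports changed while sending */
        } TcpSendState;

        /* Only rebuild the cache when no other thread can still be
         * iterating it; otherwise just remember that it is stale. */
        static void
        update_cache_if_safe (TcpSendState * state)
        {
          g_mutex_lock (&state->lock);
          if (state->sending || state->n_outstanding > 0) {
            state->dirty = TRUE;   /* the active sender rebuilds it later */
          } else {
            g_ptr_array_set_size (state->tr_cache, 0);
            /* ... repopulate tr_cache from the transport list ... */
            state->dirty = FALSE;
          }
          g_mutex_unlock (&state->lock);
        }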
  21. 28 Jun, 2019 1 commit
    • rtsp-stream: Not wait on receiver streams when pre-rolling · d1d40491
      Göran Jönsson authored
      Without this patch there are problems pre-rolling when using the audio
      backchannel.

      Without this patch a probe is created for all streams, including the
      stream for the audio backchannel. To pre-roll, all of these pads have
      to receive data. Since the stream for the audio backchannel is a
      receiver, this will never happen.

      The solution is to never create probes for streams that receive
      incoming data and instead mark them as blocking from the beginning.
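
      A sketch of the idea (not the actual patch; block_stream() and the
      blocked flag are illustrative):

        #include <gst/gst.h>

        /* Hypothetical probe callback that records that the stream has
         * blocked (implementation omitted). */
        static GstPadProbeReturn stream_blocked_cb (GstPad * pad,
            GstPadProbeInfo * info, gpointer user_data);

        static void
        block_stream (GstPad * srcpad, gboolean is_receiver,
            gboolean * blocked)
        {
          if (is_receiver) {
            /* Backchannel streams never receive data here, so no probe
             * would ever fire; treat them as blocked from the start. */
            *blocked = TRUE;
            return;
          }

          gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
              stream_blocked_cb, blocked, NULL);
        }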
  22. 06 Jun, 2019 1 commit
  23. 04 Jun, 2019 2 commits
  24. 23 Apr, 2019 2 commits
  25. 11 Apr, 2019 1 commit
    • rtsp_server: Free thread pool before clean transport cache · 3cfe8863
      Göran Jönsson authored
      If we do not wait for the thread pool to be freed before cleaning the
      transport caches, there can be a crash if a thread is still executing
      in the transport list loop in send_tcp_message.

      Also add a check for priv->send_pool in on_message_sent to avoid
      pushing a new thread while waiting for the thread pool to be freed.
      This is possible because the mutex has to be unlocked while waiting
      for the thread pool to be freed.
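
      A sketch of the ordering this describes (send_pool is the name from
      the commit message; the cache cleanup step is only indicated):

        #include <glib.h>

        static void
        shutdown_sending (GThreadPool ** send_pool_ptr)
        {
          GThreadPool *pool = *send_pool_ptr;

          if (pool != NULL) {
            /* on_message_sent checks this pointer before pushing work, so
             * no new thread is started while we wait. */
            *send_pool_ptr = NULL;

            /* immediate = FALSE, wait = TRUE: block until queued send
             * tasks (e.g. a thread inside send_tcp_message) have
             * finished. */
            g_thread_pool_free (pool, FALSE, TRUE);
          }

          /* Only now is it safe to clean the transport caches. */
          /* clean_transport_caches (...); */
        }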
  26. 10 Apr, 2019 1 commit
  27. 02 Feb, 2019 1 commit
  28. 30 Jan, 2019 2 commits
    • Sebastian Dröge · c372643e
    • rtsp-server: Add support for buffer lists · d708f973
      Sebastian Dröge authored
      This adds new functions for passing buffer lists through the different
      layers without breaking API/ABI, and enables the appsink to actually
      provide buffer lists.
      
      This should already reduce CPU usage and potentially context switches a
      bit by passing a whole buffer list from the appsink instead of
      individual buffers. As a next step it would be necessary to
        a) Add support for a vector of data for the GstRTSPMessage body
        b) Add support for sending multiple messages at once to the
          GstRTSPWatch and let it be handled internally
        c) Add API to GOutputStream that works like writev()
      
      Fixes #29
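
      A hedged sketch of consuming buffer lists from the appsink (assumes
      buffer-list support is enabled on the appsink; how the list is turned
      into RTSP messages is the part added by this commit and is not shown):

        #include <gst/app/gstappsink.h>

        static guint
        count_buffers_in_sample (GstAppSink * appsink)
        {
          GstSample *sample = gst_app_sink_pull_sample (appsink);
          GstBufferList *list;
          guint n = 0;

          if (sample == NULL)
            return 0;                 /* EOS or flushing */

          /* A sample can now carry a whole buffer list instead of a
           * single buffer. */
          if ((list = gst_sample_get_buffer_list (sample)) != NULL)
            n = gst_buffer_list_length (list);
          else if (gst_sample_get_buffer (sample) != NULL)
            n = 1;

          gst_sample_unref (sample);
          return n;
        }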
  29. 29 Jan, 2019 1 commit
  30. 06 Dec, 2018 1 commit
  31. 14 Nov, 2018 2 commits
    • rtsp-stream: Use seqnum-offset for rtpinfo · 18538592
      Linus Svensson authored and Sebastian Dröge committed
      The sequence number in the rtpinfo is supposed to be the first RTP
      sequence number. The "seqnum" property on a payloader is supposed to
      be the number from the last processed RTP packet. The sequence number
      for payloaders that inherit from gstrtpbasepayload will not be correct
      in the case of buffer lists. To fix the seqnum property on the
      payloaders, gst-rtsp-server must get the sequence number for rtpinfo
      elsewhere: "seqnum-offset" from the "stats" property contains the
      value of the very first RTP packet in a stream. The server will,
      however, first try to look at the last sample in the sink element and
      only use properties on the payloader when there are no sink elements
      yet; looking at the last sample of the sink gives the server full
      control over which RTP packet it looks at. If the payloader does not
      have the "stats" property, "seqnum" is still used, since
      "seqnum-offset" is only available as part of "stats"; that case is
      still an issue not solved by this patch.
      
      Needed for gst-plugins-base!17
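
      A sketch of the fallback order described above (property names follow
      the commit message; types are assumed):

        #include <gst/gst.h>

        static guint
        get_rtpinfo_seqnum (GstElement * payloader)
        {
          GstStructure *stats = NULL;
          guint seqnum = 0;

          if (g_object_class_find_property (G_OBJECT_GET_CLASS (payloader),
                  "stats")) {
            g_object_get (payloader, "stats", &stats, NULL);
            if (stats != NULL) {
              /* First RTP sequence number of the stream. */
              gst_structure_get_uint (stats, "seqnum-offset", &seqnum);
              gst_structure_free (stats);
              return seqnum;
            }
          }

          /* Payloaders without "stats": last processed seqnum, which can
           * be wrong with buffer lists (the remaining issue noted above). */
          g_object_get (payloader, "seqnum", &seqnum, NULL);
          return seqnum;
        }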
    • rtsp-stream: Plug memory leak · 1c4d3b36
      Linus Svensson authored and Sebastian Dröge committed
      Attaching a GSource to a context increases its refcount. The idle
      source will never be freed since the initial reference is never
      dropped.
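
      The pattern behind the fix: g_source_attach() takes its own reference,
      so the creator must drop the initial one.

        #include <glib.h>

        static void
        add_idle_to_context (GMainContext * context, GSourceFunc func,
            gpointer data)
        {
          GSource *idle_src = g_idle_source_new ();

          g_source_set_callback (idle_src, func, data, NULL);
          g_source_attach (idle_src, context);  /* context holds a ref now */
          g_source_unref (idle_src);            /* drop the initial ref */
        }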