1. 22 Oct, 2018 1 commit
    • rtsp-client: Remove timeout GSource on cleanup · ebafccb6
      Edward Hervey authored
      Avoids ending up with races where a timeout would still be around
      *after* a client was gone. This could happen rather easily in
      RTSP-over-HTTP mode on a local connection, where each RTSP message
      would be sent as a different HTTP connection with the same tunnelid.
      
      If not properly removed, that timeout would then try to free the
      client (and its contents) a second time (see the sketch after this
      entry).
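      The fix amounts to keeping a handle on the timeout GSource and
      destroying it when the client is cleaned up. Below is a minimal,
      hedged sketch of that pattern in plain GLib; the names (Client,
      client_arm_timeout, client_cleanup) are hypothetical and do not
      mirror the actual gst-rtsp-server code.

      #include <glib.h>

      /* Hypothetical client state: keep a reference to the timeout GSource
       * so it can be destroyed when the client goes away. */
      typedef struct {
        GSource *timeout_source;   /* attached to the client's GMainContext */
        /* ... other per-client state ... */
      } Client;

      static gboolean
      client_timeout_cb (gpointer user_data)
      {
        /* Would tear down the client; must never fire after cleanup. */
        return G_SOURCE_REMOVE;
      }

      static void
      client_arm_timeout (Client *client, GMainContext *context)
      {
        client->timeout_source = g_timeout_source_new_seconds (5);
        g_source_set_callback (client->timeout_source, client_timeout_cb,
            client, NULL);
        g_source_attach (client->timeout_source, context);
      }

      static void
      client_cleanup (Client *client)
      {
        /* Detach the pending timeout so it cannot fire on a freed client. */
        if (client->timeout_source != NULL) {
          g_source_destroy (client->timeout_source);
          g_source_unref (client->timeout_source);
          client->timeout_source = NULL;
        }
        g_free (client);
      }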
  2. 04 Oct, 2018 1 commit
  3. 03 Oct, 2018 1 commit
  4. 28 Sep, 2018 2 commits
  5. 24 Sep, 2018 1 commit
  6. 19 Sep, 2018 4 commits
  7. 01 Sep, 2018 1 commit
  8. 31 Aug, 2018 2 commits
  9. 29 Aug, 2018 2 commits
  10. 15 Aug, 2018 1 commit
  11. 14 Aug, 2018 16 commits
  12. 06 Aug, 2018 1 commit
  13. 01 Aug, 2018 1 commit
    • rtsp-client: always allocate both IPV4 and IPV6 sockets · 12f8abb5
      Mathieu Duponchelle authored
      multiudpsink does not support setting the socket* properties
      after it has started, which meant that rtsp-server could no
      longer serve on both IPV4 and IPV6 sockets since the patches
      from https://bugzilla.gnome.org/show_bug.cgi?id=757488 were
      merged.
      
      When first connecting an IPV6 client then an IPV4 client,
      multiudpsink fell back to using the IPV6 socket.
      
      When first connecting an IPV4 client, then an IPV6 client,
      multiudpsink errored out, released the IPV4 socket, and then
      crashed when it nevertheless tried to send a message on the NULL
      socket; that is, however, a separate issue.
      
      This could probably be fixed by making multiudpsink handle the
      setting of sockets after it has started; that would, however, be
      a much more significant effort.
      
      For now, this commit simply partially reverts the behaviour of
      rtsp-stream: it will continue to create the udpsinks only when
      needed, as has been the case since the patches were merged; when
      creating them, however, it will always allocate both sockets and
      set them on the sink before it starts, as was the case prior to
      the patches (a sketch of this follows the entry).
      
      Transport configuration will only error out if the allocation of
      UDP sockets fails for the actual client's family. This commit
      also downgrades the GST_ERRORs in alloc_ports_one_family to
      GST_WARNINGs, as failing to allocate is no longer necessarily
      fatal.
      
      https://bugzilla.gnome.org/show_bug.cgi?id=796875
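      As an illustration of the approach, the sketch below allocates a
      UDP socket for each family with GIO and hands both to multiudpsink
      through its "socket" and "socket-v6" properties before the element
      starts. The helper names and the port handling are assumptions for
      the example, not the actual rtsp-stream code, and it assumes
      gst_init() has already been called.

      #include <gst/gst.h>
      #include <gio/gio.h>

      /* Bind a UDP socket of the given family on the wildcard address. */
      static GSocket *
      bind_udp_socket (GSocketFamily family, guint16 port, GError **error)
      {
        GSocket *sock;
        GInetAddress *any;
        GSocketAddress *addr;
        gboolean bound;

        sock = g_socket_new (family, G_SOCKET_TYPE_DATAGRAM,
            G_SOCKET_PROTOCOL_UDP, error);
        if (sock == NULL)
          return NULL;

        any = g_inet_address_new_any (family);
        addr = g_inet_socket_address_new (any, port);
        bound = g_socket_bind (sock, addr, TRUE, error);
        g_object_unref (addr);
        g_object_unref (any);

        if (!bound) {
          g_object_unref (sock);
          return NULL;
        }
        return sock;
      }

      /* Allocate both family sockets and set them on multiudpsink before
       * it starts; a missing family is only a warning here and becomes
       * fatal later, if a client of that family actually connects. */
      static GstElement *
      make_udpsink_with_both_sockets (guint16 port)
      {
        GError *err = NULL;
        GSocket *sock_v4, *sock_v6;
        GstElement *sink;

        sock_v4 = bind_udp_socket (G_SOCKET_FAMILY_IPV4, port, &err);
        g_clear_error (&err);
        sock_v6 = bind_udp_socket (G_SOCKET_FAMILY_IPV6, port, &err);
        g_clear_error (&err);

        sink = gst_element_factory_make ("multiudpsink", NULL);
        g_object_set (sink,
            "socket", sock_v4,      /* may be NULL if IPv4 allocation failed */
            "socket-v6", sock_v6,   /* may be NULL if IPv6 allocation failed */
            "close-socket", FALSE,  /* caller keeps ownership of the sockets */
            NULL);

        g_clear_object (&sock_v4);
        g_clear_object (&sock_v6);
        return sink;
      }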
  14. 27 Jul, 2018 1 commit
  15. 23 Jul, 2018 3 commits
    • rtsp-stream: Slightly simplify locking · 37e75cb8
      Sebastian Dröge authored
    • Limit queued TCP data messages to one per stream · 12169f1e
      David Svensson Fors authored
      Before, the watch backlog size in GstRTSPClient was changed
      dynamically between unlimited and a fixed size, trying to avoid both
      unlimited memory usage and deadlocks while waiting for a place in the
      queue. (Some of the deadlocks were described in a long comment in
      handle_request().)
      
      In the previous commit, we changed to a fixed backlog size of 100.
      This is possible, because we now handle RTP/RTCP data messages differently
      from RTSP request/response messages.
      
      The data messages are messages tunneled over TCP. We allow at most one
      queued data message per stream in GstRTSPClient at a time, and
      successfully sent data messages are acked via a "message-sent"
      callback from the GstRTSPStreamTransport. Until that ack comes, the
      GstRTSPStream does not call pull_sample() on its appsink, and
      therefore the streaming thread in the pipeline will not be blocked
      inside GstRTSPClient, waiting for a place in the queue.
      
      pull_sample() is called when we have both an ack and a "new-sample"
      signal from the appsink. Then, we know there is a buffer to write
      (sketched after this entry).
      
      RTSP request/response messages are not acked in the same way as data
      messages. The rest of the 100 places in the queue are used for
      them. If the queue becomes full of request/response messages, we
      return an error and close the connection to the client.
      
      Change-Id: I275310bc90a219ceb2473c098261acc78be84c97
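      A hedged sketch of the gating described above: the next sample is
      pulled from the appsink only once the previous data message has been
      acked and a new sample is available. The state struct and callback
      names are hypothetical, the real logic is split between
      GstRTSPStream and GstRTSPStreamTransport, and locking is omitted for
      brevity.

      #include <gst/gst.h>
      #include <gst/app/gstappsink.h>

      /* At most one TCP data message in flight per stream: pull the next
       * sample only when the previous message was acked ("message-sent")
       * AND the appsink signalled "new-sample". have_ack starts out TRUE
       * so the very first sample can be sent. */
      typedef struct {
        GstAppSink *appsink;
        gboolean have_ack;       /* previous data message confirmed sent */
        gboolean have_sample;    /* appsink has a sample waiting */
      } StreamTcpState;

      static void
      stream_try_send (StreamTcpState *s)
      {
        GstSample *sample;

        if (!s->have_ack || !s->have_sample)
          return;   /* not ready: the streaming thread stays unblocked */

        s->have_ack = FALSE;
        s->have_sample = FALSE;

        sample = gst_app_sink_pull_sample (s->appsink);
        if (sample != NULL) {
          /* queue_one_data_message (sample); -- hand exactly one message
           * to the client watch (hypothetical helper) */
          gst_sample_unref (sample);
        }
      }

      /* "new-sample" callback from the appsink (streaming thread). */
      static GstFlowReturn
      on_new_sample (GstAppSink *appsink, gpointer user_data)
      {
        StreamTcpState *s = user_data;
        s->have_sample = TRUE;
        stream_try_send (s);
        return GST_FLOW_OK;
      }

      /* Ack that the previous data message has actually been sent. */
      static void
      on_message_sent (gpointer user_data)
      {
        StreamTcpState *s = user_data;
        s->have_ack = TRUE;
        stream_try_send (s);
      }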
    • rtsp-client: Use fixed backlog size · 287345f6
      David Svensson Fors authored
      Change to using a fixed backlog size, WATCH_BACKLOG_SIZE (a sketch
      follows this entry).
      
      Preparation for the next commit, which changes to a different way of
      avoiding both deadlocks and unlimited memory usage with the watch
      backlog.
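      For illustration, a fixed send backlog at the GstRTSPWatch level
      might look like the sketch below. The constant name comes from the
      commit, the value 100 is the one mentioned in the related commit
      above, and configure_client_watch() is a hypothetical helper.

      #include <gst/rtsp/gstrtspconnection.h>

      /* Fixed number of messages allowed in the client watch's send queue. */
      #define WATCH_BACKLOG_SIZE 100

      static void
      configure_client_watch (GstRTSPWatch *watch)
      {
        /* 0 bytes = no byte limit; cap the backlog by message count only.
         * When the queue fills up with request/response messages, sending
         * fails and the connection to the client is closed. */
        gst_rtsp_watch_set_send_backlog (watch, 0, WATCH_BACKLOG_SIZE);
      }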
  16. 16 Jul, 2018 2 commits