1. 19 Nov, 2018 2 commits
  2. 14 Nov, 2018 2 commits
    • rtsp-stream: Use seqnum-offset for rtpinfo · 18538592
      Linus Svensson authored
      The sequence number in the RTP-Info is supposed to be the first RTP
      sequence number, while the "seqnum" property on a payloader is the
      number of the last processed RTP packet. For payloaders that inherit
      from gstrtpbasepayload, "seqnum" will not be correct in the case of
      buffer lists. Until the "seqnum" property on the payloaders is
      fixed, gst-rtsp-server must get the sequence number for the RTP-Info
      elsewhere: "seqnum-offset" from the "stats" property contains the
      value of the very first RTP packet in a stream. The server will,
      however, first try to look at the last sample in the sink element
      and only use properties on the payloader in case there are no sink
      samples yet; looking at the last sample of the sink gives the server
      full control over which RTP packet it looks at. If the payloader
      does not have the "stats" property, "seqnum" is still used, since
      "seqnum-offset" is only present as part of "stats", and that case
      remains an issue not solved by this patch.
      
      Needed for gst-plugins-base!17
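      The lookup order described above can be sketched as follows. The
      property names "stats", "seqnum-offset" and "seqnum" are from the
      message; the helper name is illustrative, not the server's actual
      code.

```c
#include <gst/gst.h>

/* Sketch: prefer "seqnum-offset" from the payloader's "stats"
 * structure (the very first RTP sequence number); fall back to the
 * "seqnum" property when "stats" does not exist, with the caveats
 * described above. */
static guint
get_rtpinfo_seqnum (GstElement * payloader)
{
  guint seqnum = 0;

  if (g_object_class_find_property (G_OBJECT_GET_CLASS (payloader),
          "stats")) {
    GstStructure *stats = NULL;

    g_object_get (payloader, "stats", &stats, NULL);
    if (stats != NULL) {
      gst_structure_get_uint (stats, "seqnum-offset", &seqnum);
      gst_structure_free (stats);
      return seqnum;
    }
  }

  /* "seqnum" is the last processed packet's number, not the first;
   * this fallback is still imperfect, as the message notes. */
  g_object_get (payloader, "seqnum", &seqnum, NULL);
  return seqnum;
}
```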
    • rtsp-stream: Plug memory leak · 1c4d3b36
      Linus Svensson authored
      Attaching a GSource to a context will increase the refcount. The
      idle source will never be freed, since the initial reference is
      never dropped.
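      The ownership rule behind the fix, as a minimal GLib sketch; the
      callback name is illustrative:

```c
#include <glib.h>

static gboolean
my_idle_func (gpointer data)
{
  return G_SOURCE_REMOVE;
}

/* g_source_attach() takes its own reference on the source, so the
 * initial reference from g_idle_source_new() must be dropped, or the
 * source is leaked even after it has run and been destroyed. */
static void
attach_idle (GMainContext * context)
{
  GSource *source = g_idle_source_new ();

  g_source_set_callback (source, my_idle_func, NULL, NULL);
  g_source_attach (source, context);
  g_source_unref (source);      /* drop the initial reference */
}
```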
  3. 01 Nov, 2018 5 commits
  4. 23 Oct, 2018 1 commit
  5. 22 Oct, 2018 1 commit
    • rtsp-client: Remove timeout GSource on cleanup · ebafccb6
      Edward Hervey authored
      Avoids ending up with races where a timeout would still be around
      *after* a client was gone. This could happen rather easily in
      RTSP-over-HTTP mode on a local connection, where each RTSP message
      would be sent as a different HTTP connection with the same tunnelid.
      
      If not properly removed, that timeout would then try to free a
      client (and its contents) a second time.
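      A sketch of the cleanup pattern, assuming the client keeps a
      pointer to its timeout source; the struct and field names here are
      illustrative:

```c
#include <glib.h>

typedef struct
{
  GSource *timeout_source;      /* illustrative field name */
} ClientPrivate;

/* Sketch: destroy and drop the timeout during client cleanup so it
 * can never fire, and never free the client again, after the client
 * is gone. */
static void
remove_client_timeout (ClientPrivate * priv)
{
  if (priv->timeout_source != NULL) {
    g_source_destroy (priv->timeout_source);
    g_source_unref (priv->timeout_source);
    priv->timeout_source = NULL;
  }
}
```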
  6. 04 Oct, 2018 1 commit
  7. 03 Oct, 2018 1 commit
  8. 28 Sep, 2018 1 commit
  9. 24 Sep, 2018 1 commit
  10. 19 Sep, 2018 2 commits
  11. 31 Aug, 2018 2 commits
  12. 29 Aug, 2018 1 commit
  13. 14 Aug, 2018 15 commits
  14. 06 Aug, 2018 1 commit
  15. 01 Aug, 2018 1 commit
    • rtsp-client: always allocate both IPV4 and IPV6 sockets · 12f8abb5
      Mathieu Duponchelle authored
      multiudpsink does not support setting the socket* properties
      after it has started, which meant that rtsp-server could no
      longer serve on both IPV4 and IPV6 sockets since the patches
      from https://bugzilla.gnome.org/show_bug.cgi?id=757488 were
      merged.
      
      When first connecting an IPV6 client then an IPV4 client,
      multiudpsink fell back to using the IPV6 socket.
      
      When first connecting an IPV4 client, then an IPV6 client,
      multiudpsink errored out, released the IPV4 socket, then
      nevertheless crashed when trying to send a message on the NULL
      socket; that is, however, a separate issue.
      
      This could probably be fixed by handling the setting of
      sockets in multiudpsink after it has started; that would,
      however, be a much more significant effort.
      
      For now, this commit simply partially reverts the behaviour
      of rtsp-stream: it will continue to create the udpsinks only
      when needed, as has been the case since the patches were
      merged; when creating them, however, it will always allocate
      both sockets and set them on the sink before it starts, as
      was the case prior to the patches.
      
      Transport configuration will only error out if the allocation
      of UDP sockets fails for the actual client's family. This
      commit also downgrades the GST_ERRORs in alloc_ports_one_family
      to GST_WARNINGs, as failing to allocate is no longer
      necessarily fatal.
      
      https://bugzilla.gnome.org/show_bug.cgi?id=796875
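      The behaviour can be sketched as below. The "socket" and
      "socket-v6" properties exist on multiudpsink;
      alloc_socket_for_family() is an illustrative stand-in for the
      server's port-allocation code.

```c
#include <gst/gst.h>
#include <gio/gio.h>

/* Illustrative stand-in for the server's port-allocation code. */
static GSocket *alloc_socket_for_family (GSocketFamily family);

/* Sketch: always allocate both families and set them on the sink
 * before it starts (multiudpsink does not allow changing them
 * afterwards), but treat an allocation failure as fatal only for the
 * family the current client actually uses. */
static gboolean
configure_udpsink (GstElement * udpsink, GSocketFamily client_family)
{
  GSocket *v4 = alloc_socket_for_family (G_SOCKET_FAMILY_IPV4);
  GSocket *v6 = alloc_socket_for_family (G_SOCKET_FAMILY_IPV6);

  if ((client_family == G_SOCKET_FAMILY_IPV4 && v4 == NULL) ||
      (client_family == G_SOCKET_FAMILY_IPV6 && v6 == NULL)) {
    g_clear_object (&v4);
    g_clear_object (&v6);
    return FALSE;               /* only the client's family is fatal */
  }

  g_object_set (udpsink, "socket", v4, "socket-v6", v6, NULL);
  g_clear_object (&v4);         /* the sink now holds its own refs */
  g_clear_object (&v6);
  return TRUE;
}
```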
  16. 23 Jul, 2018 3 commits
    • rtsp-stream: Slightly simplify locking · 37e75cb8
      Sebastian Dröge authored
    • Limit queued TCP data messages to one per stream · 12169f1e
      David Svensson Fors authored
      Before, the watch backlog size in GstRTSPClient was changed
      dynamically between unlimited and a fixed size, trying to avoid
      both unlimited memory usage and deadlocks while waiting for a
      place in the queue. (Some of the deadlocks were described in a
      long comment in handle_request().)
      
      In the previous commit, we changed to a fixed backlog size of 100.
      This is possible, because we now handle RTP/RTCP data messages differently
      from RTSP request/response messages.
      
      The data messages are messages tunneled over TCP. We allow at most one
      queued data message per stream in GstRTSPClient at a time, and
      successfully sent data messages are acked by sending a "message-sent"
      callback from the GstStreamTransport. Until that ack comes, the
      GstRTSPStream does not call pull_sample() on its appsink, and
      therefore the streaming thread in the pipeline will not be blocked
      inside GstRTSPClient, waiting for a place in the queue.
      
      pull_sample() is called when we have both an ack and a "new-sample"
      signal from the appsink. Then, we know there is a buffer to write.
      
      RTSP request/response messages are not acked in the same way as data
      messages. The rest of the 100 places in the queue are used for
      them. If the queue becomes full of request/response messages, we
      return an error and close the connection to the client.
      
      Change-Id: I275310bc90a219ceb2473c098261acc78be84c97
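      The one-message-per-stream gate can be modelled in isolation.
      This plain-C sketch is not the server's actual code, just the rule
      it implements: pull a sample only when the previous data message
      has been acked and a "new-sample" signal is pending.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct
{
  bool acked;        /* previous data message was sent ("message-sent") */
  bool have_sample;  /* appsink signalled "new-sample" */
} StreamGate;

/* Pull-and-send is allowed only when both conditions hold; both flags
 * are consumed on success, so at most one data message per stream is
 * queued at a time and the pipeline's streaming thread never blocks
 * waiting for queue space. */
static bool
gate_try_pull (StreamGate * g)
{
  if (g->acked && g->have_sample) {
    g->acked = false;           /* a new message is now in flight */
    g->have_sample = false;     /* the pending sample is consumed */
    return true;                /* caller would pull_sample() and queue */
  }
  return false;
}
```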
    • rtsp-client: Use fixed backlog size · 287345f6
      David Svensson Fors authored
      Change to using a fixed backlog size WATCH_BACKLOG_SIZE.
      
      Preparation for the next commit, which changes to a different way of
      avoiding both deadlocks and unlimited memory usage with the watch
      backlog.