1. 21 Apr, 2020 1 commit
  2. 20 Apr, 2020 2 commits
    • tests: splitmuxsink: Add more timecode based split test · ea1797cc
      Seungha Yang authored
      ... and split test cases to run tests in parallel
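      As a rough illustration of the second part ("split test cases to run tests in
      parallel"), a GStreamer check suite can put independent scenarios into separate
      TCases so the harness is free to schedule them concurrently. The test names and
      grouping below are assumptions for illustration, not the actual test code:

        #include <gst/check/gstcheck.h>

        GST_START_TEST (test_splitmuxsink_time)
        {
          /* ... build and run a splitmuxsink pipeline split by max-size-time ... */
        }
        GST_END_TEST;

        GST_START_TEST (test_splitmuxsink_timecode)
        {
          /* ... build and run a splitmuxsink pipeline split by max-size-timecode ... */
        }
        GST_END_TEST;

        static Suite *
        splitmux_suite (void)
        {
          Suite *s = suite_create ("splitmux");
          TCase *tc_time = tcase_create ("time");
          TCase *tc_timecode = tcase_create ("timecode");

          /* Separate TCases can be scheduled in parallel by the test harness */
          suite_add_tcase (s, tc_time);
          tcase_add_test (tc_time, test_splitmuxsink_time);

          suite_add_tcase (s, tc_timecode);
          tcase_add_test (tc_timecode, test_splitmuxsink_timecode);

          return s;
        }

        GST_CHECK_MAIN (splitmux);
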
    • splitmuxsink: Enhancement for timecode based split · ca48f526
      Seungha Yang authored
      The calculated timecode threshold can vary depending on
      "max-size-timecode" and the framerate.
      For instance, with framerate 29.97 (30000/1001) and
      "max-size-timecode=00:02:00;02", every fragment will have an identical
      number of frames (3598). However, with "max-size-timecode=00:02:00;00",
      the next keyframe calculated via gst_video_time_code_add_interval()
      can differ per fragment, but this is the nature of timecode.
      To compensate for such timecode drift, we should keep track of the
      expected timecode of the next fragment based on the observed timecode.
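      A minimal sketch of the compensation idea (not the actual splitmuxsink code;
      the function and variable names are illustrative): the target for the next
      fragment is derived by adding the "max-size-timecode" interval to the timecode
      actually observed on the current fragment, so per-fragment rounding from
      gst_video_time_code_add_interval() does not accumulate as drift:

        #include <gst/video/video.h>

        static GstClockTime
        next_fragment_target_time (const GstVideoTimeCode * observed_tc,
            const GstVideoTimeCodeInterval * max_size_interval)
        {
          GstVideoTimeCode *next_tc;
          GstClockTime target;

          /* Expected timecode of the next fragment, anchored on the observed one */
          next_tc = gst_video_time_code_add_interval (observed_tc, max_size_interval);
          if (next_tc == NULL)
            return GST_CLOCK_TIME_NONE;

          /* Convert to nanoseconds since the daily jam for comparison against PTS */
          target = gst_video_time_code_nsec_since_daily_jam (next_tc);
          gst_video_time_code_free (next_tc);

          return target;
        }
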
  3. 19 Apr, 2020 1 commit
  4. 16 Apr, 2020 1 commit
    • rtpjitterbuffer: don't use RTX packets in rate-calc and reset-logic · 981d0c02
      Håvard Graff authored
      The problem was this:
      
      Due to the highly irregular arrival of RTX packets, the max-misorder
      variable could be pushed very low (-10).
      
      If you then at some point get a big gap in the sequence numbers (62 in
      the test), you end up sending RTX requests for some of those packets,
      and then if the sender answers those requests, you are going to get a
      bunch of RTX packets arriving (-13 and then 5 more packets in the test).
      
      Now, if max-misorder is pushed very low at this point, these RTX packets
      will trigger the handle_big_gap_buffer() logic, and because they arrive
      so neatly in order (as they would, since they have been requested like
      that), gst_rtp_jitter_buffer_reset() will be called, and two things
      will happen:
      1. priv->next_seqnum will be set to the first RTX packet
      2. the 5 RTX packets will be pushed into the chain() function
      
      However, at this point these RTX packets are no longer valid: the
      jitterbuffer has already pushed lost-events for them, so they will now
      be dropped on the floor and never make it to the waiting loop-function.
      
      And, since we now have a priv->next_seqnum that will never arrive
      in the loop-function, the jitterbuffer is now stalled forever, and will
      not push out another buffer.
      
      The proposed fixes:
      1. Don't use RTX packets in the calculation of the packet-rate.
      2. Don't use RTX packets in the large-gap logic, as they are likely to be dropped.
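      A simplified sketch of the two fixes (the GST_RTP_BUFFER_FLAG_RETRANSMISSION
      buffer flag is real; the surrounding helpers are hypothetical stand-ins for the
      jitterbuffer internals): packets flagged as retransmissions are kept out of both
      the packet-rate estimate and the big-gap/reset heuristic:

        #include <gst/rtp/gstrtpbuffer.h>

        /* Hypothetical stand-ins for the jitterbuffer internals */
        static void update_packet_rate (guint16 seqnum) { /* feed rate estimator */ }
        static gboolean is_big_gap (guint16 seqnum) { return FALSE; }
        static void handle_big_gap_buffer (GstBuffer * buf) { /* reset logic */ }

        static void
        account_incoming_packet (GstBuffer * buf, guint16 seqnum)
        {
          gboolean is_rtx =
              GST_BUFFER_FLAG_IS_SET (buf, GST_RTP_BUFFER_FLAG_RETRANSMISSION);

          /* 1. RTX packets arrive too irregularly to feed the packet-rate estimate,
           *    so they no longer drag max-misorder down. */
          if (!is_rtx)
            update_packet_rate (seqnum);

          /* 2. RTX packets never trigger the big-gap/reset logic either: lost-events
           *    were already pushed for them, so resetting next_seqnum to an RTX
           *    packet would stall the loop-function. */
          if (!is_rtx && is_big_gap (seqnum))
            handle_big_gap_buffer (buf);
        }
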
  5. 15 Apr, 2020 3 commits
  6. 09 Apr, 2020 1 commit
  7. 08 Apr, 2020 4 commits
  8. 06 Apr, 2020 1 commit
  9. 05 Apr, 2020 1 commit
    • flvmux: Fix invalid padlist accesses. · a3933ea5
      Jan Schmidt authored
      Request pads can be released at any time, so make sure to hold
      the object lock when iterating the element sinkpads list where
      that's safe, or to use other safe pad iteration patterns in
      other places.
      
      When choosing a best pad, return a reference to the pad to make sure it
      stays alive for output in the aggregator srcpad task.
      
      Should fix a spurious valgrind error in the CI flvmux tests and some
      other potential problems if the request sink pads are released while
      the element is running.
      
      Fixes gstreamer/gst-plugins-good#714
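      A simplified sketch of the safe iteration pattern described above (the selection
      criterion is a placeholder, not the actual flvmux logic): the object lock is held
      while walking element->sinkpads, and the chosen pad is returned with an extra
      reference so it stays alive in the aggregator srcpad task even if the request
      pad is released meanwhile:

        #include <gst/gst.h>

        static GstPad *
        find_best_sink_pad (GstElement * element)
        {
          GstPad *best = NULL;
          GList *l;

          GST_OBJECT_LOCK (element);
          for (l = element->sinkpads; l != NULL; l = l->next) {
            GstPad *pad = GST_PAD (l->data);

            /* ... compare queued timestamps and pick the best candidate ... */
            if (best == NULL)
              best = pad;
          }
          if (best != NULL)
            gst_object_ref (best);      /* keep it alive after the lock is dropped */
          GST_OBJECT_UNLOCK (element);

          return best;
        }
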
  10. 03 Apr, 2020 7 commits
  11. 02 Apr, 2020 3 commits
  12. 01 Apr, 2020 1 commit
  13. 31 Mar, 2020 4 commits
  14. 30 Mar, 2020 1 commit
    • rtpjitterbuffer: fix waiting timer/queue code · 818b38eb
      Håvard Graff authored
      Change the types from boolean to guint due to the ++ operator used on
      them, and only call JBUF_SIGNAL_QUEUE after settling down; otherwise
      you end up signaling the waiting code in chain() for every buffer
      pushed out.
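      A rough sketch of the shape of the change (only the names taken from the commit
      message are real; the structure and macro definition are illustrative guesses):
      the waiting marker becomes a guint counter because ++ is applied to it, and
      JBUF_SIGNAL_QUEUE is invoked once after the pushing loop settles rather than
      once per buffer:

        #include <glib.h>

        typedef struct
        {
          GMutex lock;
          GCond queue_cond;
          guint waiting_queue;          /* counter, not a gboolean: ++/-- is used */
        } Priv;

        #define JBUF_SIGNAL_QUEUE(p)              \
          G_STMT_START {                          \
            if ((p)->waiting_queue)               \
              g_cond_signal (&(p)->queue_cond);   \
          } G_STMT_END

        static void
        push_pending_and_signal (Priv * priv, GQueue * pending)
        {
          gpointer item;

          while ((item = g_queue_pop_head (pending)) != NULL) {
            /* ... push the buffer or event downstream ... */
          }

          /* Signal the waiting chain() code once, after settling down, instead of
           * once for every buffer pushed out */
          g_mutex_lock (&priv->lock);
          JBUF_SIGNAL_QUEUE (priv);
          g_mutex_unlock (&priv->lock);
        }
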
  15. 27 Mar, 2020 2 commits
  16. 26 Mar, 2020 2 commits
    • splitmuxsrc: Fix some deadlock conditions and a crash · 00a08c69
      Jan Schmidt authored
      When switching the splitmuxsrc state back to NULL quickly, it
      can encounter deadlocks shutting down the part readers that
      are still starting up, or encounter a crash if the splitmuxsrc
      cleaned up the parts before the async callback could run.
      
      Taking the state lock to post async-start / async-done messages can
      deadlock if the state change function is trying to shut down the
      element, so use some finer-grained locks for that.
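      A rough sketch of the locking idea (the helper and its parameters are
      illustrative, not the splitmuxsrc API): the async-done message is posted under
      a small dedicated mutex that also guards a running flag, instead of under the
      element's state lock, so a state change that is shutting the element down
      cannot deadlock against the async callback:

        #include <gst/gst.h>

        static void
        post_async_done_if_running (GstElement * element, GMutex * msg_lock,
            gboolean * running)
        {
          gboolean do_post;

          /* A dedicated, finer-grained lock instead of the element's state lock */
          g_mutex_lock (msg_lock);
          do_post = *running;           /* skip if shutdown already started */
          g_mutex_unlock (msg_lock);

          if (do_post)
            gst_element_post_message (element,
                gst_message_new_async_done (GST_OBJECT (element),
                    GST_CLOCK_TIME_NONE));
        }
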
    • splitmux: Make the unit test faster · 8ef172d8
      Jan Schmidt authored
      The playback test is considerably faster if it runs with the
      appsink set to sync=false.
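      For reference, the property involved is GstBaseSink's "sync"; a minimal snippet
      of the tweak (the wrapper function name is just for illustration):

        #include <gst/gst.h>

        static void
        disable_sink_sync (GstElement * appsink)
        {
          /* With sync=FALSE the sink no longer waits for buffer timestamps against
           * the clock, so the playback test consumes data as fast as possible */
          g_object_set (appsink, "sync", FALSE, NULL);
        }
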
  17. 25 Mar, 2020 5 commits