1. 11 Jul, 2020 5 commits
  2. 10 Jul, 2020 4 commits
  3. 08 Jul, 2020 8 commits
  4. 07 Jul, 2020 7 commits
  5. 06 Jul, 2020 2 commits
  6. 04 Jul, 2020 1 commit
  7. 03 Jul, 2020 7 commits
  8. 02 Jul, 2020 6 commits
    • Release 1.17.2 · 1408ffc6
      Tim-Philipp Müller authored
    • wpe: Update plugin's doc cache · 8900f2d2
      Philippe Normand authored
      This was forgotten in !1392.
      
      Part-of: <!1402>
    • v4l2decoder: Track pending request · 1bef43f9
      Nicolas Dufresne authored
      With asynchronous slice decoding, we only queue up to 2 slices per
      frame. A side effect is that we now dequeue bitstream buffers in both
      decoding and presentation order. This could lead to a bitstream buffer
      from a previous frame being dequeued instead of the expected last slice
      buffer, and to us trying to queue an already queued bitstream buffer.
      
      We now fix this by tracking pending requests. As requests are executed
      in decoding order, when we mark a request done we can effectively
      dequeue the bitstream buffers from all previous requests, as they have
      already been executed (see the sketch below this entry).
      
      Part-of: <!1395>
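
      A minimal sketch of that pending-request tracking, assuming a
      pending_requests queue on the decoder (modelled here with GstQueueArray)
      and a hypothetical bitstream dequeue helper; the names below are
      illustrative, not the actual gstv4l2decoder.c code. Because requests
      execute in decoding order, marking one done lets us dequeue the
      bitstream buffers of every request queued before it.

      /* Illustration only: pending requests are stored in decoding order.
       * When one is marked done, every request queued before it has also
       * been executed, so their bitstream buffers can be dequeued. */
      static void
      request_done (GstV4l2Decoder * self, GstV4l2Request * request)
      {
        GstV4l2Request *pending;

        while ((pending = gst_queue_array_pop_head (self->pending_requests))) {
          /* Hypothetical helper: dequeue this request's bitstream buffer. */
          gst_v4l2_decoder_dequeue_bitstream (self, pending);

          if (pending == request)
            break;
        }
      }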
    • v4l2decoder: Improve debug tracing · a88e63dd
      Nicolas Dufresne authored
      Add some missing traces and move per-slice operations to the TRACE
      level to reduce the noise (an illustrative example follows below).
      
      Part-of: <!1395>
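
      As a rough illustration of the level split (the messages below are made
      up, not the plugin's actual traces): frame-level events stay at DEBUG,
      while per-slice chatter goes to TRACE so it is hidden at the usual
      debug levels.

      /* Illustration only: one message per frame at DEBUG, per-slice detail
       * at TRACE so it only shows up when explicitly requested. */
      GST_DEBUG_OBJECT (self, "Decoding frame %u", frame_num);
      GST_TRACE_OBJECT (self, "Queued slice %u of frame %u", slice_idx, frame_num);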
    • v4l2decoder: Convert request pool to GstQueueArray · d5a205cf
      Nicolas Dufresne authored
      The decoder is not accessed from multiple threads; instead it is
      always protected by the streaming lock. For this reason, a
      GstAtomicQueue for the request pool is overkill and may even introduce
      unneeded overhead. Use a GstQueueArray instead; it is a good fit since
      the number of items is predictable and unlikely to vary at run-time
      (see the sketch below).
      
      Part-of: <!1395>
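
      A minimal sketch of a GstQueueArray-backed request pool under the
      assumption that all calls happen with the streaming lock held; the
      helper names and the GstV4l2Request type are used here for
      illustration, this is not the element's actual code.

      #include <gst/base/gstqueuearray.h>

      /* Illustration only: a single-threaded request pool. */
      static GstQueueArray *
      request_pool_new (guint size)
      {
        return gst_queue_array_new (size);
      }

      static void
      request_pool_release (GstQueueArray * pool, GstV4l2Request * request)
      {
        gst_queue_array_push_tail (pool, request);
      }

      static GstV4l2Request *
      request_pool_acquire (GstQueueArray * pool)
      {
        /* Returns NULL when the pool is exhausted. */
        return gst_queue_array_pop_head (pool);
      }

      Unlike GstAtomicQueue, GstQueueArray performs no atomic operations,
      which matches the single-threaded access pattern described above.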
    • v4l2slh264dec: Wait on previous pending request in slice mode · a2eb1b57
      Nicolas Dufresne authored
      In slice mode, we do one request per slice. In order to recycle
      bitstream buffers and not run out of them, wait for the last pending
      request to complete and mark it done.
      
      We only wait after having queued the current slice in order to reduce
      potential driver starvation and maintain performance (using dual
      buffering); see the sketch below.
      
      Part-of: <!1395>
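
      A hedged sketch of the queue-then-wait ordering with invented helper
      names (the real gstv4l2codecs flow is more involved): the current
      slice is submitted before blocking on the previous request, so at most
      two requests are in flight and the driver keeps working while we wait.

      /* Illustration only: dual buffering in slice mode. */
      static gboolean
      decode_slice (GstV4l2Decoder * self, GstV4l2Request * request)
      {
        /* Keep the driver busy: queue the current slice first. */
        if (!gst_v4l2_request_queue (request))
          return FALSE;

        /* Then wait on the previously queued request so its bitstream
         * buffer can be recycled, and mark it done. */
        if (self->pending_request != NULL) {
          if (!gst_v4l2_request_wait (self->pending_request))
            return FALSE;
          gst_v4l2_request_set_done (self->pending_request);
        }

        self->pending_request = request;
        return TRUE;
      }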