
rtpjitterbuffer: don't try and calculate packet-rate if seqnum are jumping

Turns out the "big-gap" logic of the jitterbuffer has been horribly broken.

For people using lost events, an RTP stream with a gap in sequence numbers would immediately produce exactly that many lost events. So if your sequence numbers jumped by 20000, you would get 20000 lost events in your pipeline...
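For illustration, here is a minimal standalone sketch (hypothetical, not the actual jitterbuffer code) of what emitting one lost event per missing seqnum means for a jump of 20000:

```c
#include <stdint.h>
#include <stdio.h>

static void
emit_lost_event (uint16_t seqnum)
{
  printf ("lost event for seqnum %u\n", (unsigned) seqnum);
}

int
main (void)
{
  uint16_t expected = 100;
  uint16_t received = 20100;    /* a jump of 20000 */

  /* Without a working big-gap guard, every missing seqnum
   * gets its own lost event, all at once. */
  for (uint16_t s = expected; s != received; s++)
    emit_lost_event (s);        /* 20000 events, immediately */

  printf ("emitted %u lost events\n", (unsigned) (received - expected));
  return 0;
}
```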

The test covering this logic, "test_push_big_gap", incremented the DTS of the buffers by an amount equal to the gap that was introduced, so it was really more of a "large pause" test than a test of an actual gap/discontinuity in the sequence numbers.
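Roughly, the two test shapes differ as sketched below (hypothetical helper names and made-up numbers; the real test uses GStreamer buffers):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define GAP 20000
#define DURATION_MS 50          /* one buffer every 50 ms */

typedef struct { uint16_t seqnum; uint64_t dts_ms; } Buf;

/* Old test: the DTS jumped along with the seqnum, i.e. a long
 * pause, not a discontinuity -- the apparent packet rate stays
 * at a plausible ~20 packets per second. */
static Buf
old_test_next (Buf prev)
{
  return (Buf) { (uint16_t) (prev.seqnum + GAP),
      prev.dts_ms + (uint64_t) GAP * DURATION_MS };
}

/* Fixed test: only the seqnum jumps; the arrival time stays
 * contiguous, which is what a real gap looks like to the element. */
static Buf
fixed_test_next (Buf prev)
{
  return (Buf) { (uint16_t) (prev.seqnum + GAP),
      prev.dts_ms + DURATION_MS };
}

int
main (void)
{
  Buf b = { 0, 0 };
  printf ("old test:   dts advances %" PRIu64 " ms across the gap\n",
      old_test_next (b).dts_ms);
  printf ("fixed test: dts advances %" PRIu64 " ms across the gap\n",
      fixed_test_next (b).dts_ms);
  return 0;
}
```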

Once the test was modified to not increment the DTS (buffer arrival time) by a similar gap, all sorts of craziness started happening, including the addition of thousands of timers, and the logic that should have kicked in, "handle_big_gap_buffer", was not called at all. Why?

Because max_dropout is calculated from the packet rate, and the packet-rate logic would, in this particular test, report that the new packet rate was over 400000 packets per second!
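As a back-of-envelope check, with made-up but representative numbers: a naive estimator that divides the seqnum delta by the arrival-time delta produces exactly this kind of blow-up when the seqnum jumps within one normal inter-arrival interval:

```c
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint32_t seqnum_delta = 20000;  /* the seqnum jump */
  double time_delta_s = 0.050;    /* one normal inter-arrival gap
                                   * (assumed, for illustration) */

  /* Naive estimate: "20000 packets arrived in 50 ms". */
  double rate = seqnum_delta / time_delta_s;
  printf ("estimated rate: %.0f packets/s\n", rate);  /* 400000 */
  return 0;
}
```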

I believe the right fix is to not update the packet rate if there are any jumps in the sequence numbers, and to only do these calculations for nice, sequential streams.
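A minimal sketch of that guard (hypothetical names; not the actual GStreamer patch) could look like this:

```c
#include <stdint.h>

typedef struct
{
  uint16_t last_seqnum;
  uint32_t avg_packet_rate;
  /* ... arrival timestamps etc. ... */
} PacketRateCtx;

/* Only feed the estimator on perfectly sequential packets; on any
 * jump (gap or reorder), resync the seqnum and keep the previous
 * estimate instead of poisoning it. */
static void
packet_rate_ctx_update (PacketRateCtx * ctx, uint16_t seqnum,
    uint32_t rtptime)
{
  /* mod-2^16 delta handles seqnum wraparound */
  if ((uint16_t) (seqnum - ctx->last_seqnum) != 1) {
    ctx->last_seqnum = seqnum;
    return;
  }
  ctx->last_seqnum = seqnum;
  /* ... normal running-average update using rtptime goes here ... */
  (void) rtptime;
}
```

This way a discontinuity never contributes to the average, so max_dropout stays sane and "handle_big_gap_buffer" can kick in as intended.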

