rtpjitterbuffer: "Cannot unwrap, any wrapping took place yet" being logged constantly
Hello,
I have a pipeline that looks like the following:
udpsrc address=%s port=%s caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)MP2T-ES,payload=(int)33" ! rtpjitterbuffer latency=1000 ! rtpmp2tdepay ! tsdemux ! h264parse ! nvh264dec ! nvh264enc bitrate=2000 preset=low-latency-hp rc-mode=5 gop-size=30 ! video/x-h264,profile=baseline ! h264parse config-interval=-1 ! rtph264pay ! udpsink host=%s port=%s
Occasionally, when the input RTP stream stops and restarts, we see the following warning logged for what seems like every RTP packet that arrives:
WARN default gstrtpbuffer.c:1299:gst_rtp_buffer_ext_timestamp: Cannot unwrap, any wrapping took place yet. Returning 0 without updating extended timestamp.
Below are all the logs I have from the ~15 minutes the stream was not running:
2020-08-22T18:18:23.811550621Z 6:18:21.399952780 1 0x55c94a932050 INFO baseparse gstbaseparse.c:4116:gst_base_parse_set_latency:<h264parse0> min/max latency 0:00:00.033333333, 0:00:00.033333333
2020-08-22T18:18:25.456338563Z 6:18:23.045022266 1 0x55c94a932050 INFO h264parse gsth264parse.c:2186:gst_h264_parse_update_src_caps:<h264parse0> PAR 1/1
2020-08-22T18:18:25.456364342Z 6:18:23.045066142 1 0x55c94a932050 INFO baseparse gstbaseparse.c:4116:gst_base_parse_set_latency:<h264parse0> min/max latency 0:00:00.033333333, 0:00:00.033333333
2020-08-22T18:33:00.845752973Z 6:32:58.434381540 1 0x55c94a9321e0 WARN default gstrtpbuffer.c:1299:gst_rtp_buffer_ext_timestamp: Cannot unwrap, any wrapping took place yet. Returning 0 without updating extended timestamp.
2020-08-22T18:33:00.845854853Z 6:32:58.434441261 1 0x55c94a9321e0 WARN default gstrtpbuffer.c:1299:gst_rtp_buffer_ext_timestamp: Cannot unwrap, any wrapping took place yet. Returning 0 without updating extended timestamp.
2020-08-22T18:33:00.845869715Z 6:32:58.434451334 1 0x55c94a9321e0 WARN rtpjitterbuffer rtpjitterbuffer.c:768:rtp_jitter_buffer_calculate_pts: backward timestamps at server, schedule resync
2020-08-22T18:33:00.845880886Z 6:32:58.434476799 1 0x55c94a9321e0 INFO rtpjitterbuffer rtpjitterbuffer.c:810:rtp_jitter_buffer_calculate_pts: resync to time 6:32:58.053329223, rtptime 0:00:00.000000000
2020-08-22T18:33:00.855534289Z 6:32:58.444143915 1 0x55c94a9321e0 WARN default gstrtpbuffer.c:1299:gst_rtp_buffer_ext_timestamp: Cannot unwrap, any wrapping took place yet. Returning 0 without updating extended timestamp.
2020-08-22T18:33:00.855563047Z 6:32:58.444168852 1 0x55c94a9321e0 WARN default gstrtpbuffer.c:1299:gst_rtp_buffer_ext_timestamp: Cannot unwrap, any wrapping took place yet. Returning 0 without updating extended timestamp.
2020-08-22T18:33:00.855574427Z 6:32:58.444192181 1 0x55c94a9321e0 WARN default gstrtpbuffer.c:1299:gst_rtp_buffer_ext_timestamp: Cannot unwrap, any wrapping took place yet. Returning 0 without updating extended timestamp.
2020-08-22T18:33:00.855584921Z 6:32:58.444197777 1 0x55c94a9321e0 WARN default gstrtpbuffer.c:1299:gst_rtp_buffer_ext_timestamp: Cannot unwrap, any wrapping took place yet. Returning 0 without updating extended timestamp.
...
...
...
This continued for hours until we noticed and restarted the pipeline.
In the above example, I noticed that the RTP time prior to stopping was 607034029, and the RTP time immediately after restarting was 4242112189.
Looking at gst_rtp_buffer_ext_timestamp, I was a little confused by what was going on here: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/blob/master/gst-libs/gst/rtp/gstrtpbuffer.c#L1296
In this case, I believe diff = 3635078160 (4242112189 - 607034029), and this number is then compared to G_MAXINT32. It felt like it should be compared to G_MAXUINT32 instead, but I'm less confident about that after chatting with @ocrete.
Additionally, it doesn't seem like the jitterbuffer resets its extended timestamp in rtp_jitter_buffer_calculate_pts in this "backward timestamps at server, schedule resync" code path. Should it? If it did get reset, I think that would have fixed the constant WARN logs.