gst_element_lost_state() interferes with base-time resetting of later gst_element_set_state()
Submitted by Sebastian Dröge
Created attachment 317583
Consider the following situation in a pipeline:

1) gst_element_lost_state() due to flushing for whatever reason
2) gst_element_set_state(PAUSED) on the pipeline due to BUFFERING < 100%, before 1) finished putting the pipeline to PAUSED
3) gst_element_set_state(PLAYING) on the pipeline much later, once buffering has finished
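In application terms, the sequence looks roughly like this (pseudocode; error handling and the bus watch are elided, and a flushing seek is just one example of a trigger — the report says "flushing for whatever reason"):

```
/* 1) a flush (here: a flushing seek) makes the sinks lose state,
 *    which runs gst_element_lost_state() internally */
gst_element_send_event (pipeline, gst_event_new_seek (...));

/* 2) a BUFFERING < 100% message arrives before 1) has completed */
gst_element_set_state (pipeline, GST_STATE_PAUSED);

/* 3) much later, BUFFERING reaches 100% */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
```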
What will happen here is that 1) does a "state change" without going through the GstElement::change_state() machinery, and as such no start_time will be set. It will update the current/next/pending states to PAUSED while the target state stays at PLAYING. 2) will then immediately return and only update the target state to PAUSED, but nothing will go through GstElement::change_state() anywhere. Later, 3) will change the state to PLAYING again, but base_time is not updated because in 1) and 2) the start_time was not set.
The effect of this is that the running time kept advancing the whole time the pipeline was buffering, so everything that was buffered is now most likely too late and will be dropped.
Note that code similar to gst_element_lost_state() is also in GstBin's handle_async_start(), which will cause the same problems if a child element posts async-start messages. So this situation could also happen in pipelines where sinks are dynamically added.
gst_element_lost_state() (and the async-start handling in GstBin) intentionally does not update start_time because a) the clock might not work anymore at that point, and b) the running time should continue if it can: losing state should only be a very brief event and should not interrupt playback (e.g. of other branches inside the pipeline!).
Attached is a test application that reproduces this behaviour. Take a look at where the LOST_STATE #define is used to get an idea of the different variants that can happen, and which work and which don't.
I'm not sure how to fix this without breaking other things. IMHO for 2.0 we should clean up all this state change machinery a lot and make sure we have a sensible state machine again that does not come with weird non-states like the ones that currently happen when state is lost.
Attachment 317583, "testcase"