Threadshare: various fixes

This MR is a set of fixes further to discussions in !228 (closed).

block_on

futures::executor::block_on executes a Future on the current thread. As the name suggests, this function blocks the thread, meaning that no other tasks on the same executor can make progress. Not only does this defeat the purpose of the threadshare model, but it can also lead to deadlocks.

The first commit introduces a runtime::executor::block_on function which allows blocking on a Future provided it is NOT called under a Context.

Edit: this modification is being discussed in: !240 (comment 378454)
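
For illustration, the intended contract looks roughly like this (a sketch only; the is_context_thread helper name is an assumption, not the actual API):

    use futures::Future;

    // Sketch of the guard provided by runtime::executor::block_on.
    pub fn block_on<F: Future>(future: F) -> F::Output {
        // Blocking a Context thread would stall every task scheduled on that
        // Context and could deadlock the pipeline, so refuse to do it.
        assert!(
            !Context::is_context_thread(), // hypothetical helper
            "block_on must not be called from a Context thread",
        );
        futures::executor::block_on(future)
    }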

Mixed pipelines

There was a limitation with some kinds of pipelines, like:

    | ts-src |-->| queue |-->| ts-sink |

The ts-sink received the PadContext sticky event and added its pending tasks to the task queue, but the ts-src PadSrc::push* functions returned before ts-sink had a chance to do this. PadSink now checks whether the sink_chain* & sink_event* functions are executed under a Context thread. If they are not, the returned Future is executed immediately, whether or not a PadContext sticky event was received. A rough sketch of that dispatch logic is shown below.
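
The helper names below are assumptions, not the actual PadSink API:

    // Sketch: how a Future returned by sink_chain* / sink_event* is driven.
    fn drive_sink_future<F>(fut: F)
    where
        F: std::future::Future<Output = ()> + Send + 'static,
    {
        if Context::is_context_thread() {
            // Under a Context: let the Context scheduler drive the Future
            // as a pending task.
            Context::add_pending_task(fut); // hypothetical helper
        } else {
            // Not under a Context (e.g. a plain `queue` streaming thread):
            // execute the Future immediately, PadContext or not.
            futures::executor::block_on(fut);
        }
    }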

delay_for / interval

The Futures returned by delay_for and interval must be instantiated under the executor in which they will be used. Since the Future might eventually not be resolved under a Context, these functions are now declared under runtime::time. They no longer accept a closure returning a Future as an argument and should be used like their tokio counterparts. Note however that interval returns a plain Stream, so .next() should be used in place of .tick(). The delay_for and interval in runtime::time are actually wrappers around the tokio equivalents, but they should be preferred over the tokio versions in order to reduce breakage if we decide to change the underlying runtime environment in the future.
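
For example, usage would look roughly like this (assuming the wrappers mirror the tokio 0.2 signatures, as described above):

    use std::time::Duration;
    use futures::StreamExt;

    async fn periodic_work() {
        // delay_for: a wrapper around the tokio equivalent, simply awaited.
        runtime::time::delay_for(Duration::from_millis(20)).await;

        // interval returns a plain Stream: use `.next()` instead of `.tick()`.
        let mut interval = runtime::time::interval(Duration::from_millis(20));
        while interval.next().await.is_some() {
            // periodic work here
        }
    }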

JoinHandle

The [Pad]Context::spawn functions now return a crate-defined JoinHandle, which is itself a wrapper around the underlying tokio JoinHandle.
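
For example (a sketch; the call site and error handling are assumptions):

    // Sketch: spawning on a Context and awaiting the crate's JoinHandle,
    // which wraps tokio::task::JoinHandle.
    async fn spawn_and_wait(context: &Context) {
        let join_handle = context.spawn(async {
            // asynchronous work executed on the Context
        });
        // Awaiting the crate JoinHandle behaves like awaiting the tokio one.
        let _ = join_handle.await;
    }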

FlushStart & FlushStop

There were multiple problems with the implementations of the FlushStart & FlushStop event handlers. FlushStop was incorrectly handled as a serialized event and the implementations needed to rely on block_on. The solution consists of spawning a Future to handle FlushStart asynchronously. FlushStop is now correctly handled with the buffer flow and waits for FlushStart to complete before executing.
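
The resulting pattern looks roughly like the following sketch (handler and field names are assumptions, not the actual element code):

    // Sketch of the FlushStart / FlushStop handling pattern.
    fn on_flush_start(&self, context: &Context) {
        // Handle FlushStart asynchronously instead of blocking on it.
        let handle = context.spawn(async move {
            // stop the task, flush pending items, ...
        });
        *self.flush_start_handle.lock().unwrap() = Some(handle);
    }

    async fn on_flush_stop(&self) {
        // FlushStop is handled with the buffer flow and waits for the
        // FlushStart handling to complete before executing.
        let pending = self.flush_start_handle.lock().unwrap().take();
        if let Some(handle) = pending {
            let _ = handle.await;
        }
        // restart the task, ...
    }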

Most block_on calls were removed from the event handlers. These are the exceptions in the current code base:

  • ProxySink can't forward non-serialized events directly to the ProxySrc pad and uses the shared queue, which currently relies on an async Mutex. Fixing the pad forwarding should be enough to solve this. Edit: this is fixed in this MR.
  • JitterBuffer shares a lot of data from the state & the settings, which are used in the Pad{Src,Sink}Handler in non-async functions. I think a more in-depth analysis of the data separation and of the selection of sync/async Mutexes or RwLocks should be undertaken. I think this exceeds the scope of this MR.

Edit: see limitations discussed in: !240 (comment 378454)

check_reconfigure

While porting the threadshare package to tokio-0.2.x and introducing the Pad{Src,Sink} model, I overlooked the internally-linked pad reconfiguration & PadContext propagation in Queue & Proxy. This should be fixed now.

prepare_socket / complete_preparation

tokio's UdpSocket & TcpStream must be instantiated under their target executor. Due to throttling of the executor, if we Context::spawned the instantiation and then awaited it in the element's prepare function, we ended up waiting max_throttling_duration for each ts-{UdpSrc,TcpClientSrc} element in the pipeline. For the use cases targeted by the threadshare model, this led to a looong NullToReady transition. So we decided to use an approach with an async prepare_socket function that was spawned on the target Context from the prepare function. In a subsequent state transition, we executed complete_preparation, whose purpose was to await prepare_socket's completion.

A Context::enter function was introduced, similar to tokio::runtime::enter. This function allows executing a function under the target executor. For UdpSocket, this is all we need to revert to instantiating it in the prepare function without noticeable delays for the NullToReady transition. However, TcpStream::connect is an async function. Since prepare is already executing under a runtime, we can't block_on TcpStream::connect in Context::enter, so I decided to keep the prepare_socket / complete_preparation approach for ts-tcpclientsrc.
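
For illustration, the ts-udpsrc case now boils down to something like this (a sketch; Context::enter is assumed to take a closure, like tokio::runtime::Handle::enter in tokio 0.2):

    use std::io;
    use std::net::UdpSocket as StdUdpSocket;

    // Sketch: instantiate the tokio UdpSocket directly from `prepare`,
    // without spawning and awaiting on the Context.
    fn make_socket(
        context: &Context,
        std_socket: StdUdpSocket,
    ) -> io::Result<tokio::net::UdpSocket> {
        // tokio::net::UdpSocket::from_std (tokio 0.2) must run under an
        // executor, which Context::enter provides.
        context.enter(|| tokio::net::UdpSocket::from_std(std_socket))
    }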
