When switching drivers, process() gets called twice with same clock position
Consider this test program: https://gitlab.freedesktop.org/pvir/pipewire/-/blob/timer-test/src/examples/timer-test.c
1. Start `PIPEWIRE_DEBUG=3 ./timer-test`
2. Start `fluidsynth -a jack -m jack`
3. `pw-link timer-test:output fluidsynth-midi:midi_00`
4. `pw-link fluidsynth-midi:left alsa_output.pci-0000_00_1b.0.analog-stereo:playback_FL` (replace with an existing ALSA playback port)
5. `pw-link -d fluidsynth-midi:left alsa_output.pci-0000_00_1b.0.analog-stereo:playback_FL`
At step 4 (going from Dummy-Driver to ALSA):
```
[I][47064.271885] default | [ timer-test.c: 34 on_process()] clock 30 position 7059139867 nsec:147065413895833 dt:21355849
[I][47064.293210] default | [ timer-test.c: 34 on_process()] clock 30 position 7059140891 nsec:147065435229166 dt:21331798
[I][47064.314515] default | [ timer-test.c: 34 on_process()] clock 30 position 7059141915 nsec:147065456562500 dt:21309116
[I][47064.335875] default | [ timer-test.c: 34 on_process()] clock 30 position 7059142939 nsec:147065477895833 dt:21355371
[I][47064.341142] pw.filter | [ filter.c: 837 handle_latency()] port 0x620000000080: set input latency 0.000000-1.000000 0-0 0-0
[I][47064.520750] default | [ timer-test.c: 34 on_process()] clock 30 position 7059142939 nsec:147065477895833 dt:184871842
[I][47064.542091] default | [ timer-test.c: 34 on_process()] clock 56 position 31031296 nsec:147065684098612 dt:21349317
[I][47064.563439] default | [ timer-test.c: 34 on_process()] clock 56 position 31032320 nsec:147065705431945 dt:21340187
[I][47064.584782] default | [ timer-test.c: 34 on_process()] clock 56 position 31033344 nsec:147065726766010 dt:21342016
```
At step 5 (going from ALSA to Dummy-Driver):
```
[I][47068.168690] default | [ timer-test.c: 34 on_process()] clock 56 position 31205376 nsec:147069310718422 dt:21327109
[I][47068.189992] default | [ timer-test.c: 34 on_process()] clock 56 position 31206400 nsec:147069332051143 dt:21299205
[I][47068.211362] default | [ timer-test.c: 34 on_process()] clock 56 position 31207424 nsec:147069353384462 dt:21373153
[I][47068.232698] default | [ timer-test.c: 34 on_process()] clock 56 position 31208448 nsec:147069374718071 dt:21324629
[I][47068.245309] default | [ timer-test.c: 34 on_process()] clock 56 position 31208448 nsec:147069374718071 dt:12622675
[I][47068.246243] pw.filter | [ filter.c: 837 handle_latency()] port 0x620000000080: set input latency 0.000000-0.000000 0-0 0-0
[I][47068.266519] default | [ timer-test.c: 34 on_process()] clock 30 position 7059331611 nsec:147069408562500 dt:21197396
[I][47068.287816] default | [ timer-test.c: 34 on_process()] clock 30 position 7059332635 nsec:147069429895833 dt:21312417
[I][47068.309188] default | [ timer-test.c: 34 on_process()] clock 30 position 7059333659 nsec:147069451229166 dt:21358223
```
Note that in both cases process() gets called twice with the same clock position / nsec fields.
In the second case the callbacks also arrived faster than realtime, which will cause a latency increase if anything downstream is buffering.
The sequence of events seems to go like this:
```
[I][48974.579775] default | [ timer-test.c: 34 on_process()] clock 30 position 7150835261 nsec:148975734604166 dt:21321604
...
[D][48974.609681] mod.client-node | [ remote-node.c: 562 client_node_port_set_param()] port 0x61b000000780: set_param Spa:Enum:ParamId:Latency 0x62d00001e458
...
[D][48974.611916] mod.client-node | [ remote-node.c: 1042 node_port_info_changed()] info changed 0x615000003078
...
[I][48974.623492] default | [ timer-test.c: 34 on_process()] clock 30 position 7150835261 nsec:148975734604166 dt:43713285
...
[D][48974.624855] mod.client-node | [ remote-node.c: 422 client_node_set_io()] node 0x615000003000: set io Spa:Enum:IO:Position 0x7f970a357230
...
[D][48974.625101] mod.client-node | [ remote-node.c: 841 client_node_set_activation()] node 0x61d000001480: set activation 30: 4294967295 (nil) 0 0
[D][48974.625142] mod.client-node | [ remote-node.c: 873 client_node_set_activation()] node 0x61d000001480: remove link 0x6110000057c0: id:30 state:0x7f970a359008 pending:-2/0
[D][48974.625180] mod.client-node | [ remote-node.c: 112 clear_link()] link 0x6110000057c0
[T][48974.625242] mod.client-node | [ remote-node.c: 105 do_deactivate_link()] link 0x6110000057c0 deactivate
...
[D][48974.625821] mod.client-node | [ remote-node.c: 841 client_node_set_activation()] node 0x61d000001480: set activation 52: 5 0x7f970a357000 0 2312
[D][48974.625854] mod.client-node | [ remote-node.c: 862 client_node_set_activation()] node 0x61d000001480: add link 0x611000005e00: memid:5 fd:20 id:52 state:0x7f970a357008 pending:2/5
[T][48974.625875] mod.client-node | [ remote-node.c: 804 do_activate_link()] link 0x611000005e00 activate
[I][48974.644826] default | [ timer-test.c: 34 on_process()] clock 52 position 802816 nsec:148975799616260 dt:21329775
```
So maybe there is some sort of race between the trigger eventfd and the updates to the activation record and the position IO.