Because of how PipeWire allocates memory, this can be done quite efficiently by changing offsets in the sample buffers.

### Why is the API so complicated?

"Can't an audio API simply be open/read/write?", "Why do I need a mainloop and callbacks?", "Isn't doing audio essentially just copying samples to and from a buffer?"

For anything more than playing a beep, it is more complicated.

At the lowest level, the device decides when samples need to be read from or written to the device ringbuffer. This is usually signalled with an interrupt of some sort. For optimal performance, the application needs to react directly to this signal and read/write samples from/to the device ringbuffer. This is called the pull model.

This way, the application can wait until the last possible moment to generate the audio data, achieving the lowest possible latency. Volume updates or a synthesizer can react to GUI sliders and keyboard events with lower latency this way.

With a simple read/write model this cannot be done; you need an API to wait for the device signal, either with a poll or an event. An additional API that provides timing information can also work, but then you need to do the polling or implement the timeouts or callbacks yourself, likely with less accurate results than what the device interrupt can provide.

That said, you can always build a simple open/read/write API on top of a pull-based API, and PipeWire provides the more low-level APIs to make this possible. Look at [pa_simple](https://freedesktop.org/software/pulseaudio/doxygen/simple.html), which also works fine on PipeWire.

There is some more about this here:

* [push vs pull in SDL](https://discourse.libsdl.org/t/sdl-audio-push-vs-pull-model/3923)