Cache gbm_bo and drm_fb for client buffers
In order to directly scan out client buffers through KMS, we wrap the dmabuf or EGL wl_buffer inside a drm_fb. Until now we did this by creating a new drm_fb every time we needed one, which would take a reference on the client buffer, import a gbm_bo, and create a DRM framebuffer for it.
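For reference, the per-use path looks roughly like the sketch below: import the dmabuf into a gbm_bo and wrap it in a DRM framebuffer, all of which the KMS driver gets to redo every time. This is only a hedged illustration for a single-plane dmabuf; `create_fb_for_dmabuf()` and the `dmabuf_attributes` struct are stand-ins, not the actual backend code.

```c
#include <stdint.h>
#include <stdlib.h>

#include <gbm.h>
#include <xf86drmMode.h>

/* Illustrative single-plane description of a client dmabuf. */
struct dmabuf_attributes {
	uint32_t width, height, format;
	uint64_t modifier;
	int fd;
	uint32_t stride, offset;
};

struct drm_fb {
	struct gbm_bo *bo;
	uint32_t fb_id;
};

static struct drm_fb *
create_fb_for_dmabuf(struct gbm_device *gbm, int drm_fd,
		     const struct dmabuf_attributes *a)
{
	struct gbm_import_fd_modifier_data import = {
		.width = a->width,
		.height = a->height,
		.format = a->format,
		.num_fds = 1,
		.fds = { a->fd },
		.strides = { (int) a->stride },
		.offsets = { (int) a->offset },
		.modifier = a->modifier,
	};
	uint32_t handles[4] = { 0 };
	uint32_t pitches[4] = { 0 };
	uint32_t offsets[4] = { 0 };
	uint64_t modifiers[4] = { 0 };
	struct drm_fb *fb;

	fb = calloc(1, sizeof(*fb));
	if (!fb)
		return NULL;

	/* The import is where the KMS driver does its MMU maintenance. */
	fb->bo = gbm_bo_import(gbm, GBM_BO_IMPORT_FD_MODIFIER, &import,
			       GBM_BO_USE_SCANOUT);
	if (!fb->bo) {
		free(fb);
		return NULL;
	}

	handles[0] = gbm_bo_get_handle_for_plane(fb->bo, 0).u32;
	pitches[0] = gbm_bo_get_stride_for_plane(fb->bo, 0);
	offsets[0] = gbm_bo_get_offset(fb->bo, 0);
	modifiers[0] = a->modifier;

	/* Create the DRM framebuffer wrapping the imported bo. */
	if (drmModeAddFB2WithModifiers(drm_fd, a->width, a->height, a->format,
				       handles, pitches, offsets, modifiers,
				       &fb->fb_id, DRM_MODE_FB_MODIFIERS)) {
		gbm_bo_destroy(fb->bo);
		free(fb);
		return NULL;
	}

	return fb;
}
```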
Doing this can be spectacularly expensive on some platforms: on a low-end Rockchip device with Panfrost, we were seeing the (admittedly pretty broken) Rockchip KMS driver consume a huge amount of CPU time on one of the smaller in-order cores just doing MMU maintenance and its own locking.
We can avoid this overhead by caching the drm_fb for the lifetime of the buffer, which is something I've wanted to do for a while. Doing so requires moving the buffer reference from the drm_fb itself to the plane_state which actually uses it.
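A rough sketch of the caching idea, not the literal patch: hang the drm_fb off the weston_buffer's destroy signal, so the import/AddFB cost is paid once per buffer rather than once per use. `drm_fb_import()`, the refcounting helpers, and the field names here are assumptions for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

#include <gbm.h>
#include <xf86drmMode.h>
#include <wayland-server-core.h>
#include <libweston/libweston.h>

struct drm_backend;

struct drm_fb {
	struct gbm_bo *bo;
	uint32_t fb_id;
	int drm_fd;
	int refcount;

	/* Fires when the client destroys the underlying wl_buffer. */
	struct wl_listener buffer_destroy_listener;
};

/* Assumed helper: does the gbm_bo_import + drmModeAddFB2WithModifiers
 * work from the earlier sketch, once per buffer. */
struct drm_fb *
drm_fb_import(struct drm_backend *b, struct weston_buffer *buffer);

static struct drm_fb *
drm_fb_ref(struct drm_fb *fb)
{
	fb->refcount++;
	return fb;
}

static void
drm_fb_unref(struct drm_fb *fb)
{
	if (--fb->refcount > 0)
		return;
	drmModeRmFB(fb->drm_fd, fb->fb_id);
	gbm_bo_destroy(fb->bo);
	free(fb);
}

static void
drm_fb_handle_buffer_destroy(struct wl_listener *listener, void *data)
{
	struct drm_fb *fb =
		wl_container_of(listener, fb, buffer_destroy_listener);

	/* The weston_buffer is going away: drop the cache's reference. */
	wl_list_remove(&fb->buffer_destroy_listener.link);
	drm_fb_unref(fb);
}

static struct drm_fb *
drm_fb_get_for_buffer(struct drm_backend *b, struct weston_buffer *buffer)
{
	struct wl_listener *listener;
	struct drm_fb *fb;

	/* Cache hit: this buffer was already imported on an earlier commit. */
	listener = wl_signal_get(&buffer->destroy_signal,
				 drm_fb_handle_buffer_destroy);
	if (listener) {
		fb = wl_container_of(listener, fb, buffer_destroy_listener);
		return drm_fb_ref(fb);
	}

	/* Cache miss: pay the import/AddFB cost once, then keep the fb
	 * alive for as long as the client buffer exists. */
	fb = drm_fb_import(b, buffer);
	if (!fb)
		return NULL;

	fb->refcount = 1; /* owned by the cache entry */
	fb->buffer_destroy_listener.notify = drm_fb_handle_buffer_destroy;
	wl_signal_add(&buffer->destroy_signal, &fb->buffer_destroy_listener);

	return drm_fb_ref(fb); /* second ref for the caller / plane state */
}
```

With the drm_fb now outliving any single commit, the per-use reference on the client buffer has to live somewhere else, which is why it moves to the plane_state.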
I would've liked to have had more asserts here, but even if a buffer has a non-zero busy_count, when the client destroys the buffer, the underlying weston_buffer will also be destroyed. This means that we can't assert on having a valid client buffer or reference during plane-state duplication or freeing, because it might've vanished from underneath us. Oh well.
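To illustrate the consequence, here's a hedged sketch of a plane state that owns the buffer pointer and reacts to the buffer's destroy signal instead of asserting on it; all names are hypothetical, not the MR's actual structures.

```c
#include <wayland-server-core.h>
#include <libweston/libweston.h>

struct drm_fb;

struct drm_plane_state {
	struct drm_fb *fb;                 /* cached fb, see sketch above */
	struct weston_buffer *buffer;      /* may already be gone */
	struct wl_listener buffer_destroy; /* clears ->buffer on destroy */
};

static void
plane_state_handle_buffer_destroy(struct wl_listener *listener, void *data)
{
	struct drm_plane_state *ps =
		wl_container_of(listener, ps, buffer_destroy);

	/* The client buffer vanished from underneath us; just forget it,
	 * rather than asserting it is still valid on duplicate/free. */
	ps->buffer = NULL;
	wl_list_remove(&ps->buffer_destroy.link);
	wl_list_init(&ps->buffer_destroy.link);
}

static void
plane_state_set_buffer(struct drm_plane_state *ps,
		       struct weston_buffer *buffer)
{
	ps->buffer = buffer;
	ps->buffer_destroy.notify = plane_state_handle_buffer_destroy;
	wl_signal_add(&buffer->destroy_signal, &ps->buffer_destroy);
}
```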
Tested on Intel and Rockchip (a forward-port of the RK3566 VOP2 KMS driver with mainline Panfrost; good for going from 60% single-core CPU usage to 5%). Have tried with most client states:
cc @derekf