Draft: wsi/wayland: linux-dmabuf version 6 (target device)
Client implementation for wayland/wayland-protocols!268. Draft until that is merged and released.
I believe this is correct, but it is untested since there is no server implementation yet. It also only implements this for EGL; I haven't looked at how the Vulkan WSI code handles it.
It took me a bit of time to understand the code here, but it seems:

- Buffers are allocated for the `main_device` from default feedback
- The `main_device` of surface feedback is ignored
- The `compositor_using_another_device` variable is set but never read
- If `DRI_PRIME` is set, `fd_render_gpu` may be different from `fd_display_gpu`
- It then allocates a linear buffer on `fd_display_gpu`, which should still correspond to the `main_device`, while `fd_render_gpu` does not
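The `DRI_PRIME` case above hinges on whether the render fd and the display fd actually refer to the same DRM device. A minimal sketch of that check, assuming we only compare the `dev_t` of the two device nodes via `fstat` (the helper names `same_drm_device` and `fd_device` are mine, not code from this MR):

```c
#include <stdbool.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <sys/types.h>

/* Hypothetical helper: two fds point at the same GPU if the device
 * numbers of the nodes they were opened from match. */
static bool same_drm_device(dev_t a, dev_t b)
{
    return a == b;
}

/* Given an open fd, recover the dev_t of the underlying device node.
 * Returns 0 on error or if the fd is not a device node. */
static dev_t fd_device(int fd)
{
    struct stat st;
    if (fstat(fd, &st) != 0)
        return 0;
    return st.st_rdev;
}
```

In real code the comparison would more likely go through libdrm's `drmGetDevice2()`/`drmDevicesEqual()`, so that the primary and render nodes of the same GPU compare equal; a plain `dev_t` comparison treats `/dev/dri/card0` and `/dev/dri/renderD128` as different devices.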
So unless I'm missing something, this should be correct. It doesn't change behavior, but it communicates to the server what device the buffer was allocated on. In place of the `main_device` event (no longer sent with the new protocol version), it simply uses the first device it is able to open with the "sampling" flag, which should be the same.
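The "first device it is able to open with the sampling flag" selection can be pictured as a scan over the feedback tranches. This is only an illustration under my reading of the draft protocol; the `struct tranche` layout, the `TRANCHE_FLAG_SAMPLING` bit, and `pick_main_device` are hypothetical names, not the actual implementation:

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/sysmacros.h>
#include <sys/types.h>

/* Hypothetical per-tranche flag bit meaning "usable for sampling". */
#define TRANCHE_FLAG_SAMPLING (1u << 1)

struct tranche {
    dev_t    device; /* target device advertised for this tranche */
    uint32_t flags;  /* per-tranche usage flags */
};

/* Stand-in for the main_device event that the new protocol version no
 * longer sends: pick the first tranche device flagged for sampling.
 * Returns 0 if no tranche qualifies. */
static dev_t pick_main_device(const struct tranche *t, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (t[i].flags & TRANCHE_FLAG_SAMPLING)
            return t[i].device;
    return 0;
}
```

Since the compositor orders tranches by preference, taking the first sampling-capable device should yield the same device the old `main_device` event would have carried.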
Communicating to the display server which GPU a buffer was allocated on is helpful, but hopefully things can be improved beyond that. At a minimum, if the render GPU is advertised as a sampling GPU for the surface, the client should be able to use a buffer on that GPU with an optimal modifier.