venus: add opaque fd resource support in render server
Background:
- For guest mapping (`vkMapMemory` for `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`) setup, venus currently forces external memory on the renderer side (see this for more details).
  - With hw implementations, venus prefers `VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT`:
    - forces dma_buf export on `anv`, `radv` and `tu`
    - falls back to gbm alloc + dma_buf import on mali
  - For CI testing atop `lvp` (the sw Vulkan implementation in mesa), since `lvp` only supports `VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT` export, !651 (merged) was landed to bridge the gap (see the sketch following this Background list). However:
    - it doesn't work for vkr behind the render server (fixed by this MR)
    - it doesn't work with sandbox enabled (to be fixed later, and not urgent before https://gitlab.khronos.org/vulkan/vulkan/-/issues/2918 gets resolved)
- For vkr in virglrenderer with/without the render server config, the high level arch is:
  - before this MR:
    - without server:
      - virtio-gpu device process { `virglrenderer` public API => `vkr` contexts + others (`vrend` and `drm/msm` contexts) }
        - Ps: `vrend` is the host side renderer for the `virgl` stack to provide GL support in the guest
        - Ps: `drm/msm` is the renderer backend for `freedreno`'s virtgpu-native-context solution (or what we call render node forwarding)
    - with server (`-Drender-server=true`):
      - process config (`-Drender-server-worker=process`):
        - virtio-gpu device process { `virglrenderer` public API => `proxy` contexts + others } => render worker processes { `render` context => `virglrenderer` public API => `vkr` context }
      - thread config (`-Drender-server-worker=thread`):
        - virtio-gpu device process { `virglrenderer` public API => `proxy` contexts + others } => render server process { `render` contexts => `virglrenderer` public API => `vkr` contexts }
  - after this MR:
    - without server:
      - same as above and still works, so that we can land this without disabling venus CI testing on the virglrenderer side
    - with server:
      - the thread config is preserved
      - the only diff against the above is replacing the `virglrenderer` public API with the new `vkr` renderer interface
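To make the `lvp` constraint above concrete, here is a minimal sketch of the opaque fd export path. This is not the actual vkr code; the helper name is mine and error handling is omitted:

```c
#include <vulkan/vulkan.h>

/* Minimal sketch, not the actual vkr code: allocate HOST_VISIBLE memory
 * exportable as VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT, the only
 * external memory handle type lvp supports (hw drivers would use
 * VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT instead). */
static int
alloc_and_export_opaque_fd(VkDevice dev, uint32_t host_visible_mem_type,
                           VkDeviceSize size, VkDeviceMemory *out_mem)
{
   const VkExportMemoryAllocateInfo export_info = {
      .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO,
      .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
   };
   const VkMemoryAllocateInfo alloc_info = {
      .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
      .pNext = &export_info,
      .allocationSize = size,
      .memoryTypeIndex = host_visible_mem_type,
   };
   vkAllocateMemory(dev, &alloc_info, NULL, out_mem);

   /* vkGetMemoryFdKHR comes from VK_KHR_external_memory_fd and must be
    * resolved through vkGetDeviceProcAddr */
   PFN_vkGetMemoryFdKHR get_memory_fd =
      (PFN_vkGetMemoryFdKHR)vkGetDeviceProcAddr(dev, "vkGetMemoryFdKHR");

   const VkMemoryGetFdInfoKHR get_fd_info = {
      .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
      .memory = *out_mem,
      .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
   };
   int fd = -1;
   get_memory_fd(dev, &get_fd_info, &fd);
   return fd;
}
```

Unlike a dma_buf, an opaque fd generally cannot be mmap'ed directly, so the receiving side needs metadata describing what kind of fd it got — which is what the opaque fd metadata support in this MR wires through the server.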
Summary of the MR:
- the first 7 commits are refactors that make the threaded server config an official path instead of a deprecated one
  - deprecating the thread config would make the server a lot harder to debug
  - the dirty threading bits are now self-contained in `render_state`
- commits 8 and 9 implement the `vkr` renderer interface and remove the `virglrenderer` middle block from the server side (a hypothetical sketch of such an interface follows this list)
- commits 10 and 11 update the opaque fd metadata naming and add the server support for it
- commit 12 updates naming to align with multiple timelines
- commit 13 refactors `render_state` to use a scoped lock
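For flavor, here is a hypothetical sketch of what a dedicated `vkr` renderer interface could look like. Every name and signature below is an illustrative assumption, not the actual API landed in commits 8 and 9:

```c
/* Hypothetical sketch only: the point is that the render server links
 * against a narrow vkr-specific surface instead of going through the full
 * virglrenderer public API (virglrenderer.h). */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct vkr_renderer_callbacks {
   /* called when a fence retires on a given timeline (hypothetical) */
   void (*retire_fence)(void *data, uint32_t ring_idx, uint64_t fence_id);
};

bool vkr_renderer_init(uint32_t flags,
                       const struct vkr_renderer_callbacks *cbs,
                       void *cb_data);
void vkr_renderer_fini(void);

/* decode and execute a venus command stream (hypothetical) */
bool vkr_renderer_submit_cmd(const void *cmd, size_t size);

/* export a blob resource; out_fd may be a dma_buf or an opaque fd, with
 * out_fd_type telling the caller which (hypothetical) */
bool vkr_renderer_export_blob(uint64_t blob_id, uint64_t blob_size,
                              uint32_t blob_flags, int *out_fd,
                              uint32_t *out_fd_type);
```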
Test:
- `vkr` in the render server atop `lvp` passes `dEQP-VK.memory.*`
- no CTS regressions with crosvm and qemu
- no perf regressions with gfxbench and angle traces
Followup:
- mandate the render server config for venus:
  - uprev virglrenderer in mesa and enable the render server in mesa CI
  - uprev mesa in virglrenderer to bring in the crosvm runner to enable the render server in virglrenderer CI
- further server and venus cleanup after deprecating the non-server config:
  - clean up server dependencies on random virgl miscs and drop the `libvirglrenderer` dep from the server
    - some utils used by venus can include up to virglrenderer.h; we need to clean those up or have a separate util
  - decouple `vkr` from `virgl_resource` (after iov cleanup in !943 (merged))
  - decouple `vkr` from `virgl_context`
  - clean up the legacy poll based fencing code paths from `vkr`