The idea is to add new context type(s) which allow using native Mesa drivers in guest userspace, talking their own protocol to a small host-side component which interfaces with the native host drm driver. The native driver in the guest uses the virtio_gpu driver as the "transport layer" for that protocol, as well as for handling GEM buffer mapping (and dma-buf/dma-fence sharing between host and guest, etc). If the protocol is designed properly to avoid synchronous round trips between host and guest in hot paths, performance can match native (outside of a VM), which can be a significant improvement over API virtualization, especially on lower-powered devices where the overhead of running a GL driver on top of a GL driver is prohibitive.
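The "avoid synchronous round trips in hot paths" point can be sketched as a guest-side command ring that only kicks the host once per flush, rather than notifying (and waiting on) the host per command. This is a toy illustration, not the actual protocol; the `vdrm_ring` names are hypothetical:

```c
/* Sketch only: commands are appended locally and the host is notified
 * ("kicked") once per flush, so hot paths never block on a host round trip.
 * The vdrm_ring struct and function names are hypothetical. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 4096

struct vdrm_ring {
    uint8_t  buf[RING_SIZE];
    uint32_t wr;     /* write offset */
    uint32_t kicks;  /* number of host notifications so far */
};

/* Append a command without notifying the host. */
static int ring_emit(struct vdrm_ring *r, const void *cmd, uint32_t len)
{
    if (r->wr + len > RING_SIZE)
        return -1;   /* caller would flush and retry */
    memcpy(r->buf + r->wr, cmd, len);
    r->wr += len;
    return 0;
}

/* One host notification covers everything emitted since the last flush. */
static void ring_flush(struct vdrm_ring *r)
{
    if (r->wr == 0)
        return;
    /* real code would place the commands on a virtqueue here */
    r->kicks++;
    r->wr = 0;
}
```

Many emitted commands then cost a single kick, which is what lets guest-side hot paths stay asynchronous.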
- msm_context maps 1:1 to a drm device fd in guest userspace, which maps 1:1 to a GPU address space, preserving an isolated GPU address space per userspace (guest or host) process.
- GEM objects in guest mostly map 1:1 to host GEM objects, with the addition of one "shmem" virtgpu GEM object used for host->guest communication
- The interface does not map 1:1 to host kernel interface in all cases, to reduce the number of round trips, and to make hot-paths asynchronous.
- Some parts, such as vnc_fence, I expect to be able to re-use for other future driver-specific virtgpu native context types.
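One way the "shmem" virtgpu GEM object mentioned above can keep hot paths asynchronous is by letting the host publish a completed sequence number into shared memory, so the guest can check whether an async request has finished without a synchronous call. A minimal sketch, with hypothetical struct and field names:

```c
/* Sketch: the host writes a monotonically increasing seqno into the shared
 * shmem buffer; the guest compares it against per-request seqnos to detect
 * completion without a synchronous round trip.  Names are hypothetical. */
#include <assert.h>
#include <stdint.h>

struct vdrm_shmem {
    uint32_t seqno;  /* last request the host has processed */
};

/* Guest side: assign each async request a sequence number. */
static uint32_t next_seqno(uint32_t *counter)
{
    return ++(*counter);
}

/* Guest side: completion check against the host-written seqno.
 * Signed subtraction handles seqno wraparound. */
static int request_done(const struct vdrm_shmem *shmem, uint32_t seqno)
{
    return (int32_t)(shmem->seqno - seqno) >= 0;
}
```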
Generally 99+% of native performance. A few games/benchmarks that hit userspace bo-cache misses (and end up doing synchronous GEM object allocation mid-frame) drop to the low-to-mid 90s. By comparison, virgl ranges from 42% to 83% of native on the same set of games/benchmarks.
I expect further improvements to be possible with GPU iova allocation in userspace, which would let us make GEM buffer allocation asynchronous, but that will require some new host kernel uabi.
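The reason userspace iova allocation would make GEM allocation asynchronous: if the guest picks the GPU virtual address itself, the host no longer has to return an address, so the allocation request can be fired and forgotten. A toy range allocator illustrating the guest-side piece (not real uabi):

```c
/* Sketch: a trivial bump allocator over a GPU iova range.  With the guest
 * choosing iovas like this, host GEM object creation needs no reply in the
 * hot path.  This is an illustration, not a proposed interface. */
#include <assert.h>
#include <stdint.h>

struct iova_range {
    uint64_t next;
    uint64_t end;
};

/* Returns an aligned GPU iova, or 0 on exhaustion. */
static uint64_t iova_alloc(struct iova_range *r, uint64_t size, uint64_t align)
{
    uint64_t iova = (r->next + align - 1) & ~(align - 1);
    if (iova + size > r->end)
        return 0;
    r->next = iova + size;
    return iova;
}
```

A real allocator would also need to handle frees and fragmentation; the point here is only that address selection can move guest-side.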
Corresponding mesa MR: mesa/mesa!14900 (merged)