Draft: drm/msm: sparse / "VM_BIND" support
Conversion to DRM GPU VA Manager, and adding support for Vulkan Sparse Memory in the form of:
- A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/MAP_NULL/UNMAP commands
- Extending the SUBMIT ioctl to allow submitting batches of one or more MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
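For illustration, a rough sketch of what this could look like from userspace follows. The MSM_SUBMIT_BO_OP_* names are the ones introduced by this series; the MSM_SUBMITQUEUE_VM_BIND flag name and the msm_vm_bind_op layout below are placeholders assumed for the example, not the authoritative UAPI (see the msm_drm.h changes in the series for that).

```c
/* Illustrative sketch only: MSM_SUBMITQUEUE_VM_BIND and struct msm_vm_bind_op
 * are placeholder names/layouts, not the authoritative UAPI from the series. */
#include <stdint.h>
#include <xf86drm.h>
#include <drm/msm_drm.h>

/* Create a submitqueue that accepts VM_BIND batches instead of cmdstream. */
static uint32_t vm_bind_queue_create(int fd)
{
	struct drm_msm_submitqueue req = {
		.flags = MSM_SUBMITQUEUE_VM_BIND,	/* assumed flag name */
	};

	drmIoctl(fd, DRM_IOCTL_MSM_SUBMITQUEUE_NEW, &req);
	return req.id;
}

/* Placeholder per-op descriptor: one MAP/MAP_NULL/UNMAP entry in a batch. */
struct msm_vm_bind_op {
	uint32_t op;		/* MSM_SUBMIT_BO_OP_MAP / MAP_NULL / UNMAP */
	uint32_t handle;	/* GEM handle (MAP only) */
	uint64_t iova;		/* GPU VA to (un)map */
	uint64_t range;		/* length of the mapping */
	uint64_t bo_offset;	/* offset into the BO (MAP only) */
};

/* Submit a batch of ops to the VM_BIND queue via the existing SUBMIT ioctl;
 * in this sketch the op table stands in for the usual BO table. */
static int vm_bind_submit(int fd, uint32_t queue_id,
			  struct msm_vm_bind_op *ops, uint32_t nr_ops)
{
	struct drm_msm_gem_submit submit = {
		.flags = MSM_SUBMIT_FENCE_FD_OUT,	/* hand back a fence fd */
		.queueid = queue_id,
		.bos = (uintptr_t)ops,
		.nr_bos = nr_ops,
	};

	if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, &submit))
		return -1;

	return submit.fence_fd;
}
```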
The UABI takes a slightly different approach from what other drivers have done, and from what would make sense if starting from a clean sheet, i.e. separate VM_BIND and EXEC ioctls. But since we have to maintain support for the existing SUBMIT ioctl, and because the fence, syncobj, and BO-pinning handling is largely the same between legacy "BO-table" style SUBMIT ioctls and new-style VM updates submitted to a VM_BIND submitqueue, I chose to extend the existing SUBMIT ioctl rather than add a new one.
I also did not implement support for synchronous VM_BIND commands. Since userspace could just immediately wait for the SUBMIT to complete, I don't think we need this extra complexity in the kernel.
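For example, a userspace driver that wants synchronous bind semantics can simply block on the submit's out-fence before returning. A minimal sketch on top of the hypothetical vm_bind_submit() helper from the earlier sketch, using the fence fd path:

```c
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* "Synchronous" VM_BIND emulated in userspace: submit the MAP/MAP_NULL/UNMAP
 * batch, then block until its out-fence (a sync_file fd) has signaled. */
static int vm_bind_submit_sync(int fd, uint32_t queue_id,
			       struct msm_vm_bind_op *ops, uint32_t nr_ops)
{
	int fence_fd = vm_bind_submit(fd, queue_id, ops, nr_ops);
	struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

	if (fence_fd < 0)
		return fence_fd;

	/* A sync_file fd becomes readable once the fence has signaled. */
	while (poll(&pfd, 1, -1) < 0 && errno == EINTR)
		;

	close(fence_fd);
	return 0;
}
```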
The corresponding mesa MR: mesa/mesa!32533
Notes/TODOs/Open Questions:
- The first handful of patches are from Bibek Kumar Patro's series, "iommu/arm-smmu: introduction of ACTLR implementation for Qualcomm SoCs", which introduces PRR (Partially-Resident-Region) support, needed to implement MAP_NULL (for Vulkan Sparse Residency)
- Why do VM_BIND commands need fence fd support, instead of just syncobjs? Mainly for the benefit of virtgpu drm native context guest<->host fence passing, where the host VMM is operating in terms of fence fd's (syncobjs are just a convenience wrapper above a dma_fence, and don't exist below the guest kernel).
- Currently shrinker support is disabled (hence this being in Draft/RFC state). To properly support the shrinker, we need to pre-allocate the various objects and pages needed for the pagetables themselves, to move memory allocations out of the fence-signaling path. This short-cut was taken to unblock userspace implementation of sparse buffer/image support.
- Could/should we do all the vm/vma updates synchronously and defer only the io-pgtable updates to the VM_BIND scheduler queue? This would simplify the previous point, in that we'd only have to pre-allocate pages for the io-pgtable updates.
- Currently we lose support for BO dumping for devcoredump. Ideally we'd plumb the MSM_SUBMIT_BO_DUMP flag on MAP commands thru to the resulting drm_gpuvas. To do this, I think we need to extend drm_gpuva with a flags field; the flags can be driver defined, but drm_gpuvm needs to know not to merge drm_gpuvas with different flags (see the sketch after this list).
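To make that last point concrete, here is a simplified model of the merge rule being proposed. The struct, field, and flag names below are made up for illustration (they are not the real drm_gpuva/drm_gpuvm definitions); the point is only that two otherwise-mergeable vmas must also agree on their driver-defined flags, so that e.g. a dump bit carried over from MSM_SUBMIT_BO_DUMP is neither lost nor spread to neighbouring ranges.

```c
/* Simplified illustration, not kernel code: structs and names are made up
 * to show the proposed merge rule only. */
#include <stdbool.h>
#include <stdint.h>

struct gpuva {
	uint64_t addr, range;	/* GPU VA start and length */
	uint64_t gem_offset;	/* offset into the backing GEM object */
	void *gem_obj;		/* backing object (NULL for a MAP_NULL/PRR mapping) */
	uint32_t flags;		/* proposed driver-defined flags field */
};

/* Hypothetical driver flag, carrying MSM_SUBMIT_BO_DUMP from the MAP op. */
#define GPUVA_FLAG_DUMP		(1u << 0)

/* Two adjacent mappings may only be coalesced if they back the same object,
 * are contiguous in both VA and GEM space, and, per the proposed new rule,
 * carry identical driver flags. */
static bool gpuva_can_merge(const struct gpuva *a, const struct gpuva *b)
{
	return a->gem_obj == b->gem_obj &&
	       a->addr + a->range == b->addr &&
	       (!a->gem_obj || a->gem_offset + a->range == b->gem_offset) &&
	       a->flags == b->flags;
}
```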
This MR can be found in patch form if you prefer: https://patchwork.freedesktop.org/series/142263/