Evict for non-compute and compute
Built on top of MR !52 (closed). The implementation is based on @mlankhorst's MR !33 (closed).
High-level flow, which @mlankhorst agreed upon in !51 (comment 1430920):
- Execs acquire all external BO locks and the VM lock with WW (wound/wait) semantics
- Execs validate all objects that have moved (potentially triggering a new move, e.g. an LMEM-only object currently sitting in SMEM)
- Execs issue rebinds of any objects that have moved; rebinds wait on the DMA_RESV_USAGE_KERNEL slot (i.e. on moves, both those initiated outside this exec and those initiated by it)
- Execs wait on the last rebind fence (shared with userptr rebinds, as rebinds use a common per-VM engine which is in-order)
- Execs wait on DMA_RESV_USAGE_KERNEL in external BOs and the VM (waiting on pending moves; perhaps not strictly needed since we already wait on rebinds, but it doesn't hurt for safety)
- Execs install DMA_RESV_USAGE_BOOKKEEP in the VM (new moves get stuck behind this)
- Execs install DMA_RESV_USAGE_WRITE in external BOs (new moves get stuck behind this; WRITE is needed for implicit fencing to work) — see the exec-side sketch after this list
- Moves wait on all of the BO's fences (pending execs + binds; note a non-external BO's reservation may point to the VM's)
- Moves install DMA_RESV_USAGE_KERNEL in the BO's slot (blocks new execs / binds)
- Moves add the BO to a list in each VM it is attached to, indicating a rebind needs to happen — see the move-side sketch after this list
- A move holds a lock (either the BO's or the VM's), so moves, execs, and binds execute mutually exclusively
- Dynamic moving of shared dma-bufs adds the BO to a list in each VM it is attached to
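
To make the exec-side ordering concrete, here is a minimal sketch against the upstream dma_resv/ww_mutex API. The `sketch_vm`/`sketch_bo` containers and `exec_prepare()` are hypothetical stand-ins for illustration, not the actual Xe code; locking backoff (`-EDEADLK`), unlock-on-error paths, validation, and rebind submission are elided to keep the fence handling readable.

```c
#include <linux/dma-resv.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/ww_mutex.h>

/* Hypothetical containers for illustration; the real Xe types differ. */
struct sketch_vm {
	struct dma_resv resv;
	struct list_head external_bos;	/* sketch_bo.vm_link entries */
	struct list_head rebind_list;	/* sketch_bo.rebind_link entries */
};

struct sketch_bo {
	struct dma_resv *resv;
	struct list_head vm_link;	/* entry in sketch_vm.external_bos */
	struct list_head rebind_link;	/* entry in sketch_vm.rebind_list */
};

static int exec_prepare(struct sketch_vm *vm, struct dma_fence *job_fence)
{
	struct ww_acquire_ctx ww;
	struct sketch_bo *bo;
	long timeout;
	int err;

	/* Acquire the VM lock and all external BO locks with WW semantics */
	ww_acquire_init(&ww, &reservation_ww_class);
	err = dma_resv_lock(&vm->resv, &ww);
	if (err)
		return err;
	list_for_each_entry(bo, &vm->external_bos, vm_link) {
		err = dma_resv_lock(bo->resv, &ww);
		if (err)
			return err;	/* real code backs off and retries */
	}
	ww_acquire_done(&ww);

	/* Validate moved objects and issue rebinds here (elided) */

	/* Wait on pending moves: KERNEL slot in the VM and external BOs */
	timeout = dma_resv_wait_timeout(&vm->resv, DMA_RESV_USAGE_KERNEL,
					true, MAX_SCHEDULE_TIMEOUT);
	if (timeout < 0)
		return timeout;
	list_for_each_entry(bo, &vm->external_bos, vm_link) {
		timeout = dma_resv_wait_timeout(bo->resv,
						DMA_RESV_USAGE_KERNEL,
						true, MAX_SCHEDULE_TIMEOUT);
		if (timeout < 0)
			return timeout;
	}

	/* New moves get stuck behind the exec fence in the VM */
	err = dma_resv_reserve_fences(&vm->resv, 1);
	if (err)
		return err;
	dma_resv_add_fence(&vm->resv, job_fence, DMA_RESV_USAGE_BOOKKEEP);

	/* WRITE (not BOOKKEEP) in external BOs so implicit sync sees it */
	list_for_each_entry(bo, &vm->external_bos, vm_link) {
		err = dma_resv_reserve_fences(bo->resv, 1);
		if (err)
			return err;
		dma_resv_add_fence(bo->resv, job_fence,
				   DMA_RESV_USAGE_WRITE);
	}

	return 0;
}
```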
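And the move side, under the same assumptions (reusing the hypothetical `sketch_bo`/`sketch_vm` types from the sketch above). This is a sketch of the steps listed, not the actual Xe move path: wait out everything on the BO, publish the move fence in the KERNEL slot so new execs/binds stall behind it, and flag the rebind in the VM.

```c
/*
 * Called with the BO's (or, for non-external BOs, the VM's) reservation
 * lock held, which is what makes moves, execs, and binds mutually
 * exclusive. Repeated per attached VM for shared BOs.
 */
static int move_fences(struct sketch_bo *bo, struct sketch_vm *vm,
		       struct dma_fence *move_fence)
{
	long timeout;
	int err;

	/*
	 * Wait on all fences on the BO (pending execs + binds);
	 * BOOKKEEP is the superset usage, so this waits on everything.
	 */
	timeout = dma_resv_wait_timeout(bo->resv, DMA_RESV_USAGE_BOOKKEEP,
					false, MAX_SCHEDULE_TIMEOUT);
	if (timeout < 0)
		return timeout;

	/* New execs / binds stall behind the move via the KERNEL slot */
	err = dma_resv_reserve_fences(bo->resv, 1);
	if (err)
		return err;
	dma_resv_add_fence(bo->resv, move_fence, DMA_RESV_USAGE_KERNEL);

	/* Tell the VM a rebind of this BO is needed */
	list_add_tail(&bo->rebind_link, &vm->rebind_list);

	return 0;
}
```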
Tested on DG1 with a local version of the following IGT MR: https://gitlab.freedesktop.org/drm/xe/igt-gpu-tools/-/merge_requests/6
Will not work on platforms that require CPU binds, as that support is not implemented yet, but in theory it is possible to add if needed.
Will work on a follow-up MR tomorrow with compute VM support; I have a plan which I think works. Next up is a doc MR explaining the design of XE.