- Mar 06, 2025
-
-
Matthew Brost authored
Add documentation for agreed-upon GPU SVM design principles, current status, and future plans. v4: - Address Thomas's feedback v5: - s/Current/Baseline (Thomas) v7: - Add license (CI) - Add examples for design guideline reasoning (Alistair) - Add snippet about possible livelock with concurrent GPU and CPU access (Alistair) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Acked-by:
Alistair Popple <apopple@nvidia.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-33-matthew.brost@intel.com
-
Matthew Brost authored
Used to show we can bounce memory multiple times which will happen once a real migration policy is implemented. Can be removed once migration policy is implemented. v3: - Pull some changes into the previous patch (Thomas) - Better commit message (Thomas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-32-matthew.brost@intel.com
-
Matthew Brost authored
Useful to experiment with notifier size and how it affects performance. v3: - Pull missing changes including in following patch (Thomas) v5: - Spell out power of 2 (Thomas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-31-matthew.brost@intel.com
-
Matthew Brost authored
Add some useful SVM debug logging for SVM ranges which prints the range's state. v2: - Update logging with latest structure layout v3: - Better commit message (Thomas) - New range structure (Thomas) - s/COLLECTOT/COLLECTOR (Thomas) v4: - Drop partial evict message (Thomas) - Use %p to print pointers (Thomas) v6: - Cast dma_addr to u64 (CI) - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-30-matthew.brost@intel.com
-
Matthew Brost authored
Wire xe_bo_move to GPU SVM migration via new helper xe_svm_bo_evict. v2: - Use xe_svm_bo_evict - Drop bo->range v3: - Kernel doc (Thomas) v4: - Add missing xe_bo.c code v5: - Add XE_BO_FLAG_CPU_ADDR_MIRROR flag in this patch (Thomas) - Add message on eviction failure v6: - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-29-matthew.brost@intel.com
-
Matthew Brost authored
Migration is implemented with range granularity, with VRAM backing being a VM private TTM BO (i.e., it shares the dma-resv with the VM). The lifetime of the TTM BO is limited to when the SVM range is in VRAM (i.e., when a VRAM SVM range is migrated to SRAM, the TTM BO is destroyed). The design choice for using a TTM BO for the VRAM backing store, as opposed to direct buddy allocation, is as follows: - DRM buddy allocations are not at page granularity, offering no advantage over a BO. - Unified eviction is required (SVM VRAM and TTM BOs need to be able to evict each other). - For exhaustive eviction [1], SVM VRAM allocations will almost certainly require a dma-resv. - Likely allocation size is 2M, which makes the size of the BO struct (872 bytes) acceptable per allocation (872 / 2M == 0.0004158). With this, using a TTM BO for the VRAM backing store seems to be an obvious choice, as it allows leveraging the TTM eviction code. The current migration policy is to migrate any SVM range greater than or equal to 64k once. [1] https://patchwork.freedesktop.org/series/133643/ v2: - Rebase on latest GPU SVM - Retry page fault on get pages returning mixed allocation - Use drm_gpusvm_devmem v3: - Use new BO flags - New range structure (Thomas) - Hide migration behind Kconfig - Kernel doc (Thomas) - Use check_pages_threshold v4: - Don't evict partial unmaps in garbage collector (Thomas) - Use %pe to print errors (Thomas) - Use %p to print pointers (Thomas) v5: - Use range size helper (Thomas) - Make BO external (Thomas) - Set tile to NULL for BO creation (Thomas) - Drop BO mirror flag (Thomas) - Hold BO dma-resv lock across migration (Auld, Thomas) v6: - s/drm_info/drm_dbg (Thomas) - s/migrated/skip_migrate (Himal) - Better debug message on VRAM migration failure (Himal) - Drop return BO from VRAM allocation function (Thomas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-28-matthew.brost@intel.com
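A minimal sketch of the per-range migration policy stated above (migrate ranges >= 64k, and only once); the helper name and threshold constant are assumptions for illustration, not the actual Xe code:

```c
#include <linux/sizes.h>
#include <linux/types.h>

/* Assumed name for the 64k threshold from the commit message. */
#define DEMO_SVM_MIGRATE_MIN_SIZE SZ_64K

/*
 * Illustrative only: decide whether an SVM range should be migrated to
 * VRAM. Per-allocation metadata overhead is the ~872-byte TTM BO per 2M
 * allocation noted above, i.e. 872 / 2097152 ~= 0.04%.
 */
static bool demo_svm_range_wants_vram(u64 range_start, u64 range_end,
				      bool already_migrated)
{
	u64 size = range_end - range_start;

	/* Policy from the commit: migrate ranges >= 64k, and only once. */
	return size >= DEMO_SVM_MIGRATE_MIN_SIZE && !already_migrated;
}
```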
-
Matthew Brost authored
Implement with a simple BO put which releases the device memory. v2: - Use new drm_gpusvm_devmem_ops v3: - Better commit message (Thomas) v4: - Use xe_bo_put_async (Thomas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-27-matthew.brost@intel.com
-
Matthew Brost authored
Get device pfns from BO's buddy blocks. Used in migrate_* core MM functions called in GPU SVM to migrate between device and system memory. v2: - Use new drm_gpusvm_devmem_ops v3: - Better commit message (Thomas) v5: - s/xe_mem_region/xe_vram_region (Rebase) Signed-off-by:
Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by:
Oak Zeng <oak.zeng@intel.com> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-26-matthew.brost@intel.com
-
Matthew Brost authored
Add GPUSVM device memory copy vfuncs and connect them to the migration layer. Used for device memory migration. v2: - Allow NULL device pages in xe_svm_copy - Use new drm_gpusvm_devmem_ops v3: - Prefix defines with XE_ (Thomas) - Change copy chunk size to 8M - Add a bunch of comments to xe_svm_copy to clarify behavior (Thomas) - Better commit message (Thomas) v5: - s/xe_mem_region/xe_vram_region (Rebase) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-25-matthew.brost@intel.com
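The 8M copy chunking mentioned in v3 above, sketched generically; the chunk-size macro and the copy_chunk() callback are illustrative assumptions, not the xe_svm_copy internals:

```c
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/sizes.h>
#include <linux/types.h>

/* Assumed name; the commit only says the copy chunk size was changed to 8M. */
#define DEMO_MIGRATE_CHUNK_SIZE SZ_8M

/*
 * Split a device<->system copy into 8M chunks. copy_chunk() stands in for
 * whatever issues the actual blitter copy for [first_page, first_page + nr).
 */
static int demo_copy_in_chunks(u64 npages,
			       int (*copy_chunk)(u64 first_page, u64 nr))
{
	u64 pages_per_chunk = DEMO_MIGRATE_CHUNK_SIZE >> PAGE_SHIFT;
	u64 i;

	for (i = 0; i < npages; i += pages_per_chunk) {
		u64 nr = min(pages_per_chunk, npages - i);
		int err = copy_chunk(i, nr);

		if (err)
			return err;
	}

	return 0;
}
```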
-
Add support for mapping device pages to Xe SVM by attaching drm_pagemap to a memory region, which is then linked to a GPU SVM devmem allocation. This enables GPU SVM to derive the device page address. v3: - Better commit message (Thomas) - New drm_pagemap.h location v5: - s/xe_mem_region/xe_vram_region (Rebase) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Signed-off-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-24-matthew.brost@intel.com
-
Matthew Brost authored
Add drm_gpusvm_devmem to xe_bo. Required to enable SVM migrations. Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-23-matthew.brost@intel.com
-
Matthew Brost authored
Add SVM device memory mirroring which enables device pages for migration. Enabled via CONFIG_XE_DEVMEM_MIRROR Kconfig. Kconfig option defaults to enabled. If not enabled, SVM will work sans migration and KMD memory footprint will be less. v3: - Add CONFIG_XE_DEVMEM_MIRROR v4: - Fix Kconfig (Himal) - Use %pe to print errors (Thomas) - Fix alignment issue (Checkpatch) v5: - s/xe_mem_region/xe_vram_region (Rebase) v6: - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas) - s/drm_info/drm_dbg/ Signed-off-by:
Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by:
Oak Zeng <oak.zeng@intel.com> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-22-matthew.brost@intel.com
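A hedged sketch of the Kconfig-guarded behavior described above: when CONFIG_XE_DEVMEM_MIRROR is not selected, SVM still works, just without migration. The function names here are made up for illustration:

```c
#include <linux/kconfig.h>

/* Made-up stand-in for whatever registers the device-private pagemap. */
static int demo_devmem_register(void) { return 0; }

/*
 * Illustrative only: device memory mirroring (and thus migration) is
 * enabled only when CONFIG_XE_DEVMEM_MIRROR is selected; otherwise SVM
 * runs without migration and with a smaller KMD memory footprint.
 */
static int demo_svm_init_devmem(void)
{
	if (!IS_ENABLED(CONFIG_XE_DEVMEM_MIRROR))
		return 0;

	return demo_devmem_register();
}
```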
-
Matthew Brost authored
Add functions which migrate to / from VRAM accepting a single DPA argument (VRAM) and array of dma addresses (SRAM). Used for SVM migrations. v2: - Don't unlock job_mutex in error path of xe_migrate_vram v3: - Kernel doc (Thomas) - Better commit message (Thomas) - s/dword/num_dword (Thomas) - Return error on to large of migration (Thomas) Signed-off-by:
Oak Zeng <oak.zeng@intel.com> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-21-matthew.brost@intel.com
-
Matthew Brost authored
Add the DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR device query flag, which indicates whether the device supports CPU address mirroring. The intent is for UMDs to use this query to determine if a VM can be set up with CPU address mirroring. This flag is implemented by checking if the device supports GPU faults. v7: - Only report enabled if CONFIG_DRM_GPUSVM is selected (CI) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Tejas Upadhyay <tejas.upadhyay@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-20-matthew.brost@intel.com
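A sketch of how a UMD might consume this query, assuming the usual two-pass drm_xe_device_query pattern and the struct/field names from the xe_drm.h uAPI header; exact details should be verified against that header:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include "xe_drm.h"	/* uapi header; names below assumed from it */

/* Returns true if the device supports CPU address mirroring. */
static bool demo_has_cpu_addr_mirror(int fd)
{
	struct drm_xe_device_query q = { .query = DRM_XE_DEVICE_QUERY_CONFIG };
	struct drm_xe_query_config *config;
	bool ret = false;

	/* First call with size == 0 asks the kernel how much space is needed. */
	if (ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &q) || !q.size)
		return false;

	config = calloc(1, q.size);
	if (!config)
		return false;

	q.data = (uintptr_t)config;
	if (!ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &q))
		ret = config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
		      DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR;

	free(config);
	return ret;
}
```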
-
Matthew Brost authored
Support for CPU address mirror bindings in SRAM is fully in place, so enable the implementation. v3: - s/system allocator/CPU address mirror (Thomas) v7: - Only enable uAPI if selected by GPU SVM (CI) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-19-matthew.brost@intel.com
-
Matthew Brost authored
The uAPI is designed around the use case that only mapping a BO to a malloc'd address will unbind a CPU-address mirror VMA. Therefore, allowing a CPU-address mirror VMA to be unbound when the GPU has bindings in the range being unbound does not make much sense. This behavior is not supported, as disallowing it simplifies the code. This decision can always be revisited if a use case arises. v3: - s/arrises/arises (Thomas) - s/system allocator/GPU address mirror (Thomas) - Kernel doc (Thomas) - Newline between function defs (Thomas) v5: - Kernel doc (Thomas) v6: - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-18-matthew.brost@intel.com
-
Matthew Brost authored
Add unbind to the SVM garbage collector. To facilitate this, add an unbind support function to the VM layer which unbinds an SVM range. Also teach the PT layer to understand unbinds of SVM ranges. v3: - s/INVALID_VMA/XE_INVALID_VMA (Thomas) - Kernel doc (Thomas) - New GPU SVM range structure (Thomas) - s/DRM_GPUVA_OP_USER/DRM_GPUVA_OP_DRIVER (Thomas) v4: - Use xe_vma_op_unmap_range (Himal) v5: - s/PY/PT (Thomas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-17-matthew.brost@intel.com
-
Matthew Brost authored
Add a basic SVM garbage collector which destroys an SVM range upon an MMU UNMAP event. The garbage collector runs in a worker or in the GPU fault handler and is required because locks in the reclaim path are needed and cannot be taken in the notifier. v2: - Flush garbage collector in xe_svm_close v3: - Better commit message (Thomas) - Kernel doc (Thomas) - Use list_first_entry_or_null for garbage collector loop (Thomas) - Don't add to garbage collector if VM is closed (Thomas) v4: - Use %pe to print error (Thomas) v5: - s/visable/visible (Thomas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-16-matthew.brost@intel.com
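A generic sketch of the worker-based garbage-collector pattern described above: ranges are queued from the notifier under a spinlock and destroyed later from a worker, where locks forbidden in the reclaim path may be taken. Structure and function names are illustrative, not the actual xe_svm code:

```c
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Illustrative stand-ins for the real xe_svm structures. */
struct demo_svm_range {
	struct list_head gc_link;
};

struct demo_svm {
	spinlock_t gc_lock;		/* protects gc_list, safe in the notifier */
	struct list_head gc_list;
	struct work_struct gc_work;
};

/* Called from the MMU notifier: defer destruction, take no heavy locks. */
static void demo_svm_queue_gc(struct demo_svm *svm, struct demo_svm_range *range)
{
	spin_lock(&svm->gc_lock);
	list_add_tail(&range->gc_link, &svm->gc_list);
	spin_unlock(&svm->gc_lock);
	schedule_work(&svm->gc_work);
}

/* Worker: here it is safe to take locks forbidden in the reclaim path. */
static void demo_svm_gc_work(struct work_struct *w)
{
	struct demo_svm *svm = container_of(w, struct demo_svm, gc_work);
	struct demo_svm_range *range;

	spin_lock(&svm->gc_lock);
	while ((range = list_first_entry_or_null(&svm->gc_list,
						 struct demo_svm_range,
						 gc_link))) {
		list_del_init(&range->gc_link);
		spin_unlock(&svm->gc_lock);
		/* unbind GPU PTEs and free the range here */
		spin_lock(&svm->gc_lock);
	}
	spin_unlock(&svm->gc_lock);
}
```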
-
Matthew Brost authored
Add (re)bind to the SVM page fault handler. To facilitate this, add a support function to the VM layer which (re)binds an SVM range. Also teach the PT layer to understand (re)binds of SVM ranges. v2: - Don't assert BO lock held for range binds - Use xe_svm_notifier_lock/unlock helper in xe_svm_close - Use drm_pagemap dma cursor - Take notifier lock in bind code to check range state v3: - Use new GPU SVM range structure (Thomas) - Kernel doc (Thomas) - s/DRM_GPUVA_OP_USER/DRM_GPUVA_OP_DRIVER (Thomas) v5: - Kernel doc (Thomas) v6: - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas) Signed-off-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Tested-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-15-matthew.brost@intel.com
-
Matthew Brost authored
Add DRM_GPUVA_OP_DRIVER which allows drivers to define their own gpuvm ops. Useful for driver-created ops which can be passed into the bind software pipeline. v3: - s/DRM_GPUVA_OP_USER/DRM_GPUVA_OP_DRIVER (Thomas) - Better commit message (Thomas) Cc: Danilo Krummrich <dakr@redhat.com> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-14-matthew.brost@intel.com
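A minimal sketch of a driver-private operation tagged DRM_GPUVA_OP_DRIVER; the wrapper struct is an assumption for illustration, not Xe's actual xe_vma_op:

```c
#include <drm/drm_gpuvm.h>
#include <linux/types.h>

/*
 * A driver-private operation wrapping the generic struct drm_gpuva_op,
 * tagged DRM_GPUVA_OP_DRIVER so the common GPUVM code passes it through
 * to the driver's bind pipeline untouched.
 */
struct demo_vma_op {
	struct drm_gpuva_op base;
	u64 start;
	u64 range;
};

static void demo_init_driver_op(struct demo_vma_op *op, u64 start, u64 range)
{
	op->base.op = DRM_GPUVA_OP_DRIVER;
	op->start = start;
	op->range = range;
}
```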
-
Matthew Brost authored
Add SVM range invalidation vfunc which invalidates PTEs. A new PT layer function which accepts a SVM range is added to support this. In addition, add the basic page fault handler which allocates a SVM range which is used by SVM range invalidation vfunc. v2: - Don't run invalidation if VM is closed - Cycle notifier lock in xe_svm_close - Drop xe_gt_tlb_invalidation_fence_fini v3: - Better commit message (Thomas) - Add lockdep asserts (Thomas) - Add kernel doc (Thomas) - s/change/changed (Thomas) - Use new GPU SVM range / notifier structures - Ensure PTEs are zapped / dma mappings are unmapped on VM close (Thomas) v4: - Fix macro (Checkpatch) v5: - Use range start/end helpers (Thomas) - Use notifier start/end helpers (Thomas) v6: - Use min/max helpers (Himal) - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-13-matthew.brost@intel.com
-
Matthew Brost authored
Clear the root PT entry and invalidate the entire VM's address space when closing the VM. This will prevent the GPU from accessing any of the VM's memory after closing. v2: - s/vma/vm in kernel doc (CI) - Don't nuke migration VM as this occurs at driver unload (CI) v3: - Rebase and pull into SVM series (Thomas) - Wait for pending binds (Thomas) v5: - Remove xe_gt_tlb_invalidation_fence_fini in error case (Matt Auld) - Drop local migration bool (Thomas) v7: - Add drm_dev_enter/exit protecting invalidation (CI, Matt Auld) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-12-matthew.brost@intel.com
-
Add a dma_addr res cursor which walks an array of drm_pagemap_dma_addr. Useful for SVM ranges and programming page tables. v3: - Better commit message (Thomas) - Use new drm_pagemap.h location v7: - Fix kernel doc (CI) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Signed-off-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Matthew Brost <matthew.brost@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-11-matthew.brost@intel.com
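An illustrative-only cursor walk over an array of pre-computed DMA addresses, mirroring the res-cursor style loop this entry adds for SVM ranges; the cursor type and helpers are stand-ins, not the actual xe_res_cursor/drm_pagemap API:

```c
#include <linux/mm.h>
#include <linux/types.h>

/* Stand-in cursor over an array of dma addresses, one per page. */
struct demo_dma_cursor {
	const u64 *addrs;
	u64 remaining;	/* bytes left to walk */
	u64 index;	/* current page index */
};

static void demo_cursor_first(struct demo_dma_cursor *cur,
			      const u64 *addrs, u64 size)
{
	cur->addrs = addrs;
	cur->remaining = size;
	cur->index = 0;
}

static void demo_cursor_next(struct demo_dma_cursor *cur, u64 step)
{
	cur->remaining -= step;
	cur->index += step >> PAGE_SHIFT;
}

/* Typical use while programming page tables for an SVM range: */
static void demo_program_ptes(const u64 *dma_addrs, u64 size)
{
	struct demo_dma_cursor cur;

	for (demo_cursor_first(&cur, dma_addrs, size); cur.remaining;
	     demo_cursor_next(&cur, PAGE_SIZE))
		/* write_pte(cur.addrs[cur.index]) */ ;
}
```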
-
Matthew Brost authored
Add SVM init / close / fini to faulting VMs. Minimal implementation acting as a placeholder for follow-on patches. v2: - Add close function v3: - Better commit message (Thomas) - Kernel doc (Thomas) - Update chunk array to be unsigned long (Thomas) - Use new drm_gpusvm.h header location (Thomas) - Newlines between functions in xe_svm.h (Thomas) - Call drm_gpusvm_driver_set_lock in init (Thomas) v6: - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas) v7: - Only select CONFIG_DRM_GPUSVM if DEVICE_PRIVATE (CI) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-10-matthew.brost@intel.com
-
Matthew Brost authored
Add the DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR flag, which is used to create unpopulated virtual memory areas (VMAs) without memory backing or GPU page tables. These VMAs are referred to as CPU address mirror VMAs. The idea is that upon a page fault or prefetch, the memory backing and GPU page tables will be populated. CPU address mirror VMAs only update GPUVM state; they do not have an internal page table (PT) state, nor do they have GPU mappings. It is expected that CPU address mirror VMAs will be mixed with buffer object (BO) VMAs within a single VM. In other words, system allocations and runtime allocations can be mixed within a single user-mode driver (UMD) program. Expected usage: - Bind the entire virtual address (VA) space upon program load using the DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR flag. - If a buffer object (BO) requires GPU mapping (runtime allocation), allocate a CPU address using mmap(PROT_NONE), then bind the BO to the mmapped address using existing bind IOCTLs. If a CPU map of the BO is needed, mmap it again to the same CPU address using mmap(MAP_FIXED). - If a BO no longer requires GPU mapping, munmap it from the CPU address space and then bind the mapping address with the DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR flag. - Any malloc'd or mmapped CPU address accessed by the GPU will be faulted in via the SVM implementation (system allocation). - Upon freeing any mmapped or malloc'd data, the SVM implementation will remove GPU mappings. Only a 1 to 1 mapping between the user address space and the GPU address space is supported at the moment, as that is the expected use case. The uAPI defines an interface for non 1 to 1 mappings but enforces 1 to 1; this restriction can be lifted if use cases arise for non 1 to 1 mappings. This patch essentially short-circuits the code in the existing VM bind paths to avoid populating page tables when the DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR flag is set. v3: - Call vm_bind_ioctl_ops_fini on -ENODATA - Don't allow DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR on non-faulting VMs - s/DRM_XE_VM_BIND_FLAG_SYSTEM_ALLOCATOR/DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR (Thomas) - Rework commit message for expected usage (Thomas) - Describe state of code after patch in commit message (Thomas) v4: - Fix alignment (Checkpatch) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-9-matthew.brost@intel.com
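The expected-usage flow above, sketched as user-space C. The two bind helpers are stand-ins for the real DRM_IOCTL_XE_VM_BIND calls (their names and arguments are assumptions), and the optional BO CPU mapping step is only noted in a comment:

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Stand-ins for the real bind ioctls; bodies elided, names are assumptions. */
static int demo_vm_bind_bo(int fd, uint32_t vm, uint32_t bo, void *addr, size_t size)
{ (void)fd; (void)vm; (void)bo; (void)addr; (void)size; return 0; }
static int demo_vm_bind_cpu_addr_mirror(int fd, uint32_t vm, void *addr, size_t size)
{ (void)fd; (void)vm; (void)addr; (void)size; return 0; }

/* Runtime allocation under a CPU-address-mirror VM, per the flow above. */
static void *demo_map_bo_runtime(int fd, uint32_t vm, uint32_t bo, size_t size)
{
	/* 1. Reserve CPU VA without backing so CPU and GPU addresses match. */
	void *addr = mmap(NULL, size, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return NULL;

	/* 2. Bind the BO at that address with the existing bind ioctl. */
	if (demo_vm_bind_bo(fd, vm, bo, addr, size)) {
		munmap(addr, size);
		return NULL;
	}

	/* 3. If CPU access to the BO is needed, mmap the BO again at the
	 *    same address with MAP_FIXED (via its mmap offset); not shown. */
	return addr;
}

/* Teardown: munmap, then re-bind the range as a CPU address mirror again. */
static void demo_unmap_bo_runtime(int fd, uint32_t vm, void *addr, size_t size)
{
	munmap(addr, size);
	demo_vm_bind_cpu_addr_mirror(fd, vm, addr, size);
}
```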
-
Matthew Brost authored
Xe depends on DRM_GPUSVM for SVM implementation, select it in Kconfig. v6: - Don't select DRM_GPUSVM if UML (CI) v7: - Only select DRM_GPUSVM if DEVICE_PRIVATE (CI) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-8-matthew.brost@intel.com
-
Matthew Brost authored
This patch introduces support for GPU Shared Virtual Memory (SVM) in the Direct Rendering Manager (DRM) subsystem. SVM allows for seamless sharing of memory between the CPU and GPU, enhancing performance and flexibility in GPU computing tasks. The patch adds the necessary infrastructure for SVM, including data structures and functions for managing SVM ranges and notifiers. It also provides mechanisms for allocating, deallocating, and migrating memory regions between system RAM and GPU VRAM. This is largely inspired by GPUVM. v2: - Take order into account in check pages - Clear range->pages in get pages error - Drop setting dirty or accessed bit in get pages (Vetter) - Remove mmap assert for cpu faults - Drop mmap write lock abuse (Vetter, Christian) - Decouple zdd from range (Vetter, Oak) - Add drm_gpusvm_range_evict, make it work with coherent pages - Export drm_gpusvm_evict_to_sram, only use in BO evict path (Vetter) - mmget/put in drm_gpusvm_evict_to_sram - Drop range->vram_allocation variable - Don't return in drm_gpusvm_evict_to_sram until all pages detached - Don't warn on mixing sram and device pages - Update kernel doc - Add coherent page support to get pages - Use DMA_FROM_DEVICE rather than DMA_BIDIRECTIONAL - Add struct drm_gpusvm_vram and ops (Thomas) - Update the range's seqno if the range is valid (Thomas) - Remove the is_unmapped check before hmm_range_fault (Thomas) - Use drm_pagemap (Thomas) - Drop kfree_mapping (Thomas) - dma map pages under notifier lock (Thomas) - Remove ctx.prefault - Remove ctx.mmap_locked - Add ctx.check_pages - s/vram/devmem (Thomas) v3: - Fix memory leak drm_gpusvm_range_get_pages - Only migrate pages with same zdd on CPU fault - Loop over all VMAs in drm_gpusvm_range_evict - Make GPUSVM a drm level module - GPL or MIT license - Update main kernel doc (Thomas) - Prefer foo() vs foo for functions in kernel doc (Thomas) - Prefer functions over macros (Thomas) - Use unsigned long vs u64 for addresses (Thomas) - Use standard interval_tree (Thomas) - s/drm_gpusvm_migration_put_page/drm_gpusvm_migration_unlock_put_page (Thomas) - Drop err_out label in drm_gpusvm_range_find_or_insert (Thomas) - Fix kernel doc in drm_gpusvm_range_free_pages (Thomas) - Newlines between function defs in header file (Thomas) - Drop shall language in driver vfunc kernel doc (Thomas) - Move some static inlines from header to C file (Thomas) - Don't allocate pages under page lock in drm_gpusvm_migrate_populate_ram_pfn (Thomas) - Change check_pages to a threshold v4: - Fix NULL ptr deref in drm_gpusvm_migrate_populate_ram_pfn (Thomas, Himal) - Fix check pages threshold - Check for range being unmapped under notifier lock in get pages (Testing) - Fix characters per line - Drop WRITE_ONCE for zdd->devmem_allocation assignment (Thomas) - Use completion for devmem_allocation->detached (Thomas) - Make GPU SVM depend on ZONE_DEVICE (CI) - Use hmm_range_fault for eviction (Thomas) - Drop zdd worker (Thomas) v5: - Select Kconfig deps (CI) - Set device to NULL in __drm_gpusvm_migrate_to_ram (Matt Auld, G.G.)
- Drop Thomas's SoB (Thomas) - Add drm_gpusvm_range_start/end/size helpers (Thomas) - Add drm_gpusvm_notifier_start/end/size helpers (Thomas) - Absorb drm_pagemap name changes (Thomas) - Fix driver lockdep assert (Thomas) - Move driver lockdep assert to static function (Thomas) - Assert mmap lock held in drm_gpusvm_migrate_to_devmem (Thomas) - Do not retry forever on eviction (Thomas) v6: - Fix drm_gpusvm_get_devmem_page alignment (Checkpatch) - Modify Kconfig (CI) - Compile out lockdep asserts (CI) v7: - Add kernel doc for flags fields (CI, Auld) Cc: Simona Vetter <simona.vetter@ffwll.ch> Cc: Dave Airlie <airlied@redhat.com> Cc: Christian König <christian.koenig@amd.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: <dri-devel@lists.freedesktop.org> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-7-matthew.brost@intel.com
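A conceptual sketch of the GPU-fault flow this infrastructure enables (find-or-insert a range, fault in and dma-map pages, bind under the notifier lock, retry on invalidation). All names are illustrative pseudocode, not the real drm_gpusvm signatures:

```c
#include <linux/err.h>

struct demo_gpusvm;
struct demo_range;

/* Stand-ins for the real range lookup / get-pages / bind helpers. */
struct demo_range *demo_range_find_or_insert(struct demo_gpusvm *svm,
					     unsigned long addr);
int demo_range_get_pages(struct demo_gpusvm *svm, struct demo_range *range);
int demo_range_bind(struct demo_gpusvm *svm, struct demo_range *range);

static int demo_gpu_fault(struct demo_gpusvm *svm, unsigned long fault_addr)
{
	struct demo_range *range;
	int err;

retry:
	/* Look up (or create) the SVM range covering the faulting address. */
	range = demo_range_find_or_insert(svm, fault_addr);
	if (IS_ERR(range))
		return PTR_ERR(range);

	/* Fault in CPU pages (hmm_range_fault) and dma-map them. */
	err = demo_range_get_pages(svm, range);
	if (err == -EAGAIN)
		goto retry;	/* invalidated by an MMU notifier, try again */
	if (err)
		return err;

	/* Program GPU page tables under the notifier lock; recheck validity. */
	err = demo_range_bind(svm, range);
	if (err == -EAGAIN)
		goto retry;

	return err;
}
```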
-
Introduce xe_bo_put_async to put a bo where the context is such that the bo destructor can't run due to lockdep problems or atomic context. If the put is the final put, freeing will be done from a work item. v5: - Kernel doc for xe_bo_put_async (Thomas) v7: - Fix kernel doc (CI) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Signed-off-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> Tested-by:
Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Reviewed-by:
Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-6-matthew.brost@intel.com
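A generic sketch of the asynchronous-final-put pattern introduced above: when the last reference may be dropped from atomic context or under awkward locks, the destructor is deferred to a work item. Names are illustrative, not the actual xe_bo code:

```c
#include <linux/kref.h>
#include <linux/llist.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_obj {
	struct kref ref;
	struct llist_node freed;
};

static void demo_free_work_fn(struct work_struct *work);
static LLIST_HEAD(demo_freed_list);
static DECLARE_WORK(demo_free_work, demo_free_work_fn);

static void demo_obj_release(struct kref *ref)
{
	struct demo_obj *obj = container_of(ref, struct demo_obj, ref);

	/* Defer the real destructor to process context. */
	llist_add(&obj->freed, &demo_freed_list);
	schedule_work(&demo_free_work);
}

/* Safe to call from atomic context or with awkward locks held. */
static void demo_obj_put_async(struct demo_obj *obj)
{
	kref_put(&obj->ref, demo_obj_release);
}

static void demo_free_work_fn(struct work_struct *work)
{
	struct llist_node *node = llist_del_all(&demo_freed_list);
	struct demo_obj *obj, *next;

	llist_for_each_entry_safe(obj, next, node, freed)
		kfree(obj);	/* actual teardown (dma-resv, TTM, ...) goes here */
}
```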
-
Introduce drm_pagemap ops to map and unmap dma to VRAM resources. In the local memory case it's a matter of merely providing an offset into the device's physical address. For future p2p the map and unmap functions may encode as needed. Similar to how dma-buf works, let the memory provider (drm_pagemap) provide the mapping functionality. v3: - Move to drm level include v4: - Fix kernel doc (G.G.) v5: - s/map_dma/device_map (Thomas) - s/unmap_dma/device_unmap (Thomas) v7: - Fix kernel doc (CI, Auld) - Drop P2P define (Thomas) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Signed-off-by:
Thomas Hellström <thomas.hellstrom@linux.intel.com> Reviewed-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-5-matthew.brost@intel.com
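An illustrative local-memory flavor of the device_map idea above, where the address handed to the GPU is simply an offset into the device's VRAM; the structure and field names are assumptions, not the actual drm_pagemap ops:

```c
#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/types.h>

/* Stand-in for a VRAM region descriptor. */
struct demo_vram_region {
	resource_size_t io_start;	/* CPU-visible start of VRAM */
	u64 dpa_base;			/* device physical address base */
};

/*
 * Local-memory case: translate the CPU physical address of a device page
 * into a device physical address by re-basing it into the VRAM aperture.
 * For future p2p, a map callback could instead encode whatever the peer
 * device needs.
 */
static u64 demo_device_map(struct demo_vram_region *vr, struct page *page)
{
	u64 cpu_pa = PFN_PHYS(page_to_pfn(page));

	return cpu_pa - vr->io_start + vr->dpa_base;
}
```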
-
Matthew Brost authored
Avoid multiple CPU page faults to the same device page racing by trying to lock the page in do_swap_page before taking an extra reference to the page. This prevents scenarios where multiple CPU page faults each take an extra reference to a device page, which could abort migration in folio_migrate_mapping. With the device page being locked in do_swap_page, the migrate_vma_* functions need to be updated to avoid locking the fault_page argument. Prior to this change, a livelock scenario could occur in Xe's (Intel GPU DRM driver) SVM implementation if enough threads faulted the same device page. v3: - Put page after unlocking page (Alistair) - Warn on splitting a THP which is the fault page (Alistair) - Warn on dst page == fault page (Alistair) v6: - Add more verbose comment around THP (Alistair) v7: - Fix migrate_device_finalize alignment (Checkpatch) Cc: Alistair Popple <apopple@nvidia.com> Cc: Philip Yang <Philip.Yang@amd.com> Cc: Felix Kuehling <felix.kuehling@amd.com> Cc: Christian König <christian.koenig@amd.com> Cc: Andrew Morton <akpm@linux-foundation.org> Suggested-by:
Simona Vetter <simona.vetter@ffwll.ch> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Alistair Popple <apopple@nvidia.com> Tested-by:
Alistair Popple <apopple@nvidia.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-4-matthew.brost@intel.com
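A generic illustration of the ordering this change establishes for device-private pages: take the page lock before the extra reference so concurrent faults serialize on the lock rather than each pinning the page and aborting folio_migrate_mapping(). This is not the actual do_swap_page() hunk:

```c
#include <linux/mm.h>
#include <linux/pagemap.h>

/* Try to take ownership of a device page before referencing it. */
static struct page *demo_grab_device_page(struct page *page)
{
	if (!trylock_page(page))
		return NULL;	/* another fault/migration owns it, back off */

	get_page(page);		/* safe: reference taken while locked */
	return page;
}

static void demo_release_device_page(struct page *page)
{
	unlock_page(page);
	put_page(page);		/* put after unlocking, per the v3 note above */
}
```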
-
Matthew Brost authored
Add migrate_device_pfns which prepares an array of pre-populated device pages for migration. This is needed for eviction of a known set of non-contiguous device pages to CPU pages, which is a common case for SVM in DRM drivers using TTM. v2: - s/migrate_device_vma_range/migrate_device_prepopulated_range - Drop extra mmu invalidation (Vetter) v3: - s/migrate_device_prepopulated_range/migrate_device_pfns (Alistair) - Use helper to lock device pages (Alistair) - Update commit message with why this is required (Alistair) Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Alistair Popple <apopple@nvidia.com> Reviewed-by:
Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-3-matthew.brost@intel.com
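A hedged sketch of evicting a known, non-contiguous set of device pages using migrate_device_pfns(); the destination allocation and the VRAM-to-SRAM copy are stubbed out, and exact signatures should be checked against include/linux/migrate.h:

```c
#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/slab.h>

static int demo_evict_device_pages(struct page **dev_pages, unsigned long npages)
{
	unsigned long *src, *dst;
	unsigned long i;
	int err = 0;

	src = kcalloc(npages, sizeof(*src), GFP_KERNEL);
	dst = kcalloc(npages, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst) {
		err = -ENOMEM;
		goto out;
	}

	/* Encode the pre-populated device pages as migration source PFNs. */
	for (i = 0; i < npages; i++)
		src[i] = migrate_pfn(page_to_pfn(dev_pages[i]));

	/* Lock/isolate the known device pages for migration. */
	err = migrate_device_pfns(src, npages);
	if (err)
		goto out;

	/* Allocate destination system pages and copy VRAM->SRAM here
	 * (omitted), filling dst[], then complete the migration. */
	migrate_device_pages(src, dst, npages);
	migrate_device_finalize(src, dst, npages);

out:
	kfree(src);
	kfree(dst);
	return err;
}
```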
-
Matthew Brost authored
TTM doesn't support fair eviction via WW locking; this is mitigated by using retry loops in exec and the preempt rebind worker. Extend this retry loop to BO allocation. Once TTM supports fair eviction, this patch can be reverted. v4: - Keep line break (Stuart) Signed-off-by:
Matthew Brost <matthew.brost@intel.com> Reviewed-by:
Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Reviewed-by:
Stuart Summers <stuart.summers@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250306012657.3505757-2-matthew.brost@intel.com
-
Francois Dugast authored
Use fault injection infrastructure to allow specific functions to be configured over debugfs for failing during the execution of xe_exec_queue_create_ioctl(). xe_exec_queue_destroy_ioctl() and xe_exec_queue_get_property_ioctl() are not considered as there is no unwinding code to test with fault injection. This allows more thorough testing from user space by going through code paths for error handling and unwinding which cannot be reached by simply injecting errors in IOCTL arguments. This can help increase code robustness. The corresponding IGT series is: https://patchwork.freedesktop.org/series/144138/ Reviewed-by:
Sai Teja Pottumuttu <sai.teja.pottumuttu@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250305150659.46276-1-francois.dugast@intel.com Signed-off-by:
Francois Dugast <francois.dugast@intel.com>
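The stock kernel fault-injection pattern from <linux/fault-inject.h>, shown generically; this is not necessarily the exact hook the patch adds, only the mechanism it builds on. The debugfs knobs (probability, times, interval) created for the fault_attr are what user space toggles to drive the error/unwinding paths:

```c
#include <linux/errno.h>
#include <linux/fault-inject.h>

/* fault_attr configurable over debugfs; name is illustrative. */
static DECLARE_FAULT_ATTR(demo_fail_exec_queue_create);

static int demo_exec_queue_create(void)
{
	/* When the debugfs configuration says so, fail before doing work so
	 * the caller's unwinding path is exercised. */
	if (should_fail(&demo_fail_exec_queue_create, 1))
		return -ENOMEM;

	/* normal creation path ... */
	return 0;
}
```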
-
- Mar 05, 2025
-
-
Now that we have all IPs being described via struct xe_ip, where release information (version and name) is represented in a single struct type, we can extract duplicated logic from handle_pre_gmdid() and handle_gmdid() and apply it in the body of xe_info_init(). With this change, there is no point in keeping handle_pre_gmdid() anymore, so we just remove it and inline the assignment of {graphics,media}_ip. Signed-off-by:
Gustavo Sousa <gustavo.sousa@intel.com> Reviewed-by:
Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250221-xe-unify-ip-descriptors-v2-7-5bc0c6d0c13f@intel.com Signed-off-by:
Matt Roper <matthew.d.roper@intel.com>
-
Now that pre-GMDID IPs are described via struct xe_ip, it is possible to re-use the feature descriptors that have an exact match with ones from previous releases. Do that. Signed-off-by:
Gustavo Sousa <gustavo.sousa@intel.com> Reviewed-by:
Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250221-xe-unify-ip-descriptors-v2-6-5bc0c6d0c13f@intel.com Signed-off-by:
Matt Roper <matthew.d.roper@intel.com>
-
We have now a struct xe_ip to fully describe an IP, but we are only using that for GMDID-based IPs. For pre-GMDID IPs, we still describe release info (version and name) via feature descriptors (struct xe_{graphics,media}_desc). Let's convert those to use struct xe_ip. With this, we have a uniform way of describing IPs in the xe driver instead of having different approaches based on whether the IPs use GMDIDs or not. A nice side-effect of this change is that now we have an easy way to lookup, in the source code, mappings between versions, names and features for all supported IPs. v2: - Store pointers to struct xe_ip instead xe_{graphics,media}_desc in struct xe_device_desc. Signed-off-by:
Gustavo Sousa <gustavo.sousa@intel.com> Reviewed-by:
Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250221-xe-unify-ip-descriptors-v2-5-5bc0c6d0c13f@intel.com Signed-off-by:
Matt Roper <matthew.d.roper@intel.com>
-
We will soon update the code so that pre-GMDID IPs are also defined with struct xe_ip. Since we will need to refer to them in instances of struct xe_device_desc, let's move up the current instances of xe_ip (GMDID-based) so that all IP descriptors are kept together. Signed-off-by:
Gustavo Sousa <gustavo.sousa@intel.com> Reviewed-by:
Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250221-xe-unify-ip-descriptors-v2-4-5bc0c6d0c13f@intel.com Signed-off-by:
Matt Roper <matthew.d.roper@intel.com>
-
If we pay closer attention to struct gmdid_map, we will realize that it is actually fully describing an IP (graphics or media): it contains "release info" and "features info". The former comprises the fields "ver" and "name"; the latter is done via member "ip", which is a pointer to either struct xe_graphics_desc or xe_media_desc, and can be reused across releases. As such, let's: * Rename struct gmdid_map to xe_ip. * Rename the field ver to verx100 to be consistent with the naming of members using that encoding of the version. * Rename the field "ip" to "desc" to make it clear that it is a pointer to a descriptor of features for the IP, since it will not contain *all* info (i.e. features + release info). We still have release info mapped into struct xe_{graphics,media}_desc for pre-GMDID IPs. In an upcoming change we will handle that so that we make a clear separation between "release info" and "feature info". Reviewed-by:
Matt Roper <matthew.d.roper@intel.com> Signed-off-by:
Gustavo Sousa <gustavo.sousa@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250221-xe-unify-ip-descriptors-v2-3-5bc0c6d0c13f@intel.com Signed-off-by:
Matt Roper <matthew.d.roper@intel.com>
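The shape of the renamed descriptor, as described above (gmdid_map becomes xe_ip, ver becomes verx100, ip becomes desc); a sketch based on the commit message with illustrative values, not a verbatim copy of the driver header:

```c
#include <linux/types.h>

struct xe_ip {
	unsigned int verx100;	/* IP version * 100, e.g. 1270 for 12.70 */
	const char *name;	/* release name tied to that version */
	const void *desc;	/* feature descriptor: xe_graphics_desc or xe_media_desc */
};

/* Example table entry; the version/name pairing here is illustrative. */
static const struct xe_ip demo_graphics_ips[] = {
	{ 1270, "Xe_LPG", /* &graphics_xelpg */ NULL },
};
```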
-
The name of an IP is a function of its version. As such, given an IP version, it should be straightforward to identify the name of that IP release. With the current code, we keep that mapping clear for pre-GMDID IPs, but ambiguous for GMDID-based ones. That causes two types of inconveniences: 1. The end user, who might not have all the necessary mapping at hand, might be confused when seeing different possible IP names in the dmesg log. 2. It forces a developer who is not familiar with the "IP version" to "Release name" mapping to resort to looking at the specs to see what version maps to what. While the specs should be the authority on the mapping, we should make our lives easier by reflecting that mapping in the source code. Thus, since the IP name is tied to the version, let's remove the ambiguity by using a "name" field in struct gmdid_map instead of accumulating names in the descriptor instances. This does result in the IP name being defined in different structs (gmdid_map, xe_graphics_desc, xe_media_desc), but that will be resolved in upcoming changes. A side-effect of this change is that media_xe2 exactly matches media_xelpmp now, so we just re-use the latter. v2: - Drop media_xe2 and re-use media_xelpmp. (Matt) Reviewed-by:
Matt Roper <matthew.d.roper@intel.com> Signed-off-by:
Gustavo Sousa <gustavo.sousa@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250221-xe-unify-ip-descriptors-v2-2-5bc0c6d0c13f@intel.com Signed-off-by:
Matt Roper <matthew.d.roper@intel.com>
-
In an upcoming change, we will handle setting graphics_name and media_name differently for GMDID-based IPs. As such, let's make both handle_pre_gmdid() and handle_gmdid() functions responsible for initializing those fields. While now we have both doing essentially the same thing with respect to those fields, handle_pre_gmdid() will diverge soon. Reviewed-by:
Matt Roper <matthew.d.roper@intel.com> Signed-off-by:
Gustavo Sousa <gustavo.sousa@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250221-xe-unify-ip-descriptors-v2-1-5bc0c6d0c13f@intel.com Signed-off-by:
Matt Roper <matthew.d.roper@intel.com>
-