igt@gem_exec_schedule@pi-distinct-iova-render - dmesg-warn - WARNING: possible circular locking dependency detected; task is trying to acquire lock at rcu_barrier while already holding lock at unmap_vmas
<4> [219.284314] ======================================================
<4> [219.284316] WARNING: possible circular locking dependency detected
<4> [219.284318] 5.5.0-rc1-g87c99602f2be-drmtip_414+ #1 Tainted: G U
<4> [219.284320] ------------------------------------------------------
<4> [219.284322] gem_exec_schedu/1306 is trying to acquire lock:
<4> [219.284324] ffffffff9c449c98 (rcu_state.barrier_mutex){+.+.}, at: rcu_barrier+0x23/0x190
<4> [219.284348]
but task is already holding lock:
<4> [219.284350] ffffffff9c466e00 (mmu_notifier_invalidate_range_start){+.+.}, at: unmap_vmas+0x0/0x150
<4> [219.284354]
which lock already depends on the new lock.
<4> [219.284357]
the existing dependency chain (in reverse order) is:
<4> [219.284359]
-> #3 (mmu_notifier_invalidate_range_start){+.+.}:
<4> [219.284363] __mmu_notifier_register+0x58/0x200
<4> [219.284423] i915_gem_userptr_init__mmu_notifier+0x21a/0x2d0 [i915]
<4> [219.284453] i915_gem_userptr_ioctl+0x1d6/0x3c0 [i915]
<4> [219.284456] drm_ioctl_kernel+0xa7/0xf0
<4> [219.284458] drm_ioctl+0x2e1/0x390
<4> [219.284461] do_vfs_ioctl+0x9c/0x730
<4> [219.284463] ksys_ioctl+0x35/0x60
<4> [219.284464] __x64_sys_ioctl+0x11/0x20
<4> [219.284467] do_syscall_64+0x4f/0x240
<4> [219.284469] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [219.284471]
-> #2 (fs_reclaim){+.+.}:
<4> [219.284475] fs_reclaim_acquire.part.117+0x24/0x30
<4> [219.284477] kmem_cache_alloc_trace+0x2a/0x2c0
<4> [219.284479] intel_cpuc_prepare+0x37/0x1a0
<4> [219.284482] cpuhp_invoke_callback+0x9b/0x9d0
<4> [219.284484] _cpu_up+0xa2/0x140
<4> [219.284486] do_cpu_up+0x61/0xa0
<4> [219.284489] smp_init+0x57/0x96
<4> [219.284491] kernel_init_freeable+0xac/0x1c7
<4> [219.284494] kernel_init+0x5/0x100
<4> [219.284496] ret_from_fork+0x3a/0x50
<4> [219.284497]
-> #1 (cpu_hotplug_lock.rw_sem){++++}:
<4> [219.284500] cpus_read_lock+0x34/0xd0
<4> [219.284502] rcu_barrier+0xaa/0x190
<4> [219.284505] kmem_cache_destroy+0x51/0x2b0
<4> [219.284508] intel_iommu_init+0x1009/0x12d1
<4> [219.284510] pci_iommu_init+0x11/0x3a
<4> [219.284512] do_one_initcall+0x58/0x2ff
<4> [219.284514] kernel_init_freeable+0x137/0x1c7
<4> [219.284516] kernel_init+0x5/0x100
<4> [219.284518] ret_from_fork+0x3a/0x50
<4> [219.284519]
-> #0 (rcu_state.barrier_mutex){+.+.}:
<4> [219.284523] __lock_acquire+0x1328/0x15d0
<4> [219.284525] lock_acquire+0xa7/0x1c0
<4> [219.284527] __mutex_lock+0x9a/0x9c0
<4> [219.284529] rcu_barrier+0x23/0x190
<4> [219.284558] i915_gem_object_unbind+0x277/0x430 [i915]
<4> [219.284586] userptr_mn_invalidate_range_start+0xdd/0x190 [i915]
<4> [219.284589] __mmu_notifier_invalidate_range_start+0x148/0x250
<4> [219.284591] unmap_vmas+0x13e/0x150
<4> [219.284593] unmap_region+0xa3/0x100
<4> [219.284595] __do_munmap+0x26d/0x4c0
<4> [219.284597] __vm_munmap+0x66/0xc0
<4> [219.284599] __x64_sys_munmap+0x12/0x20
<4> [219.284601] do_syscall_64+0x4f/0x240
<4> [219.284603] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [219.284605]
other info that might help us debug this:
<4> [219.284607] Chain exists of:
rcu_state.barrier_mutex --> fs_reclaim --> mmu_notifier_invalidate_range_start
<4> [219.284611] Possible unsafe locking scenario:
<4> [219.284613]        CPU0                    CPU1
<4> [219.284615]        ----                    ----
<4> [219.284616]   lock(mmu_notifier_invalidate_range_start);
<4> [219.284618]                                lock(fs_reclaim);
<4> [219.284620]                                lock(mmu_notifier_invalidate_range_start);
<4> [219.284622]   lock(rcu_state.barrier_mutex);
<4> [219.284624]
*** DEADLOCK ***