igt@gem_tiled_fence_blits@normal - dmesg-fail - WARNING: possible circular locking dependency detected; task is trying to acquire &mapping->i_mmap_rwsem at unmap_mapping_pages while already holding &vm->mutex taken at i915_vma_pin_fence
https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_432/fi-blb-e6850/igt@gem_tiled_fence_blits@normal.html
<6> [74.056168] Console: switching to colour dummy device 80x25
<6> [74.057088] [IGT] gem_tiled_fence_blits: executing
<6> [74.062968] [IGT] gem_tiled_fence_blits: starting subtest normal
<6> [74.063167] gem_tiled_fence (1021): drop_caches: 4
<4> [74.574924]
<4> [74.574931] ======================================================
<4> [74.574935] WARNING: possible circular locking dependency detected
<4> [74.574940] 5.5.0-gc53ff44eb14e-drmtip_432+ #1 Tainted: G U
<4> [74.574944] ------------------------------------------------------
<4> [74.574948] gem_tiled_fence/1021 is trying to acquire lock:
<4> [74.574952] ffff98fb606c3928 (&mapping->i_mmap_rwsem){++++}, at: unmap_mapping_pages+0x48/0x130
<4> [74.574964]
but task is already holding lock:
<4> [74.574968] ffff98fb66f59570 (&vm->mutex){+.+.}, at: i915_vma_pin_fence+0x83/0x200 [i915]
<4> [74.575075]
which lock already depends on the new lock.
<4> [74.575080]
the existing dependency chain (in reverse order) is:
<4> [74.575084]
-> #2 (&vm->mutex){+.+.}:
<4> [74.575091] __mutex_lock+0x9a/0x9c0
<4> [74.575152] i915_vma_unbind+0xe0/0x110 [i915]
<4> [74.575213] i915_gem_object_unbind+0x1dc/0x400 [i915]
<4> [74.575273] userptr_mn_invalidate_range_start+0xdd/0x190 [i915]
<4> [74.575278] __mmu_notifier_invalidate_range_start+0x148/0x250
<4> [74.575283] unmap_vmas+0x13e/0x150
<4> [74.575287] unmap_region+0xa3/0x100
<4> [74.575291] __do_munmap+0x26d/0x4c0
<4> [74.575295] __vm_munmap+0x66/0xc0
<4> [74.575299] __x64_sys_munmap+0x12/0x20
<4> [74.575303] do_syscall_64+0x4f/0x240
<4> [74.575308] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [74.575311]
-> #1 (mmu_notifier_invalidate_range_start){+.+.}:
<4> [74.575319] page_mkclean_one+0xe4/0x220
<4> [74.575322] rmap_walk_file+0x18e/0x3a0
<4> [74.575326] page_mkclean+0xb4/0xe0
<4> [74.575331] clear_page_dirty_for_io+0xd4/0x390
<4> [74.575337] mpage_submit_page+0x1a/0x70
<4> [74.575340] mpage_process_page_bufs+0xe7/0x110
<4> [74.575345] mpage_prepare_extent_to_map+0x223/0x370
<4> [74.575349] ext4_writepages+0x5ba/0x12b0
<4> [74.575353] do_writepages+0x46/0xe0
<4> [74.575357] __filemap_fdatawrite_range+0xc6/0x100
<4> [74.575361] file_write_and_wait_range+0x3c/0x90
<4> [74.575366] ext4_sync_file+0x1a4/0x540
<4> [74.575370] do_fsync+0x33/0x60
<4> [74.575374] __x64_sys_fsync+0xb/0x10
<4> [74.575377] do_syscall_64+0x4f/0x240
<4> [74.575382] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [74.575385]
-> #0 (&mapping->i_mmap_rwsem){++++}:
<4> [74.575392] __lock_acquire+0x1328/0x15d0
<4> [74.575396] lock_acquire+0xa7/0x1c0
<4> [74.575400] down_write+0x33/0x70
<4> [74.575404] unmap_mapping_pages+0x48/0x130
<4> [74.575465] i915_vma_revoke_mmap.part.39+0x66/0x190 [i915]
<4> [74.575525] fence_update+0xfd/0x2d0 [i915]
<4> [74.575585] __i915_vma_pin_fence+0x122/0x360 [i915]
<4> [74.575646] i915_vma_pin_fence+0x96/0x200 [i915]
<4> [74.575704] vm_fault_gtt+0x2de/0x990 [i915]
<4> [74.575708] __do_fault+0x45/0xf8
<4> [74.575712] __handle_mm_fault+0xab4/0x1090
<4> [74.575716] handle_mm_fault+0x154/0x350
<4> [74.575720] __do_page_fault+0x2e6/0x510
<4> [74.575724] page_fault+0x34/0x40
<4> [74.575728]
other info that might help us debug this:
<4> [74.575733] Chain exists of:
&mapping->i_mmap_rwsem --> mmu_notifier_invalidate_range_start --> &vm->mutex
<4> [74.575742] Possible unsafe locking scenario:
<4> [74.575747] CPU0 CPU1
<4> [74.575750] ---- ----
<4> [74.575753] lock(&vm->mutex);
<4> [74.575757] lock(mmu_notifier_invalidate_range_start);
<4> [74.575762] lock(&vm->mutex);
<4> [74.575767] lock(&mapping->i_mmap_rwsem);
<4> [74.575771]
*** DEADLOCK ***