igt@kms_vblank@pipe-b-wait-forked|igt@i915_pm_rpm@modeset-lpsp-stress - incomplete - possible circular locking dependency: trying to acquire (console_sem).lock at down_trylock while already holding &rq->__lock at raw_spin_rq_lock_nested
https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_6332/shard-iclb4/pstore9-1634658127_Panic_1.txt
<3>[ 480.114392] blk_update_request: I/O error, dev sda, sector 438917720 op 0x0:(READ) flags 0x80700 phys_seg 28 prio class 0
<3>[ 480.114475] sd 2:0:0:0: rejecting I/O to offline device
<3>[ 480.114576] blk_update_request: I/O error, dev sda, sector 232457816 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
<3>[ 480.115348] blk_update_request: I/O error, dev sda, sector 435296800 op 0x0:(READ) flags 0x80700 phys_seg 16 prio class 0
<3>[ 480.115372] Aborting journal on device sda2-8.
<6>[ 480.115549] ata3: EH complete
<3>[ 480.115975] blk_update_request: I/O error, dev sda, sector 438917816 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
<2>[ 480.116010] EXT4-fs error (device sda2): ext4_journal_check_start:83: comm dmesg: Detected aborted journal
<3>[ 480.116152] blk_update_request: I/O error, dev sda, sector 435296800 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
<3>[ 480.117096] blk_update_request: I/O error, dev sda, sector 438917816 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
<3>[ 480.117397] blk_update_request: I/O error, dev sda, sector 231999488 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
<6>[ 480.117641] ata3.00: detaching (SCSI 2:0:0:0)
<3>[ 480.117736] blk_update_request: I/O error, dev sda, sector 435296800 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
<3>[ 480.117820] blk_update_request: I/O error, dev sda, sector 231999488 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
<3>[ 480.117905] blk_update_request: I/O error, dev sda, sector 435314128 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
<3>[ 480.118655] Buffer I/O error on dev sda2, logical block 28868608, lost sync page write
<3>[ 480.119154] JBD2: Error -5 detected when updating journal superblock for sda2-8.
<2>[ 480.119573] EXT4-fs error (device sda2): ext4_journal_check_start:83: comm igt_runner: Detected aborted journal
<3>[ 480.121005] Buffer I/O error on dev sda2, logical block 0, lost sync page write
<3>[ 480.121279] Buffer I/O error on dev sda2, logical block 0, lost sync page write
<3>[ 480.121292] EXT4-fs (sda2): I/O error while writing superblock
<3>[ 480.121298] EXT4-fs (sda2): previous I/O error to superblock detected
<3>[ 480.121329] Buffer I/O error on dev sda2, logical block 0, lost sync page write
<0>[ 480.121433] Kernel panic - not syncing: EXT4-fs (device sda2): panic forced after error
<3>[ 480.121443] EXT4-fs (sda2): I/O error while writing superblock
<3>[ 480.121450] EXT4-fs (sda2): I/O error while writing superblock
<4>[ 480.121672] ------------[ cut here ]------------
<4>[ 480.121682]
<4>[ 480.121682] ======================================================
<4>[ 480.121683] WARNING: possible circular locking dependency detected
<4>[ 480.121683] 5.15.0-rc6-CI-CI_DRM_10759+ #1 Not tainted
<4>[ 480.121684] ------------------------------------------------------
<4>[ 480.121685] kms_vblank/11462 is trying to acquire lock:
<4>[ 480.121685] ffffffff82734b18 ((console_sem).lock){-.-.}-{2:2}, at: down_trylock+0xa/0x30
<4>[ 480.121692]
<4>[ 480.121692] but task is already holding lock:
<4>[ 480.121692] ffff88849fab7598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x1b/0x30
<4>[ 480.121696]
<4>[ 480.121696] which lock already depends on the new lock.
<4>[ 480.121696]
<4>[ 480.121696]
<4>[ 480.121696] the existing dependency chain (in reverse order) is:
<4>[ 480.121696]
<4>[ 480.121696] -> #2 (&rq->__lock){-.-.}-{2:2}:
<4>[ 480.121698] lock_acquire+0xd3/0x310
<4>[ 480.121699] _raw_spin_lock_nested+0x2d/0x40
<4>[ 480.121702] raw_spin_rq_lock_nested+0x1b/0x30
<4>[ 480.121704] task_fork_fair+0x43/0x170
<4>[ 480.121706] sched_fork+0x138/0x240
<4>[ 480.121707] copy_process+0x7e7/0x1ee0
<4>[ 480.121708] kernel_clone+0x98/0x740
<4>[ 480.121710] kernel_thread+0x50/0x70
<4>[ 480.121711] rest_init+0x1d/0x260
<4>[ 480.121712] start_kernel+0x669/0x690
<4>[ 480.121715] secondary_startup_64_no_verify+0xb0/0xbb
<4>[ 480.121717]
<4>[ 480.121717] -> #1 (&p->pi_lock){-.-.}-{2:2}:
<4>[ 480.121718] lock_acquire+0xd3/0x310
<4>[ 480.121720] _raw_spin_lock_irqsave+0x33/0x50
<4>[ 480.121720] try_to_wake_up+0x6b/0x630
<4>[ 480.121721] up+0x3b/0x50
<4>[ 480.121722] __up_console_sem+0x58/0x70
<4>[ 480.121724] console_unlock+0x328/0x530
<4>[ 480.121726] vprintk_emit+0x227/0x330
<4>[ 480.121727] _printk+0x53/0x6a
<4>[ 480.121729] do_exit.cold.46+0x31/0xdd
<4>[ 480.121731] do_group_exit+0x42/0xb0
<4>[ 480.121734] __x64_sys_exit_group+0xf/0x10
<4>[ 480.121735] do_syscall_64+0x37/0xb0
<4>[ 480.121736] entry_SYSCALL_64_after_hwframe+0x44/0xae
<4>[ 480.121737]
<4>[ 480.121737] -> #0 ((console_sem).lock){-.-.}-{2:2}:
<4>[ 480.121739] validate_chain+0xb37/0x1e70
<4>[ 480.121740] __lock_acquire+0x5a1/0xb70
<4>[ 480.121742] lock_acquire+0xd3/0x310
<4>[ 480.121743] _raw_spin_lock_irqsave+0x33/0x50
<4>[ 480.121744] down_trylock+0xa/0x30
<4>[ 480.121745] __down_trylock_console_sem+0x25/0xa0
<4>[ 480.121746] console_trylock+0xe/0x60
<4>[ 480.121748] vprintk_emit+0x121/0x330
<4>[ 480.121749] _printk+0x53/0x6a
<4>[ 480.121751] __warn_printk+0x41/0x82
<4>[ 480.121752] native_smp_send_reschedule+0x2f/0x40
<4>[ 480.121754] check_preempt_curr+0x42/0x70
<4>[ 480.121756] ttwu_do_wakeup+0x14/0x230
<4>[ 480.121758] try_to_wake_up+0x1f1/0x630
<4>[ 480.121759] wake_page_function+0x60/0xa0
<4>[ 480.121760] __wake_up_common+0x81/0x1a0
<4>[ 480.121762] wake_up_page_bit+0xa7/0x130
<4>[ 480.121763] __read_end_io+0x132/0x210
<4>[ 480.121765] blk_update_request+0x254/0x420
<4>[ 480.121767] blk_mq_end_request+0x15/0x110
<4>[ 480.121769] blk_mq_dispatch_rq_list+0x3dc/0x810
<4>[ 480.121770] __blk_mq_do_dispatch_sched+0x15f/0x310
<4>[ 480.121772] __blk_mq_sched_dispatch_requests+0xef/0x140
<4>[ 480.121774] blk_mq_sched_dispatch_requests+0x2b/0x50
<4>[ 480.121775] __blk_mq_run_hw_queue+0x44/0x90
<4>[ 480.121777] __blk_mq_delay_run_hw_queue+0x197/0x1e0
<4>[ 480.121778] blk_mq_run_hw_queue+0x81/0xe0
<4>[ 480.121779] blk_mq_sched_insert_requests+0xbd/0x2a0
<4>[ 480.121781] blk_mq_flush_plug_list+0x134/0x270
<4>[ 480.121782] blk_flush_plug_list+0xd2/0x100
<4>[ 480.121783] blk_finish_plug+0x1c/0x30
<4>[ 480.121784] read_pages+0x179/0x390
<4>[ 480.121786] page_cache_ra_unbounded+0x152/0x270
<4>[ 480.121787] filemap_fault+0x550/0x920
<4>[ 480.121788] __do_fault+0x2c/0x100
<4>[ 480.121790] __handle_mm_fault+0xbf6/0x1350
<4>[ 480.121791] handle_mm_fault+0x150/0x3f0
<4>[ 480.121792] do_user_addr_fault+0x1e9/0x670
<4>[ 480.121794] exc_page_fault+0x62/0x230
<4>[ 480.121795] asm_exc_page_fault+0x1e/0x30
<4>[ 480.121797]
<4>[ 480.121797] other info that might help us debug this:
<4>[ 480.121797]
<4>[ 480.121797] Chain exists of:
<4>[ 480.121797] (console_sem).lock --> &p->pi_lock --> &rq->__lock
<4>[ 480.121797]
<4>[ 480.121798] Possible unsafe locking scenario:
<4>[ 480.121798]
<4>[ 480.121799] CPU0 CPU1
<4>[ 480.121799] ---- ----
<4>[ 480.121799] lock(&rq->__lock);
<4>[ 480.121800] lock(&p->pi_lock);
<4>[ 480.121800] lock(&rq->__lock);
<4>[ 480.121801] lock((console_sem).lock);
<4>[ 480.121802]
<4>[ 480.121802] *** DEADLOCK ***
Edited by LAKSHMINARAYANA VUDUM