  3. Dec 01, 2023
    • locking/mutex: Document that mutex_unlock() is non-atomic · a51749ab
      Jann Horn authored and Ingo Molnar committed
      
      I have seen several cases of attempts to use mutex_unlock() to release an
      object such that the object can then be freed by another task.
      
      This is not safe because mutex_unlock(), in the
      MUTEX_FLAG_WAITERS && !MUTEX_FLAG_HANDOFF case, accesses the mutex
      structure after having marked it as unlocked; so mutex_unlock() requires
      its caller to ensure that the mutex stays alive until mutex_unlock()
      returns.
      
      If MUTEX_FLAG_WAITERS is set and there are real waiters, those waiters
      have to keep the mutex alive, but we could have a spurious
      MUTEX_FLAG_WAITERS left if an interruptible/killable waiter bailed
      between the points where __mutex_unlock_slowpath() did the cmpxchg
      reading the flags and where it acquired the wait_lock.
      
      ( With spinlocks, that kind of code pattern is allowed and, from what I
        remember, used in several places in the kernel. )
      
      Document this; such a semantic difference between mutexes and spinlocks
      is fairly unintuitive.
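      The safe alternative is to let something other than the mutex itself
      control the object's lifetime, e.g. a reference count. A minimal
      userspace sketch of that pattern (pthreads stand in for kernel
      mutexes; all names here are illustrative, not kernel API):

      ```c
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdlib.h>

      /* Illustrative object whose lifetime must outlast its own mutex.
       * Freeing it "because we could take the lock" would be the unsafe
       * pattern described above; instead the last reference holder frees
       * it, so no task can free the object while another task is still
       * inside unlock. */
      struct object {
      	pthread_mutex_t lock;
      	atomic_int refcount;
      };

      /* Returns 1 when this put dropped the final reference and freed obj. */
      static int object_put(struct object *obj)
      {
      	if (atomic_fetch_sub(&obj->refcount, 1) == 1) {
      		pthread_mutex_destroy(&obj->lock);
      		free(obj);
      		return 1;
      	}
      	return 0;
      }

      static struct object *object_new(void)
      {
      	struct object *obj = malloc(sizeof(*obj));

      	pthread_mutex_init(&obj->lock, NULL);
      	atomic_init(&obj->refcount, 2);	/* one reference per user */
      	return obj;
      }
      ```

      Each user does lock/unlock and only then object_put(); the free can
      therefore only happen after every unlock has fully returned.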
      
      [ mingo: Made the changelog a bit more assertive, refined the comments. ]
      
      Signed-off-by: Jann Horn <jannh@google.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lore.kernel.org/r/20231130204817.2031407-1-jannh@google.com
      a51749ab
  13. Aug 21, 2023
    • lockdep: fix static memory detection even more · 0a6b58c5
      Helge Deller authored
      On the parisc architecture, lockdep reports this warning for all static
      objects in the __initdata section (e.g. "setup_done" in devtmpfs,
      "kthreadd_done" in init/main.c):
      
      	INFO: trying to register non-static key.
      
      The warning itself is wrong: those objects are indeed in the __initdata
      section, but on parisc that section lies outside the range from _stext
      to _end, which is why the static_obj() function returns the wrong
      answer.
      
      While fixing this issue, I noticed that the whole existing check can be
      simplified a lot. Instead of checking against the _stext and _end
      symbols (which include code areas too), just check the .data and .bss
      segments, since we are checking a data object. This can be done with
      the existing is_kernel_core_data() macro.
      
      In addition, objects in the __initdata section can be checked with
      init_section_contains(), and is_kernel_rodata() allows keys to be in
      the _ro_after_init section.
      
      This partly reverts and simplifies commit bac59d18 ("x86/setup: Fix static
      memory detection").
      
      Link: https://lkml.kernel.org/r/ZNqrLRaOi/3wPAdp@p100
      
      
      Fixes: bac59d18 ("x86/setup: Fix static memory detection")
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      0a6b58c5
  14. Aug 14, 2023
    • torture: Add lock_torture writer_fifo module parameter · 5d248bb3
      Dietmar Eggemann authored
      
      This commit adds a module parameter that causes the locktorture writer
      to run at real-time priority.
      
      To use it:

      	insmod /lib/modules/torture.ko random_shuffle=1
      	insmod /lib/modules/locktorture.ko torture_type=mutex_lock rt_boost=1 rt_boost_factor=50 nested_locks=3 writer_fifo=1

      (writer_fifo=1 is the new parameter.)
      
      A predecessor to this patch has been helpful to uncover issues with the
      proxy-execution series.
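      On the kernel side the boost is done with sched_set_fifo(); a
      userspace sketch of giving a thread FIFO real-time priority looks
      like this (needs CAP_SYS_NICE, so EPERM is reported rather than
      treated as fatal; illustrative, not locktorture code):

      ```c
      #include <pthread.h>
      #include <sched.h>

      /* Boost a thread to SCHED_FIFO, roughly what writer_fifo=1 asks the
       * locktorture writer kthreads to do via sched_set_fifo().
       * Returns 0 on success or an errno value (commonly EPERM when the
       * caller lacks CAP_SYS_NICE). */
      static int boost_to_fifo(pthread_t thread, int prio)
      {
      	struct sched_param sp = { .sched_priority = prio };

      	return pthread_setschedparam(thread, SCHED_FIFO, &sp);
      }
      ```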
      
      [ paulmck: Remove locktorture-specific code from kernel/torture.c. ]
      
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Valentin Schneider <vschneid@redhat.com>
      Cc: kernel-team@android.com
      Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      [jstultz: Include header change to build, reword commit message]
      Signed-off-by: John Stultz <jstultz@google.com>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      5d248bb3
  16. Jul 17, 2023
    • locking/rtmutex: Fix task->pi_waiters integrity · f7853c34
      Peter Zijlstra authored
      
      Henry reported that rt_mutex_adjust_prio_check() has an ordering
      problem and puts the lie to the comment in [7]. Sharing the sort key
      between lock->waiters and owner->pi_waiters *does* create problems,
      since unlike what the comment claims, holding [L] is insufficient.
      
      Notably, consider:
      
      	A
            /   \
           M1   M2
           |     |
           B     C
      
      That is, task A owns both M1 and M2; B and C block on them. In this
      case a concurrent chain walk (B and C) will modify their respective
      sort keys in [7] while holding M1->wait_lock and M2->wait_lock. So
      holding [L] is meaningless; they are different Ls.
      
      This then gives rise to a race condition between [7] and [11], where
      the requeue of pi_waiters will observe an inconsistent tree order.
      
      	B				C
      
        (holds M1->wait_lock,		(holds M2->wait_lock,
         holds B->pi_lock)		 holds A->pi_lock)
      
        [7]
        waiter_update_prio();
        ...
        [8]
        raw_spin_unlock(B->pi_lock);
        ...
        [10]
        raw_spin_lock(A->pi_lock);
      
      				[11]
      				rt_mutex_enqueue_pi();
      				// observes inconsistent A->pi_waiters
      				// tree order
      
      Fixing this means either extending the range of the owner lock from
      [10-13] to [6-13], with the immediate problem that this means [6-8]
      hold both blocked and owner locks, or duplicating the sort key.
      
      Since the locking in chain walk is horrible enough without having to
      consider pi_lock nesting rules, duplicate the sort key instead.
      
      By giving each tree their own sort key, the above race becomes
      harmless, if C sees B at the old location, then B will correct things
      (if they need correcting) when it walks up the chain and reaches A.
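      The essence of the fix can be sketched with a toy sorted queue: each
      queued node carries its own snapshot of the sort key, taken at
      enqueue time, so a concurrent change of the "live" priority can never
      make an already-queued node disagree with its position; whoever
      changes the priority requeues with a fresh snapshot. A simplified
      illustration, not the rtmutex rb-tree code:

      ```c
      #include <stddef.h>

      /* Toy waiter queue, sorted ascending by a per-queue snapshot of the
       * key, mirroring the idea of giving lock->waiters and
       * owner->pi_waiters each their own copy of the sort key. */
      struct waiter {
      	int live_prio;		/* may change concurrently      */
      	int tree_prio;		/* snapshot used while enqueued */
      	struct waiter *next;
      };

      static void enqueue(struct waiter **head, struct waiter *w)
      {
      	struct waiter **pp = head;

      	w->tree_prio = w->live_prio;	/* take the snapshot */
      	while (*pp && (*pp)->tree_prio <= w->tree_prio)
      		pp = &(*pp)->next;
      	w->next = *pp;
      	*pp = w;
      }

      static void dequeue(struct waiter **head, struct waiter *w)
      {
      	struct waiter **pp = head;

      	while (*pp != w)
      		pp = &(*pp)->next;
      	*pp = w->next;
      }

      /* The updater's job: refresh the snapshot by requeueing. */
      static void requeue(struct waiter **head, struct waiter *w)
      {
      	dequeue(head, w);
      	enqueue(head, w);
      }

      /* Invariant: the queue is always sorted by its own snapshots, even
       * if live_prio changed underneath it. */
      static int is_sorted(struct waiter *head)
      {
      	for (struct waiter *w = head; w && w->next; w = w->next)
      		if (w->tree_prio > w->next->tree_prio)
      			return 0;
      	return 1;
      }
      ```

      A stale snapshot only means a node sits at its old position until the
      updater requeues it, which is exactly the "B will correct things when
      it walks up the chain" behaviour.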
      
      Fixes: fb00aca4 ("rtmutex: Turn the plist into an rb-tree")
      Reported-by: Henry Wu <triangletrap12@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Henry Wu <triangletrap12@gmail.com>
      Link: https://lkml.kernel.org/r/20230707161052.GF2883469%40hirez.programming.kicks-ass.net
      f7853c34
  17. Jun 10, 2023
    • locking: add lockevent_read() prototype · ff713881
      Arnd Bergmann authored
      lockevent_read() has a __weak definition and the only caller in
      kernel/locking/lock_events.c, plus a strong definition in qspinlock_stat.h
      that overrides it, but no other declaration.  This causes a W=1 warning:
      
      kernel/locking/lock_events.c:61:16: error: no previous prototype for 'lockevent_read' [-Werror=missing-prototypes]
      
      Add a shared prototype to avoid the warning.
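      The underlying -Wmissing-prototypes pattern, reduced to a userspace
      sketch (illustrative names; the real prototype goes into a header
      visible to both kernel/locking/lock_events.c and qspinlock_stat.h):

      ```c
      /* Shared prototype, visible to both the __weak default and any
       * strong override; without it, each definition triggers
       * -Wmissing-prototypes under W=1 builds. */
      int lockevent_read_demo(void);

      /* Default __weak definition; a strong definition elsewhere in the
       * program would silently replace it at link time. */
      __attribute__((weak)) int lockevent_read_demo(void)
      {
      	return 0;
      }
      ```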
      
      Link: https://lkml.kernel.org/r/20230517131102.934196-7-arnd@kernel.org
      
      
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <longman@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      ff713881
  18. May 19, 2023
    • lockdep: Add lock_set_cmp_fn() annotation · eb1cfd09
      Kent Overstreet authored
      
      This implements a new interface to lockdep, lock_set_cmp_fn(), for
      defining a custom ordering when taking multiple locks of the same
      class.
      
      This is an alternative to subclasses, but cannot fully replace them,
      since subclasses allow lock hierarchies with other classes
      intertwined, while this relies on pure class nesting.
      
      Specifically, if A is our nesting class then:
      
        A/0 <- B <- A/1
      
      Would be a valid lock order with subclasses (each subclass really is a
      full class from the validation PoV) but not with this annotation,
      which requires all nesting to be consecutive.
      
      Example output:
      
      | ============================================
      | WARNING: possible recursive locking detected
      | 6.2.0-rc8-00003-g7d81e591ca6a-dirty #15 Not tainted
      | --------------------------------------------
      | kworker/14:3/938 is trying to acquire lock:
      | ffff8880143218c8 (&b->lock l=0 0:2803368){++++}-{3:3}, at: bch_btree_node_get.part.0+0x81/0x2b0
      |
      | but task is already holding lock:
      | ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0
      | and the lock comparison function returns 1:
      |
      | other info that might help us debug this:
      |  Possible unsafe locking scenario:
      |
      |        CPU0
      |        ----
      |   lock(&b->lock l=1 1048575:9223372036854775807);
      |   lock(&b->lock l=0 0:2803368);
      |
      |  *** DEADLOCK ***
      |
      |  May be due to missing lock nesting notation
      |
      | 3 locks held by kworker/14:3/938:
      |  #0: ffff888005ea9d38 ((wq_completion)bcache){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
      |  #1: ffff8880098c3e70 ((work_completion)(&cl->work)#3){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
      |  #2: ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0
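      The rule the annotation enforces can be modelled as a comparator over
      a per-lock key: within one class, acquiring "next" while holding
      "held" is legal only when the comparator says held < next. A toy
      model of that check (cf. the l=... keys in the splat above), not the
      lockdep API itself:

      ```c
      /* Toy model of a lock class with a custom ordering: each lock
       * carries a key, and a cmp function defines the only legal
       * acquisition order within the class. */
      struct keyed_lock {
      	long key;
      };

      static int keyed_lock_cmp(const struct keyed_lock *a,
      			  const struct keyed_lock *b)
      {
      	return (a->key > b->key) - (a->key < b->key);
      }

      /* Returns 0 if acquiring 'next' while holding 'held' respects the
       * class ordering, -1 if lockdep-style checking would complain
       * (including the recursive same-key case). */
      static int check_acquire(const struct keyed_lock *held,
      			 const struct keyed_lock *next)
      {
      	return keyed_lock_cmp(held, next) < 0 ? 0 : -1;
      }
      ```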
      
      [peterz: extended changelog]
      Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20230509195847.1745548-1-kent.overstreet@linux.dev
      eb1cfd09