  1. Dec 21, 2021
    • binder: fix async_free_space accounting for empty parcels · cfd0d84b
      Todd Kjos authored
      
      In 4.13, commit 74310e06 ("android: binder: Move buffer out of area shared with user space")
      fixed a kernel structure visibility issue. As part of that patch,
      sizeof(void *) was used as the buffer size for 0-length data payloads so
      the driver could detect abusive clients sending 0-length asynchronous
      transactions to a server by enforcing limits on async_free_size.
      
      Unfortunately, on the "free" side, the accounting of async_free_space
      did not add the sizeof(void *) back. The result was that up to 8 bytes of
      async_free_space were leaked on every async transaction of 8 bytes or
      less. These small transactions are uncommon, so this accounting issue
      has gone undetected for several years.
      
      The fix is to use "buffer_size" (the allocated buffer size) instead of
      "size" (the logical buffer size) when updating the async_free_space
      during the free operation. These are the same except for this
      corner case of asynchronous transactions with payloads < 8 bytes.
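
      For illustration, the fixed accounting on the free path looks roughly
      like this (a sketch of the relevant lines in binder_alloc.c, not the
      literal diff; names approximate the upstream driver):

        /* buffer_size is what was actually reserved for the buffer,
         * including the sizeof(void *) minimum charged for an empty parcel.
         */
        size_t buffer_size = binder_alloc_buffer_size(alloc, buffer);

        if (buffer->async_transaction)
                /* credit back the allocated size, not the logical "size" */
                alloc->free_async_space +=
                        buffer_size + sizeof(struct binder_buffer);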
      
      Fixes: 74310e06 ("android: binder: Move buffer out of area shared with user space")
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Cc: stable@vger.kernel.org # 4.14+
      Link: https://lore.kernel.org/r/20211220190150.2107077-1-tkjos@google.com
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. Dec 09, 2021
  3. Nov 17, 2021
  4. Oct 19, 2021
  5. Oct 15, 2021
  6. Oct 14, 2021
  7. Sep 14, 2021
    • binder: make sure fd closes complete · 5fdb55c1
      Todd Kjos authored
      
      During BC_FREE_BUFFER processing, the BINDER_TYPE_FDA object
      cleanup may close 1 or more fds. The close operations are
      completed using the task work mechanism -- which means the thread
      needs to return to userspace or the file object may never be
      dereferenced -- which can lead to hung processes.
      
      Force the binder thread back to userspace if an fd is closed during
      BC_FREE_BUFFER handling.
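
      For context, the close is deferred through the task_work API, roughly
      as sketched below (an approximation of the driver's deferred-fd-close
      helper, not the exact upstream code). The queued fput() only runs when
      the thread next returns to userspace, which is why that return has to
      be forced:

        struct binder_task_work_cb {
                struct callback_head twork;
                struct file *file;
        };

        static void binder_do_fd_close(struct callback_head *twork)
        {
                struct binder_task_work_cb *twcb =
                        container_of(twork, struct binder_task_work_cb, twork);

                fput(twcb->file);
                kfree(twcb);
        }

        /* ...during BINDER_TYPE_FDA cleanup, for each fd being closed... */
        init_task_work(&twcb->twork, binder_do_fd_close);
        task_work_add(current, &twcb->twork, TWA_RESUME);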
      
      Fixes: 80cd7956 ("binder: fix use-after-free due to ksys_close() during fdget()")
      Cc: stable <stable@vger.kernel.org>
      Reviewed-by: Martijn Coenen <maco@android.com>
      Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Link: https://lore.kernel.org/r/20210830195146.587206-1-tkjos@google.com
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • binder: fix freeze race · b564171a
      Li Li authored
      
      Currently cgroup freezer is used to freeze the application threads, and
      BINDER_FREEZE is used to freeze the corresponding binder interface.
      There's already a mechanism in ioctl(BINDER_FREEZE) to wait for any
      existing transactions to drain out before actually freezing the binder
      interface.
      
      But freezing an app requires 2 steps, freezing the binder interface with
      ioctl(BINDER_FREEZE) and then freezing the application main threads with
      cgroupfs. This is not an atomic operation. The following race issue
      might happen.
      
      1) Binder interface is frozen by ioctl(BINDER_FREEZE);
      2) Main thread A initiates a new sync binder transaction to process B;
      3) Main thread A is frozen by "echo 1 > cgroup.freeze";
      4) The response from process B reaches the frozen thread, which will
      unexpectedly fail.
      
      This patch provides a mechanism to check if there's any new pending
      transaction happening between ioctl(BINDER_FREEZE) and freezing the
      main thread. If there's any, the main thread freezing operation can
      be rolled back to finish the pending transaction.
      
      Furthermore, the response might reach the binder driver before the
      rollback actually happens. That will still cause a failed transaction.
      
      As the other process doesn't wait for a reply to its response, the
      response transaction failure can be fixed by treating the response
      transaction like a oneway/async one, allowing it to reach the frozen
      thread. It will then be consumed when the thread gets unfrozen later.
      
      NOTE: This patch reuses the existing definition of struct
      binder_frozen_status_info but expands the bit assignments of __u32
      member sync_recv.
      
      To ensure backward compatibility, bit 0 of sync_recv still indicates
      there's an outstanding sync binder transaction. This patch adds new
      information to bit 1 of sync_recv, indicating the binder transaction
      happens exactly when there's a race.
      
      If an existing userspace app runs on a new kernel, a sync binder call
      will set bit 0 of sync_recv, so ioctl(BINDER_GET_FROZEN_INFO) still
      returns the expected value (true). The app simply doesn't check bit 1,
      so it has no way to tell whether there was a race. This behavior is
      aligned with what happens on an old kernel, which doesn't set bit 1 at
      all.
      
      A new userspace app can 1) check bit 0 to know whether a sync binder
      transaction happened while being frozen - same as before; and 2) check
      bit 1 to know whether that sync binder transaction happened exactly
      when there was a race - new information for the rollback decision.
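
      A userspace sketch of the resulting freeze sequence (assumes the
      BINDER_FREEZE/BINDER_GET_FROZEN_INFO ioctls and their structs from
      <linux/android/binder.h>; freeze_cgroup()/unfreeze_cgroup() are
      hypothetical helpers wrapping cgroup.freeze):

        #include <sys/ioctl.h>
        #include <linux/android/binder.h>

        void freeze_cgroup(__u32 pid);    /* hypothetical: echo 1 > cgroup.freeze */
        void unfreeze_cgroup(__u32 pid);  /* hypothetical: echo 0 > cgroup.freeze */

        static int freeze_app(int binder_fd, __u32 pid)
        {
                struct binder_freeze_info freeze = {
                        .pid = pid,
                        .enable = 1,
                        .timeout_ms = 100,   /* let outstanding txns drain */
                };
                struct binder_frozen_status_info info = { .pid = pid };

                if (ioctl(binder_fd, BINDER_FREEZE, &freeze) < 0)
                        return -1;

                freeze_cgroup(pid);

                if (ioctl(binder_fd, BINDER_GET_FROZEN_INFO, &info) < 0)
                        return -1;

                if (info.sync_recv & 2) {
                        /* bit 1 set: a sync txn raced with the freeze */
                        unfreeze_cgroup(pid);
                        freeze.enable = 0;
                        ioctl(binder_fd, BINDER_FREEZE, &freeze);
                        return -1;
                }
                return 0;    /* bit 0 alone keeps its old meaning */
        }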
      
      Fixes: 432ff1e9 ("binder: BINDER_FREEZE ioctl")
      Acked-by: Todd Kjos <tkjos@google.com>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Li Li <dualli@google.com>
      Test: stress test with apps being frozen and initiating binder calls at
      the same time, confirmed the pending transactions succeeded.
      Link: https://lore.kernel.org/r/20210910164210.2282716-2-dualli@chromium.org
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  8. Aug 03, 2021
  9. Jul 21, 2021
  10. May 13, 2021
  11. Apr 10, 2021
  12. Mar 24, 2021
  13. Mar 22, 2021
    • lsm: separate security_task_getsecid() into subjective and objective variants · 4ebd7651
      Paul Moore authored
      
      Of the three LSMs that implement the security_task_getsecid() LSM
      hook, all three LSMs provide the task's objective security
      credentials.  This turns out to be unfortunate as most of the hook's
      callers seem to expect the task's subjective credentials, although
      a small handful of callers do correctly expect the objective
      credentials.
      
      This patch is the first step towards fixing the problem: it splits
      the existing security_task_getsecid() hook into two variants, one
      for the subjective creds, one for the objective creds.
      
        void security_task_getsecid_subj(struct task_struct *p,
      				   u32 *secid);
        void security_task_getsecid_obj(struct task_struct *p,
      				  u32 *secid);
      
      While this patch does fix all of the callers to use the correct
      variant, in order to keep this patch focused on the callers and to
      ease review, the LSMs continue to use the same implementation for
      both hooks.  The net effect is that this patch should not change
      the behavior of the kernel in any way, it will be up to the latter
      LSM specific patches in this series to change the hook
      implementations and return the correct credentials.
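
      An illustrative caller fragment (not from this patch): a permission
      check made on behalf of the acting task uses the subjective variant,
      while labeling or auditing a target task uses the objective one:

        u32 secid;

        /* what is current allowed to do? -> subjective creds */
        security_task_getsecid_subj(current, &secid);

        /* recording/auditing another task as an object -> objective creds */
        security_task_getsecid_obj(task, &secid);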
      
      Acked-by: Mimi Zohar <zohar@linux.ibm.com> (IMA)
      Acked-by: Casey Schaufler <casey@schaufler-ca.com>
      Reviewed-by: Richard Guy Briggs <rgb@redhat.com>
      Signed-off-by: Paul Moore <paul@paul-moore.com>
  14. Jan 24, 2021
    • fs: make helpers idmap mount aware · 549c7297
      Christian Brauner authored
      Extend some inode methods with an additional user namespace argument. A
      filesystem that is aware of idmapped mounts will receive the user
      namespace the mount has been marked with. This can be used for
      additional permission checking and also to enable filesystems to
      translate between uids and gids if they need to. We have implemented all
      relevant helpers in earlier patches.
      
      As requested, we simply extend the existing inode methods instead of
      introducing new ones. This is a little more code churn, but it's mostly
      mechanical and doesn't leave us with additional inode methods.
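
      For example, ->create() and ->mkdir() end up looking roughly like this
      (sketch of the resulting inode_operations entries; see
      include/linux/fs.h in the corresponding tree):

        struct inode_operations {
                ...
                int (*create)(struct user_namespace *mnt_userns, struct inode *dir,
                              struct dentry *dentry, umode_t mode, bool excl);
                int (*mkdir)(struct user_namespace *mnt_userns, struct inode *dir,
                             struct dentry *dentry, umode_t mode);
                ...
        };

      For a mount that is not idmapped, mnt_userns is simply the initial user
      namespace, so filesystems that ignore the argument keep their old
      behavior.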
      
      Link: https://lore.kernel.org/r/20210121131959.646623-25-christian.brauner@ubuntu.com
      
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: linux-fsdevel@vger.kernel.org
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
  15. Dec 10, 2020
  16. Dec 09, 2020
  17. Nov 11, 2020
  18. Nov 09, 2020
  19. Oct 17, 2020
    • task_work: cleanup notification modes · 91989c70
      Jens Axboe authored
      
      A previous commit changed the notification mode from true/false to an
      int, allowing notify-no, notify-yes, or signal-notify. This was
      backwards compatible in the sense that any existing true/false user
      would translate to either 0 (no notification sent) or 1, the latter of
      which mapped to TWA_RESUME. TWA_SIGNAL was assigned a value of 2.
      
      Clean this up properly, and define a proper enum for the notification
      mode. Now we have:
      
      - TWA_NONE. This is 0, same as before the original change, meaning no
        notification requested.
      - TWA_RESUME. This is 1, same as before the original change, meaning
        that we use TIF_NOTIFY_RESUME.
      - TWA_SIGNAL. This uses TIF_SIGPENDING/JOBCTL_TASK_WORK for the
        notification.
      
      Clean up all the callers, switching their 0/1/false/true to using the
      appropriate TWA_* mode for notifications.
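
      After the cleanup the interface looks roughly as follows (sketch; see
      include/linux/task_work.h):

        enum task_work_notify_mode {
                TWA_NONE,       /* 0: queue the work, send no notification */
                TWA_RESUME,     /* 1: notify via TIF_NOTIFY_RESUME */
                TWA_SIGNAL,     /* 2: notify via TIF_SIGPENDING/JOBCTL_TASK_WORK */
        };

        int task_work_add(struct task_struct *task, struct callback_head *twork,
                          enum task_work_notify_mode notify);

        /* a caller that used to pass "true" now passes TWA_RESUME */
        task_work_add(current, &twcb->twork, TWA_RESUME);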
      
      Fixes: e91b4816 ("task_work: teach task_work_add() to do signal_wake_up()")
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  20. Oct 10, 2020
    • binder: fix UAF when releasing todo list · f3277cbf
      Todd Kjos authored
      
      When releasing a thread todo list when tearing down
      a binder_proc, the following race was possible which
      could result in a use-after-free:
      
      1.  Thread 1: enter binder_release_work from binder_thread_release
      2.  Thread 2: binder_update_ref_for_handle() -> binder_dec_node_ilocked()
      3.  Thread 2: dec nodeA --> 0 (will free node)
      4.  Thread 1: ACQ inner_proc_lock
      5.  Thread 2: block on inner_proc_lock
      6.  Thread 1: dequeue work (BINDER_WORK_NODE, part of nodeA)
      7.  Thread 1: REL inner_proc_lock
      8.  Thread 2: ACQ inner_proc_lock
      9.  Thread 2: todo list cleanup, but work was already dequeued
      10. Thread 2: free node
      11. Thread 2: REL inner_proc_lock
      12. Thread 1: deref w->type (UAF)
      
      The problem was that for a BINDER_WORK_NODE, the binder_work element
      must not be accessed after releasing the inner_proc_lock while
      processing the todo list elements since another thread might be
      handling a deref on the node containing the binder_work element
      leading to the node being freed.
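
      The resulting loop therefore reads everything it needs from the work
      item before dropping the lock, roughly (sketch of binder_release_work()
      after the fix, not the literal diff):

        while (1) {
                struct binder_work *w;
                enum binder_work_type wtype;

                binder_inner_proc_lock(proc);
                w = binder_dequeue_work_head_ilocked(list);
                wtype = w ? w->type : 0;   /* read the type while still locked */
                binder_inner_proc_unlock(proc);
                if (!w)
                        return;

                switch (wtype) {
                case BINDER_WORK_NODE:
                        /* the node may already be freed; only wtype is used */
                        break;
                /* ... other work types ... */
                }
        }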
      
      Signed-off-by: Todd Kjos <tkjos@google.com>
      Link: https://lore.kernel.org/r/20201009232455.4054810-1-tkjos@google.com
      
      
      Cc: <stable@vger.kernel.org> # 4.14, 4.19, 5.4, 5.8
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  21. Oct 05, 2020
  22. Sep 16, 2020
  23. Sep 03, 2020
  24. Jul 29, 2020
  25. Jul 23, 2020