1. 07 Dec, 2018 7 commits
    • staging: erofs: simplify `z_erofs_vle_submit_all' · 7146a4f0
      Gao Xiang authored
      Previously, there were too many hacks such as `__FSIO_1',
      `lstgrp_noio' and `lstgrp_io' scattered around `z_erofs_vle_submit_all'.

      Revisit the whole process by properly introducing a jobqueue to
      represent each type of queued workgroup, and hide all of the
      complexity behind independent, separate functions.

      After this patch, two independent jobqueues exist if the managed
      cache is enabled, or one jobqueue if it is disabled (a minimal
      sketch of the split follows this entry).
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
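      A minimal sketch of the jobqueue split, using hypothetical names and
      illustrative counts (the real erofs structures differ): each queued
      workgroup lands on exactly one queue, and the submission path iterates
      over the queues instead of open-coding the old `lstgrp_noio'/`lstgrp_io'
      handling inline.

      #include <stdio.h>

      /* Hypothetical queue identifiers: JQ_BYPASS only exists when the
       * managed cache is enabled (its workgroups need no extra read I/O). */
      enum { JQ_BYPASS, JQ_SUBMIT, NR_JOBQUEUES };

      struct jobqueue {
              const char *name;
              int nr_workgroups;      /* illustrative counter only */
      };

      int main(void)
      {
              struct jobqueue q[NR_JOBQUEUES] = {
                      [JQ_BYPASS] = { "bypass (already cached)", 3 },
                      [JQ_SUBMIT] = { "submit (needs read I/O)", 5 },
              };

              /* the submission path simply walks every queue in turn */
              for (int i = 0; i < NR_JOBQUEUES; i++)
                      printf("%-26s %d workgroup(s)\n",
                             q[i].name, q[i].nr_workgroups);
              return 0;
      }

      With the managed cache disabled, only the JQ_SUBMIT queue would remain,
      which matches the single-jobqueue case described in the message.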
    • staging: erofs: redefine where `owned_workgrp_t' points · 6afd227c
      Gao Xiang authored
      By design, workgroups are queued in the form of a linked list.

      Previously, `owned_workgrp_t' pointed to the next
      `z_erofs_vle_workgroup', which isn't flexible enough to simplify
      `z_erofs_vle_submit_all'.

      Fix it by pointing to the next `owned_workgrp_t' instead and using
      container_of to get the corresponding `z_erofs_vle_workgroup'
      (a standalone sketch of the pattern follows this entry).
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
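      The redefinition above leans on the kernel's container_of() pattern:
      link the embedded member fields together and recover the enclosing
      structure from a member pointer. Below is a minimal userspace sketch
      with simplified, hypothetical types (not the erofs definitions).

      #include <stddef.h>
      #include <stdio.h>

      #define container_of(ptr, type, member) \
              ((type *)((char *)(ptr) - offsetof(type, member)))

      /* simplified stand-in: the queue links the embedded `next' members */
      struct owned_link { struct owned_link *next; };

      struct workgroup {
              int index;
              struct owned_link owned;        /* embedded link field */
      };

      int main(void)
      {
              struct workgroup a = { .index = 1 }, b = { .index = 2 };

              a.owned.next = &b.owned;        /* queue order: a -> b */

              /* recover the enclosing workgroup from the link pointer */
              struct workgroup *next =
                      container_of(a.owned.next, struct workgroup, owned);

              printf("next workgroup index = %d\n", next->index); /* 2 */
              return 0;
      }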
    • staging: erofs: refine compressed pages preload flow · 92e6efd5
      Gao Xiang authored
      Currently, there are two kinds of compressed pages in erofs:
        1) file pages used for in-place decompression and
        2) managed pages used for cached decompression.
      Both are stored in grp->compressed_pages[].

      Managed pages could already exist or could be preloaded in this
      round; in detail, the following cases exist:
        1) already valid (loaded in some previous round);
        2) PAGE_UNALLOCATED, to be allocated at the time of submission;
        3) just found in the managed cache, with an extra page reference.
      Currently, 1) and 3) can be distinguished by lock_page and checking
      PG_private, which is guaranteed by the reclaim path, but it's better
      to double-check by using an extra tag.

      This patch reworks the preload flow by introducing such a tag via a
      tagged pointer (a sketch of the trick follows this entry); many
      #ifdefs are removed as well.
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
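      A tagged pointer folds a small tag into the spare low bits of an
      aligned pointer, so the extra per-page state travels inside
      compressed_pages[] itself. The sketch below only mirrors the spirit of
      the trick with hypothetical helpers; the actual erofs tagptr API
      differs in detail.

      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Aligned pointers have zero low bits, so a 1-bit tag fits in bit 0. */
      static void *tag_fold(void *ptr, unsigned int tag)
      {
              assert(((uintptr_t)ptr & 1) == 0 && tag <= 1);
              return (void *)((uintptr_t)ptr | tag);
      }

      static void *tag_ptr(void *tagged)
      {
              return (void *)((uintptr_t)tagged & ~(uintptr_t)1);
      }

      static unsigned int tag_bit(void *tagged)
      {
              return (unsigned int)((uintptr_t)tagged & 1);
      }

      int main(void)
      {
              static int fake_page;   /* stands in for a struct page */

              /* tag 1 could mean "just grabbed from the managed cache" */
              void *t = tag_fold(&fake_page, 1);

              printf("pointer intact: %d, tag: %u\n",
                     tag_ptr(t) == (void *)&fake_page, tag_bit(t));
              return 0;
      }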
    • staging: erofs: revisit the page submission flow · 9248fce7
      Gao Xiang authored
      Previously, the submission flow worked with the cached compressed
      pages reclaim path in a tricky way, and it could become buggy if the
      reclaim path later changes without keeping those tricky restrictions.
      For example, PagePrivate(page) is currently evaluated without taking
      the page lock (it merely follows a wait_on_page_locked, which is what
      closes that race), and no handling covers the potential page
      truncation case.

      In addition, the function is full of #ifdefs, which are hard to
      understand and maintain. This patch fixes all of the above (a sketch
      of the safer re-check pattern follows this entry).
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
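      For illustration only, here is a sketch of the kind of re-check the
      reworked flow aims at, assuming the usual page-cache rules; the helper
      below is hypothetical and not taken from the patch. PG_private is only
      trustworthy under the page lock, and page->mapping must be re-checked
      so a page truncated from the managed cache is not reused.

      #include <linux/mm.h>
      #include <linux/pagemap.h>

      /* Hypothetical helper: decide whether a cached compressed page is
       * still usable.  PG_private is only meaningful under the page lock,
       * and a page truncated from the managed cache loses its mapping. */
      static bool cached_page_still_usable(struct page *page,
                                           struct address_space *mngd_mapping)
      {
              bool usable;

              lock_page(page);
              usable = page->mapping == mngd_mapping && PagePrivate(page);
              unlock_page(page);

              return usable;
      }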
    • staging: erofs: localize UNALLOCATED_CACHED_PAGE placeholder · 672e5476
      Gao Xiang authored
      In practice, in order to do cached decompression rather than reusing
      pages for in-place decompression, and to make full use of the pages
      in page_pool instead of allocating as many as possible, an
      unallocated placeholder was introduced to mark such slots in
      compressed_pages[]; they are replaced at the time of submission.

      Previously, EROFS_UNALLOCATED_CACHED_PAGE lived in internal.h, which
      is unnecessary since it's only used internally by the decompression
      subsystem; move it to unzip_vle.c and rename it to PAGE_UNALLOCATED
      (a small sketch of the placeholder idea follows this entry).
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
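      The placeholder idea in a userspace miniature, using an arbitrary,
      hypothetical sentinel value (not the real erofs constant): slots marked
      with it have no backing page yet and are only materialised when the
      I/O is actually submitted.

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Hypothetical sentinel; any value that can never be a real page
       * pointer works, since it is only ever compared by identity. */
      #define PAGE_UNALLOCATED ((void *)(uintptr_t)0x5F0EDEAD)

      /* allocate a backing "page" only at submission time */
      static void materialise_slot(void **slot)
      {
              if (*slot == PAGE_UNALLOCATED)
                      *slot = malloc(4096);
      }

      int main(void)
      {
              void *compressed_pages[2] = { PAGE_UNALLOCATED, PAGE_UNALLOCATED };

              materialise_slot(&compressed_pages[0]);

              printf("slot 0 backed: %d, slot 1 backed: %d\n",
                     compressed_pages[0] != PAGE_UNALLOCATED,
                     compressed_pages[1] != PAGE_UNALLOCATED);

              free(compressed_pages[0]);
              return 0;
      }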
    • staging: erofs: introduce MNGD_MAPPING helper · c1448fa8
      Gao Xiang authored
      This patch introduces MNGD_MAPPING to wrap
      sbi->managed_cache->i_mapping, which will be used to eliminate
      excessive #ifdefs within single functions (a standalone sketch of
      the pattern follows this entry).

      No logic changes.
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
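      A standalone sketch of the wrapper pattern with hypothetical type and
      config names (not the erofs definitions): when the managed cache is
      compiled out, the helper folds to NULL, so callers can write one
      unconditional comparison instead of sprinkling #ifdefs.

      #include <stdio.h>

      struct inode_lite { void *i_mapping; };
      struct sbi_lite   { struct inode_lite *managed_cache; };

      #ifdef HAVE_MANAGED_CACHE               /* hypothetical config switch */
      #define MNGD_MAPPING(sbi)       ((sbi)->managed_cache->i_mapping)
      #else
      #define MNGD_MAPPING(sbi)       (NULL)
      #endif

      int main(void)
      {
              static struct inode_lite cache_inode = {
                      .i_mapping = &cache_inode,
              };
              struct sbi_lite sbi = { .managed_cache = &cache_inode };

              /* a single code path regardless of whether the cache exists */
              printf("managed mapping present: %d\n",
                     MNGD_MAPPING(&sbi) != NULL);
              return 0;
      }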
    • staging: erofs: fix use-after-free of on-stack `z_erofs_vle_unzip_io' · 848bd9ac
      Gao Xiang authored
      The root cause is the race as follows:
       Thread #0                         Thread #1
      
       z_erofs_vle_unzip_kickoff         z_erofs_submit_and_unzip
      
                                          struct z_erofs_vle_unzip_io io[]
         atomic_add_return()
                                          wait_event()
                                          [end of function]
         wake_up()
      
      Fix it by taking the waitqueue lock across atomic_add_return and
      wake_up to close the race (a sketch of the locking follows this
      entry).
      
      kernel message:
      
      Unable to handle kernel paging request at virtual address 97f7052caa1303dc
      ...
      Workqueue: kverityd verity_work
      task: ffffffe32bcb8000 task.stack: ffffffe3298a0000
      PC is at __wake_up_common+0x48/0xa8
      LR is at __wake_up+0x3c/0x58
      ...
      Call trace:
      ...
      [<ffffff94a08ff648>] __wake_up_common+0x48/0xa8
      [<ffffff94a08ff8b8>] __wake_up+0x3c/0x58
      [<ffffff94a0c11b60>] z_erofs_vle_unzip_kickoff+0x40/0x64
      [<ffffff94a0c118e4>] z_erofs_vle_read_endio+0x94/0x134
      [<ffffff94a0c83c9c>] bio_endio+0xe4/0xf8
      [<ffffff94a1076540>] dec_pending+0x134/0x32c
      [<ffffff94a1076f28>] clone_endio+0x90/0xf4
      [<ffffff94a0c83c9c>] bio_endio+0xe4/0xf8
      [<ffffff94a1095024>] verity_work+0x210/0x368
      [<ffffff94a08c4150>] process_one_work+0x188/0x4b4
      [<ffffff94a08c45bc>] worker_thread+0x140/0x458
      [<ffffff94a08cad48>] kthread+0xec/0x108
      [<ffffff94a0883ab4>] ret_from_fork+0x10/0x1c
      Code: d1006273 54000260 f9400804 b9400019 (b85fc081)
      ---[ end trace be9dde154f677cd1 ]---
      Reviewed-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
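      A sketch of the locking approach using the standard waitqueue API; the
      stand-in struct below is not the real `z_erofs_vle_unzip_io'. Holding
      the waitqueue's internal lock across the counter update and the
      wake-up means the waiter cannot observe the new count, return from
      wait_event() and release its on-stack io[] while the kickoff side is
      still about to dereference it.

      #include <linux/atomic.h>
      #include <linux/spinlock.h>
      #include <linux/wait.h>

      struct unzip_io_lite {          /* stand-in for the on-stack io[] entry */
              atomic_t pending_bios;
              wait_queue_head_t wait;
      };

      static void unzip_kickoff_sketch(struct unzip_io_lite *io, int bios)
      {
              unsigned long flags;

              /* serialise against the waiter via the waitqueue's own lock */
              spin_lock_irqsave(&io->wait.lock, flags);
              if (!atomic_add_return(bios, &io->pending_bios))
                      wake_up_locked(&io->wait);
              spin_unlock_irqrestore(&io->wait.lock, flags);
      }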