1. 24 Apr, 2009 1 commit
  2. 07 Apr, 2009 1 commit
  3. 26 Mar, 2009 1 commit
  4. 06 Mar, 2009 1 commit
  5. 26 Feb, 2009 1 commit
• block: reduce stack footprint of blk_recount_segments() · 1e428079
      Jens Axboe authored
      blk_recalc_rq_segments() requires a request structure passed in, which
      we don't have from blk_recount_segments(). So the latter allocates one on
      the stack, using > 400 bytes of stack for that. This can cause us to spill
      over one page of stack from ext4 at least:
       0)     4560     400   blk_recount_segments+0x43/0x62
       1)     4160      32   bio_phys_segments+0x1c/0x24
       2)     4128      32   blk_rq_bio_prep+0x2a/0xf9
       3)     4096      32   init_request_from_bio+0xf9/0xfe
       4)     4064     112   __make_request+0x33c/0x3f6
       5)     3952     144   generic_make_request+0x2d1/0x321
       6)     3808      64   submit_bio+0xb9/0xc3
       7)     3744      48   submit_bh+0xea/0x10e
       8)     3696     368   ext4_mb_init_cache+0x257/0xa6a [ext4]
       9)     3328     288   ext4_mb_regular_allocator+0x421/0xcd9 [ext4]
      10)     3040     160   ext4_mb_new_blocks+0x211/0x4b4 [ext4]
      11)     2880     336   ext4_ext_get_blocks+0xb61/0xd45 [ext4]
      12)     2544      96   ext4_get_blocks_wrap+0xf2/0x200 [ext4]
      13)     2448      80   ext4_da_get_block_write+0x6e/0x16b [ext4]
      14)     2368     352   mpage_da_map_blocks+0x7e/0x4b3 [ext4]
      15)     2016     352   ext4_da_writepages+0x2ce/0x43c [ext4]
      16)     1664      32   do_writepages+0x2d/0x3c
      17)     1632     144   __writeback_single_inode+0x162/0x2cd
      18)     1488      96   generic_sync_sb_inodes+0x1e3/0x32b
      19)     1392      16   sync_sb_inodes+0xe/0x10
      20)     1376      48   writeback_inodes+0x69/0xb3
      21)     1328     208   balance_dirty_pages_ratelimited_nr+0x187/0x2f9
      22)     1120     224   generic_file_buffered_write+0x1d4/0x2c4
      23)      896     176   __generic_file_aio_write_nolock+0x35f/0x393
      24)      720      80   generic_file_aio_write+0x6c/0xc8
      25)      640      80   ext4_file_write+0xa9/0x137 [ext4]
      26)      560     320   do_sync_write+0xf0/0x137
      27)      240      48   vfs_write+0xb3/0x13c
      28)      192      64   sys_write+0x4c/0x74
      29)      128     128   system_call_fastpath+0x16/0x1b
      Split the segment counting out into a __blk_recalc_rq_segments() helper
to avoid allocating an on-stack request just for checking the physical
      segment count.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
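The shape of the fix, as a condensed C sketch (function names are from the patch; the counting loop is elided and the details are illustrative):

    /* Walk the bio list directly instead of faking up a struct request
     * on the stack just to count physical segments. */
    static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
                                                 struct bio *bio)
    {
            unsigned int nr_segs = 0;

            /* ... bio_for_each_segment() loop, merging adjacent vecs
             * subject to q->max_segment_size etc. ... */
            return nr_segs;
    }

    void blk_recalc_rq_segments(struct request *rq)
    {
            rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
    }

    void blk_recount_segments(struct request_queue *q, struct bio *bio)
    {
            struct bio *nxt = bio->bi_next;

            bio->bi_next = NULL;            /* count this bio alone */
            bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio);
            bio->bi_next = nxt;
            bio->bi_flags |= (1 << BIO_SEG_VALID);
    }

No ~400-byte struct request ever touches the stack; both callers now pass the queue and bio straight through.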
  6. 06 Nov, 2008 1 commit
  7. 17 Oct, 2008 1 commit
• block: fix nr_phys_segments miscalculation bug · 86771427
      FUJITA Tomonori authored
      This fixes the bug reported by Nikanth Karthikesan <knikanth@suse.de>:
The root cause of the bug is that blk_phys_contig_segment
miscalculates segment sizes against q->max_segment_size.
      blk_phys_contig_segment checks:
      req->biotail->bi_size + next_req->bio->bi_size > q->max_segment_size
But blk_recalc_rq_segments might expect that req->biotail and the
previous bio in the req are supposed to be merged into one
segment. blk_recalc_rq_segments might also expect that next_req->bio
and the next bio in the next_req are supposed to be merged into one
segment. In that case, we merge two requests that can't be merged
here. Later, blk_rq_map_sg gives more segments than it should.
We need to keep track of segment size in blk_recalc_rq_segments and
use it to see if two requests can be merged. This patch implements it
in a similar way to what we used to do for hw merging (virtual merging).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
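A condensed sketch of the tightened check (bi_seg_front_size/bi_seg_back_size are the per-bio fields the fix introduces to remember merged segment sizes; the surrounding logic is simplified):

    static int blk_phys_contig_segment(struct request_queue *q,
                                       struct bio *bio, struct bio *nxt)
    {
            /* Compare the true size of the request's merged back segment
             * plus the next request's merged front segment - not just the
             * two boundary bios - against the limit. */
            if (bio->bi_seg_back_size + nxt->bi_seg_front_size >
                q->max_segment_size)
                    return 0;

            /* ... physical contiguity checks as before ... */
            return 1;
    }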
  8. 09 Oct, 2008 8 commits
• block: inherit CPU completion on bio->rq and rq->rq merges · ab780f1e
      Jens Axboe authored
Somewhat incomplete, as we do allow merges of requests and bios
that have different completion CPUs given. This is done on the
assumption that a larger IO is still more beneficial than CPU locality.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
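A hypothetical sketch of what inheriting a completion CPU on a request merge can look like (the helper name is invented; rq->cpu is the completion-affinity hint, -1 meaning unset):

    static void merge_completion_cpu(struct request *rq, struct request *next)
    {
            /* Inherit a hint if we have none; if both are set and differ,
             * the merge still happens - a larger I/O is assumed to be
             * worth more than strict CPU locality. */
            if (rq->cpu == -1)
                    rq->cpu = next->cpu;
    }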
• block: move stats from disk to part0 · 074a7aca
      Tejun Heo authored
      Move stats related fields - stamp, in_flight, dkstats - from disk to
      part0 and unify stat handling such that...
* part_stat_*() now updates part0 together if the specified partition
  is not part0, i.e., part_stat_*() are now essentially all_stat_*().
      * {disk|all}_stat_*() are gone.
* part_round_stats() is updated similarly.  It handles part0 stats
        automatically and disk_round_stats() is killed.
* part_{inc|dec}_in_flight() are implemented; they automatically update
        part0 stats for parts other than part0.
      * disk_map_sector_rcu() is updated to return part0 if no part matches.
        Combined with the above changes, this makes NULL special case
        handling in callers unnecessary.
      * Separate stats show code paths for disk are collapsed into part
        stats show code paths.
* Rename disk_stat_lock/unlock() to part_stat_lock/unlock().
      While at it, reposition stat handling macros a bit and add missing
      parentheses around macro parameters.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
• block: fix diskstats access · c9959059
      Tejun Heo authored
      There are two variants of stat functions - ones prefixed with double
      underbars which don't care about preemption and ones without which
      disable preemption before manipulating per-cpu counters.  It's unclear
whether the underbarred ones assume that preemption is disabled on
      entry as some callers don't do that.
      This patch unifies diskstats access by implementing disk_stat_lock()
      and disk_stat_unlock() which take care of both RCU (for partition
      access) and preemption (for per-cpu counter access).  diskstats access
should always be enclosed between the two functions.  As such, there's
no need for the versions which disable preemption.  They're removed
and the double-underbar ones are renamed to drop the underbars.  As an
extra argument is added, there's no danger of using the old version
unconverted.
disk_stat_lock() uses get_cpu() and returns the cpu index, and all
diskstat functions which access per-cpu counters now take a @cpu
argument to help RT.
      This change adds RCU or preemption operations at some places but also
      collapses several preemption ops into one at others.  Overall, the
      performance difference should be negligible as all involved ops are
      very lightweight per-cpu ones.
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
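The pair is tiny; a sketch close to the actual definitions, with an illustrative call site:

    /* One call now covers RCU (partition lookup) and preemption (per-cpu
     * counters); get_cpu() hands back the index callers must pass on. */
    #define disk_stat_lock()        ({ rcu_read_lock(); get_cpu(); })
    #define disk_stat_unlock()      do { put_cpu(); rcu_read_unlock(); } while (0)

    cpu = disk_stat_lock();
    disk_stat_inc(cpu, disk, ios);          /* per-cpu counter, @cpu passed in */
    disk_stat_unlock();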
• block: fix disk->part[] dereferencing race · e71bf0d0
      Tejun Heo authored
      disk->part[] is protected by its matching bdev's lock.  However,
      non-critical accesses like collecting stats and printing out sysfs and
      proc information used to be performed without any locking.  As
      partitions can come and go dynamically, partitions can go away
      underneath those non-critical accesses.  As some of those accesses are
      writes, this theoretically can lead to silent corruption.
      This patch fixes the race by using RCU for the partition array and dev
      reference counter to hold partitions.
      * Rename disk->part[] to disk->__part[] to make sure no one outside
        genhd layer proper accesses it directly.
      * Use RCU for disk->__part[] dereferencing.
      * Implement disk_{get|put}_part() which can be used to get and put
        partitions from gendisk respectively.
* Iterators are implemented to help iterate through all partitions
  safely.
      * Functions which require RCU readlock are marked with _rcu suffix.
      * Use disk_put_part() in __blkdev_put() instead of directly putting
        the contained kobject.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
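The lookup-and-pin pattern, sketched (array indexing simplified, and part0 handling changed again in later patches; treat as illustrative):

    struct hd_struct *disk_get_part(struct gendisk *disk, int partno)
    {
            struct hd_struct *part;

            rcu_read_lock();
            part = rcu_dereference(disk->__part[partno - 1]);
            if (part)
                    get_device(part_to_dev(part));  /* pin past the RCU section */
            rcu_read_unlock();

            return part;
    }

disk_put_part() is then essentially a put_device() on the partition's embedded device.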
• block: misc updates · 310a2c10
      Tejun Heo authored
This patch makes the following misc updates in preparation for the
disk->part dereference fix and extended block devt support.
* implement part_to_disk()
* fix comment about gendisk->part indexing
* rename get_part() to disk_map_sector()
* don't use n, which is always zero, when printing disk information
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
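part_to_disk() is a simple container walk; a sketch matching the era's embedded-device layout (details illustrative):

    /* A partition's device is parented by its disk's device, so walk up
     * one level and convert back to the gendisk. */
    static inline struct gendisk *part_to_disk(struct hd_struct *part)
    {
            if (likely(part))
                    return dev_to_disk(part_to_dev(part)->parent);
            return NULL;
    }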
• drop vmerge accounting · 5df97b91
      Mikulas Patocka authored
Remove the hw_segments fields from struct bio and struct request. Without
virtual merge accounting they have no purpose.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
• block: drop virtual merging accounting · b8b3e16c
      Mikulas Patocka authored
      Remove virtual merge accounting.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
• Allow elevators to sort/merge discard requests · e17fc0a1
      David Woodhouse authored
      But blkdev_issue_discard() still emits requests which are interpreted as
      soft barriers, because naïve callers might otherwise issue subsequent
      writes to those same sectors, which might cross on the queue (if they're
      reallocated quickly enough).
      Callers still _can_ issue non-barrier discard requests, but they have to
      take care of queue ordering for themselves.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
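For reference, the two discard flavours of that era looked roughly like this (flag spellings as used around 2.6.28; treat the snippet as illustrative):

    /* blkdev_issue_discard() submits DISCARD_BARRIER bios, so queued
     * writes to the same sectors can't cross the discard.  A caller
     * willing to order completions itself can use the weaker form: */
    #define DISCARD_NOBARRIER   (WRITE | (1 << BIO_RW_DISCARD))
    #define DISCARD_BARRIER     (DISCARD_NOBARRIER | (1 << BIO_RW_BARRIER))

    submit_bio(DISCARD_NOBARRIER, bio);     /* no implicit queue ordering */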
  9. 03 Jul, 2008 1 commit
  10. 07 May, 2008 1 commit
  11. 29 Apr, 2008 1 commit
  12. 21 Apr, 2008 1 commit
• block: move the padding adjustment to blk_rq_map_sg · f18573ab
      FUJITA Tomonori authored
blk_rq_map_user adjusts bi_size of the last bio. It breaks the rule
that req->data_len (the true data length) is equal to sum(bio). This
broke the SCSI command completion code.
Commit e97a294e was introduced to fix the above issue. However, the
partial completion code doesn't work with it. The commit is also a
layer violation (the SCSI mid-layer should not know about the block
layer's padding).
      This patch moves the padding adjustment to blk_rq_map_sg (suggested by
James). The padding works like the drain buffer. This patch breaks the
rule that req->data_len is equal to sum(sg); however, the drain buffer
already broke it. So this patch just restores the rule that
req->data_len is equal to sum(bio) without breaking anything new.
      Now when a low level driver needs padding, blk_rq_map_user and
      blk_rq_map_user_iov guarantee there's enough room for padding.
      blk_rq_map_sg can safely extend the last entry of a scatter list.
blk_rq_map_sg must extend the last entry of a scatter list only for a
request that got through bio_copy_user_iov. This patch introduces a
new REQ_COPY_USER flag to mark such requests.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
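The adjustment itself is small; a sketch of the tail of blk_rq_map_sg() after this patch (close to the actual change; treat as illustrative):

    /* Only requests built via bio_copy_user_iov() carry REQ_COPY_USER,
     * so only their last scatterlist entry may safely be extended. */
    if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
        (rq->data_len & q->dma_pad_mask)) {
            unsigned int pad_len = (q->dma_pad_mask & ~rq->data_len) + 1;

            sg->length += pad_len;          /* extend the last entry */
            rq->extra_len += pad_len;       /* padding stays outside data_len */
    }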
  13. 04 Mar, 2008 1 commit
• block: restore the meaning of rq->data_len to the true data length · 7a85f889
      FUJITA Tomonori authored
      The meaning of rq->data_len was changed to the length of an allocated
      buffer from the true data length. It breaks SG_IO friends and
      bsg. This patch restores the meaning of rq->data_len to the true data
      length and adds rq->extra_len to store an extended length (due to
      drain buffer and padding).
This patch also removes the code to update bio in blk_rq_map_user
introduced by commit 40b01b9b. That commit adjusts bio according to
memory alignment (queue_dma_alignment). However, memory alignment is
NOT padding alignment. This adjustment also breaks SG_IO friends and
bsg. Padding alignment needs to be fixed in a proper way (by a
separate patch).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
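After this patch the two lengths read as follows (field names are the real ones; the usage lines are illustrative):

    /* rq->data_len  - true data length, what SG_IO and bsg report
     * rq->extra_len - drain buffer + padding added by the block layer */
    unsigned int dma_len  = rq->data_len + rq->extra_len;  /* program the controller */
    unsigned int user_len = rq->data_len;                  /* report to userland */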
  14. 19 Feb, 2008 3 commits
• block: clear drain buffer if draining for write command · db0a2e00
      Tejun Heo authored
Clear drain buffer before chaining if the command in question is a
write.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
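The fix is essentially a one-liner in the sg mapping path; sketched (placement illustrative):

    /* On writes, stale bytes in the shared drain page would otherwise be
     * sent to the device; on reads the device overwrites it anyway. */
    if (rq->cmd_flags & REQ_RW)             /* write command */
            memset(q->dma_drain_buffer, 0, q->dma_drain_size);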
• block: implement request_queue->dma_drain_needed · 2fb98e84
      Tejun Heo authored
Draining shouldn't be done for commands where overflow may indicate
data integrity issues.  Add dma_drain_needed callback to
request_queue.  Drain buffer is appended iff this function returns
non-zero.
Signed-off-by: Tejun Heo <htejun@gmail.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
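A sketch of the wiring (the typedef and the blk_queue_dma_drain() callback parameter follow the commit; the example callback and its helper are hypothetical):

    typedef int (dma_drain_needed_fn)(struct request *);

    /* e.g. an ATAPI driver draining only commands where overflow is
     * harmless, never ones where it could mask data corruption */
    static int my_drain_needed(struct request *rq)          /* hypothetical */
    {
            return blk_pc_request(rq) && cmd_is_misc_atapi(rq->cmd[0]);
    }

    blk_queue_dma_drain(q, my_drain_needed, drain_buf, drain_size);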
• block: add request->raw_data_len · 6b00769f
      Tejun Heo authored
      With padding and draining moved into it, block layer now may extend
      requests as directed by queue parameters, so now a request has two
      sizes - the original request size and the extended size which matches
      the size of area pointed to by bios and later by sgs.  The latter size
      is what lower layers are primarily interested in when allocating,
      filling up DMA tables and setting up the controller.
Both padding and draining extend the data area to accommodate
      controller characteristics.  As any controller which speaks SCSI can
handle underflows, feeding a larger data area is safe.
So, this patch makes the primary data length field, request->data_len,
indicate the size of the full data area, and adds a separate length
field, request->raw_data_len, for the unmodified request size.  The
latter is used for reporting to higher layers (userland) and wherever
the original request size should be fed to the controller or device.
Signed-off-by: Tejun Heo <htejun@gmail.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
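At this point in the series the naming was the mirror image of the later "restore the meaning of rq->data_len" fix above (fields per this commit; usage lines illustrative):

    unsigned int dma_len  = rq->data_len;       /* full padded/drained area */
    unsigned int user_len = rq->raw_data_len;   /* unmodified size, reported up */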
  15. 08 Feb, 2008 1 commit
  16. 01 Feb, 2008 1 commit
  17. 29 Jan, 2008 1 commit