1. 09 Oct, 2008 2 commits
    • block: move stats from disk to part0 · 074a7aca
      Tejun Heo authored
      Move the stats-related fields - stamp, in_flight, dkstats - from disk to
      part0 and unify stat handling such that...
      
      * part_stat_*() now also updates part0 if the specified partition
        is not part0, i.e. part_stat_*() are now essentially all_stat_*().
      
      * {disk|all}_stat_*() are gone.
      
      * part_round_stats() is updated similarly.  It handles part0 stats
        automatically and disk_round_stats() is killed.
      
      * part_{inc|dec}_in_flight() are implemented, which automatically update
        part0 stats for parts other than part0.
      
      * disk_map_sector_rcu() is updated to return part0 if no part matches.
        Combined with the above changes, this makes NULL special case
        handling in callers unnecessary.
      
      * Separate stats show code paths for disk are collapsed into part
        stats show code paths.
      
      * disk_stat_lock/unlock() are renamed to part_stat_lock/unlock().
      
      While at it, reposition stat handling macros a bit and add missing
      parentheses around macro parameters.
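      As a hedged illustration, a minimal sketch of the resulting call-site
      pattern follows; the helper names come from the description above, but
      the surrounding function is hypothetical and the exact argument lists
      are assumptions, not necessarily the final in-tree signatures:

        #include <linux/genhd.h>

        /* Hypothetical completion-path accounting: callers no longer
         * special-case the whole disk, since the part_stat_* helpers
         * fold the part0 update in when part is a real partition. */
        static void sketch_account_done(struct hd_struct *part, int rw,
                                        unsigned int nr_sectors)
        {
                int cpu = part_stat_lock();      /* was disk_stat_lock() */

                part_round_stats(cpu, part);     /* handles part0 automatically */
                part_stat_inc(cpu, part, ios[rw]);
                part_stat_add(cpu, part, sectors[rw], nr_sectors);
                part_dec_in_flight(part);        /* updates part0 as well */

                part_stat_unlock();
        }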
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: fix diskstats access · c9959059
      Tejun Heo authored
      There are two variants of stat functions - ones prefixed with double
      underbars which don't care about preemption and ones without the
      prefix which disable preemption before manipulating per-cpu counters.
      It's unclear whether the underbarred ones assume that preemption is
      disabled on entry, as some callers don't do that.
      
      This patch unifies diskstats access by implementing disk_stat_lock()
      and disk_stat_unlock() which take care of both RCU (for partition
      access) and preemption (for per-cpu counter access).  diskstats access
      should always be enclosed between the two functions.  As such, there's
      no need for the versions that disable preemption themselves.  They're
      removed, and the double-underbar ones are renamed to drop the
      underbars.  As an extra argument is added, there's no danger of using
      an old version unconverted.
      
      disk_stat_lock() uses get_cpu() and returns the cpu index, and all
      diskstat functions which access per-cpu counters now take a @cpu
      argument to help RT.
      
      This change adds RCU or preemption operations at some places but also
      collapses several preemption ops into one at others.  Overall, the
      performance difference should be negligible as all involved ops are
      very lightweight per-cpu ones.
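      A minimal sketch of the lock/unlock pair as described, assuming the
      straightforward pairing of RCU with get_cpu()/put_cpu(); the real
      definitions may differ in detail:

        /* Take the RCU read lock for the partition lookup and pin the CPU
         * for the per-cpu counters; the returned index is the @cpu that
         * the stat accessors now take. */
        #define disk_stat_lock()    ({ rcu_read_lock(); get_cpu(); })
        #define disk_stat_unlock()  do { put_cpu(); rcu_read_unlock(); } while (0)

        /* Typical usage:
         *      cpu = disk_stat_lock();
         *      disk_stat_inc(cpu, disk, ios[rw]);
         *      disk_stat_unlock();
         */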
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  2. 21 Jul, 2008 1 commit
  3. 03 Jul, 2008 1 commit
    • Add bvec_merge_data to handle stacked devices and ->merge_bvec() · cc371e66
      Alasdair G Kergon authored
      When devices are stacked, one device's merge_bvec_fn may need to perform
      the mapping and then call one or more functions for its underlying devices.
      
      The following bio fields are used:
        bio->bi_sector
        bio->bi_bdev
        bio->bi_size
        bio->bi_rw  using bio_data_dir()
      
      This patch creates a new struct bvec_merge_data holding a copy of those
      fields to avoid having to change them directly in the struct bio when
      going down the stack only to have to change them back again on the way
      back up.  (And then when the bio gets mapped for real, the whole
      exercise gets repeated, but that's a problem for another day...)
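      A minimal sketch of the new structure, reconstructed from the field
      list above as it might appear in a kernel header; the exact member
      types and the updated merge_bvec_fn prototype are assumptions:

        /* Copy of the bio fields a merge_bvec_fn looks at, so a stacking
         * driver can remap them for its lower device without editing the
         * struct bio itself and undoing the edit afterwards. */
        struct bvec_merge_data {
                struct block_device     *bi_bdev;
                sector_t                bi_sector;
                unsigned int            bi_size;
                unsigned long           bi_rw;  /* direction, per bio_data_dir() */
        };

        typedef int (merge_bvec_fn)(struct request_queue *q,
                                    struct bvec_merge_data *bvm,
                                    struct bio_vec *biovec);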
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  4. 15 May, 2008 1 commit
    • Remove blkdev warning triggered by using md · e7e72bf6
      Neil Brown authored
      As setting and clearing queue flags now requires that we hold a spinlock
      on the queue, and as blk_queue_stack_limits is called without that lock,
      get the lock inside blk_queue_stack_limits.
      
      For blk_queue_stack_limits to be able to find the right lock, each md
      personality needs to set q->queue_lock to point to the appropriate lock.
      Those personalities which didn't previously use a spin_lock now use
      q->__queue_lock.  So always initialise that lock when the queue is
      allocated.
      
      With this in place, setting/clearing of the QUEUE_FLAG_PLUGGED bit will no
      longer cause warnings as it will be clear that the proper lock is held.
      
      Thanks to Dan Williams for review and fixing the silly bugs.
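      A simplified sketch of the resulting pattern inside
      blk_queue_stack_limits(); the specific flag shown (QUEUE_FLAG_CLUSTER)
      and the elided limit copying are assumptions based on the function's
      role, not spelled out in this message:

        void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
        {
                /* ... propagate max_sectors, segment limits, etc. from b to t ... */

                if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags)) {
                        unsigned long flags;

                        /* Queue flag updates now require the queue lock, so take
                         * it here rather than relying on the (lockless) callers.
                         * Each md personality points t->queue_lock at its own
                         * lock, or at t->__queue_lock, so this is always valid. */
                        spin_lock_irqsave(t->queue_lock, flags);
                        queue_flag_clear(QUEUE_FLAG_CLUSTER, t);
                        spin_unlock_irqrestore(t->queue_lock, flags);
                }
        }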
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Alistair John Strachan <alistair@devzero.co.uk>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Jacek Luczak <difrost.kernel@gmail.com>
      Cc: Prakash Punnoor <prakash@punnoor.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 06 Feb, 2008 1 commit
  6. 09 Nov, 2007 1 commit
  7. 17 Oct, 2007 1 commit
  8. 16 Oct, 2007 1 commit
  9. 10 Oct, 2007 1 commit
  10. 24 Jul, 2007 1 commit
  11. 24 May, 2007 1 commit
  12. 03 Oct, 2006 1 commit
  13. 23 May, 2006 1 commit
    • [PATCH] md: fix possible oops when starting a raid0 array · 5c4c3331
      NeilBrown authored
      This loop that sets up the hash_table has problems.
      
      Careful examination will show that the last time through, everything but
      the first line is pointless.  This is because all it does is change 'cur'
      and 'size' and neither of these are used after the loop.  This should ring
      warning bells...  That last time through the loop,
      
              size += conf->strip_zone[cur].size
      
      can index off the end of the strip_zone array.  Depending on what it finds
      there, it might exit the loop cleanly, or it might spin going further and
      further beyond the array until it hits an unmapped address.
      
      This patch rearranges the code so that the last, pointless, iteration of
      the loop never happens, i.e. the one statement of the last iteration
      that is needed is moved to the end of the previous iteration - or to
      before the loop starts - and the loop counter starts from 1 instead of 0.
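      A simplified before/after fragment illustrating that rearrangement
      (variable and field names here are illustrative, not the literal
      raid0 code):

        /* Before: the last pass only needs the hash_table assignment, yet
         * still runs the while loop, which can index past strip_zone[]. */
        for (i = 0; i < nb_zones; i++) {
                hash_table[i] = conf->strip_zone + cur;
                while (size <= spacing) {
                        cur++;
                        size += conf->strip_zone[cur].size;  /* may run off the end */
                }
                size -= spacing;
        }

        /* After: hoist the first assignment, start from 1, and assign last,
         * so the dangerous lookup never happens on a pass whose results
         * would be thrown away. */
        hash_table[0] = conf->strip_zone + cur;
        for (i = 1; i < nb_zones; i++) {
                while (size <= spacing) {
                        cur++;
                        size += conf->strip_zone[cur].size;
                }
                size -= spacing;
                hash_table[i] = conf->strip_zone + cur;
        }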
      
      Cc: "Don Dupuis" <dondster@gmail.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  14. 03 Feb, 2006 1 commit
  15. 09 Jan, 2006 1 commit
  16. 06 Jan, 2006 4 commits
  17. 01 Nov, 2005 1 commit
    • [BLOCK] Unify the separate read/write io stat fields into arrays · a362357b
      Jens Axboe authored
      Instead of having ->read_sectors and ->write_sectors, combine the two
      into ->sectors[2] and similar for the other fields. This saves a branch
      several places in the io path, since we don't have to care for what the
      actual io direction is. On my x86-64 box, that's 200 bytes less text in
      just the core (not counting the various drivers).
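      A minimal sketch of the change in data layout and the branch it removes
      (hypothetical struct and helper names, and only a subset of the real
      counters):

        /* Before: one field per direction, so every accounting site branches. */
        struct disk_stats_before {
                unsigned long read_sectors, write_sectors;
                unsigned long read_ios, write_ios;
        };

        /* After: index with the io direction (0 = read, 1 = write). */
        struct disk_stats_after {
                unsigned long sectors[2];
                unsigned long ios[2];
        };

        static inline void sketch_account(struct disk_stats_after *s, int rw,
                                          unsigned long nr_sectors)
        {
                s->sectors[rw] += nr_sectors;   /* no if (rw == WRITE) ... else ... */
                s->ios[rw]++;
        }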
      Signed-off-by: Jens Axboe <axboe@suse.de>
  18. 09 Sep, 2005 1 commit
  19. 15 Jul, 2005 1 commit
  20. 22 Jun, 2005 1 commit
  21. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!