1. 16 June 2009 (8 commits)
    • md: raid0: remove ->sectors from the strip_zone structure. · 49f357a2
      Committed by NeilBrown
      storing ->sectors is redundant, as it can be computed from the
      difference  z->zone_end - (z-1)->zone_end
      
      In the one place where it is used, it is just as efficient to use
      a zone_end value instead.
      
      Removing it also makes strip_zone smaller, so the array of these
      structures that is searched on every request has a better chance of
      staying in cache.
      
      So discard the field and get the value from elsewhere.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Fix a memory leak when stopping a raid0 array. · fb5ab4b5
      Committed by Andre Noll
      raid0_stop() removes all references to the raid0 configuration but
      fails to free the ->devlist buffer.
      
      This patch closes this leak, removes a pointless initialization and
      fixes a coding style issue in raid0_stop().
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Allocate all buffers for the raid0 configuration in one function. · ed7b0038
      Committed by Andre Noll
      Currently the raid0 configuration is allocated in raid0_run() while
      the buffers for the strip_zone and the dev_list arrays are allocated
      in create_strip_zones(). On errors, all three buffers are freed
      in raid0_run().
      
      It's easier and more readable to do the allocation and cleanup within
      a single function. So move that code into create_strip_zones().
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Make raid0_run() return a proper error code. · 5568a603
      Committed by Andre Noll
      Currently raid0_run() always returns -ENOMEM on errors. This is
      incorrect as running the array might fail for other reasons, for
      example because not all component devices were available.
      
      This patch changes create_strip_zones() so that it returns a proper
      error code (either -ENOMEM or -EINVAL) rather than 1 on errors and
      makes raid0_run(), its single caller, return that value instead
      of -ENOMEM.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Remove hash spacing and sector shift. · 8f79cfcd
      Committed by Andre Noll
      The "sector_shift" and "spacing" fields of struct raid0_private_data
      were only used for the hash table lookups. The removal of the hash
      table therefore allows these fields to be dropped as well, which
      simplifies create_strip_zones() and raid0_run() quite a bit.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Remove hash table. · 09770e0b
      Committed by Andre Noll
      The raid0 hash table has become unused due to the changes in the
      previous patch. This patch removes the hash table allocation and
      setup code and kills the hash_table field of struct raid0_private_data.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid0: two cleanups in create_stripe_zones. · d27a43ab
      Committed by NeilBrown
      1/ remove current_start.  The same value is available in
           zone->dev_start and storing it separately doesn't gain anything.
      2/ rename curr_zone_start to curr_zone_end as we are now more
           focused on the 'end' of each zone.  We end up storing the
           same number though - the old name was a little confusing
           (and what does 'current' mean in this context anyway).
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: raid0: Replace hash table lookup by looping over all strip_zones. · dc582663
      Committed by Andre Noll
      The number of strip_zones of a raid0 array is bounded by the number of
      drives in the array and is in fact much smaller for typical setups. For
      example, any raid0 array containing identical disks will have only
      a single strip_zone.
      
      Therefore, the hash tables which are used for quickly finding the
      strip_zone that holds a particular sector are of questionable value
      and add quite a bit of unnecessary complexity.
      
      This patch replaces the hash table lookup by equivalent code which
      simply loops over all strip zones to find the zone that holds the
      given sector.
      
      In order to make this loop as fast as possible, the zone->start field
      of struct strip_zone has been renamed to zone_end, and it now stores
      the beginning of the next zone in sectors. This saves one addition
      in the loop.
      
      Subsequent cleanup patches will remove the hash table structure.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
  2. 23 May 2009 (1 commit)
  3. 31 March 2009 (7 commits)
  4. 09 January 2009 (10 commits)
  5. 13 October 2008 (1 commit)
  6. 09 October 2008 (3 commits)
    • block: mark bio_split_pool static · 6feef531
      Committed by Denis ChengRq
      Since all bio_split calls refer to the same single bio_split_pool,
      the bio_split function can use bio_split_pool directly instead of
      taking a mempool_t parameter.
      
      The mempool_t parameter can then be removed from bio_split's parameter
      list, and bio_split_pool, now referenced only in fs/bio.c, can be
      marked static.
      Signed-off-by: Denis ChengRq <crquan@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: move stats from disk to part0 · 074a7aca
      Committed by Tejun Heo
      Move stats related fields - stamp, in_flight, dkstats - from disk to
      part0 and unify stat handling such that...
      
      * part_stat_*() now updates part0 together if the specified partition
        is not part0.  ie. part_stat_*() are now essentially all_stat_*().
      
      * {disk|all}_stat_*() are gone.
      
      * part_round_stats() is updated similarly.  It handles part0 stats
        automatically and disk_round_stats() is killed.
      
      * part_{inc|dec}_in_flight() is implemented which automatically updates
        part0 stats for parts other than part0.
      
      * disk_map_sector_rcu() is updated to return part0 if no part matches.
        Combined with the above changes, this makes NULL special case
        handling in callers unnecessary.
      
      * Separate stats show code paths for disk are collapsed into part
        stats show code paths.
      
      * Rename disk_stat_lock/unlock() to part_stat_lock/unlock()
      
      While at it, reposition stat handling macros a bit and add missing
      parentheses around macro parameters.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: fix diskstats access · c9959059
      Committed by Tejun Heo
      There are two variants of stat functions - ones prefixed with double
      underbars which don't care about preemption and ones without which
      disable preemption before manipulating per-cpu counters.  It's unclear
      whether the underbarred ones assume that preemtion is disabled on
      entry as some callers don't do that.
      
      This patch unifies diskstats access by implementing disk_stat_lock()
      and disk_stat_unlock() which take care of both RCU (for partition
      access) and preemption (for per-cpu counter access).  diskstats access
      should always be enclosed between the two functions.  As such, there's
      no need for the versions which disable preemption.  They're removed
      and the double-underbar ones are renamed to drop the underbars.  As an
      extra argument is added, there's no danger of using the old version
      unconverted.
      
      disk_stat_lock() uses get_cpu() and returns the cpu index, and all
      diskstat functions which access per-cpu counters now take a @cpu
      argument to help RT.
      
      This change adds RCU or preemption operations at some places but also
      collapses several preemption ops into one at others.  Overall, the
      performance difference should be negligible as all involved ops are
      very lightweight per-cpu ones.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  7. 21 July 2008 (1 commit)
  8. 03 July 2008 (1 commit)
    • Add bvec_merge_data to handle stacked devices and ->merge_bvec() · cc371e66
      Committed by Alasdair G Kergon
      When devices are stacked, one device's merge_bvec_fn may need to perform
      the mapping and then call one or more functions for its underlying devices.
      
      The following bio fields are used:
        bio->bi_sector
        bio->bi_bdev
        bio->bi_size
        bio->bi_rw  using bio_data_dir()
      
      This patch creates a new struct bvec_merge_data holding a copy of those
      fields to avoid having to change them directly in the struct bio when
      going down the stack only to have to change them back again on the way
      back up.  (And then when the bio gets mapped for real, the whole
      exercise gets repeated, but that's a problem for another day...)
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  9. 15 May 2008 (1 commit)
    • Remove blkdev warning triggered by using md · e7e72bf6
      Committed by Neil Brown
      As setting and clearing queue flags now requires that we hold a spinlock
      on the queue, and as blk_queue_stack_limits is called without that lock,
      get the lock inside blk_queue_stack_limits.
      
      For blk_queue_stack_limits to be able to find the right lock, each md
      personality needs to set q->queue_lock to point to the appropriate lock.
      Those personalities which didn't previously use a spin_lock use
      q->__queue_lock, so that lock is now always initialised when the
      queue is allocated.
      
      With this in place, setting/clearing of the QUEUE_FLAG_PLUGGED bit will no
      longer cause warnings as it will be clear that the proper lock is held.
      
      Thanks to Dan Williams for review and fixing the silly bugs.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Alistair John Strachan <alistair@devzero.co.uk>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Jacek Luczak <difrost.kernel@gmail.com>
      Cc: Prakash Punnoor <prakash@punnoor.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 07 February 2008 (1 commit)
  11. 09 November 2007 (1 commit)
  12. 17 October 2007 (1 commit)
  13. 16 October 2007 (1 commit)
  14. 10 October 2007 (1 commit)
  15. 24 July 2007 (1 commit)
  16. 24 May 2007 (1 commit)