1. 06 Nov 2013, 1 commit
    • dm mpath: requeue I/O during pg_init · b63349a7
      Authored by Hannes Reinecke
      When pg_init is running, no I/O can be submitted to the underlying
      devices, as the path priority etc. might change.  When using queue_io
      for this, requests pile up within multipath as the block I/O scheduler
      just sees a _very fast_ device.  All of this queued I/O has to be
      resubmitted from within multipathing once pg_init is done.
      
      This approach has the problem that it's virtually impossible to
      abort I/O while pg_init is running, and we're adding heavy load
      to the devices after pg_init since all of the queued I/O needs to be
      resubmitted _before_ any requests can be pulled off the request queue
      and normal operation can continue.
      
      This patch will requeue the I/O that triggers the pg_init call, and
      return 'busy' when pg_init is in progress.  With these changes the block
      I/O scheduler will stop submitting I/O during pg_init, resulting in a
      quicker path switch and less I/O pressure (and memory consumption) after
      pg_init.
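      
      A minimal sketch of the idea in kernel style; the function and field
      names follow dm-mpath conventions, but this is illustrative, not the
      exact patch:
      
          /* map: bounce new I/O back to the block layer while pg_init runs */
          static int multipath_map(struct dm_target *ti, struct request *clone,
                                   union map_info *map_context)
          {
                  struct multipath *m = ti->private;
          
                  if (m->pg_init_required || m->pg_init_in_progress)
                          return DM_MAPIO_REQUEUE;  /* requeue instead of queue_io */
                  /* ... normal path selection and mapping ... */
          }
          
          /* busy: tell the block layer to stop dispatching during pg_init */
          static int multipath_busy(struct dm_target *ti)
          {
                  struct multipath *m = ti->private;
          
                  return m->pg_init_in_progress;
          }
      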
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      [patch header edited for clarity and typos by Mike Snitzer]
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      b63349a7
  2. 01 Nov 2013, 2 commits
    • dm mpath: fix race condition between multipath_dtr and pg_init_done · 954a73d5
      Authored by Shiva Krishna Merla
      Whenever multipath_dtr() is running we must prevent queueing any
      further path activation work.  Implement this by adding a new
      'pg_init_disabled' flag to the multipath structure that denotes that
      future path activation work should be skipped if it is set.  By
      disabling pg_init and then re-enabling it in flush_multipath_work() we
      also avoid the potential for pg_init to be initiated while suspending
      an mpath device.
      
      Without this patch a race condition exists that may result in a kernel
      panic:
      
      1) After pg_init_done() decrements pg_init_in_progress to 0, a call
         to wait_for_pg_init_completion() assumes there are no more pending
         path management commands.
      2) If pg_init_required is set by pg_init_done(), due to retryable
         mode_select errors, then process_queued_ios() will again queue the
         path activation work.
      3) If free_multipath() completes before the activate_path() work is
         called, a NULL pointer dereference like the following can be seen
         when accessing members of the recently destructed multipath:
      
      BUG: unable to handle kernel NULL pointer dereference at 0000000000000090
      RIP: 0010:[<ffffffffa003db1b>]  [<ffffffffa003db1b>] activate_path+0x1b/0x30 [dm_multipath]
      [<ffffffff81090ac0>] worker_thread+0x170/0x2a0
      [<ffffffff81096c80>] ? autoremove_wake_function+0x0/0x40
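      
      The shape of the fix, sketched below; kmpath_handlerd is dm-mpath's
      path-activation workqueue, and the exact locking in the patch may
      differ:
      
          /* in flush_multipath_work(): fence off new pg_init work ... */
          spin_lock_irqsave(&m->lock, flags);
          m->pg_init_disabled = 1;
          spin_unlock_irqrestore(&m->lock, flags);
          
          flush_workqueue(kmpath_handlerd);  /* wait for in-flight activate_path() */
          flush_workqueue(kmultipathd);
          
          /* ... then re-enable once everything has drained */
          spin_lock_irqsave(&m->lock, flags);
          m->pg_init_disabled = 0;
          spin_unlock_irqrestore(&m->lock, flags);
          
          /* and wherever path activation is (re)queued: */
          if (m->pg_init_required && !m->pg_init_disabled)
                  queue_work(kmpath_handlerd, &pgpath->activate_path);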
      
      [switch to disabling pg_init in flush_multipath_work & header edits by Mike Snitzer]
      Signed-off-by: Shiva Krishna Merla <shivakrishna.merla@netapp.com>
      Reviewed-by: Krishnasamy Somasundaram <somasundaram.krishnasamy@netapp.com>
      Tested-by: Speagle Andy <Andy.Speagle@netapp.com>
      Acked-by: Junichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      954a73d5
    • dm: allocate buffer for messages with small number of arguments using GFP_NOIO · f36afb39
      Authored by Mikulas Patocka
      dm-mpath and dm-thin must process messages even if some device is
      suspended, so we allocate the argv buffer with GFP_NOIO.  These
      messages have a small fixed number of arguments.
      
      On the other hand, dm-switch needs to process bulk data using messages
      so excessive use of GFP_NOIO could cause trouble.
      
      The patch also lowers the default number of arguments from 64 to 8, so
      that the load placed on GFP_NOIO allocations is smaller.
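      
      The allocation policy, sketched; the threshold check and helper shown
      here are illustrative:
      
          /*
           * Messages with a small, fixed argument count may arrive while the
           * device is suspended, so their argv buffer must not recurse into
           * I/O; bulk messages (dm-switch) keep using GFP_KERNEL.
           */
          gfp_t gfp = (argc <= 8) ? GFP_NOIO : GFP_KERNEL;
          char **argv = kmalloc_array(argc, sizeof(char *), gfp);
      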
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      f36afb39
  3. 11 Oct 2013, 1 commit
  4. 25 Sep 2013, 10 commits
  5. 23 Sep 2013, 4 commits
    • dm: add reserved_bio_based_ios module parameter · e8603136
      Authored by Mike Snitzer
      Allow the user to change the number of IOs that are reserved by
      bio-based DM's mempools by writing to this file:
      /sys/module/dm_mod/parameters/reserved_bio_based_ios
      
      The default value is RESERVED_BIO_BASED_IOS (16).  The maximum allowed
      value is RESERVED_MAX_IOS (1024).
      
      Export dm_get_reserved_bio_based_ios() for use by DM targets and core
      code.  Switch to sizing dm-io's mempool and bioset using DM core's
      configurable 'reserved_bio_based_ios'.
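      
      The parameter plumbing presumably looks like this (a sketch in the
      usual dm_mod style, not necessarily the exact patch):
      
          static unsigned reserved_bio_based_ios = RESERVED_BIO_BASED_IOS;
          module_param(reserved_bio_based_ios, uint, S_IRUGO | S_IWUSR);
          MODULE_PARM_DESC(reserved_bio_based_ios,
                           "Reserved IOs in bio-based mempools");
      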
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      e8603136
    • dm: add reserved_rq_based_ios module parameter · f4790826
      Authored by Mike Snitzer
      Allow the user to change the number of IOs that are reserved by
      request-based DM's mempools by writing to this file:
      /sys/module/dm_mod/parameters/reserved_rq_based_ios
      
      The default value is RESERVED_REQUEST_BASED_IOS (256).  The maximum
      allowed value is RESERVED_MAX_IOS (1024).
      
      Export dm_get_reserved_rq_based_ios() for use by DM targets and core
      code.  Switch to sizing dm-mpath's mempool using DM core's configurable
      'reserved_rq_based_ios'.
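      
      A sketch of how such a getter can sanity-check the tunable; the clamp
      logic here is illustrative:
      
          unsigned dm_get_reserved_rq_based_ios(void)
          {
                  /* read the tunable once; writers may race via sysfs */
                  unsigned ios = ACCESS_ONCE(reserved_rq_based_ios);
          
                  if (!ios || ios > RESERVED_MAX_IOS)
                          ios = RESERVED_REQUEST_BASED_IOS;  /* fall back to default */
                  return ios;
          }
      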
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      f4790826
    • dm: lower bio-based mempool reservation · 6cfa5857
      Authored by Mike Snitzer
      Bio-based device mapper processing doesn't need larger mempools (like
      request-based DM does), so lower the number of reserved entries for
      bio-based operation.  16 was already used for bio-based DM's bioset
      but mistakenly wasn't used for its _io_cache.
      
      Formalize difference between bio-based and request-based defaults by
      introducing RESERVED_BIO_BASED_IOS and RESERVED_REQUEST_BASED_IOS.
      
      (based on older code from Mikulas Patocka)
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      6cfa5857
    • dm thin: do not expose non-zero discard limits if discards disabled · b60ab990
      Authored by Mike Snitzer
      Fix an issue where the block layer would stack the discard limits of
      the pool's data device even if the "ignore_discard" pool feature was
      specified.
      
      The pool and thin device(s) still had discards disabled because the
      QUEUE_FLAG_DISCARD request_queue flag wasn't set.  But to avoid user
      confusion when "ignore_discard" is used, both the pool device and the
      thin device(s) now report zeroes for all discard limits.
      
      Also, always set discard_zeroes_data_unsupported in targets because they
      should never advertise the 'discard_zeroes_data' capability (even if the
      pool's data device supports it).
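      
      The behavior, sketched with dm-thin-style names; this is illustrative,
      not the exact patch:
      
          /* never advertise discard_zeroes_data, even if the data device does */
          ti->discard_zeroes_data_unsupported = true;
          
          /* with "ignore_discard", report zeroed limits instead of stacking
           * the data device's discard limits */
          if (!pf.discard_enabled) {
                  limits->max_discard_sectors = 0;
                  limits->discard_granularity = 0;
          }
      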
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      b60ab990
  6. 20 Sep 2013, 3 commits
    • dm mpath: disable WRITE SAME if it fails · f84cb8a4
      Authored by Mike Snitzer
      Workaround the SCSI layer's problematic WRITE SAME heuristics by
      disabling WRITE SAME in the DM multipath device's queue_limits if an
      underlying device disabled it.
      
      The WRITE SAME heuristics, with both the original commit 5db44863
      ("[SCSI] sd: Implement support for WRITE SAME") and the updated commit
      66c28f97 ("[SCSI] sd: Update WRITE SAME heuristics"), default to enabling
      WRITE SAME(10) even without successfully determining it is supported.
      After the first failed WRITE SAME the SCSI layer will disable WRITE SAME
      for the device (by setting sdkp->device->no_write_same, which results in
      'max_write_same_sectors' in the device's queue_limits being set to 0).
      
      When a device is stacked on top of such a SCSI device, any changes to
      that SCSI device's queue_limits do not automatically propagate up the
      stack.  As such, a DM multipath device will not have its WRITE SAME
      support disabled.  This causes the block layer to continue issuing
      WRITE SAME requests to the mpath device, which causes paths to fail
      and (if mpath IO isn't configured to queue when no paths are
      available) results in actual IO errors to the upper layers.
      
      This fix doesn't help configurations that have additional devices
      stacked on top of the mpath device (e.g. LVM-created linear DM devices
      on top).  A proper fix that restacks all the queue_limits from the
      bottom of the device stack up will need to be explored if SCSI
      continues to use this model of optimistically allowing op codes and
      then disabling them after they fail for the first time.
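      
      The core of the fix, sketched; the dm_get_queue_limits() helper and
      the surrounding error path are per my reading and may not match the
      patch exactly:
      
          /* in the mpath end_io path: if a WRITE SAME just failed and the
           * underlying queue has disabled it, disable it for mpath too */
          if (req->cmd_flags & REQ_WRITE_SAME &&
              !req->q->limits.max_write_same_sectors) {
                  struct queue_limits *limits =
                          dm_get_queue_limits(dm_table_get_md(m->ti->table));
          
                  /* device doesn't really support WRITE SAME, disable it */
                  limits->max_write_same_sectors = 0;
          }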
      
      Before this patch:
      
      EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
      device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
      device-mapper: multipath: XXX snitm debugging: failing WRITE SAME IO with error=-121
      end_request: critical target error, dev dm-6, sector 528
      dm-6: WRITE SAME failed. Manually zeroing.
      device-mapper: multipath: Failing path 8:112.
      end_request: I/O error, dev dm-6, sector 4616
      dm-6: WRITE SAME failed. Manually zeroing.
      end_request: I/O error, dev dm-6, sector 4616
      end_request: I/O error, dev dm-6, sector 5640
      end_request: I/O error, dev dm-6, sector 6664
      end_request: I/O error, dev dm-6, sector 7688
      end_request: I/O error, dev dm-6, sector 524288
      Buffer I/O error on device dm-6, logical block 65536
      lost page write due to I/O error on dm-6
      JBD2: Error -5 detected when updating journal superblock for dm-6-8.
      end_request: I/O error, dev dm-6, sector 524296
      Aborting journal on device dm-6-8.
      end_request: I/O error, dev dm-6, sector 524288
      Buffer I/O error on device dm-6, logical block 65536
      lost page write due to I/O error on dm-6
      JBD2: Error -5 detected when updating journal superblock for dm-6-8.
      
      # cat /sys/block/sdh/queue/write_same_max_bytes
      0
      # cat /sys/block/dm-6/queue/write_same_max_bytes
      33553920
      
      After this patch:
      
      EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
      device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
      device-mapper: multipath: XXX snitm debugging: WRITE SAME I/O failed with error=-121
      end_request: critical target error, dev dm-6, sector 528
      dm-6: WRITE SAME failed. Manually zeroing.
      
      # cat /sys/block/sdh/queue/write_same_max_bytes
      0
      # cat /sys/block/dm-6/queue/write_same_max_bytes
      0
      
      It should be noted that WRITE SAME support wasn't enabled in DM
      multipath until v3.10.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: Martin K. Petersen <martin.petersen@oracle.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: stable@vger.kernel.org # 3.10+
      f84cb8a4
    • dm-snapshot: fix performance degradation due to small hash size · 60e356f3
      Authored by Mikulas Patocka
      LVM2, since version 2.02.96, creates the origin with zero size, then
      loads the snapshot driver and then loads the origin.  Consequently,
      the snapshot driver sees the origin size as zero and sets the hash
      size to the lower bound 64.  Such a small hash table causes
      performance degradation.
      
      This patch changes it so that the hash size is determined by the size
      of the snapshot volume, not the minimum of the origin and snapshot
      sizes.  It doesn't make sense to set the snapshot size significantly
      larger than the origin size, so we do not need to take the origin size
      into account when calculating the hash size.
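      
      Roughly what the sizing becomes; the helper names are in the dm-snap
      style, but this is a sketch, not the exact patch:
      
          /* size the exception hash from the COW (snapshot) device alone */
          sector_t cow_dev_size = get_dev_size(s->cow->bdev);
          unsigned long hash_size = cow_dev_size >> s->store->chunk_shift;
          
          hash_size = rounddown_pow_of_two(hash_size);
          if (hash_size < 64)
                  hash_size = 64;        /* keep the old lower bound */
      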
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
      60e356f3
    • dm snapshot: workaround for a false positive lockdep warning · 5ea330a7
      Authored by Mikulas Patocka
      The kernel reports a lockdep warning if a snapshot is invalidated because
      it runs out of space.
      
      The lockdep warning was triggered by commit 0976dfc1
      ("workqueue: Catch more locking problems with flush_work()") in v3.5.
      
      The warning is a false positive.  The real cause for the warning is
      that the lockdep engine treats different instances of md->lock as a
      single lock.
      
      This patch is a workaround: we use flush_workqueue() instead of
      flush_work().  This code path is not performance-sensitive (it is
      called only on initialization or invalidation), so it doesn't matter
      that we flush the whole workqueue.
      
      The real fix for the problem would be to teach the lockdep engine to treat
      different instances of md->lock as separate locks.
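      
      The shape of the workaround; the identifier names here are
      hypothetical:
      
          /* before: lockdep flags this flush against the shared lock class */
          flush_work(&s->queued_work);
          
          /* after: flushing the whole queue avoids the false positive; this
           * path runs only on initialization/invalidation, so the extra
           * waiting is acceptable */
          flush_workqueue(s->wq);
      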
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.5+
      5ea330a7
  7. 19 Sep 2013, 2 commits
  8. 11 Sep 2013, 1 commit
    • drivers: convert shrinkers to new count/scan API · 7dc19d5a
      Authored by Dave Chinner
      Convert the driver shrinkers to the new API.  Most changes are
      compile-tested only because I either don't have the hardware or it's
      staging stuff.
      
      FWIW, the md and android code is pretty good, but the rest of it makes
      me want to claw my eyes out.  The amount of broken code I just
      encountered is mind boggling.  I've added comments explaining what is
      broken, but I fear that some of the code would be best dealt with by
      being dragged behind the bike shed, buried in mud up to its neck and
      then run over repeatedly with a blunt lawn mower.
      
      Special mention goes to the zcache/zcache2 drivers.  They can't co-exist
      in the build at the same time, they are under different menu options in
      menuconfig, they only show up when you've got the right set of mm
      subsystem options configured and so even compile testing is an exercise in
      pulling teeth.  And that doesn't even take into account the horrible,
      broken code...
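      
      For reference, the new API splits the old ->shrink() callback into a
      counting half and a freeing half; a minimal conversion looks roughly
      like the sketch below (my_cache and its helpers are placeholders):
      
          static unsigned long my_count(struct shrinker *shrink,
                                        struct shrink_control *sc)
          {
                  /* cheap, lock-light estimate of freeable objects */
                  return my_cache.nr_cached;
          }
          
          static unsigned long my_scan(struct shrinker *shrink,
                                       struct shrink_control *sc)
          {
                  /* try to free up to sc->nr_to_scan objects */
                  unsigned long freed = my_cache_trim(sc->nr_to_scan);
          
                  return freed ? freed : SHRINK_STOP;  /* can't make progress */
          }
          
          static struct shrinker my_shrinker = {
                  .count_objects = my_count,
                  .scan_objects  = my_scan,
                  .seeks         = DEFAULT_SEEKS,
          };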
      
      [glommer@openvz.org: fixes for i915, android lowmem, zcache, bcache]
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Glauber Costa <glommer@openvz.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Cc: Arve Hjønnevåg <arve@android.com>
      Cc: Carlos Maiolino <cmaiolino@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chuck Lever <chuck.lever@oracle.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: J. Bruce Fields <bfields@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      7dc19d5a
  9. 06 Sep 2013, 9 commits
  10. 02 Sep 2013, 1 commit
    • raid5: only wakeup necessary threads · bfc90cb0
      Authored by Shaohua Li
      If there are not enough stripes to handle, we'd better not always
      queue all available work_structs.  If one worker can only handle a few
      or even no stripes, it will impact request merging and create lock
      contention.
      
      With this patch, the number of running work_structs will depend on the
      number of pending stripes.  Note: some statistics used in the patch
      are accessed without locking protection.  This shouldn't matter; we
      just try our best to avoid queuing unnecessary work_structs.
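      
      The wakeup logic, roughly; the names and batch constant are
      illustrative:
      
          /* wake only about one worker per batch of pending stripes */
          int to_wake = DIV_ROUND_UP(group->stripes_cnt, MAX_STRIPE_BATCH);
          int i;
          
          for (i = 0; i < conf->worker_cnt_per_group && to_wake > 0; i++) {
                  struct r5worker *w = &group->workers[i];
          
                  if (w->working)
                          continue;  /* unlocked read; worst case we wake
                                      * one worker too few or too many */
                  w->working = true;
                  queue_work_on(sh->cpu, raid5_wq, &w->work);
                  to_wake--;
          }
      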
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      bfc90cb0
  11. 28 Aug 2013, 6 commits
    • md/raid5: flush out all pending requests before proceeding with reshape. · 4d77e3ba
      Authored by NeilBrown
      Some requests - particularly 'discard' and 'read' - are handled
      differently depending on whether a reshape is active or not.
      
      It is harmless to assume reshape is active if it isn't, but wrong
      to act as though reshape is not active when it is.
      
      So when we start reshape - after making clear to all requests that
      reshape has started - use mddev_suspend/mddev_resume to flush out all
      requests.  This will ensure that no requests will be assuming the
      absence of reshape once it really starts.
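      
      The flush itself is just the standard md quiesce pair (a sketch):
      
          /* after mddev->reshape_position is visible to new requests: */
          mddev_suspend(mddev);   /* blocks until all in-flight I/O completes */
          mddev_resume(mddev);    /* requests from here on see the reshape */
      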
      Signed-off-by: NeilBrown <neilb@suse.de>
      4d77e3ba
    • md/raid5: use seqcount to protect access to shape in make_request. · c46501b2
      Authored by NeilBrown
      make_request() accesses various shape parameters (raid_disks,
      chunk_size etc) which might be changed by raid5_start_reshape().
      
      If the latter is called at an awkward time during the former, the
      wrong stripe_head might be used.
      
      So introduce a 'seqcount' and, after finding a stripe_head, make sure
      there is no reason to expect that we got the wrong one.
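      
      This uses the standard seqcount retry pattern, sketched here with
      conf->gen_lock as the seqcount:
      
          /* reader, in make_request(): */
          do {
                  seq = read_seqcount_begin(&conf->gen_lock);
                  /* ... read raid_disks/chunk size, look up stripe_head ... */
          } while (read_seqcount_retry(&conf->gen_lock, seq));
          
          /* writer, in raid5_start_reshape(): */
          write_seqcount_begin(&conf->gen_lock);
          /* ... update the shape parameters ... */
          write_seqcount_end(&conf->gen_lock);
      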
      Signed-off-by: NeilBrown <neilb@suse.de>
      c46501b2
    • raid5: sysfs entry to control worker thread number · b721420e
      Authored by Shaohua Li
      Add a sysfs entry to control the number of running workqueue threads.
      If group_thread_cnt is set to 0, workqueue offload handling of stripes
      is disabled.
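      
      Usage, assuming the array is md0:
      
      # echo 4 > /sys/block/md0/md/group_thread_cnt    (4 workers per group)
      # echo 0 > /sys/block/md0/md/group_thread_cnt    (disable offloading)
      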
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      b721420e
    • raid5: offload stripe handle to workqueue · 851c30c9
      Authored by Shaohua Li
      This is another attempt to create multiple threads to handle raid5 stripes.
      This time I use workqueue.
      
      raid5 handles requests (especially writes) in stripe units.  A stripe
      is page-size aligned and spans all disks.  Writing to any disk sector,
      raid5 runs a state machine for the corresponding stripe, which
      includes reading some disks of the stripe, calculating parity, and
      writing some disks of the stripe.  The state machine currently runs in
      the raid5d thread.  Since there is only one thread, it doesn't scale
      well for high-speed storage.  An obvious solution is multi-threading.
      
      To get better performance, we have some requirements:
      a. Locality: a stripe corresponding to a request submitted from one
      CPU is better handled by a thread on the local CPU or local node.  The
      local CPU is preferred, but it can sometimes become a bottleneck (for
      example, when parity calculation is too heavy); running on the local
      node adapts to a wider range of cases.
      b. Configurability: different raid5 array setups might need different
      configuration, especially the thread number.  More threads don't
      always mean better performance, because of lock contention.
      
      My original implementation created some kernel threads.  There were
      interfaces to control which CPU's stripes each thread should handle,
      and userspace could set the affinity of the threads.  This provides
      the biggest flexibility and configurability, but it's hard to use, and
      apparently a new thread pool implementation is disfavored.
      
      Recent workqueue improvements are quite promising: an unbound
      workqueue will be bound to a NUMA node, and if WQ_SYSFS is set on the
      workqueue, there are sysfs options for affinity settings (for example,
      we can include only one HT sibling in the affinity).  Work items are
      non-reentrant by default, and we can control the number of running
      threads by limiting the number of dispatched work_structs.
      
      In this patch, I created several stripe worker groups.  A group
      corresponds to a NUMA node: stripes from the CPUs of one node are
      added to that group's list, and workqueue threads of one node only
      handle stripes of that node's worker group.  In this way, stripe
      handling has NUMA node locality.  And, as said above, we can control
      the thread number by limiting the number of dispatched work_structs.
      
      The work_struct callback function handles several stripes in one run.
      Typical workqueue usage is to run one unit in each work_struct; in the
      raid5 case, the unit would be a stripe.  But we can't do that:
      a. Though handling a stripe doesn't need a lock (because of reference
      accounting, and the stripe isn't on any list), queuing a work_struct
      for each stripe would make the workqueue lock very heavily contended.
      b. blk_start_plug()/blk_finish_plug() should surround stripe handling,
      as we might dispatch requests.  If each work_struct only handled one
      stripe, such a block plug would be meaningless.
      
      This implementation can't do very fine-grained configuration, but NUMA
      binding is the most popular usage model and should be enough for most
      workloads (see the structural sketch after the note below).
      
      Note: since we have only one stripe queue, switching to multi-threading
      might decrease the size of requests dispatched down to the low-level
      layer.  The impact depends on thread number, raid configuration and
      workload, so multi-threaded raid5 might not be appropriate for all
      setups.
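      
      Structurally, the patch looks something like the sketch below; the
      real code carries more state per group and worker:
      
          /* one worker group per NUMA node */
          struct r5worker_group {
                  struct list_head handle_list;  /* stripes queued by this node's CPUs */
                  struct r5worker *workers;      /* per-group work_structs */
          };
          
          static void raid5_do_work(struct work_struct *work)
          {
                  struct blk_plug plug;
          
                  /* plug so requests dispatched while handling a whole batch
                   * of stripes can still be merged */
                  blk_start_plug(&plug);
                  /* ... handle a batch of stripes from the group's
                   * handle_list, not one stripe per work_struct ... */
                  blk_finish_plug(&plug);
          }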
      
      Changes V1 -> V2:
      1. remove WQ_NON_REENTRANT
      2. disabling multi-threading by default
      3. Add more descriptions in changelog
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      851c30c9
    • raid5: fix stripe release order · d265d9dc
      Authored by Shaohua Li
      patch "make release_stripe lockless" changes the order stripes are released.
      Originally I thought block layer can take care of request merge, but it appears
      there are still some requests not merged. It's easy to fix the order.
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      d265d9dc
    • raid5: make release_stripe lockless · 773ca82f
      Authored by Shaohua Li
      release_stripe still has big lock contention.  We just add the stripe
      to an llist without taking device_lock, and let the raid5d thread do
      the real stripe release, since it must hold device_lock anyway.  In
      this way, release_stripe doesn't hold any locks.
      
      The side effect is that the order in which stripes are released
      changes.  But that doesn't sound like a big deal: stripes are never
      handled in order, and the block layer can already do nice request
      merging, which means order isn't that important.
      
      I kept the unplug release batch, which is unnecessary with this patch
      from a lock-contention point of view; in fact, if we deleted it, the
      stripe_head release_list and lru could share storage.  But the unplug
      release batch is also helpful for request merging.  We could probably
      delay waking raid5d until unplug, but I'm still wary of the case where
      raid5d is already running.
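      
      The mechanism, sketched; note llist delivers entries LIFO, which is
      why the follow-up "fix stripe release order" patch reverses the list
      before processing:
      
          /* producer: lock-free push; llist_add() returns true if the list
           * was empty, so we only wake raid5d for the first stripe */
          if (llist_add(&sh->release_list, &conf->released_stripes))
                  md_wakeup_thread(conf->mddev->thread);
          
          /* consumer, in raid5d (which takes device_lock anyway): */
          struct llist_node *head = llist_del_all(&conf->released_stripes);
          /* ... walk head and do the real release under device_lock ... */
      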
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      773ca82f