1. 16 April 2015, 12 commits
    • init: stricter checking of major:minor root= values · 283e7ad0
      Committed by Dan Ehrenberg
      Previously, on the kernel command line, root=1:2jakshflaksjdhfa would
      be accepted and interpreted just like root=1:2.  This patch adds
      stricter checking so that additional characters after major:minor are
      rejected by root=.
      
      The goal of this change is to help in unifying DM's interpretation of
      its block device argument by using existing kernel code (name_to_dev_t).
      Since DM rejects malformed major:minor pairs, it seems reasonable for
      root= to reject them as well.
      Signed-off-by: Dan Ehrenberg <dehrenberg@chromium.org>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
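      A minimal userspace sketch of the stricter parse described in this entry (not the
      kernel's actual name_to_dev_t code; names here are purely illustrative) shows the
      difference: any character left over after the minor number causes rejection.

        /* Illustrative only: strict "major:minor" parsing that rejects trailing junk. */
        #include <stdio.h>
        #include <stdlib.h>

        static int parse_major_minor(const char *s, unsigned long *major, unsigned long *minor)
        {
                char *end;
                const char *p;

                *major = strtoul(s, &end, 10);
                if (end == s || *end != ':')
                        return -1;              /* no digits, or no ':' separator */

                p = end + 1;
                *minor = strtoul(p, &end, 10);
                if (end == p || *end != '\0')
                        return -1;              /* trailing characters: reject */

                return 0;
        }

        int main(void)
        {
                unsigned long major, minor;

                printf("1:2     -> %d\n", parse_major_minor("1:2", &major, &minor));     /* 0  */
                printf("1:2junk -> %d\n", parse_major_minor("1:2junk", &major, &minor)); /* -1 */
                return 0;
        }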
    • init: export name_to_dev_t and mark name argument as const · e6e20a7a
      Committed by Dan Ehrenberg
      DM will switch its device lookup code to using name_to_dev_t() so it
      must be exported.  Also, the @name argument should be marked const.
      Signed-off-by: Dan Ehrenberg <dehrenberg@chromium.org>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
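      A hedged sketch of what the interface change amounts to (the header location and the
      exact export macro are assumptions, not quoted from the patch):

        /* include/linux/mount.h (assumed location of the declaration) */
        dev_t name_to_dev_t(const char *name);          /* @name is now const */

        /* init/do_mounts.c */
        EXPORT_SYMBOL_GPL(name_to_dev_t);               /* assumed macro; makes it callable from dm-mod */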
    • dm: add 'use_blk_mq' module param and expose in per-device ro sysfs attr · 17e149b8
      Committed by Mike Snitzer
      Request-based DM's blk-mq support defaults to off; but a user can easily
      change the default using the dm_mod.use_blk_mq module/boot option.
      
      Also, you can check what mode a given request-based DM device is using
      with: cat /sys/block/dm-X/dm/use_blk_mq
      
      This change enabled further cleanup and reduced work (e.g. the
      md->io_pool and md->rq_pool aren't created when using blk-mq).
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
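      A hedged sketch of the two pieces this adds, assuming the parameter lives in the dm-mod
      module and the attribute follows DM's per-device sysfs show() convention (variable,
      helper and function names below are illustrative, not quoted from the patch):

        #include <linux/module.h>

        static bool use_blk_mq = false;          /* blk-mq support defaults to off */
        module_param(use_blk_mq, bool, S_IRUGO | S_IWUSR);
        MODULE_PARM_DESC(use_blk_mq, "Use block multiqueue for request-based DM devices");

        /* Backs the read-only /sys/block/dm-X/dm/use_blk_mq attribute */
        static ssize_t dm_attr_use_blk_mq_show(struct mapped_device *md, char *buf)
        {
                return sprintf(buf, "%d\n", dm_use_blk_mq(md) ? 1 : 0);
        }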
    • dm: optimize dm_mq_queue_rq to _not_ use kthread if using pure blk-mq · 02233342
      Committed by Mike Snitzer
      dm_mq_queue_rq() is in atomic context so care must be taken to not
      sleep -- as such GFP_ATOMIC is used for the md->bs bioset allocations
      and dm-mpath's call to blk_get_request().  In the future the bioset
      allocations will hopefully go away (by removing support for partial
      completions of bios in a cloned request).
      
      Also prepare for supporting DM blk-mq on top of old-style request_fn
      device(s) if the new dm_mod 'use_blk_mq' parameter is set.  The kthread
      will still be used to queue work when blk-mq is layered on top of
      old-style request_fn device(s).
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
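      A hedged sketch of the non-sleeping allocations mentioned above; the wrapper function is
      illustrative and not the real dm_mq_queue_rq() path, but the gfp flags are the point:

        static int clone_in_atomic_context(struct mapped_device *md, struct bio *bio,
                                           struct block_device *path_bdev, int rw)
        {
                struct bio *clone;
                struct request *rq;

                /* bioset allocation: GFP_ATOMIC never sleeps but may fail */
                clone = bio_clone_fast(bio, GFP_ATOMIC, md->bs);
                if (!clone)
                        return -ENOMEM;

                /* dm-mpath style: take a request from the underlying queue without
                 * sleeping; callers must cope with allocation failure and retry */
                rq = blk_get_request(bdev_get_queue(path_bdev), rw, GFP_ATOMIC);
                if (IS_ERR(rq)) {
                        bio_put(clone);
                        return PTR_ERR(rq);
                }

                blk_put_request(rq);    /* sketch only: immediately release both */
                bio_put(clone);
                return 0;
        }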
    • dm: add full blk-mq support to request-based DM · bfebd1cd
      Committed by Mike Snitzer
      Commit e5863d9a ("dm: allocate requests in target when stacking on
      blk-mq devices") served as the first step toward fully utilizing blk-mq
      in request-based DM -- it enabled stacking an old-style (request_fn)
      request_queue on top of the underlying blk-mq device(s).  That first
      step didn't improve performance of DM multipath on top of fast blk-mq
      devices (e.g. NVMe) because the top-level old-style request_queue was
      severely limited by the queue_lock.
      
      The second step offered here enables stacking a blk-mq request_queue
      on top of the underlying blk-mq device(s).  This unlocks significant
      performance gains on fast blk-mq devices.  Keith Busch tested it on his
      NVMe testbed and offered this really positive news:
      
       "Just providing a performance update. All my fio tests are getting
        roughly equal performance whether accessed through the raw block
        device or the multipath device mapper (~470k IOPS). I could only push
        ~20% of the raw iops through dm before this conversion, so this latest
        tree is looking really solid from a performance standpoint."
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Tested-by: Keith Busch <keith.busch@intel.com>
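      For reference, a hedged sketch of what wiring a blk-mq front end onto a DM device roughly
      looks like, written against the generic blk-mq API of that era; queue depth, flags and
      the helper names are assumptions, not quoted from the patch:

        static struct blk_mq_ops dm_mq_ops_sketch = {
                .queue_rq  = dm_mq_queue_rq,            /* dispatch, atomic context */
                .map_queue = blk_mq_map_queue,
                .complete  = dm_softirq_done,           /* completion handling */
        };

        static int dm_init_blk_mq_queue_sketch(struct mapped_device *md)
        {
                struct request_queue *q;
                int err;

                memset(&md->tag_set, 0, sizeof(md->tag_set));
                md->tag_set.ops = &dm_mq_ops_sketch;
                md->tag_set.nr_hw_queues = 1;
                md->tag_set.queue_depth = BLKDEV_MAX_RQ;
                md->tag_set.numa_node = NUMA_NO_NODE;
                md->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
                md->tag_set.driver_data = md;

                err = blk_mq_alloc_tag_set(&md->tag_set);
                if (err)
                        return err;

                /* replace the old request_fn queue with a blk-mq one */
                q = blk_mq_init_allocated_queue(&md->tag_set, md->queue);
                if (IS_ERR(q)) {
                        blk_mq_free_tag_set(&md->tag_set);
                        return PTR_ERR(q);
                }
                md->queue = q;
                return 0;
        }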
    • dm: impose configurable deadline for dm_request_fn's merge heuristic · 0ce65797
      Committed by Mike Snitzer
      Otherwise, for sequential workloads, the dm_request_fn can allow
      excessive request merging at the expense of increased service time.
      
      Add a per-device sysfs attribute to allow the user to control how long
      a request that is a reasonable merge candidate can be queued on the
      request queue.  The resolution of this request dispatch deadline is in
      microseconds (ranging from 1 to 100000 usecs).  For example, to set a
      20us deadline:
        echo 20 > /sys/block/dm-7/dm/rq_based_seq_io_merge_deadline
      
      The dm_request_fn's merge heuristic and associated extra accounting is
      disabled by default (rq_based_seq_io_merge_deadline is 0).
      
      This sysfs attribute is not applicable to bio-based DM devices so it
      will only ever report 0 for them.
      
      By allowing a request to remain on the queue it will block other
      requests on the queue.  But introducing a short dequeue delay has proven
      very effective at enabling certain sequential IO workloads on really
      fast, yet IOPS constrained, devices to build up slightly larger IOs --
      yielding 90+% throughput improvements.  Having precise control over the
      time taken to wait for larger requests to build affords control beyond
      that of waiting for certain IO sizes to accumulate (which would require
      a deadline anyway).  This knob will only ever make sense with sequential
      IO workloads and the particular value used is storage configuration
      specific.
      
      Given the expected niche use-case for when this knob is useful it has
      been deemed acceptable to expose this relatively crude method for
      crafting optimal IO on specific storage -- especially given the solution
      is simple yet effective.  In the context of DM multipath, it is
      advisable to tune this sysfs attribute to a value that offers the best
      performance for the common case (e.g. if 4 paths are expected active,
      tune for that; if paths fail then performance may be slightly reduced).
      
      Alternatives were explored to have request-based DM autotune this value
      (e.g. if/when paths fail) but they were quickly deemed too fragile and
      complex to warrant further design and development time.  If this problem
      proves more common as faster storage emerges we'll have to look at
      elevating a generic solution into the block core.
      Tested-by: Shiva Krishna Merla <shivakrishna.merla@netapp.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
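      A hedged sketch of how such a deadline check can sit in front of dm_request_fn()'s
      dispatch decision (the mapped_device fields are illustrative, not the real ones):

        static bool before_seq_io_merge_deadline(struct mapped_device *md)
        {
                ktime_t deadline;

                if (!md->seq_rq_merge_deadline_usecs)
                        return false;           /* 0 == heuristic disabled (the default) */

                deadline = ktime_add_safe(md->last_rq_start_time,
                                          ns_to_ktime((u64)md->seq_rq_merge_deadline_usecs *
                                                      NSEC_PER_USEC));

                /* still inside the deadline: leave the merge candidate queued */
                return !ktime_after(ktime_get(), deadline);
        }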
    • dm sysfs: introduce ability to add writable attributes · b898320d
      Committed by Mike Snitzer
      Add DM_ATTR_RW() macro and establish .store method in dm_sysfs_ops.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
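      A hedged sketch of the macro and the ops extension, modeled on DM's existing DM_ATTR_RO
      pattern (the exact definitions are assumptions):

        #define DM_ATTR_RW(_name) \
        struct dm_sysfs_attr dm_attr_##_name = \
                __ATTR(_name, S_IRUGO | S_IWUSR, dm_attr_##_name##_show, \
                       dm_attr_##_name##_store)

        static const struct sysfs_ops dm_sysfs_ops = {
                .show  = dm_attr_show,
                .store = dm_attr_store,         /* newly wired up for writable attributes */
        };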
    • dm: don't start current request if it would've merged with the previous · de3ec86d
      Committed by Mike Snitzer
      Request-based DM's dm_request_fn() pulls requests off the queue so
      quickly that steps need to be taken to promote merging by holding off
      request processing when it makes sense.

      If the current request would've merged with the previous request, let
      the current request stay on the queue longer.
      Suggested-by: Jens Axboe <axboe@fb.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
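      A hedged sketch of the kind of check this implies inside dm_request_fn() (the fields that
      remember the previous request are illustrative):

        static bool rq_would_have_merged(struct mapped_device *md, struct request *rq)
        {
                /* same direction and starts where the previous request ended */
                return rq_data_dir(rq) == md->last_rq_rw &&
                       blk_rq_pos(rq) == md->last_rq_end_pos;
        }

        /* In dm_request_fn(): if rq_would_have_merged(md, rq), briefly delay the
         * queue instead of starting rq, giving the elevator a chance to merge. */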
    • dm: reduce the queue delay used in dm_request_fn from 100ms to 10ms · d548b34b
      Committed by Mike Snitzer
      Commit 7eaceacc ("block: remove per-queue plugging") didn't justify
      DM's use of a 100ms delay; such an extended delay is a liability when
      there is reason to re-kick the queue.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm: don't schedule delayed run of the queue if nothing to do · 9d1deb83
      Committed by Mike Snitzer
      In request-based DM's dm_request_fn(), if blk_peek_request() returns
      NULL, just return.  This avoids an unnecessary blk_delay_queue().
      Reported-by: Jens Axboe <axboe@fb.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
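      A hedged, simplified sketch of the early return (the surrounding loop is reduced to its
      skeleton):

        static void dm_request_fn_sketch(struct request_queue *q)
        {
                struct request *rq;

                while (!blk_queue_stopped(q)) {
                        rq = blk_peek_request(q);
                        if (!rq)
                                return;         /* queue empty: skip blk_delay_queue() */

                        blk_start_request(rq);  /* dispatch continues as before */
                }
        }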
    • dm: only run the queue on completion if congested or no requests pending · 9a0e609e
      Committed by Mike Snitzer
      On really fast storage it can be beneficial to delay running the
      request_queue to allow the elevator more opportunity to merge requests.
      
      Otherwise, it has been observed that requests are being sent to
      q->request_fn much quicker than is ideal on IOPS-bound backends.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
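      A hedged sketch of the completion-side condition, assuming a helper that counts requests
      pending on the DM device (both the helper and the congestion comparison are assumptions):

        static void run_queue_on_completion(struct mapped_device *md)
        {
                struct request_queue *q = md->queue;

                /* Re-run the queue only when it is idle or already congested;
                 * otherwise leave pending requests alone so the elevator can merge. */
                if (!dm_requests_pending(md) ||
                    dm_requests_pending(md) >= q->nr_congestion_on)
                        blk_run_queue_async(q);
        }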
    • dm: remove request-based logic from make_request_fn wrapper · ff36ab34
      Committed by Mike Snitzer
      The old dm_request() method used for q->make_request_fn had a branch
      for request-based DM support, but that branch isn't needed given that
      dm_init_request_based_queue() sets make_request_fn to the standard
      blk_queue_bio() anyway.
      
      Clean up dm_init_md_queue() to be DM device-type agnostic and have
      dm_setup_md_queue() properly finish queue setup based on DM device-type
      (bio-based vs request-based).
      
      A followup block patch can be made to remove the export for
      blk_queue_bio() now that DM no longer calls it directly.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
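      A hedged sketch of the split the message describes (function names follow the commit
      message, bodies are illustrative):

        /* device-type agnostic defaults shared by all DM devices */
        static void dm_init_md_queue_sketch(struct mapped_device *md)
        {
                md->queue->queuedata = md;
        }

        /* finish queue setup once the DM device type is known */
        static int dm_setup_md_queue_sketch(struct mapped_device *md)
        {
                if (dm_request_based(md)) {
                        /* request-based: blk_queue_bio() (or blk-mq) is set up here */
                        return dm_init_request_based_queue(md);
                }

                /* bio-based: install DM's bio front end directly */
                blk_queue_make_request(md->queue, dm_make_request);
                return 0;
        }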
  2. 01 April 2015, 11 commits
  3. 31 March 2015, 1 commit
    • block: fix blk_stack_limits() regression due to lcm() change · e9637415
      Committed by Mike Snitzer
      Linux 3.19 commit 69c953c8 ("lib/lcm.c: lcm(n,0)=lcm(0,n) is 0, not n")
      caused blk_stack_limits() to not properly stack queue_limits for stacked
      devices (e.g. DM).
      
      Fix this regression by establishing lcm_not_zero() and switching
      blk_stack_limits() over to using it.
      
      DM uses blk_set_stacking_limits() to establish the initial top-level
      queue_limits that are then built up based on underlying devices' limits
      using blk_stack_limits().  In the case of optimal_io_size (io_opt)
      blk_set_stacking_limits() establishes a default value of 0.  With commit
      69c953c8, lcm(0, n) is no longer n, which compromises proper stacking of
      the underlying devices' io_opt.
      
      Test:
      $ modprobe scsi_debug dev_size_mb=10 num_tgts=1 opt_blks=1536
      $ cat /sys/block/sde/queue/optimal_io_size
      786432
      $ dmsetup create node --table "0 100 linear /dev/sde 0"
      
      Before this fix:
      $ cat /sys/block/dm-5/queue/optimal_io_size
      0
      
      After this fix:
      $ cat /sys/block/dm-5/queue/optimal_io_size
      786432
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.19+
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
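      A hedged reconstruction of the helper's shape and of how blk_stack_limits() uses it
      (close to, but not quoted from, the actual patch):

        /* lcm(a, b) returns 0 when either side is 0 (after commit 69c953c8);
         * for stacking queue_limits we instead want to keep the non-zero value. */
        static inline unsigned long lcm_not_zero(unsigned long a, unsigned long b)
        {
                unsigned long l = lcm(a, b);

                if (l)
                        return l;
                return (b ? : a);
        }

        /* In blk_stack_limits(), the optimal I/O size is then stacked with
         * something like:  t->io_opt = lcm_not_zero(t->io_opt, b->io_opt);
         * so a top-level default of 0 no longer wipes out the underlying io_opt. */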
  4. 30 March 2015, 9 commits
  5. 29 March 2015, 7 commits