1. 12 May 2012 (1 commit)
  2. 29 Mar 2012 (2 commits)
  3. 15 Jan 2012 (1 commit)
  4. 02 Aug 2011 (2 commits)
  5. 27 Jul 2011 (1 commit)
  6. 29 May 2011 (1 commit)
  7. 24 Mar 2011 (2 commits)
    • dm mpath: allow table load with no priority groups · a490a07a
      Committed by Mike Snitzer
      This patch adjusts the multipath target to allow a table with both 0
      priority groups and 0 for the initial priority group number.
      
      If any mpath device is held open when all paths in the last priority
      group have failed, userspace multipathd will attempt to reload the
      associated DM table to reflect the fact that the device no longer has
      any priority groups.  But the reload attempt always failed because the
      multipath target did not allow 0 priority groups.
      
      All multipath target messages that act on a priority group
      (enable_group, disable_group, switch_group) handle a priority
      group of 0 by returning an error.
      
      When reloading a multipath table with 0 priority groups, userspace
      multipathd must be updated to specify an initial priority group number
      of 0 (rather than 1).
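      
      For example, with this change a reload like the following (device
      name and size are hypothetical) is accepted; the trailing "0 0"
      means 0 priority groups and an initial priority group number of 0:
      
      $ dmsetup reload mpathb --table "0 2097152 multipath 0 0 0 0"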
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: Babu Moger <babu.moger@lsi.com>
      Acked-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm mpath: fail message ioctl if specified path is not valid · 19040c0b
      Committed by Mike Snitzer
      Fail the reinstate_path and fail_path message ioctl if the specified
      path is not valid.
      
      Previously, the message ioctl would succeed for the
      'reinstate_path' and 'fail_path' messages even if no action was
      taken because the specified device was not a valid path of the
      multipath device.
      
      Before, when /dev/vdb is not a path of mpathb:
      $ dmsetup message mpathb 0 reinstate_path /dev/vdb
      $ echo $?
      0
      
      After:
      $ dmsetup message mpathb 0 reinstate_path /dev/vdb
      device-mapper: message ioctl failed: Invalid argument
      Command failed
      $ echo $?
      1
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
  8. 13 Feb 2011 (1 commit)
  9. 14 Jan 2011 (4 commits)
  10. 12 Aug 2010 (2 commits)
  11. 06 Mar 2010 (7 commits)
  12. 11 Dec 2009 (4 commits)
  13. 05 Dec 2009 (1 commit)
  14. 23 Aug 2009 (1 commit)
  15. 24 Jul 2009 (1 commit)
  16. 22 Jun 2009 (9 commits)
    • dm mpath: change to be request based · f40c67f0
      Committed by Kiyoshi Ueda
      This patch converts the dm-multipath target from bio-based to
      request-based.
      
      Basically, the patch just converts the I/O unit from struct bio to
      struct request.  In the course of the conversion, it also changes
      the I/O queueing mechanism; that change is described in detail
      below.
      
      I/O queueing mechanism change
      -----------------------------
      In I/O submission, map_io(), there is no mechanism change from
      bio-based, since the clone request is ready for retry as it is.
      However, in I/O completion, do_end_io(), there is a mechanism change
      from bio-based, since the clone request is not ready for retry.
      
      In do_end_io() of bio-based, the clone bio has all needed memory
      for resubmission.  So the target driver can queue it and resubmit
      it later without memory allocations.
      The mechanism has almost no overhead.
      
      On the other hand, in do_end_io() of request-based dm, the clone
      request doesn't have clone bios, so the target driver can't
      resubmit it as it is.  Resubmitting the clone request would require
      allocating memory for clone bios, which incurs some overhead.
      To avoid that overhead just for queueing, the target driver doesn't
      queue the clone request itself.  Instead, it asks the dm core to
      queue and remap the original request of the clone request; the only
      queueing cost is then freeing the memory of the clone request.
      
      As a result, the target driver doesn't need to record/restore
      the information of the original request for resubmitting
      the clone request.  So dm_bio_details in dm_mpath_io is removed.
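      
      As a sketch, assuming the kernel names of that era (simplified;
      queue_if_no_path handling and locking omitted), the completion
      path looks like:
      
        static int do_end_io(struct multipath *m, struct request *clone,
                             int error, struct dm_mpath_io *mpio)
        {
                if (!error)
                        return 0;       /* I/O completed successfully */
      
                if (mpio->pgpath)
                        fail_path(mpio->pgpath);  /* mark the path failed */
      
                /*
                 * Ask the dm core to requeue and remap the original
                 * request; the clone is simply freed, so queueing costs
                 * almost nothing.
                 */
                return DM_ENDIO_REQUEUE;
        }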
      
      multipath_busy()
      ----------------
      The target driver returns "busy" only when both of the following
      hold:
        o the target driver would map I/Os if map() were called now
        and
        o the mapped I/Os would wait on the underlying device's queue
          because it is congested.
      
      In all other cases, the target driver doesn't return "busy".
      Otherwise, the dm core would keep hold of the I/Os and the target
      driver couldn't do what it wants with them (e.g. when it can't map
      I/Os now and wants to fail them instead).  A sketch of this policy
      follows.
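      
      Below is a simplified sketch of that policy; path_queue_congested()
      is a hypothetical helper standing in for whatever congestion check
      the underlying block layer provides:
      
        static int multipath_busy(struct dm_target *ti)
        {
                struct multipath *m = ti->private;
                struct priority_group *pg = m->current_pg;
                struct pgpath *pgpath;
      
                /* No usable group: map() would queue or fail, not map. */
                if (!pg)
                        return 0;
      
                /*
                 * Busy only if every active path in the current group
                 * would make the mapped I/O wait on a congested queue.
                 */
                list_for_each_entry(pgpath, &pg->pgpaths, list)
                        if (pgpath->is_active &&
                            !path_queue_congested(pgpath)) /* hypothetical */
                                return 0;
      
                return 1;
        }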
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Acked-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm targets: introduce iterate devices fn · af4874e0
      Committed by Mike Snitzer
      Add .iterate_devices to 'struct target_type' to allow a function to
      be called for all devices in a DM target.  It is implemented for
      all targets except those in dm-snap.c (origin and snapshot).
      
      (The raid1 version number jumps to 1.12 because we originally reserved
      1.1 to 1.11 for 'block_on_error' but ended up using 'handle_errors'
      instead.)
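      
      As a sketch, the shape of the new hook (signatures as in the kernel
      headers of that era; a single-device target, dm-linear style, just
      calls fn() once on its one underlying device):
      
        typedef int (*iterate_devices_callout_fn)(struct dm_target *ti,
                                                  struct dm_dev *dev,
                                                  sector_t start,
                                                  sector_t len,
                                                  void *data);
      
        static int linear_iterate_devices(struct dm_target *ti,
                                          iterate_devices_callout_fn fn,
                                          void *data)
        {
                struct linear_c *lc = ti->private;
      
                return fn(ti, lc->dev, lc->start, ti->len, data);
        }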
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Cc: martin.petersen@oracle.com
    • dm mpath: add start_io and nr_bytes to path selectors · 02ab823f
      Committed by Kiyoshi Ueda
      This patch makes two additions to the dm path selector interface for
      dynamic load balancers:
        o a new hook, start_io()
        o a new parameter 'nr_bytes' to select_path()/start_io()/end_io()
          to pass the size of the I/O
      
      start_io() is called when a target driver actually submits I/O to
      the selected path.  Path selectors can use it to start accounting
      for the I/O (e.g. counting the number of in-flight I/Os).
      The start_io hook is based on the patch posted by Stefan Bader:
      https://www.redhat.com/archives/dm-devel/2005-October/msg00050.html
      
      nr_bytes, the size of the I/O, lets path selectors take the size of
      the I/O into account when deciding which path to use;
      dm-service-time uses it to estimate service time, for example.
      (The nr_bytes member is added to dm_mpath_io instead of using the
      existing details.bi_size, since the request-based dm patch deletes
      it.)
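      
      As a sketch, the extended hooks in struct path_selector_type look
      like this (per the dm path selector header of that era):
      
        struct dm_path *(*select_path)(struct path_selector *ps,
                                       unsigned *repeat_count,
                                       size_t nr_bytes);
        int (*start_io)(struct path_selector *ps, struct dm_path *path,
                        size_t nr_bytes);
        int (*end_io)(struct path_selector *ps, struct dm_path *path,
                      size_t nr_bytes);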
      Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm mpath: support barriers · 8627921f
      Committed by Mikulas Patocka
      Add flush support to the dm-multipath target.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm mpath: flush keventd queue in destructor · 53b351f9
      Committed by Mikulas Patocka
      Commit fe9cf30e moved dm table event submission from the
      kmultipathd queue to the kernel keventd queue to avoid a deadlock.
      
      This leaves a possible race condition because the keventd queue is
      not flushed in the multipath destructor.  The scenario is:
      - some event happens and is queued to keventd
      - the keventd thread is delayed due to scheduling latency or some
        other work
      - the multipath device is destroyed
      - keventd now attempts to process a work_struct that resides in
        already-released memory.
      
      The patch flushes the keventd queue in the multipath destructor.
      I've already fixed a similar bug in dm-raid1.
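      
      A sketch of the fix, assuming the destructor of that era
      (flush_scheduled_work() drains the keventd queue):
      
        static void multipath_dtr(struct dm_target *ti)
        {
                struct multipath *m = ti->private;
      
                flush_workqueue(kmultipathd);
                /* Drain keventd so no queued event outlives the device. */
                flush_scheduled_work();
                free_multipath(m);
        }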
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Cc: stable@kernel.org
    • dm mpath: call activate fn for each path in pg_init · e54f77dd
      Committed by Chandra Seetharaman
      Fixed a problem affecting reinstatement of passive paths.
      
      Before we moved the hardware handler from dm to SCSI, it performed a pg_init
      for a path group and didn't maintain any state about each path in hardware
      handler code.
      
      But in SCSI dh, such state is now maintained, as we want to fail I/O early on a
      path if it is not the active path.
      
      All the hardware handlers now have a state, set to active or some
      form of inactive, and a prep_fn() which uses this state to fail
      I/O without it ever being sent to the device.
      
      So in effect, when dm-multipath calls scsi_dh_activate(), an
      activate is sent to only one path and the "state" of that path is
      changed appropriately to "active", while the other paths in the
      same path group are never changed because they never received an
      "activate".
      
      To make sure all the paths in a path group get their state set
      properly when a pg_init happens, we need to call scsi_dh_activate()
      on all paths in the path group.
      
      Doing this at the hardware handler layer is not a good option as we
      want the multipath layer to define the relationship between path and path
      groups and not the hardware handler.
      
      This patch sends an "activate" on each path in a path group when
      the path group is switched, and also sends an activate when a path
      is reinstated.
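      
      A simplified sketch of the per-path activation; the helper name
      pg_init_all_paths() is illustrative (the real code queues per-path
      activate work):
      
        static void pg_init_all_paths(struct multipath *m)
        {
                struct pgpath *pgpath;
      
                /* Activate every usable path, not just the one last used. */
                list_for_each_entry(pgpath, &m->current_pg->pgpaths, list)
                        if (pgpath->is_active)
                                scsi_dh_activate(bdev_get_queue(
                                        pgpath->path.dev->bdev));
        }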
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm mpath: change attached scsi_dh · a0cf7ea9
      Committed by Hannes Reinecke
      When specifying a different hardware handler via multipath
      features we should be able to override the built-in defaults.
      
      The problem here is that the hardware table in scsi_dh is compiled
      in and cannot be changed from userland.  multipath.conf, on the
      other hand, is purely user-defined and, what's more, the user might
      have a valid reason for modifying it (e.g. an EMC CLARiiON can well
      be run in PNR mode even though ALUA is active, or the user might
      want to try ALUA on as-yet-unknown devices).
      
      So _not_ allowing multipath to override the device handler setting
      would just add to the confusion and make error tracking even more
      difficult.
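      
      For example, a table like the following (device numbers and size
      are hypothetical) attaches the 'emc' handler via the hardware
      handler parameters, overriding whatever scsi_dh would have chosen:
      
      $ dmsetup create mp_example --table \
          "0 409600 multipath 0 1 emc 1 1 round-robin 0 1 1 8:16 1000"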
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm mpath: validate hw_handler argument count · e094f4f1
      Committed by Mikulas Patocka
      Fix arg count parsing error in hw handlers.
      
      Cc: stable@kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    • dm mpath: validate table argument count · 0e0497c0
      Committed by Mikulas Patocka
      The parser reads the argument count as a number but doesn't check that
      sufficient arguments are supplied. This command triggers the bug:
      
      dmsetup create mpath --table "0 `blockdev --getsize /dev/mapper/cr0`
          multipath 0 0 2 1 round-robin 1000 0 1 1 /dev/mapper/cr0
          round-robin 0 1 1 /dev/mapper/cr1 1000"
      kernel BUG at drivers/md/dm-mpath.c:530!
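      
      A sketch of the kind of check the fix adds before consuming each
      group of arguments (names illustrative; struct arg_set holds the
      remaining argc/argv):
      
        if (nr_args > as->argc) {
                ti->error = "not enough arguments";  /* fail cleanly */
                return -EINVAL;
        }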
      
      Cc: stable@kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>