1. 22 December 2012, 2 commits
    • dm: introduce per_bio_data · c0820cf5
      Committed by Mikulas Patocka
      Introduce a field per_bio_data_size in struct dm_target.
      
      Targets can set this field in the constructor. If a target sets this
      field to a non-zero value, "per_bio_data_size" bytes of auxiliary data
      are allocated for each bio submitted to the target. The target can
      use this data for any purpose, and it helps improve performance by
      removing some per-target mempools.
      
      Per-bio data is accessed with dm_per_bio_data(). The data_size
      argument must be the same as the per_bio_data_size value set in
      dm_target.

      If the target has a pointer to per_bio_data, it can get a pointer
      back to the bio with the dm_bio_from_per_bio_data() function
      (data_size must again be the same as the value passed to
      dm_per_bio_data()).
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      c0820cf5
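      A minimal sketch (editor's illustration, not part of the commit) of how a
      target could use this API: the "example" target, its struct and the
      ctr/map functions below are hypothetical; only per_bio_data_size,
      dm_per_bio_data() and dm_bio_from_per_bio_data() come from the commit,
      and the map prototype may differ slightly between kernel versions.

          struct example_per_bio_data {
                  struct dm_target *ti;   /* whatever the target tracks per bio */
          };

          static int example_ctr(struct dm_target *ti, unsigned argc, char **argv)
          {
                  /* ask dm core to allocate this many bytes alongside every bio */
                  ti->per_bio_data_size = sizeof(struct example_per_bio_data);
                  return 0;
          }

          static int example_map(struct dm_target *ti, struct bio *bio)
          {
                  /* data_size must equal the per_bio_data_size set in the constructor */
                  struct example_per_bio_data *pb =
                          dm_per_bio_data(bio, sizeof(struct example_per_bio_data));

                  pb->ti = ti;
                  /* given only pb, the bio can be recovered again with:
                   * dm_bio_from_per_bio_data(pb, sizeof(struct example_per_bio_data)) */

                  return DM_MAPIO_REMAPPED;
          }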
    • dm: prepare to support WRITE SAME · d54eaa5a
      Committed by Mike Snitzer
      Allow targets to opt in to WRITE SAME support by setting
      'num_write_same_requests' in the dm_target structure.
      
      A dm device will only advertise WRITE SAME support if all its
      targets and all its underlying devices support it.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      d54eaa5a
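      A sketch of the opt-in (editor's illustration; the "example" target is
      hypothetical, only the num_write_same_requests field comes from the
      commit):

          static int example_ctr(struct dm_target *ti, unsigned argc, char **argv)
          {
                  /* accept WRITE SAME bios; only safe if this target and its
                   * underlying devices can honor them */
                  ti->num_write_same_requests = 1;
                  return 0;
          }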
  2. 27 July 2012, 6 commits
  3. 01 November 2011, 4 commits
  4. 26 September 2011, 1 commit
    • dm crypt: always disable discard_zeroes_data · 983c7db3
      Committed by Milan Broz
      If optional discard support in dm-crypt is enabled, discard requests
      bypass the crypt queue and blocks of the underlying device are discarded.
      For the read path, discarded blocks are handled the same as normal
      ciphertext blocks, thus decrypted.
      
      So if the underlying device announces that discarded regions return
      zeroes, dm-crypt must disable this flag, because after decryption there
      is just random noise instead of zeroes.
      Signed-off-by: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      983c7db3
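      One way a target can express this, sketched for illustration (this is not
      necessarily how the commit itself implements it; the "example" io_hints
      hook is hypothetical, while discard_zeroes_data is the queue limit being
      discussed):

          static void example_io_hints(struct dm_target *ti,
                                       struct queue_limits *limits)
          {
                  /* discarded ciphertext decrypts to random noise, never zeroes */
                  limits->discard_zeroes_data = 0;
          }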
  5. 02 August 2011, 1 commit
  6. 29 May 2011, 1 commit
  7. 18 April 2011, 1 commit
  8. 10 March 2011, 1 commit
  9. 14 January 2011, 2 commits
  10. 12 August 2010, 3 commits
  11. 06 March 2010, 1 commit
  12. 11 December 2009, 4 commits
  13. 05 September 2009, 1 commit
  14. 24 July 2009, 1 commit
  15. 22 June 2009, 5 commits
    • dm: prepare for request based option · cec47e3d
      Committed by Kiyoshi Ueda
      This patch adds core functions for request-based dm.
      
      When struct mapped device (md) is initialized, md->queue has
      an I/O scheduler and the following functions are used for
      request-based dm as the queue functions:
          make_request_fn: dm_make_request()
          prep_rq_fn:      dm_prep_fn()
          request_fn:      dm_request_fn()
          softirq_done_fn: dm_softirq_done()
          lld_busy_fn:     dm_lld_busy()
      Actual initializations are done in another patch (PATCH 2).
      
      Below is a brief summary of how request-based dm behaves, including:
        - making request from bio
        - cloning, mapping and dispatching request
        - completing request and bio
        - suspending md
        - resuming md
      
        bio to request
        ==============
        md->queue->make_request_fn() (dm_make_request()) calls __make_request()
        for a bio submitted to the md.
        Then, the bio is kept in the queue as a new request or merged into
        another request in the queue if possible.
      
        Cloning and Mapping
        ===================
        Cloning and mapping are done in md->queue->request_fn() (dm_request_fn()),
        when requests are dispatched after they are sorted by the I/O scheduler.
      
        dm_request_fn() checks the busy state of the underlying devices
        using the target's busy() function and, if they are busy, stops
        dispatching requests so that they stay on the dm device's queue.
        This helps I/O merging, since no merging is done for a request
        once it has been dispatched to an underlying device.
      
        Actual cloning and mapping are done in dm_prep_fn() and map_request()
        called from dm_request_fn().
        dm_prep_fn() clones not only the request but also its bios, so
        that dm can hold back bio completion in error cases and prevent
        the bio submitter from noticing the error.
        (See the "Completion" section below for details.)
      
        After the cloning, the clone is mapped by the target's map_rq()
        function and inserted into the underlying device's queue using
        blk_insert_cloned_request().
      
        Completion
        ==========
        Request completion could be hooked through rq->end_io() alone, but
        by then all bios in the request would already have been completed,
        even in error cases, and the bio submitter would have noticed the
        error.  To prevent bio completion in error cases, request-based dm
        clones both the bios and the request and hooks both
        bio->bi_end_io() and rq->end_io():
            bio->bi_end_io(): end_clone_bio()
            rq->end_io():     end_clone_request()
      
        Summary of the request completion flow is below:
        blk_end_request() for a clone request
          => blk_update_request()
             => bio->bi_end_io() == end_clone_bio() for each clone bio
                => Free the clone bio
                => Success: Complete the original bio (blk_update_request())
                   Error:   Don't complete the original bio
          => blk_finish_request()
             => rq->end_io() == end_clone_request()
                => blk_complete_request()
                   => dm_softirq_done()
                      => Free the clone request
                      => Success: Complete the original request (blk_end_request())
                         Error:   Requeue the original request
      
        In successful cases, end_clone_bio() advances the original request
        by the size of the original bio.
        Even if all bios in the original request are completed by that
        point, the original request must not be completed yet, to keep the
        ordering of request completion for the stacking.
        That is why end_clone_bio() uses blk_update_request() instead of
        blk_end_request().
        In error cases, end_clone_bio() doesn't complete the original bio;
        it just frees the cloned bio and hands the error handling over to
        end_clone_request().
      
        end_clone_request(), which is called with the queue lock held,
        completes the clone request and the original request in softirq
        context (dm_softirq_done()), where no queue lock is held, to avoid
        a deadlock when another request is submitted during the completion:
            - the submitted request may be mapped to the same device
            - request submission requires the queue lock, but that lock is
              already held by the completion path itself, which doesn't
              know that
      
        The clone request has no clone bios left by the time
        dm_softirq_done() is called, so target drivers can't resubmit it,
        even in error cases.  Instead, they can ask dm core to requeue and
        remap the original request in such cases.
      
        suspend
        =======
        Request-based dm suspends the md by stopping md->queue.
        For a noflush suspend, it just stops md->queue.

        For a flush suspend, it inserts a marker request at the tail of
        md->queue and dispatches all requests in md->queue until the
        marker reaches the front of md->queue.  Then it stops dispatching
        requests and waits for all dispatched requests to complete.
        After that, it completes the marker request, stops md->queue and
        wakes up the waiter on the suspend queue, md->wait.
      
        resume
        ======
        Starts md->queue.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      cec47e3d
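      A condensed sketch of the end_clone_bio() behavior described above (this
      is illustrative, not the dm.c code verbatim; the dm_rq_clone_bio_info
      fields shown are simplified):

          static void end_clone_bio(struct bio *clone, int error)
          {
                  struct dm_rq_clone_bio_info *info = clone->bi_private;
                  struct request *orig_rq = info->orig_rq;
                  unsigned int nr_bytes = info->orig_bio->bi_size;

                  if (error)
                          /* remember the error for end_clone_request();
                           * the original bio is left uncompleted */
                          info->error = error;
                  else
                          /* advance the original request by the completed
                           * amount, but never finish it here: completion
                           * ordering is kept by end_clone_request() and
                           * dm_softirq_done() */
                          blk_update_request(orig_rq, 0, nr_bytes);

                  bio_put(clone);         /* the clone bio is always freed */
          }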
    • dm: calculate queue limits during resume not load · 754c5fc7
      Committed by Mike Snitzer
      Currently, device-mapper maintains a separate instance of 'struct
      queue_limits' for each table of each device.  When the configuration of
      a device is to be changed, first its table is loaded and this structure
      is populated, then the device is 'resumed' and the calculated
      queue_limits are applied.
      
      This places restrictions on how userspace may process related devices,
      where it is often advantageous to 'load' tables for several devices
      at once before 'resuming' them together.  As the new queue_limits
      only take effect after the 'resume', if they are changing and one
      device uses another, the latter must be 'resumed' before the former
      may be 'loaded'.
      
      This patch moves the calculation of these queue_limits out of
      the 'load' operation into 'resume'.  Since we are no longer
      pre-calculating this struct, we no longer need to maintain copies
      within our dm structs.
      
      dm_set_device_limits() now passes the 'start' of the device's
      data area (aka pe_start) as the 'offset' to blk_stack_limits().
      
      init_valid_queue_limits() is replaced by blk_set_default_limits().
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: martin.petersen@oracle.com
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      754c5fc7
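      The stacking itself can be pictured like the sketch below (editor's
      illustration, not the dm-table.c code; example_init_limits(),
      example_stack_device() and data_start are made-up names, while
      blk_set_default_limits() and blk_stack_limits() are the block-layer
      helpers named in the commit):

          /* start from sane defaults (replaces init_valid_queue_limits()) */
          static void example_init_limits(struct queue_limits *limits)
          {
                  blk_set_default_limits(limits);
          }

          /* fold one underlying device into the stacked limits; data_start
           * (aka pe_start) is the offset of the data area on that device */
          static void example_stack_device(struct queue_limits *limits,
                                           struct block_device *bdev,
                                           sector_t data_start)
          {
                  struct request_queue *q = bdev_get_queue(bdev);

                  if (blk_stack_limits(limits, &q->limits, data_start) < 0)
                          DMWARN("device data area is misaligned");
          }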
    • dm targets: introduce iterate devices fn · af4874e0
      Committed by Mike Snitzer
      Add .iterate_devices to 'struct target_type' to allow a function to be
      called for all devices in a DM target.  Implemented it for all targets
      except those in dm-snap.c (origin and snapshot).
      
      (The raid1 version number jumps to 1.12 because we originally reserved
      1.1 to 1.11 for 'block_on_error' but ended up using 'handle_errors'
      instead.)
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Cc: martin.petersen@oracle.com
      af4874e0
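      For a single-device target the implementation can be as small as this
      sketch (the "example" names and struct are hypothetical; .iterate_devices
      is what the commit introduces, and the callout signature is shown as it
      has long existed in device-mapper.h, so the very first version may differ
      slightly):

          struct example_ctx {
                  struct dm_dev *dev;     /* the one underlying device */
                  sector_t start;         /* where this target's data begins */
          };

          static int example_iterate_devices(struct dm_target *ti,
                                             iterate_devices_callout_fn fn,
                                             void *data)
          {
                  struct example_ctx *ec = ti->private;

                  /* invoke the callout once, for the range this target maps */
                  return fn(ti, ec->dev, ec->start, ti->len, data);
          }

          static struct target_type example_target = {
                  .name            = "example",
                  .iterate_devices = example_iterate_devices,
                  /* .ctr, .dtr, .map, ... omitted */
          };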
    • dm table: replace struct io_restrictions with struct queue_limits · 5ab97588
      Committed by Mike Snitzer
      Use blk_stack_limits() to stack block limits (including topology) rather
      than duplicate the equivalent within Device Mapper.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      5ab97588
    • dm: introduce num_flush_requests · f9ab94ce
      Committed by Mikulas Patocka
      Introduce num_flush_requests, which a target sets to say how many
      flush instructions (empty barriers) it wants to receive.  These are
      sent by __clone_and_map_empty_barrier with map_info->flush_request
      going from 0 to (num_flush_requests - 1).
      
      Old targets without flush support won't receive any flush requests.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      f9ab94ce
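      A sketch of the intended use (editor's illustration; the two-device
      "example" target is hypothetical, while num_flush_requests,
      map_info->flush_request and bio_empty_barrier() are from kernels of
      that era):

          struct example_ctx {
                  struct dm_dev *dev[2];  /* e.g. two mirrored devices */
          };

          static int example_ctr(struct dm_target *ti, unsigned argc, char **argv)
          {
                  /* ... allocate ti->private as a struct example_ctx ... */
                  ti->num_flush_requests = 2;     /* one empty barrier per device */
                  return 0;
          }

          static int example_map(struct dm_target *ti, struct bio *bio,
                                 union map_info *map_context)
          {
                  struct example_ctx *ec = ti->private;

                  if (bio_empty_barrier(bio)) {
                          /* flush_request runs from 0 to num_flush_requests - 1 */
                          bio->bi_bdev = ec->dev[map_context->flush_request]->bdev;
                          return DM_MAPIO_REMAPPED;
                  }

                  bio->bi_bdev = ec->dev[0]->bdev;        /* normal I/O, simplified */
                  return DM_MAPIO_REMAPPED;
          }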
  16. 23 May 2009, 1 commit
  17. 09 April 2009, 1 commit
  18. 03 April 2009, 1 commit
  19. 06 January 2009, 3 commits
    • dm: support barriers on simple devices · ab4c1424
      Committed by Andi Kleen
      Implement barrier support for single-device DM devices.
      
      This patch implements barrier support in DM for the common case of dm linear
      just remapping a single underlying device. In this case we can safely
      pass the barrier through because there can be no reordering between
      devices.
      
       NB. Any DM device might cease to support barriers if it gets
           reconfigured so code must continue to allow for a possible
           -EOPNOTSUPP on every barrier bio submitted.  - agk
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      ab4c1424
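      The NB above translates into a rule for anyone submitting barrier bios
      to a DM device, sketched here (the completion handler and its name are
      hypothetical; only the -EOPNOTSUPP behavior comes from the note):

          static void example_barrier_end_io(struct bio *bio, int error)
          {
                  if (error == -EOPNOTSUPP) {
                          /* the DM device may have been reconfigured and lost
                           * barrier support: retry without the barrier flag or
                           * fall back to an explicit drain-and-flush scheme */
                  }
                  bio_put(bio);
          }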
    • dm request: extend target interface · 7d76345d
      Committed by Kiyoshi Ueda
      This patch adds the following target interfaces for request-based dm.
      
        map_rq    : for mapping a request
      
        rq_end_io : for finishing a request
      
        busy      : for avoiding a performance regression relative to
                    bio-based dm.  The target can tell dm core not to map
                    requests for now, which may let requests in the block
                    layer queue grow bigger through I/O merging.
                    In bio-based dm, this behavior is handled by the
                    device drivers managing the block layer queue.
                    But in request-based dm, dm core has to do it, since
                    dm core manages the block layer queue.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      7d76345d
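      The shape of the new hooks, sketched for illustration (function bodies
      and the "example" names are hypothetical; the .map_rq/.rq_end_io/.busy
      members are what this patch adds):

          static int example_map_rq(struct dm_target *ti, struct request *clone,
                                    union map_info *map_context)
          {
                  /* pick an underlying device, point the clone at it, then
                   * let dm core dispatch it */
                  return DM_MAPIO_REMAPPED;
          }

          static int example_rq_end_io(struct dm_target *ti, struct request *clone,
                                       int error, union map_info *map_context)
          {
                  /* returning DM_ENDIO_REQUEUE would ask dm core to requeue
                   * the original request instead of completing it */
                  return error;
          }

          static int example_busy(struct dm_target *ti)
          {
                  /* non-zero keeps requests on the dm queue for better merging */
                  return 0;
          }

          static struct target_type example_target = {
                  .name      = "example",
                  .map_rq    = example_map_rq,
                  .rq_end_io = example_rq_end_io,
                  .busy      = example_busy,
                  /* .ctr, .dtr, ... omitted */
          };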
    • dm: consolidate target deregistration error handling · 10d3bd09
      Committed by Mikulas Patocka
      Change dm_unregister_target to return void and use BUG() for error
      reporting.
      
      dm_unregister_target can only fail because of a programming bug in
      the target driver; it can't fail because of user behavior or disk
      errors.

      This patch changes dm_unregister_target to return void and to BUG()
      if someone tries to unregister a non-registered target or a target
      that is still in use.
      
      This patch removes code duplication (testing of error codes in all
      dm targets) and reports bugs in just one place, in
      dm_unregister_target.  In some target drivers these return codes
      were ignored, which meant bugs could be missed.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      10d3bd09
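      After this change a target's module exit simply looks like the sketch
      below (the "example" target is hypothetical); any misuse triggers BUG()
      inside dm_unregister_target() rather than returning an error code that
      callers used to ignore:

          static void __exit example_exit(void)
          {
                  dm_unregister_target(&example_target);  /* no return value to check */
          }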