1. 09 October 2008, 24 commits
    • block: remove end_{queued|dequeued}_request() · d00e29fd
      Kiyoshi Ueda authored
      This patch removes end_queued_request() and end_dequeued_request(),
      which are no longer used.
      
      As a result, end_request() is now the only user of __end_request(),
      so the code in __end_request() has been moved into end_request()
      and __end_request() has been removed.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: add lld busy state exporting interface · ef9e3fac
      Kiyoshi Ueda authored
      This patch adds a new interface, blk_lld_busy(), for checking the
      busy state of a low-level driver (lld) from the block layer.
      blk_lld_busy() calls down into the low-level driver to perform the
      check, provided the driver has set q->lld_busy_fn() via
      blk_queue_lld_busy().
      
      This resolves the performance problem on request stacking devices
      described below.
      
      Some drivers, such as the scsi mid layer, stop dispatching requests
      when they detect a busy state on their low-level device
      (host/target/device).  This lets other requests stay in the I/O
      scheduler's queue, where they get a chance to merge.
      
      Request stacking drivers like request-based dm should follow
      the same logic.  However, there is no generic interface for a
      stacked device to check whether the underlying device(s) are busy.
      If the request stacking driver dispatches and submits requests to
      a busy underlying device, the requests sit in the underlying
      device's queue with no chance of merging.  This causes a
      performance problem under bursty I/O load.
      
      With this patch, the busy state of the underlying device is exported
      via q->lld_busy_fn(), so the request stacking driver can check it
      and stop dispatching requests while the device is busy.
      
      The underlying device driver must return the busy state appropriately:
          1: when the device driver can't process requests immediately.
          0: when the device driver can process requests immediately,
             including abnormal situations where the device driver needs
             to kill all requests.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
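
      As an illustration, here is a minimal sketch (not code from the
      patch; my_lld_busy(), struct my_dev, and the in-flight fields are
      hypothetical) of a low-level driver registering its check and a
      request stacking driver consulting it:

          static int my_lld_busy(struct request_queue *q)
          {
                  struct my_dev *dev = q->queuedata; /* hypothetical */

                  /* 1: can't take requests now, 0: can take them */
                  return dev->in_flight >= dev->max_in_flight;
          }

          /* low-level driver, at queue setup time */
          blk_queue_lld_busy(q, my_lld_busy);

          /* request stacking driver, in its dispatch path */
          if (blk_lld_busy(underlying_q))
                  return; /* keep requests queued upstream for merging */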
    • block: add queue flag for SSD/non-rotational devices · a68bbddb
      Jens Axboe authored
      We don't want to idle in AS/CFQ if the device doesn't have a seek
      penalty.  So add a QUEUE_FLAG_NONROT flag to indicate a
      non-rotational device; low-level drivers should set this flag upon
      discovery of an SSD or similar device type.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
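
      For illustration, a driver would set the flag roughly like this
      (a sketch, assuming the queue_flag_set_unlocked() helper of that
      era; the probe context is hypothetical):

          /* low-level driver probe: the device has no seek penalty */
          queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);

          /* an IO scheduler can then skip idling on this queue */
          if (blk_queue_nonrot(q))
                  return; /* don't wait around for a "nearby" request */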
    • block: add a queue flag for request stacking support · 4ee5eaf4
      Kiyoshi Ueda authored
      This patch adds a queue flag to indicate the block device can be
      used for request stacking.
      
      Request stacking drivers need to stack their devices only on top of
      devices whose q->request_fn is functional.
      Since bio stacking drivers (e.g. md, loop) basically initialize
      their queue using blk_alloc_queue() and don't set q->request_fn,
      checking for (q->request_fn == NULL) looks sufficient for that
      purpose.

      However, dm will become both types of stacking driver (bio-based
      and request-based), and dm will always set q->request_fn even if
      the dm device is bio-based, in which case q->request_fn is not
      actually functional.  So we need something else to distinguish the
      type of the device; adding a queue flag is a solution for that.
      
      The reason why dm always sets q->request_fn is to keep
      compatibility with dm user-space tools.
      Currently, all dm user-space tools use bio-based dm without
      specifying the type of the dm device they use.
      To use request-based dm without changing such tools, the kernel
      must decide the type of the dm device automatically.
      The automatic type decision can't be made at device creation time;
      it needs to be deferred until the tools load a mapping table,
      since the actual type is decided by the dm target types included
      in the mapping table.
      
      So a dm device has to be initialized using blk_init_queue()
      so that either type of table can be loaded.
      Then all queue fields are set up (e.g. q->request_fn), and there is
      no element left to distinguish bio-based from request-based,
      even after a table is loaded and the type of the device is decided.
      
      By the way, some parts of the queue (e.g. request_list, elevator)
      are needless when the dm device is used as bio-based.
      But the memory size is not that large (about 20KB per queue on
      ia64), so I hope the memory loss is acceptable for bio-based dm
      users.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
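
      A minimal sketch of the intended check (blk_queue_stackable() tests
      the new flag; the surrounding function is hypothetical):

          /* request stacking driver validating an underlying device */
          static int can_stack_on(struct request_queue *q)
          {
                  /* q->request_fn alone can't be trusted, since
                   * bio-based dm sets it too */
                  return blk_queue_stackable(q);
          }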
    • block: add request submission interface · 82124d60
      Kiyoshi Ueda authored
      This patch adds blk_insert_cloned_request(), a generic request
      submission interface for request stacking drivers.
      Request-based dm will use it to submit its clones to underlying
      devices.
      
      blk_rq_check_limits() is also added because it is possible that
      the lower queue has stronger limitations than the upper queue
      if multiple drivers are stacking at request-level.
      The function is not only for blk_insert_cloned_request()'s internal
      use; request-based dm will also use it when the queue limits are
      modified (e.g. by replacing dm's table).
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
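
      A hedged sketch of the intended call sites (the clone and the
      requeue label are hypothetical stacking-driver details):

          /* dispatch path: hand a prepared clone to the underlying
           * queue; the limits are checked internally as well */
          if (blk_insert_cloned_request(underlying_q, clone))
                  goto requeue;

          /* e.g. after a dm table swap, re-validate a request against
           * the (possibly stricter) lower queue */
          if (blk_rq_check_limits(underlying_q, rq))
                  goto requeue;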
    • block: add request update interface · 32fab448
      Kiyoshi Ueda authored
      This patch adds blk_update_request(), which completes the data part
      of a struct request but doesn't complete the struct request itself.
      Though it looks like end_that_request_first() from older kernels,
      blk_update_request() should be used only by request stacking
      drivers.
      
      Request-based dm will use it in the bio->bi_end_io callback to
      update the original request when a data part of a cloned request
      completes.  The following is additional background on why
      request-based dm needs this interface.
      
        - Request stacking drivers can't use blk_end_request() directly from
          the lower driver's completion context (bio->bi_end_io or rq->end_io),
          because some device drivers (e.g. ide) may try to complete
          their request with the queue lock held, and that may cause a
          deadlock.
          See below for detailed description of possible deadlock:
          <http://marc.info/?l=linux-kernel&m=120311479108569&w=2>
      
        - To solve that, request-based dm offloads the completion of
          cloned struct request to softirq context (i.e. using
          blk_complete_request() from rq->end_io).
      
        - Though it is possible to use the same solution from bio->bi_end_io,
          it will delay the notification of bio completion to the original
          submitter.  Also, it would cause inefficient partial completion,
          because the lower driver can't process the cloned request any
          further, and request-based dm would need to requeue and
          redispatch it to the lower driver again later.  That's not good.
      
        - So request-based dm needs blk_update_request() to perform the bio
          completion in the lower driver's completion context, which is more
          efficient.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
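
      A sketch of the intended use from a clone's bio completion (the
      bi_private bookkeeping is hypothetical; bi_size follows the
      struct bio of that era):

          static void clone_bio_end_io(struct bio *bio, int error)
          {
                  struct request *orig = bio->bi_private; /* hypothetical */

                  /* complete the matching data part of the original
                   * request, but not the struct request itself */
                  blk_update_request(orig, error, bio->bi_size);
          }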
    • block: cleanup some of the integrity stuff in blkdev.h · 9c02f2b0
      Jens Axboe authored
      Don't put functions that are only used in fs/bio-integrity.c in
      blkdev.h; it's much cleaner to just keep them in there.  Also kill
      the completely unused bdev_get_tag_size().
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: add fault injection mechanism for faking request timeouts · 581d4e28
      Jens Axboe authored
      This only works for the generic request timer handling.  It allows
      one to sporadically ignore request completions, thus exercising the
      timeout handling.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: adjust blkdev_issue_discard for swap · 3e6053d7
      Hugh Dickins authored
      Two mods to blkdev_issue_discard(), thinking ahead to its use on swap:
      
      1. Add gfp_mask argument, so swap allocation can use it where GFP_KERNEL
         might deadlock but GFP_NOIO is safe.
      
      2. Enlarge nr_sects argument from unsigned to sector_t: unsigned long is
         enough to cover a whole swap area, but sector_t suits any partition.
      
      Change sb_issue_discard()'s nr_blocks to sector_t too; but no need seen
      for a gfp_mask there, just pass GFP_KERNEL down to blkdev_issue_discard().
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
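
      After this change the calls would look roughly as follows (a
      sketch; si, start_sector, and the block variables are hypothetical
      caller-side names):

          /* swap path: GFP_KERNEL could deadlock here, use GFP_NOIO */
          err = blkdev_issue_discard(si->bdev, start_sector, nr_sects,
                                     GFP_NOIO);

          /* filesystem path: the wrapper passes GFP_KERNEL down */
          err = sb_issue_discard(sb, block, nr_blocks);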
    • 11914a53
    • block: unify request timeout handling · 242f9dcb
      Jens Axboe authored
      Right now SCSI and others do their own command timeout handling.
      Move those bits to the block layer.
      
      Instead of having a timer per command, we try to be a bit more
      clever and simply have one per queue.  This avoids the overhead of
      tearing down and setting up a timer for each command, so it results
      in a lot less timer fiddling.
      Signed-off-by: Mike Anderson <andmike@linux.vnet.ibm.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
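
      A minimal sketch of a driver hooking into the unified handling
      (my_eh_timed_out() and the abort helper are hypothetical; the
      setters and the blk_eh_timer_return values come with this
      infrastructure):

          static enum blk_eh_timer_return my_eh_timed_out(struct request *rq)
          {
                  if (my_try_abort(rq))      /* hypothetical helper */
                          return BLK_EH_HANDLED;

                  return BLK_EH_RESET_TIMER; /* re-arm, allow more time */
          }

          /* at queue initialization */
          blk_queue_rq_timed_out(q, my_eh_timed_out);
          blk_queue_rq_timeout(q, 30 * HZ);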
    • block: add blk_rq_aligned helper function · 87904074
      FUJITA Tomonori authored
      This adds the blk_rq_aligned helper function to check whether the
      alignment and padding requirements for a DMA transfer are satisfied.
      It also converts blk_rq_map_kern and __blk_rq_map_user to use the
      helper.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
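
      Roughly, the check amounts to the following (a sketch of the idea,
      not the exact helper; it assumes queue_dma_alignment() and
      q->dma_pad_mask express the two requirements):

          /* the buffer must honour the queue's DMA alignment and the
           * length must honour the DMA padding mask */
          static inline int rq_aligned_sketch(struct request_queue *q,
                                              void *buf, unsigned int len)
          {
                  return !((unsigned long)buf & queue_dma_alignment(q)) &&
                         !(len & q->dma_pad_mask);
          }

      When the check passes, the buffer can be mapped directly; when it
      fails, the mapping falls back to a copying bio.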
    • block: introduce struct rq_map_data to use reserved pages · 152e283f
      FUJITA Tomonori authored
      This patch introduces struct rq_map_data to enable bio_copy_user_iov()
      to use reserved pages.
      
      Currently, bio_copy_user_iov allocates bounce pages but
      drivers/scsi/sg.c wants to allocate pages by itself and use
      them. struct rq_map_data can be used to pass allocated pages to
      bio_copy_user_iov.
      
      The current users of bio_copy_user_iov simply pass NULL (they don't
      want to use pre-allocated pages).
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Douglas Gilbert <dougg@torque.net>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
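
      A hedged sketch of the intended sg-style use (the field names
      follow the struct as introduced; reserved_pages and nr_pages are
      hypothetical caller-side names):

          struct rq_map_data map_data = {
                  .pages      = reserved_pages, /* caller-owned pages */
                  .page_order = 0,
                  .nr_entries = nr_pages,
          };

          /* copy through the reserved pages, not fresh bounce pages */
          err = blk_rq_map_user(q, rq, &map_data, ubuf, len, GFP_ATOMIC);

          /* callers with no reserved pages keep passing NULL */
          err = blk_rq_map_user(q, rq, NULL, ubuf, len, GFP_KERNEL);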
    • block: add gfp_mask argument to blk_rq_map_user and blk_rq_map_user_iov · a3bce90e
      FUJITA Tomonori authored
      Currently, blk_rq_map_user and blk_rq_map_user_iov always do
      GFP_KERNEL allocation.
      
      This adds a gfp_mask argument to blk_rq_map_user and
      blk_rq_map_user_iov so sg can use it (sg always does GFP_ATOMIC
      allocation).
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Douglas Gilbert <dougg@torque.net>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
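
      A one-line sketch of the call as it looks after this patch alone
      (the rq_map_data argument added by the 152e283f patch listed above
      does not exist yet at this point in the series):

          /* sg maps buffers from atomic context */
          err = blk_rq_map_user(q, rq, ubuf, len, GFP_ATOMIC);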
    • block: inherit CPU completion on bio->rq and rq->rq merges · ab780f1e
      Jens Axboe authored
      Somewhat incomplete, as we do allow merges of requests and bios
      that specify different completion CPUs.  This is done on the
      assumption that a larger IO is still more beneficial than CPU
      locality.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: add support for IO CPU affinity · c7c22e4d
      Jens Axboe authored
      This patch adds support for controlling the IO completion CPU of
      either all requests on a queue, or on a per-request basis.  We
      export a sysfs variable (rq_affinity) which, if set, migrates
      completions of requests to the CPU that originally submitted them.
      A bio helper (bio_set_completion_cpu()) is also added, so that
      queuers can ask for completion on a specific CPU.
      
      In testing, this has been shown to cut the system time by as much
      as 20-40% on synthetic workloads where CPU affinity is desired.
      
      This requires a little help from the architecture, so it'll only
      work as designed for archs that are using the new generic smp
      helper infrastructure.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
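
      Queue-wide behaviour is toggled through the exported sysfs file
      (writing 1 to /sys/block/<dev>/queue/rq_affinity).  Per-bio, a
      submitter would use the new helper roughly like this (a sketch;
      the submission context is hypothetical):

          /* ask for completion on the CPU this bio is submitted from */
          cpu = get_cpu();
          bio_set_completion_cpu(bio, cpu);
          put_cpu();
          submit_bio(rw, bio);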
    • block: make kblockd_schedule_work() take the queue as parameter · 18887ad9
      Jens Axboe authored
      Preparatory patch for checking queuing affinity.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
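
      The change at call sites is mechanical (a sketch, using the
      queue's own unplug work as an example):

          /* before: kblockd_schedule_work(&q->unplug_work); */
          kblockd_schedule_work(q, &q->unplug_work);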
    • drop vmerge accounting · 5df97b91
      Mikulas Patocka authored
      Remove the hw_segments field from struct bio and struct request.
      Without virtual merge accounting it has no purpose.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • virtio_blk: use a wrapper function to access io context information of IO requests · 766ca442
      Fernando Luis Vázquez Cao authored
      struct request has an ioprio member but it is never updated because
      currently bios do not hold io context information. The implication of
      this is that virtio_blk ends up passing useless information to the
      backend driver.
      
      That said, some IO schedulers such as CFQ do store io context
      information in struct request, but they use private members for
      that, which means the information cannot be directly accessed in an
      IO scheduler-independent way.
      
      This patch adds a function to obtain the ioprio of a request. We should
      avoid accessing ioprio directly and use this function instead, so that
      its users do not have to care about future changes in block layer
      structures or what the currently active IO controller is.
      
      This patch does not introduce any functional changes but paves the way
      for future clean-ups and enhancements.
      Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
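
      The wrapper itself is trivial; a sketch of the accessor and of
      virtio_blk filling its out-header through it (vbr is virtio_blk's
      per-request bookkeeping):

          static inline unsigned short req_get_ioprio(struct request *req)
          {
                  return req->ioprio;
          }

          vbr->out_hdr.ioprio = req_get_ioprio(vbr->req);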
    • Kill REQ_TYPE_FLUSH · 1a8e2bdd
      David Woodhouse authored
      It was only used by ps3disk, and it should probably have been
      REQ_TYPE_LINUX_BLOCK + REQ_LB_OP_FLUSH.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • Allow elevators to sort/merge discard requests · e17fc0a1
      David Woodhouse authored
      But blkdev_issue_discard() still emits requests which are interpreted as
      soft barriers, because naïve callers might otherwise issue subsequent
      writes to those same sectors, which might cross on the queue (if they're
      reallocated quickly enough).
      
      Callers still _can_ issue non-barrier discard requests, but they have to
      take care of queue ordering for themselves.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • Add 'discard' request handling · fb2dce86
      David Woodhouse authored
      Some block devices benefit from a hint that they can forget the contents
      of certain sectors. Add basic support for this to the block core, along
      with a 'blkdev_issue_discard()' helper function which issues such
      requests.
      
      The caller doesn't get to provide an end_io function, since
      blkdev_issue_discard() will automatically split the request up into
      multiple bios if appropriate.  Neither does the function wait for
      completion -- it's expected that callers won't care about when, or
      even _if_, the request completes.  It's only a hint to the device
      anyway.  By definition, the file system doesn't _care_ about these
      sectors any more.
      
      [With feedback from OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> and
      Jens Axboe <jens.axboe@oracle.com>]
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
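
      A sketch of the fire-and-forget call as introduced here (the
      gfp_mask and sector_t adjustments from the 3e6053d7 entry above
      come later in the series):

          /* hint that these sectors no longer matter; don't wait for,
           * or care about, completion */
          err = blkdev_issue_discard(bdev, sector, nr_sects);
          if (err == -EOPNOTSUPP)
                  err = 0; /* device can't use the hint; that's fine */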
  2. 11 September 2008, 1 commit
  3. 27 August 2008, 3 commits
  4. 02 August 2008, 1 commit
  5. 17 July 2008, 1 commit
  6. 16 July 2008, 1 commit
  7. 04 July 2008, 1 commit
  8. 03 July 2008, 8 commits