1. 11 Jan 2010, 1 commit
  2. 30 Dec 2009, 1 commit
  3. 29 Dec 2009, 1 commit
  4. 03 Dec 2009, 1 commit
    • block: Allow devices to indicate whether discarded blocks are zeroed · 98262f27
      Authored by Martin K. Petersen
      The discard ioctl is used by mkfs utilities to clear a block device
      prior to putting metadata down.  However, not all devices return zeroed
      blocks after a discard.  Some drives return stale data, potentially
      containing old superblocks.  It is therefore important to know whether
      discarded blocks are properly zeroed.
      
      Both ATA and SCSI drives have configuration bits that indicate whether
      zeroes are returned after a discard operation.  Implement a block level
      interface that allows this information to be bubbled up the stack and
      queried via a new block device ioctl.
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      98262f27
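      A minimal user-space sketch of querying the new interface.  BLKDISCARDZEROES
      is the ioctl this commit introduces; the device path is illustrative:

          #include <stdio.h>
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <linux/fs.h>           /* BLKDISCARDZEROES */

          int main(void)
          {
                  unsigned int zeroes = 0;
                  int fd = open("/dev/sda", O_RDONLY);    /* illustrative device */

                  if (fd < 0) {
                          perror("open");
                          return 1;
                  }
                  /* Ask the block layer whether discarded blocks read back as zeroes. */
                  if (ioctl(fd, BLKDISCARDZEROES, &zeroes) < 0) {
                          perror("BLKDISCARDZEROES");
                          close(fd);
                          return 1;
                  }
                  printf("discard zeroes data: %u\n", zeroes);
                  close(fd);
                  return 0;
          }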
  5. 26 Nov 2009, 1 commit
    • block: add helpers to run flush_dcache_page() against a bio and a request's pages · 2d4dc890
      Authored by Ilya Loginov
      Mtdblock driver doesn't call flush_dcache_page for pages in request.  So,
      this causes problems on architectures where the icache doesn't fill from
      the dcache or with dcache aliases.  The patch fixes this.
      
      The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid
      pointless empty cache-thrashing loops on architectures for which
      flush_dcache_page() is a no-op.  The new helpers flush the pages on
      architectures where ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE equals 1, and do
      nothing otherwise.
      
      See "fix mtd_blkdevs problem with caches on some architectures" discussion
      on LKML for more information.
      Signed-off-by: Ilya Loginov <isloginov@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Peter Horton <phorton@bitbox.co.uk>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      2d4dc890
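      A hedged sketch of how a driver like mtd_blkdevs might use the new
      request-level helper; rq_flush_dcache_pages() is the helper this commit
      adds, while the surrounding handler is illustrative:

          #include <linux/blkdev.h>

          /* Illustrative read path: after filling the request's pages with
           * data from flash, flush the dcache so architectures with dcache
           * aliases, or whose icache doesn't fill from the dcache, see the
           * fresh bytes. */
          static void sketch_transfer_to_request(struct request *rq)
          {
                  /* ... copy data from the device into the request's pages ... */

                  /* Added by this commit: calls flush_dcache_page() for every
                   * page attached to the request's bios; a no-op where
                   * ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 0. */
                  rq_flush_dcache_pages(rq);
          }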
  6. 10 Nov 2009, 1 commit
  7. 29 Oct 2009, 1 commit
  8. 05 Oct 2009, 1 commit
  9. 04 Oct 2009, 1 commit
  10. 03 Oct 2009, 1 commit
  11. 02 Oct 2009, 4 commits
    • block: allow large discard requests · 67efc925
      Authored by Christoph Hellwig
      Currently we set the bio size to the byte equivalent of the blocks to
      be trimmed when submitting the initial DISCARD ioctl.  That means it
      is subject to the max_hw_sectors limitation of the HBA which is
      much lower than the size of a DISCARD request we can support.
      Add a separate max_discard_sectors tunable to limit the size for discard
      requests.
      
      We limit the max discard request size in bytes to what fits in 32 bits,
      as that is the limit of bio->bi_size.  This could be much larger if we
      had a way to pass that information through the block layer.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      67efc925
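      A short sketch of a driver advertising the new tunable;
      blk_queue_max_discard_sectors() is the setter this commit introduces,
      the value is illustrative:

          #include <linux/kernel.h>
          #include <linux/blkdev.h>

          /* Illustrative queue setup for hardware that can discard far more
           * in one command than max_hw_sectors allows for normal I/O. */
          static void sketch_init_discard_limit(struct request_queue *q)
          {
                  /* Separate cap for discards, in 512-byte sectors.  The byte
                   * count must still fit bio->bi_size, i.e. 32 bits. */
                  blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
          }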
    • block: use normal I/O path for discard requests · c15227de
      Authored by Christoph Hellwig
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally adding a payload there makes the ownership
      of the bio backing unclear as it's now allocated by the device driver
      and not the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD which is used to indicate whether
      the queue supports discard operations or not.  blkdev_issue_discard now
      allocates a one-page, sector-length payload which is the right thing
      for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>
      which did the prepare_discard_fn but not the different payload allocation
      yet.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c15227de
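      A hedged sketch of both sides of the new interface: the driver marks its
      queue discard-capable, and its request handler merely checks the request
      type.  queue_flag_set_unlocked() and blk_discard_rq() are block-layer
      helpers of this era; the handler itself is illustrative:

          #include <linux/blkdev.h>

          /* Driver side: declare discard support on the queue. */
          static void sketch_enable_discard(struct request_queue *q)
          {
                  queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
          }

          /* Handler side: discards now arrive on the normal I/O path, so the
           * old prepare_discard_fn() hook reduces to a type check. */
          static int sketch_handle_request(struct request *rq)
          {
                  if (blk_discard_rq(rq)) {
                          /* ... translate to the hardware trim/erase command ... */
                          return 0;
                  }
                  /* ... normal read/write handling ... */
                  return 0;
          }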
    • block: allow large discard requests · ca80650c
      Authored by Christoph Hellwig
      Currently we set the bio size to the byte equivalent of the blocks to
      be trimmed when submitting the initial DISCARD ioctl.  That means it
      is subject to the max_hw_sectors limitation of the HBA which is
      much lower than the size of a DISCARD request we can support.
      Add a separate max_discard_sectors tunable to limit the size for discard
      requests.
      
      We limit the max discard request size in bytes to what fits in 32 bits,
      as that is the limit of bio->bi_size.  This could be much larger if we
      had a way to pass that information through the block layer.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      ca80650c
    • block: use normal I/O path for discard requests · 1122a26f
      Authored by Christoph Hellwig
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally adding a payload there makes the ownership
      of the bio backing unclear as it's now allocated by the device driver
      and not the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD which is used to indicate whether
      the queue supports discard operations or not.  blkdev_issue_discard now
      allocates a one-page, sector-length payload which is the right thing
      for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>
      which did the prepare_discard_fn but not the different payload allocation
      yet.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      1122a26f
  12. 14 Sep 2009, 2 commits
  13. 11 Sep 2009, 5 commits
    • block: enable rq CPU completion affinity by default · 01e97f6b
      Authored by Jens Axboe
      Test results here look good, and on big OLTP runs it has also been
      shown to significantly increase the cycles attributed to the database
      and to deliver a performance boost.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      01e97f6b
    • block: improve queue_should_plug() by looking at IO depths · fb1e7538
      Authored by Jens Axboe
      Instead of just checking whether this device uses block layer
      tagging, we can improve the detection by looking at the maximum
      queue depth it has reached. If that crosses 4, then deem it a
      queuing device.
      
      This is important on high IOPS devices, since plugging hurts
      the performance there (it can be as much as 10-15% of the sys
      time).
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      fb1e7538
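      A rough sketch of the heuristic, with hypothetical bookkeeping names (the
      real patch keeps similar state on struct request_queue): track the
      deepest the queue has been, and once that crosses 4, treat the device as
      a queuing device and stop plugging:

          #include <linux/blkdev.h>

          #define SKETCH_QUEUING_THRESHOLD 4      /* depth that marks a queuing device */

          static unsigned int sketch_max_depth;   /* hypothetical per-queue state */

          static void sketch_note_dispatch(struct request_queue *q)
          {
                  unsigned int depth = q->in_flight[0] + q->in_flight[1];

                  if (depth > sketch_max_depth)
                          sketch_max_depth = depth;
          }

          /* Plugging helps spinning disks build larger requests, but can cost
           * 10-15% of sys time on high-IOPS devices that queue internally. */
          static bool sketch_should_plug(void)
          {
                  return sketch_max_depth <= SKETCH_QUEUING_THRESHOLD;
          }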
    • bio: first step in sanitizing the bio->bi_rw flag testing · 1f98a13f
      Authored by Jens Axboe
      Get rid of any functions that test for these bits and make callers
      use bio_rw_flagged() directly. Then it is at least directly apparent
      what variable and flag they check.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      1f98a13f
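      A minimal sketch of the resulting style; bio_rw_flagged() and the
      BIO_RW_* flags are this era's bio API, the wrapper around them is
      illustrative:

          #include <linux/bio.h>

          /* Callers now test flags explicitly, so the variable and the bit
           * being checked are directly apparent at the call site. */
          static bool sketch_bio_is_sync_or_meta(struct bio *bio)
          {
                  return bio_rw_flagged(bio, BIO_RW_SYNCIO) ||
                         bio_rw_flagged(bio, BIO_RW_META);
          }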
    • block: implement mixed merge of different failfast requests · 80a761fd
      Authored by Tejun Heo
      Failfast has different characteristics from other attributes.  When
      issuing, executing and successfully completing requests, failfast
      doesn't make any difference; it only affects how a request is handled
      on failure.  Allowing requests with different failfast settings to be
      merged causes normal IOs to fail prematurely, while disallowing it
      carries a performance penalty, as failfast is used for readaheads
      which are likely to be located near in-flight or to-be-issued normal IOs.
      
      This patch introduces the concept of 'mixed merge'.  A request is a
      mixed merge if it is merge of segments which require different
      handling on failure.  Currently the only mixable attributes are
      failfast ones (or lack thereof).
      
      When a bio with different failfast settings is added to an existing
      request or requests of different failfast settings are merged, the
      merged request is marked mixed.  Each bio carries failfast settings
      and the request always tracks failfast state of the first bio.  When
      the request fails, blk_rq_err_bytes() can be used to determine how
      many bytes can be safely failed without crossing into an area which
      requires further retrials.
      
      This allows request merging regardless of failfast settings while
      keeping the failure handling correct.
      
      This patch only implements mixed merge but doesn't enable it.  The
      next one will update SCSI to make use of mixed merge.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Niel Lambrechts <niel.lambrechts@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      80a761fd
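      A hedged sketch of a completion path using the new helper;
      blk_rq_err_bytes() is from this commit and blk_end_request() is the
      standard partial-completion API, while the handler is illustrative:

          #include <linux/blkdev.h>

          /* On failure of a mixed-merge request, fail only the leading bytes
           * that share the failing bio's failfast settings; the remainder can
           * be retried through the normal, non-failfast path. */
          static void sketch_complete_with_error(struct request *rq, int error)
          {
                  unsigned int nr_bytes = blk_rq_err_bytes(rq);

                  if (blk_end_request(rq, error, nr_bytes)) {
                          /* Bytes remain: requeue or retry the rest normally. */
                  }
          }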
    • block: use the same failfast bits for bio and request · a82afdfc
      Authored by Tejun Heo
      bio and request use the same set of failfast bits.  This patch makes
      the following changes to simplify things.
      
      * enumify BIO_RW* bits and reorder bits such that BIO_RW_FAILFAST_*
        bits coincide with __REQ_FAILFAST_* bits.
      
      * The above pushes BIO_RW_AHEAD out of sync with __REQ_FAILFAST_DEV
        but the matching is useless anyway.  init_request_from_bio() is
        responsible for setting FAILFAST bits on FS requests and non-FS
        requests never use BIO_RW_AHEAD.  Drop the code and comment from
        blk_rq_bio_prep().
      
      * Define REQ_FAILFAST_MASK which is OR of all FAILFAST bits and
        simplify FAILFAST flags handling in init_request_from_bio().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      a82afdfc
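      A sketch of the simplification this enables: with BIO_RW_FAILFAST_* and
      __REQ_FAILFAST_* occupying the same bit positions, propagating failfast
      state becomes a single mask operation (the function is an illustrative
      reduction of init_request_from_bio()):

          #include <linux/blkdev.h>

          static void sketch_copy_failfast(struct request *req, struct bio *bio)
          {
                  if (bio_rw_flagged(bio, BIO_RW_AHEAD))
                          /* Readaheads fail fast on every class of error. */
                          req->cmd_flags |= REQ_FAILFAST_MASK;
                  else
                          /* Bits coincide, so one AND replaces three bit tests. */
                          req->cmd_flags |= bio->bi_rw & REQ_FAILFAST_MASK;
          }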
  14. 01 Aug 2009, 1 commit
  15. 12 Jul 2009, 1 commit
  16. 11 Jul 2009, 2 commits
  17. 01 Jul 2009, 1 commit
    • block: get rid of queue-private command filter · 018e0446
      Authored by Jens Axboe
      The initial patches to support this through sysfs export were broken
      and have been #if 0'ed out in every release.  So let's just kill the
      code and reclaim some space in struct request_queue; if anyone later
      wants to fix up the sysfs bits, the git history can easily restore
      the removed code.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      018e0446
  18. 16 Jun 2009, 1 commit
  19. 11 Jun 2009, 1 commit
    • block: add request clone interface (v2) · b0fd271d
      Authored by Kiyoshi Ueda
      This patch adds the following 2 interfaces for request-stacking drivers:
      
        - blk_rq_prep_clone(struct request *clone, struct request *orig,
      		      struct bio_set *bs, gfp_t gfp_mask,
      		      int (*bio_ctr)(struct bio *, struct bio*, void *),
      		      void *data)
            * Clones bios in the original request to the clone request
              (bio_ctr is called for each cloned bios.)
            * Copies attributes of the original request to the clone request.
              The actual data parts (e.g. ->cmd, ->buffer, ->sense) are not
              copied.
      
        - blk_rq_unprep_clone(struct request *clone)
            * Frees cloned bios from the clone request.
      
      Request stacking drivers (e.g. request-based dm) need to make a clone
      request for a submitted request and dispatch it to other devices.
      
      To allocate the request for the clone, request stacking drivers may
      not be able to use blk_get_request(), because the allocation may be
      done in an irq-disabled context.  So blk_rq_prep_clone() takes a
      request allocated by the caller as an argument.
      
      For each clone bio in the clone request, request stacking drivers
      should be able to set up their own completion handler.
      So blk_rq_prep_clone() takes a callback function which is called
      for each clone bio, and a pointer for private data which is passed
      to the callback.
      
      NOTE:
      blk_rq_prep_clone() doesn't copy any actual data of the original
      request.  Pages are shared between original bios and cloned bios.
      So caller must not complete the original request before the clone
      request.
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Cc: Boaz Harrosh <bharrosh@panasas.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      b0fd271d
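      A hedged sketch of a request-stacking driver (in the style of
      request-based dm) using the pair; the two interfaces are the ones quoted
      above, everything else is illustrative:

          #include <linux/blkdev.h>
          #include <linux/bio.h>

          /* Hypothetical completion handler installed on each cloned bio. */
          static void sketch_clone_endio(struct bio *clone, int error)
          {
                  /* ... complete the matching part of the original request ... */
                  bio_put(clone);
          }

          /* bio_ctr callback: invoked once per cloned bio so the stacking
           * driver can hook up its own completion handling. */
          static int sketch_bio_ctr(struct bio *clone, struct bio *orig, void *data)
          {
                  clone->bi_end_io = sketch_clone_endio;
                  clone->bi_private = data;
                  return 0;
          }

          /* The clone request itself is allocated by the caller, since
           * blk_get_request() may be unusable with irqs disabled. */
          static int sketch_map_request(struct request *clone, struct request *orig,
                                        struct bio_set *bs, void *ctx)
          {
                  int r = blk_rq_prep_clone(clone, orig, bs, GFP_ATOMIC,
                                            sketch_bio_ctr, ctx);
                  if (r)
                          return r;

                  /* ... dispatch 'clone' to the lower device; once it and the
                   * original request are completed, blk_rq_unprep_clone()
                   * frees the cloned bios ... */
                  return 0;
          }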
  20. 09 Jun 2009, 1 commit
  21. 07 Jun 2009, 1 commit
    • partitions: add ->set_capacity block device method · db429e9e
      Authored by Bartlomiej Zolnierkiewicz
      * Add ->set_capacity block device method and use it in rescan_partitions()
        to attempt enabling native capacity of the device upon detecting the
        partition which exceeds device capacity.
      
      * Add the GENHD_FL_NATIVE_CAPACITY flag to limit attempts at enabling
        native capacity during a partition scan.
      
      Together with the consecutive patch implementing ->set_capacity method in
      ide-gd device driver this allows automatic disabling of Host Protected Area
      (HPA) if any partitions overlapping HPA are detected.
      
      Cc: Robert Hancock <hancockrwd@gmail.com>
      Cc: Frans Pop <elendil@planet.nl>
      Cc: "Andries E. Brouwer" <Andries.Brouwer@cwi.nl>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Emphatically-Acked-by: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      db429e9e
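      A hedged sketch of a driver wiring up the new method; the
      block_device_operations member is the one this commit adds, the handler
      body is illustrative:

          #include <linux/module.h>
          #include <linux/genhd.h>
          #include <linux/blkdev.h>

          /* ->set_capacity: asked to expose up to 'capacity' sectors, try to
           * unlock the device's native capacity (e.g. disable HPA) and return
           * the capacity actually available afterwards. */
          static unsigned long long sketch_set_capacity(struct gendisk *disk,
                                                        unsigned long long capacity)
          {
                  /* ... issue the hardware command to restore native capacity ... */
                  return capacity;
          }

          static const struct block_device_operations sketch_fops = {
                  .owner          = THIS_MODULE,
                  .set_capacity   = sketch_set_capacity,
                  /* .open, .release, .ioctl, ... */
          };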
  22. 03 Jun 2009, 1 commit
  23. 23 May 2009, 4 commits
  24. 20 May 2009, 1 commit
  25. 19 May 2009, 2 commits
  26. 11 May 2009, 2 commits