1. 23 Aug 2010, 1 commit
  2. 12 Aug 2010, 1 commit
  3. 09 Aug 2010, 1 commit
  4. 08 Aug 2010, 6 commits
  5. 24 Jun 2010, 1 commit
  6. 17 Jun 2010, 1 commit
    • block: fix DISCARD_BARRIER requests · fbbf0556
      Christoph Hellwig authored
      Filesystems assume that DISCARD_BARRIER requests are full barriers, so that
      they don't have to track in-progress discard operations when submitting new
      I/O. But currently we only treat them as elevator barriers, which don't
      actually do the necessary queue drains.
      
      Also remove the unlikely() around both the DISCARD and BARRIER checks -
      they happen far too often to justify a static mispredict.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
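      As an aside on the unlikely() removal: a minimal, runnable sketch of what
      the kernel's static branch hints boil down to (userspace demo; the macro
      definitions mirror the kernel's, everything else is illustrative):

            #include <stdio.h>

            /* Kernel-style branch hints built on GCC's __builtin_expect. */
            #define likely(x)   __builtin_expect(!!(x), 1)
            #define unlikely(x) __builtin_expect(!!(x), 0)

            int main(void)
            {
                    int is_discard = 1;

                    /*
                     * A static "unlikely" hint only pays off when the branch
                     * really is rare; discard and barrier requests are common
                     * enough that the hint mispredicts, hence its removal.
                     */
                    if (unlikely(is_discard))
                            puts("discard path");
                    return 0;
            }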
  7. 04 Jun 2010, 2 commits
  8. 11 May 2010, 1 commit
    • block: allow initialization of previously allocated request_queue · 01effb0d
      Mike Snitzer authored
      blk_init_queue() allocates the request_queue structure and then
      initializes it as needed (request_fn, elevator, etc).
      
      Split initialization out to blk_init_allocated_queue_node.
      Introduce blk_init_allocated_queue wrapper function to model existing
      blk_init_queue and blk_init_queue_node interfaces.
      
      Export elv_register_queue to allow a newly added elevator to be
      registered with sysfs.  Export elv_unregister_queue for symmetry.
      
      These changes allow DM to initialize a device's request_queue with more
      precision.  In particular, DM no longer unconditionally initializes a
      full request_queue (elevator et al).  It only does so for a
      request-based DM device.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
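      A hedged sketch of the split flow this enables for a driver like DM
      (signatures assumed from the description above; my_request_fn and
      my_queue_lock are hypothetical, and error handling is trimmed):

            struct request_queue *q;

            q = blk_alloc_queue(GFP_KERNEL);   /* allocation only, no elevator yet */
            if (!q)
                    return -ENOMEM;

            /* Later, once the device is known to be request-based: */
            if (!blk_init_allocated_queue(q, my_request_fn, &my_queue_lock))
                    return -ENOMEM;

            if (elv_register_queue(q))         /* export added by this commit */
                    return -ENOMEM;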
  9. 09 Apr 2010, 1 commit
    • blkio: Add io_merged stat · 812d4026
      Divyesh Shah authored
      This includes both the number of bios merged into requests belonging to this
      cgroup and the number of requests merged together.
      In the past, we've observed different merging behavior across upstream
      kernels, some by design, some actual bugs. This stat helps a lot in
      debugging such problems when applications report decreased throughput with
      a new kernel version.
      
      This required adding an extra elevator hook to capture bios being merged, as
      I did not want to pollute the elevator code with blkiocg knowledge; the
      accounting invocation therefore comes from CFQ.
      
      Signed-off-by: Divyesh Shah <dpshah@google.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
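      The new hook plausibly dispatches through the elevator ops table like the
      existing ones; a sketch (the function and ops-field names are assumed from
      the description, not verified against the tree):

            void elv_bio_merged(struct request_queue *q, struct request *rq,
                                struct bio *bio)
            {
                    struct elevator_queue *e = q->elevator;

                    /* Let the scheduler (CFQ here) account the merged bio. */
                    if (e->ops->elevator_bio_merged_fn)
                            e->ops->elevator_bio_merged_fn(q, rq, bio);
            }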
  10. 06 Apr 2010, 1 commit
    • laptop-mode: Make flushes per-device · 31373d09
      Matthew Garrett authored
      One of the features of laptop-mode is that it forces a writeout of dirty
      pages if something else triggers a physical read or write from a device.
      The current implementation flushes pages on all devices, rather than only
      the one that triggered the flush. This patch alters the behaviour so that
      only the recently accessed block device is flushed, preventing other
      disks from being spun up for no terribly good reason.
      Signed-off-by: Matthew Garrett <mjg@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
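      A hedged sketch of the per-device shape this implies: the completion hook
      takes the backing_dev_info that saw the I/O and arms only that device's
      timer (the field name and timeout conversion are assumptions):

            void laptop_io_completion(struct backing_dev_info *info)
            {
                    /* Arm this device's writeback timer, not a global one. */
                    mod_timer(&info->laptop_mode_wb_timer,
                              jiffies + laptop_mode);
            }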
  11. 02 Apr 2010, 1 commit
  12. 26 Feb 2010, 1 commit
  13. 23 Feb 2010, 2 commits
  14. 26 Nov 2009, 1 commit
    • block: add helpers to run flush_dcache_page() against a bio and a request's pages · 2d4dc890
      Ilya Loginov authored
      The mtdblock driver doesn't call flush_dcache_page() for pages in a request.
      This causes problems on architectures where the icache doesn't fill from
      the dcache, or where dcache aliases exist. The patch fixes this.
      
      The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid
      pointless empty cache-thrashing loops on architectures for which
      flush_dcache_page() is a no-op. Every architecture now provides this
      symbol; the new helpers flush the pages on architectures where
      ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 1 and do nothing otherwise.
      
      See "fix mtd_blkdevs problem with caches on some architectures" discussion
      on LKML for more information.
      Signed-off-by: Ilya Loginov <isloginov@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Peter Horton <phorton@bitbox.co.uk>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
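      The request-side helper most likely iterates the request's segments; a
      sketch based on the title's description (the iterator names are that era's
      real ones, but treat the body as illustrative):

            void rq_flush_dcache_pages(struct request *rq)
            {
                    struct req_iterator iter;
                    struct bio_vec *bvec;

                    /* Flush every page backing the request's segments. */
                    rq_for_each_segment(bvec, rq, iter)
                            flush_dcache_page(bvec->bv_page);
            }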
  15. 24 Oct 2009, 1 commit
  16. 07 Oct 2009, 1 commit
    • block: Seperate read and write statistics of in_flight requests v2 · 316d315b
      Nikanth Karthikesan authored
      Commit a9327cac added separate read
      and write statistics for in_flight requests, and exported the number
      of read and write requests in progress separately through sysfs.
      
      But Corrado Zoccolo <czoccolo@gmail.com> reported getting strange
      output from "iostat -kx 2". Global values for service time and
      utilization were garbage. For interval values, utilization was always
      100%, and service time was higher than normal.
      
      So this was reverted by commit 0f78ab98
      
      The problem was in part_round_stats_single(); I missed the following:
              if (now == part->stamp)
                      return;
      
      -       if (part->in_flight) {
      +       if (part_in_flight(part)) {
                      __part_stat_add(cpu, part, time_in_queue,
                                      part_in_flight(part) * (now - part->stamp));
                      __part_stat_add(cpu, part, io_ticks, (now - part->stamp));
      
      With this chunk included, the reported regression gets fixed.
      Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
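      The v2 split keeps per-direction counters and sums them where a total is
      needed; a sketch of the accessor implied by the diff above (the array
      layout is an assumption):

            static inline int part_in_flight(struct hd_struct *part)
            {
                    /* Sum the separated read [0] and write [1] counters. */
                    return part->in_flight[0] + part->in_flight[1];
            }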
  17. 05 Oct 2009, 2 commits
    • block: get rid of kblock_schedule_delayed_work() · 23e018a1
      Jens Axboe authored
      It was briefly introduced to allow CFQ to do delayed scheduling,
      but we ended up removing that feature again. So let's kill the
      function and its export, and just switch CFQ back to the normal work
      schedule, since it is now passing in a '0' delay from all call
      sites.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
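      In shape, the CFQ call sites change roughly like this (a sketch; the work
      item name is assumed):

            - kblockd_schedule_delayed_work(q, &cfqd->unplug_work, 0);
            + kblockd_schedule_work(q, &cfqd->unplug_work);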
    • Revert "Seperate read and write statistics of in_flight requests" · 0f78ab98
      Jens Axboe authored
      This reverts commit a9327cac.
      
      Corrado Zoccolo <czoccolo@gmail.com> reports:
      
      "with 2.6.32-rc1 I started getting the following strange output from
      "iostat -kx 2":
      Linux 2.6.31bisect (et2) 	04/10/2009 	_i686_	(2 CPU)
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                10,70    0,00    3,16   15,75    0,00   70,38
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda              18,22     0,00    0,67    0,01    14,77     0,02    43,94     0,01   10,53 39043915,03 2629219,87
      sdb              60,89     9,68   50,79    3,04  1724,43    50,52    65,95     0,70   13,06 488437,47 2629219,87
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 2,72    0,00    0,74    0,00    0,00   96,53
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 6,68    0,00    0,99    0,00    0,00   92,33
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      sdb               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      
      avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                 4,40    0,00    0,73    1,47    0,00   93,40
      
      Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
      sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00 100,00
      sdb               0,00     4,00    0,00    3,00     0,00    28,00    18,67     0,06   19,50 333,33 100,00
      
      Global values for service time and utilization are garbage. For
      interval values, utilization is always 100%, and service time is
      higher than normal.
      
      I bisected it down to:
      [a9327cac] Seperate read and write
      statistics of in_flight requests
      and verified that reverting just that commit indeed solves the issue
      on 2.6.32-rc1."
      
      So until this is debugged, revert the bad commit.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  18. 03 Oct 2009, 1 commit
  19. 02 Oct 2009, 6 commits
    • Add a tracepoint for block request remapping · b0da3f0d
      Jun'ichi Nomura authored
      Since 2.6.31 now has request-based device-mapper, it's useful to have
      a tracepoint for request-remapping as well as bio-remapping.
      This patch adds a tracepoint for request-remapping, trace_block_rq_remap().
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
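      Usage presumably mirrors the existing bio tracepoint; a fragmentary sketch
      of a request-based device-mapper call site (the argument order is assumed
      by analogy with the bio variant, and clone/md/rq come from the caller):

            /* Trace the clone request being remapped onto the DM device. */
            trace_block_rq_remap(clone->q, clone,
                                 disk_devt(dm_disk(md)), blk_rq_pos(rq));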
    • block: allow large discard requests · 67efc925
      Christoph Hellwig authored
      Currently we set the bio size to the byte equivalent of the blocks to
      be trimmed when submitting the initial DISCARD ioctl.  That means it
      is subject to the max_hw_sectors limitation of the HBA which is
      much lower than the size of a DISCARD request we can support.
      Add a separate max_discard_sectors tunable to limit the size for discard
      requests.
      
      We limit the max discard request size in bytes to 32 bits, as that is the
      limit for bio->bi_size.  This could be much larger if we had a way to pass
      that information through the block layer.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
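      A hedged sketch of how a driver would use the new tunable (the setter name
      is inferred from the existing blk_queue_max_* convention; the values are
      illustrative):

            /* Normal I/O stays capped by the HBA's transfer limit... */
            blk_queue_max_hw_sectors(q, 1024);
            /* ...while discards may go up to bio->bi_size's 32-bit limit. */
            blk_queue_max_discard_sectors(q, UINT_MAX >> 9);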
    • block: use normal I/O path for discard requests · c15227de
      Christoph Hellwig authored
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally, adding a payload there makes the ownership
      of the bio's backing memory unclear, as it's then allocated by the device
      driver and not by the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD which is used to indicate whether
      the queue supports discard operations or not.  blkdev_issue_discard now
      allocates a one-page, sector-length payload which is the right thing
      for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>
      which handled the prepare_discard_fn removal but not yet the different
      payload allocation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
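      In practice the new convention reads roughly like this (a sketch; the
      flag-test helper name follows the usual queue-flag pattern and is assumed):

            /* Driver side: advertise discard support on the queue. */
            queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);

            /* Submitter side: check support before issuing a discard. */
            if (!blk_queue_discard(q))
                    return -EOPNOTSUPP;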
    • Add a tracepoint for block request remapping · 1a35e0f6
      Jun'ichi Nomura authored
      Since 2.6.31 now has request-based device-mapper, it's useful to have
      a tracepoint for request-remapping as well as bio-remapping.
      This patch adds a tracepoint for request-remapping, trace_block_rq_remap().
      Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
      Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: allow large discard requests · ca80650c
      Christoph Hellwig authored
      Currently we set the bio size to the byte equivalent of the blocks to
      be trimmed when submitting the initial DISCARD ioctl.  That means it
      is subject to the max_hw_sectors limitation of the HBA which is
      much lower than the size of a DISCARD request we can support.
      Add a separate max_discard_sectors tunable to limit the size for discard
      requests.
      
      We limit the max discard request size in bytes to 32 bits, as that is the
      limit for bio->bi_size.  This could be much larger if we had a way to pass
      that information through the block layer.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • block: use normal I/O path for discard requests · 1122a26f
      Christoph Hellwig authored
      prepare_discard_fn() was being called in a place where memory allocation
      was effectively impossible.  This makes it inappropriate for all but
      the most trivial translations of Linux's DISCARD operation to the block
      command set.  Additionally, adding a payload there makes the ownership
      of the bio's backing memory unclear, as it's then allocated by the device
      driver and not by the submitter as usual.
      
      It is replaced with QUEUE_FLAG_DISCARD which is used to indicate whether
      the queue supports discard operations or not.  blkdev_issue_discard now
      allocates a one-page, sector-length payload which is the right thing
      for the common ATA and SCSI implementations.
      
      The mtd implementation of prepare_discard_fn() is replaced with simply
      checking for the request being a discard.
      
      Largely based on a previous patch from Matthew Wilcox <matthew@wil.cx>
      which handled the prepare_discard_fn removal but not yet the different
      payload allocation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  20. 14 Sep 2009, 1 commit
  21. 11 Sep 2009, 6 commits
    • block: trace bio queueing trial only when it occurs · 01edede4
      Minchan Kim authored
      If a bio is discarded or crosses the end of the device,
      no bio queueing attempt actually occurs.
      
      Originally the trace was called just before make_request:
      [PATCH] Block queue IO tracing support (blktrace) as of 2006-03-23
            2056a782
      
      Then two patches added checks between the trace and the queueing:
      [PATCH] md: check bio address after mapping through partitions
              5ddfe969,
      [BLOCK] Don't allow empty barriers to be passed down to
      queues that don't grok them
              51fd77bd
      
      This broke the original intent.
      Let's trace the queueing only when it actually happens.
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
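      A sketch of the resulting ordering in generic_make_request() (simplified;
      the check and tracepoint names are from that era's code, but the exact
      sequence here is assumed):

            if (bio_check_eod(bio, nr_sectors))
                    goto end_io;    /* past end of device: never queued */

            /* Only bios that survive all the checks are traced as queued. */
            trace_block_bio_queue(q, bio);
            ret = q->make_request_fn(q, bio);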
    • block: improve queue_should_plug() by looking at IO depths · fb1e7538
      Jens Axboe authored
      Instead of just checking whether this device uses block layer
      tagging, we can improve the detection by looking at the maximum
      queue depth it has reached. If that crosses 4, then deem it a
      queuing device.
      
      This is important on high IOPS devices, since plugging hurts
      the performance there (it can be as much as 10-15% of the sys
      time).
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
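      The heuristic plausibly reduces to something like this (a sketch; the flag
      helper name and the depth bookkeeping are assumptions):

            static inline bool queue_should_plug(struct request_queue *q)
            {
                    /*
                     * A device whose observed queue depth has crossed 4 is
                     * deemed a real queuing device: skip plugging for it.
                     */
                    return !blk_queue_queuing(q);
            }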
    • bio: first step in sanitizing the bio->bi_rw flag testing · 1f98a13f
      Jens Axboe authored
      Get rid of any functions that test for these bits and make callers
      use bio_rw_flagged() directly. Then it is at least directly apparent
      what variable and flag they check.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
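      The shape of the conversion, as a sketch (bio_barrier() is one of the
      removed wrappers; the flagged form is the replacement the text describes):

            - if (bio_barrier(bio))
            + if (bio_rw_flagged(bio, BIO_RW_BARRIER))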
    • block: implement mixed merge of different failfast requests · 80a761fd
      Tejun Heo authored
      Failfast has different characteristics from other attributes.  When issuing,
      executing and successfully completing requests, failfast doesn't make
      any difference.  It only affects how a request is handled on failure.
      Allowing requests with different failfast settings to be merged causes
      normal I/Os to fail prematurely, while disallowing it has performance
      penalties, as failfast is used for readaheads which are likely to be
      located near in-flight or to-be-issued normal I/Os.
      
      This patch introduces the concept of 'mixed merge'.  A request is a
      mixed merge if it is merge of segments which require different
      handling on failure.  Currently the only mixable attributes are
      failfast ones (or lack thereof).
      
      When a bio with different failfast settings is added to an existing
      request or requests of different failfast settings are merged, the
      merged request is marked mixed.  Each bio carries failfast settings
      and the request always tracks failfast state of the first bio.  When
      the request fails, blk_rq_err_bytes() can be used to determine how
      many bytes can be safely failed without crossing into an area which
      requires further retries.
      
      This allows request merging regardless of failfast settings while
      keeping the failure handling correct.
      
      This patch only implements mixed merge but doesn't enable it.  The
      next one will update SCSI to make use of mixed merge.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Niel Lambrechts <niel.lambrechts@gmail.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
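      Per the description, error handling can then fail just the safe prefix of
      a mixed request; a hedged sketch (blk_end_request() is the era's real
      completion helper; rq is assumed to be a failed mixed-merge request):

            /*
             * Bytes sharing the first bio's failfast settings can be failed
             * outright; the remainder of the request must be retried.
             */
            unsigned int bytes = blk_rq_err_bytes(rq);

            blk_end_request(rq, -EIO, bytes);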
    • block: use the same failfast bits for bio and request · a82afdfc
      Tejun Heo authored
      bio and request use the same set of failfast bits.  This patch makes
      the following changes to simplify things.
      
      * enumify BIO_RW* bits and reorder them such that BIO_RW_FAILFAST_*
        bits coincide with __REQ_FAILFAST_* bits.
      
      * The above pushes BIO_RW_AHEAD out of sync with __REQ_FAILFAST_DEV
        but the matching is useless anyway.  init_request_from_bio() is
        responsible for setting FAILFAST bits on FS requests and non-FS
        requests never use BIO_RW_AHEAD.  Drop the code and comment from
        blk_rq_bio_prep().
      
      * Define REQ_FAILFAST_MASK which is OR of all FAILFAST bits and
        simplify FAILFAST flags handling in init_request_from_bio().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
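      The simplified handling presumably collapses to a single masked copy; a
      sketch (the mask contents follow the description; the exact
      init_request_from_bio() line is an assumption):

            #define REQ_FAILFAST_MASK \
                    (REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)

            /* In init_request_from_bio(): copy whichever failfast bits are set. */
            req->cmd_flags |= bio->bi_rw & REQ_FAILFAST_MASK;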
    • writeback: add name to backing_dev_info · d993831f
      Jens Axboe authored
      This enables us to track who does what and print info. Its main use
      is catching dirty inodes on the default_backing_dev_info, so we can
      fix that up.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
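      A sketch of what the addition looks like for a bdi user (the name value
      and the capabilities line are illustrative, not from the patch):

            static struct backing_dev_info my_bdi = {
                    .name         = "mydriver",     /* the new field */
                    .capabilities = BDI_CAP_MAP_COPY,
            };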
  22. 29 Jul 2009, 1 commit
    • block: make the end_io functions be non-GPL exports · 56ad1740
      Jens Axboe authored
      Prior to the change for more sane end_io functions, we exported
      the helpers with the normal EXPORT_SYMBOL(). That got changed
      to _GPL() for the new interface. Revert that particular change,
      on the basis that this is basic functionality and doesn't dip
      into internal structures. If these exports can't be non-GPL,
      then we may as well make EXPORT_SYMBOL() imply GPL for
      everything.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
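      In shape, the change per helper is simply (one representative symbol shown;
      which helpers are covered is in the patch itself):

            -EXPORT_SYMBOL_GPL(blk_end_request);
            +EXPORT_SYMBOL(blk_end_request);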