1. 28 Jan 2017 (1 commit)
  2. 18 Jan 2017 (2 commits)
  3. 09 Dec 2016 (1 commit)
    • block: improve handling of the magic discard payload · f9d03f96
      Authored by Christoph Hellwig
      Instead of allocating a single unused biovec for discard requests,
      send them down without any payload.  We instead allow the driver to
      add a "special" payload, using a biovec embedded into struct request
      (unioned over other fields never used while the request is in the
      driver), and overload the number of segments for this case.
      
      This has a couple of advantages:
      
       - we don't have to allocate the bio_vec
       - the amount of special casing for discard requests in the block
         layer is significantly reduced
       - using this same scheme for other request types is trivial,
         which will be important for implementing the new WRITE_ZEROES
         op on devices where it actually requires a payload (e.g. SCSI)
       - we can get rid of playing games with the request length, as
         we'll never touch it and completions will work just fine
       - it will allow us to support ranged discard operations in the
         future by merging non-contiguous discard bios into a single
         request
       - last but not least it removes a lot of code
      
      This patch is the common base for my WIP series for ranged discards
      and for removing discard_zeroes_data in favor of always using
      REQ_OP_WRITE_ZEROES, so it would be good to get it in quickly.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      f9d03f96
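
      To make the mechanism above concrete, here is a minimal user-space
      sketch of the special-payload idea. The names (special_vec,
      RQF_SPECIAL_PAYLOAD, nr_phys_segments) mirror the kernel, but the
      structures and values are simplified illustrations, not the actual
      kernel definitions.

        #include <stdio.h>

        struct bio_vec_sketch {
            void         *bv_page;    /* stand-in for struct page * */
            unsigned int  bv_len;
            unsigned int  bv_offset;
        };

        #define RQF_SPECIAL_PAYLOAD (1u << 0)    /* illustrative flag value */

        struct request_sketch {
            unsigned int          rq_flags;
            unsigned short        nr_phys_segments;
            struct bio_vec_sketch special_vec;   /* driver-added payload */
        };

        /* The driver attaches its own payload page; the block layer no longer
         * allocates an unused biovec up front for discard-style requests. */
        static void attach_special_payload(struct request_sketch *rq,
                                           void *page, unsigned int len)
        {
            rq->special_vec.bv_page = page;
            rq->special_vec.bv_len = len;
            rq->special_vec.bv_offset = 0;
            rq->rq_flags |= RQF_SPECIAL_PAYLOAD;
            /* Segment accounting is overloaded: one "segment" for the payload. */
            rq->nr_phys_segments = 1;
        }

        int main(void)
        {
            static char payload[512];
            struct request_sketch rq = { 0 };

            attach_special_payload(&rq, payload, sizeof(payload));
            printf("segments=%u special=%d\n",
                   (unsigned int)rq.nr_phys_segments,
                   !!(rq.rq_flags & RQF_SPECIAL_PAYLOAD));
            return 0;
        }
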
  4. 05 Dec 2016 (1 commit)
    • block: fix unintended fallthrough in generic_make_request_checks() · 58886785
      Authored by Nicolai Stange
      Since commit e73c23ff ("block: add async variant of
      blkdev_issue_zeroout") messages like the following show up:
      
        EXT4-fs (dm-1): Delayed block allocation failed for inode 2368848 at
                        logical offset 0 with max blocks 1 with error 95
        EXT4-fs (dm-1): This should not happen!! Data will be lost
      
      Due to the following fallthrough introduced with
      commit 2d253440 ("block: Define zoned block device operations"),
      generic_make_request_checks() would accept a REQ_OP_WRITE_SAME bio only
      if the block device supports "write same" *and* is a zoned one:
      
        switch (bio_op(bio)) {
        [...]
        case REQ_OP_WRITE_SAME:
              if (!bdev_write_same(bio->bi_bdev))
                      goto not_supported;
        case REQ_OP_ZONE_REPORT:
        case REQ_OP_ZONE_RESET:
              if (!bdev_is_zoned(bio->bi_bdev))
                      goto not_supported;
              break;
        [...]
        }
      
      Thus, although the bio setup as done by __blkdev_issue_write_same() from
      commit e73c23ff ("block: add async variant of blkdev_issue_zeroout")
      would succeed, its actual submission would not, resulting in the
      EOPNOTSUPP == 95.
      
      Fix this by removing the fallthrough which, due to the lack of an explicit
      comment, seems to be unintended anyway.
      
      Fixes: e73c23ff ("block: add async variant of blkdev_issue_zeroout")
      Fixes: 2d253440 ("block: Define zoned block device operations")
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      58886785
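
      The fix boils down to terminating the REQ_OP_WRITE_SAME case before
      the zoned-device cases. A stand-alone sketch of the corrected control
      flow follows; the enum values and stub predicates are made up for
      illustration, the point is simply the added break that stops
      REQ_OP_WRITE_SAME from falling through into the zoned checks.

        #include <stdbool.h>
        #include <stdio.h>

        enum op_sketch { OP_WRITE_SAME, OP_ZONE_REPORT, OP_ZONE_RESET, OP_OTHER };

        static bool bdev_write_same_stub(void) { return true;  }  /* supported */
        static bool bdev_is_zoned_stub(void)   { return false; }  /* not zoned */

        static int check_op(enum op_sketch op)
        {
            switch (op) {
            case OP_WRITE_SAME:
                if (!bdev_write_same_stub())
                    goto not_supported;
                break;                    /* <-- the previously missing break */
            case OP_ZONE_REPORT:
            case OP_ZONE_RESET:
                if (!bdev_is_zoned_stub())
                    goto not_supported;
                break;
            default:
                break;
            }
            return 0;
        not_supported:
            return 95;    /* EOPNOTSUPP, the error ext4 reported above */
        }

        int main(void)
        {
            printf("WRITE_SAME on a non-zoned device -> %d\n",
                   check_op(OP_WRITE_SAME));
            return 0;
        }
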
  5. 01 Dec 2016 (1 commit)
  6. 22 Nov 2016 (1 commit)
  7. 16 Nov 2016 (1 commit)
    • block: deal with stale req count of plug list · 0a6219a9
      Authored by Ming Lei
      In both the legacy and mq paths, the request count of the plug list
      is computed before allocating the request, so the number can be stale
      by the time we fall back to a sleeping allocation; the newly
      introduced wbt can sleep too.
      
      This patch deals with the case by checking whether the plug list has
      become empty, and fixes the KASAN report of
      'BUG: KASAN: stack-out-of-bounds' introduced by Shaohua's patches for
      dispatching big requests.
      
      Fixes: 600271d9 (blk-mq: immediately dispatch big size request)
      Fixes: 50d24c34 (block: immediately dispatch big size request)
      Cc: Shaohua Li <shli@fb.com>
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      0a6219a9
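
      The fix can be illustrated with a small user-space sketch (the list
      type and field names are simplified stand-ins, not the kernel's): the
      cached count is discarded whenever the plug list turns out to be
      empty after a possibly sleeping allocation.

        #include <stdbool.h>
        #include <stdio.h>

        struct list_head_sketch { struct list_head_sketch *next, *prev; };

        static bool list_empty_sketch(const struct list_head_sketch *head)
        {
            return head->next == head;
        }

        struct plug_sketch {
            struct list_head_sketch list;    /* pending requests */
            unsigned int request_count;      /* cached count, may go stale */
        };

        /* Stub for an allocation that may sleep; while we sleep, the plug
         * list can be flushed behind our back. */
        static void allocate_request_maybe_sleep(void) { }

        static unsigned int queue_with_plug(struct plug_sketch *plug)
        {
            unsigned int request_count = plug->request_count;

            allocate_request_maybe_sleep();

            /* The fix: re-check the plug list after the (possibly sleeping)
             * allocation instead of trusting the count sampled beforehand. */
            if (list_empty_sketch(&plug->list))
                request_count = plug->request_count = 0;

            return request_count;
        }

        int main(void)
        {
            struct plug_sketch plug;

            plug.list.next = plug.list.prev = &plug.list;  /* already flushed */
            plug.request_count = 3;                        /* stale value */
            printf("refreshed count = %u\n", queue_with_plug(&plug));
            return 0;
        }
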
  8. 12 Nov 2016 (1 commit)
  9. 11 Nov 2016 (2 commits)
    • block: hook up writeback throttling · 87760e5e
      Authored by Jens Axboe
      Enable throttling of buffered writeback to make it a lot smoother,
      with far less impact on other system activity. Background writeback
      should be, by definition, background activity. The fact that we flush
      huge bundles of it at a time means that it potentially has a heavy
      impact on foreground workloads, which isn't ideal. We can't easily
      limit the sizes of the writes that we do, since that would impact
      file system layout in the presence of delayed allocation. So just
      throttle back buffered writeback, unless someone is waiting for it.
      
      The algorithm for when to throttle takes its inspiration from the
      CoDel network scheduling algorithm. Like CoDel, blk-wb monitors the
      minimum latencies of requests over a window of time. Within that
      window, if the minimum latency of any request exceeds a given target,
      then a scale count is incremented and the queue depth is shrunk. The
      next monitoring window is shrunk accordingly. Unlike CoDel, if we hit
      a window that exhibits good behavior, then we simply decrement the
      scale count and re-calculate the limits for that scale value. This
      prevents us from oscillating between a close-to-ideal value and the
      maximum all the time, instead remaining in the windows where we get
      good behavior.
      
      Unlike CoDel, blk-wb allows the scale count to go negative. This
      happens if we primarily have writes going on. Unlike positive scale
      counts, a negative count doesn't change the size of the monitoring
      window. When the heavy writers finish, blk-wb quickly snaps back to
      its stable state of a zero scale count.
      
      The patch registers a sysfs entry, 'wb_lat_usec'. This sets the
      latency target to be met. It defaults to 2 msec for non-rotational
      storage, and 75 msec for rotational storage. Setting this value to
      '0' disables blk-wb. Generally, a user would not have to touch this
      setting.
      
      We don't enable WBT on devices that are managed by CFQ and have a
      non-root block cgroup attached. If we have a proportional share setup
      on this particular disk, then wbt throttling would interfere with it.
      We don't have a strong need for wbt in that case, since we rely on
      CFQ to do that for us.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      87760e5e
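
      A toy sketch of the CoDel-inspired scaling described above, with
      made-up names and numbers; this is not the kernel's wbt
      implementation, just the shape of the idea: a bad window increments
      the scale step and shrinks the allowed depth, a good window steps
      back down and may go negative.

        #include <stdio.h>

        struct wb_sketch {
            int          scale_step;    /* > 0 throttled, < 0 boosted */
            unsigned int max_depth;     /* configured queue depth */
            unsigned int cur_depth;     /* currently allowed depth */
        };

        static void recompute_depth(struct wb_sketch *wb)
        {
            int step = wb->scale_step;
            unsigned int depth = wb->max_depth;

            /* Halve the depth per positive step (never below 1); negative
             * steps allow going beyond the configured depth. */
            for (; step > 0 && depth > 1; step--)
                depth /= 2;
            for (; step < 0; step++)
                depth *= 2;
            wb->cur_depth = depth;
        }

        static void window_done(struct wb_sketch *wb, unsigned int min_lat_us,
                                unsigned int target_us)
        {
            if (min_lat_us > target_us)
                wb->scale_step++;    /* bad window: throttle harder */
            else
                wb->scale_step--;    /* good window: back off, may go negative */
            recompute_depth(wb);
        }

        int main(void)
        {
            struct wb_sketch wb = { .scale_step = 0, .max_depth = 64 };

            window_done(&wb, 5000, 2000);    /* latency target missed */
            printf("step=%d depth=%u\n", wb.scale_step, wb.cur_depth);
            window_done(&wb, 500, 2000);     /* good window */
            printf("step=%d depth=%u\n", wb.scale_step, wb.cur_depth);
            return 0;
        }
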
    • block: add scalable completion tracking of requests · cf43e6be
      Authored by Jens Axboe
      For legacy block, we simply track the completion statistics in the
      request queue. For blk-mq, we track them on a per-software-queue
      basis, which we can then sum up through the hardware queues and
      finally into a per-device state.
      
      The stats are tracked in, roughly, 0.1s interval windows.
      
      Add sysfs files to display the stats.
      
      The feature is off by default, to avoid any extra overhead.
      In-kernel users can turn it on by setting QUEUE_FLAG_STATS in the
      queue flags. We currently don't turn it on when someone merely reads
      any of the stats files; that is something we could add later as well.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      cf43e6be
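
      A minimal sketch of the windowed statistics idea, assuming a roughly
      0.1 s window and simplified field names; this is not the kernel's
      blk-stat code, just an illustration of per-window min/max/mean
      tracking.

        #include <stdint.h>
        #include <stdio.h>

        #define WIN_NSEC 100000000ULL    /* ~0.1 s window */

        struct stat_window {
            uint64_t win_start;          /* ns timestamp of window start */
            uint64_t min, max, sum;
            uint64_t nr_samples;
        };

        static void window_reset(struct stat_window *s, uint64_t now)
        {
            s->win_start = now;
            s->min = UINT64_MAX;
            s->max = s->sum = s->nr_samples = 0;
        }

        static void stat_add(struct stat_window *s, uint64_t now, uint64_t lat_ns)
        {
            if (now - s->win_start >= WIN_NSEC)
                window_reset(s, now);    /* window expired: start a fresh one */
            if (lat_ns < s->min)
                s->min = lat_ns;
            if (lat_ns > s->max)
                s->max = lat_ns;
            s->sum += lat_ns;
            s->nr_samples++;
        }

        int main(void)
        {
            struct stat_window s;

            window_reset(&s, 0);
            stat_add(&s, 1000, 250000);    /* two completions in one window */
            stat_add(&s, 2000, 150000);
            printf("n=%llu min=%llu mean=%llu\n",
                   (unsigned long long)s.nr_samples,
                   (unsigned long long)s.min,
                   (unsigned long long)(s.sum / s.nr_samples));
            return 0;
        }
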
  10. 04 Nov 2016 (1 commit)
    • block: immediately dispatch big size request · 50d24c34
      Authored by Shaohua Li
      Currently the block plug holds up to 16 non-mergeable requests. This
      makes sense if the request size is small, e.g. to reduce lock
      contention. But if the request size is big enough, we don't need to
      worry about lock contention. Holding on to such a request makes no
      sense, and it lowers disk utilization.
      
      In practice, this improves throughput by 10% for my raid5 sequential
      write workload.
      
      The size (128k) is arbitrary right now, but it keeps lock contention
      small. This could probably be made more intelligent, e.g. by checking
      the average size of the held requests, but since this is mainly for
      sequential IO it is probably not worth it.
      
      V2: check the last request instead of the first request, so that as
      long as there is one big request we flush the plug.
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      50d24c34
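
      The decision itself is small; below is a minimal sketch with
      illustrative names (the 128k threshold and the 16-request limit come
      from the description above, everything else is made up):

        #include <stdbool.h>

        #define PLUG_MAX_RQ      16
        #define PLUG_FLUSH_SIZE  (128 * 1024)

        struct pending_rq { unsigned int data_len; };

        /* Flush either when the plug holds too many requests, or as soon as
         * the newest (last queued) request is "big". */
        static bool should_flush_plug(unsigned int queued,
                                      const struct pending_rq *last)
        {
            return queued >= PLUG_MAX_RQ || last->data_len >= PLUG_FLUSH_SIZE;
        }

        int main(void)
        {
            struct pending_rq big = { .data_len = 256 * 1024 };

            /* A single big request triggers an immediate flush (V2 checks
             * the last request, so one big request is enough). */
            return should_flush_plug(1, &big) ? 0 : 1;
        }
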
  11. 28 Oct 2016 (2 commits)
    • block: better op and flags encoding · ef295ecf
      Authored by Christoph Hellwig
      Now that we don't need the common flags to overflow outside the
      range of a 32-bit type, we can encode them the same way for both the
      bio and request fields. In addition, this allows us to place the
      operation first (and make some room for more ops while we're at it)
      and to stop having to shift the operation values around.
      
      This also allows passing around only one value in the block layer
      instead of two (and eventually in the file systems as well, but we
      can do that later), and thus cleaning up a lot of code.
      
      Last but not least, this allows decreasing the size of the cmd_flags
      field in struct request to 32 bits. Various functions passing this
      value could also be updated, but I'd like to avoid that churn for
      now.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      ef295ecf
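
      A small sketch of the unified encoding: the operation sits in the low
      bits of a single 32-bit value and the flags live above it, so
      extracting the op is a mask rather than a shift. Bit widths, names
      and values are illustrative, not necessarily the kernel's exact
      layout.

        #include <stdio.h>

        #define OP_BITS 8
        #define OP_MASK ((1u << OP_BITS) - 1)

        enum op_sketch { OP_READ = 0, OP_WRITE = 1, OP_DISCARD = 3 };

        #define FLAG_SYNC (1u << (OP_BITS + 0))    /* flags start above the op */
        #define FLAG_META (1u << (OP_BITS + 1))

        static unsigned int op_flags(enum op_sketch op, unsigned int flags)
        {
            return (unsigned int)op | flags;    /* op placed first, no shifting */
        }

        static enum op_sketch get_op(unsigned int opf)
        {
            return (enum op_sketch)(opf & OP_MASK);
        }

        int main(void)
        {
            unsigned int opf = op_flags(OP_WRITE, FLAG_SYNC);

            printf("op=%d sync=%d\n", get_op(opf), !!(opf & FLAG_SYNC));
            return 0;
        }
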
    • block: split out request-only flags into a new namespace · e8064021
      Authored by Christoph Hellwig
      A lot of the REQ_* flags are only used on struct requests, and only of
      use to the block layer and a few drivers that dig into struct request
      internals.
      
      This patch adds a new req_flags_t rq_flags field to struct request
      for them, and thus dramatically shrinks the number of common request
      flags. It also removes the unfortunate situation where we have to fit
      the fields from the same enum into 32 bits for struct bio and 64 bits
      for struct request.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      e8064021
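
      A minimal sketch of the resulting split, with illustrative flag
      names; the real RQF_* values and the real struct request layout
      differ, this only shows the idea of a separate request-only
      namespace.

        typedef unsigned int req_flags_t;

        enum {
            RQF_SORTED_SKETCH  = (1u << 0),    /* request-only: elevator sorted */
            RQF_STARTED_SKETCH = (1u << 1),    /* request-only: dispatched */
        };

        struct request_sketch {
            unsigned int cmd_flags;    /* op + common flags, same layout as a bio */
            req_flags_t  rq_flags;     /* block-layer/driver internal flags */
        };

        static int rq_is_sorted(const struct request_sketch *rq)
        {
            return !!(rq->rq_flags & RQF_SORTED_SKETCH);
        }

        int main(void)
        {
            struct request_sketch rq = { .cmd_flags = 0,
                                         .rq_flags = RQF_STARTED_SKETCH };

            return rq_is_sorted(&rq);    /* 0: started but not sorted */
        }
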
  12. 20 Oct 2016 (1 commit)
    • block: Add iocontext priority to request · 5dc8b362
      Authored by Adam Manzanares
      This patch adds an association between the iocontext ioprio and the
      ioprio of a request. This is done to give request-based drivers the
      ability to act on priority information stored in the request. An
      example is ATA devices that support command priorities: if the ATA
      driver discovers that the device supports command priorities and the
      request carries valid priority information indicating that it is
      high priority, then a high-priority command can be sent to the
      device. This should improve tail latencies for high-priority IO on
      any device that queues requests internally and can make use of the
      priority information stored in the request.
      
      The ioprio of the request is set in blk_rq_set_prio, which takes the
      request and the ioc as arguments. If the ioc is valid in
      blk_rq_set_prio, then the ioprio of the request is set to the ioprio
      of the ioc. In init_request_from_bio a check is made to see if the
      ioprio of the bio is valid, and if so the request prio comes from the
      bio.
      Signed-off-by: Adam Manzananares <adam.manzanares@wdc.com>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      5dc8b362
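
      Roughly, the propagation works as sketched below; the types and
      helpers here are simplified stand-ins for the kernel's io_context,
      bio and request, and the field names are illustrative.

        #include <stdbool.h>

        struct ioc_sketch { int ioprio; bool valid; };
        struct bio_sketch { int bio_prio; };
        struct rq_sketch  { int ioprio; };

        static bool prio_valid_sketch(int prio) { return prio > 0; }

        /* Roughly what blk_rq_set_prio() does: inherit the io_context prio. */
        static void rq_set_prio_sketch(struct rq_sketch *rq,
                                       const struct ioc_sketch *ioc)
        {
            if (ioc && ioc->valid)
                rq->ioprio = ioc->ioprio;
        }

        /* Roughly what init_request_from_bio() adds: a valid bio prio wins. */
        static void init_rq_from_bio_sketch(struct rq_sketch *rq,
                                            const struct bio_sketch *bio)
        {
            if (prio_valid_sketch(bio->bio_prio))
                rq->ioprio = bio->bio_prio;
        }

        int main(void)
        {
            struct ioc_sketch ioc = { .ioprio = 4, .valid = true };
            struct bio_sketch bio = { .bio_prio = 0 };    /* no bio prio set */
            struct rq_sketch  rq  = { 0 };

            rq_set_prio_sketch(&rq, &ioc);
            init_rq_from_bio_sketch(&rq, &bio);
            return rq.ioprio == 4 ? 0 : 1;    /* request inherited ioc prio */
        }
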
  13. 19 Oct 2016 (1 commit)
  14. 14 Sep 2016 (1 commit)
  15. 29 Aug 2016 (2 commits)
  16. 17 Aug 2016 (1 commit)
  17. 08 Aug 2016 (1 commit)
    • block: rename bio bi_rw to bi_opf · 1eff9d32
      Authored by Jens Axboe
      Since commit 63a4cc24, bio->bi_rw contains flags in the lower
      portion and the op code in the higher portion. This means that old
      code that relies on manually setting bi_rw is most likely going to be
      broken. Instead of letting that brokenness linger, rename the member
      to force old and out-of-tree code to break at compile time instead of
      at runtime.
      
      No intended functional changes in this commit.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      1eff9d32
  18. 21 Jul 2016 (3 commits)
  19. 06 Jul 2016 (1 commit)
  20. 10 Jun 2016 (1 commit)
    • cfq-iosched: temporarily boost queue priority for idle classes · b8269db4
      Authored by Jens Axboe
      If we're queuing REQ_PRIO IO and the task is running at an idle IO
      class, then temporarily boost the priority. This prevents livelocks
      due to priority inversion, when a low priority task is holding file
      system resources while attempting to do IO.
      
      An example of that is shown below. An ioniced idle task is holding
      the directory mutex, while a normal priority task is trying to do
      a directory lookup.
      
      [478381.198925] ------------[ cut here ]------------
      [478381.200315] INFO: task ionice:1168369 blocked for more than 120 seconds.
      [478381.201324]       Not tainted 4.0.9-38_fbk5_hotfix1_2936_g85409c6 #1
      [478381.202278] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [478381.203462] ionice          D ffff8803692736a8     0 1168369      1 0x00000080
      [478381.203466]  ffff8803692736a8 ffff880399c21300 ffff880276adcc00 ffff880369273698
      [478381.204589]  ffff880369273fd8 0000000000000000 7fffffffffffffff 0000000000000002
      [478381.205752]  ffffffff8177d5e0 ffff8803692736c8 ffffffff8177cea7 0000000000000000
      [478381.206874] Call Trace:
      [478381.207253]  [<ffffffff8177d5e0>] ? bit_wait_io_timeout+0x80/0x80
      [478381.208175]  [<ffffffff8177cea7>] schedule+0x37/0x90
      [478381.208932]  [<ffffffff8177f5fc>] schedule_timeout+0x1dc/0x250
      [478381.209805]  [<ffffffff81421c17>] ? __blk_run_queue+0x37/0x50
      [478381.210706]  [<ffffffff810ca1c5>] ? ktime_get+0x45/0xb0
      [478381.211489]  [<ffffffff8177c407>] io_schedule_timeout+0xa7/0x110
      [478381.212402]  [<ffffffff810a8c2b>] ? prepare_to_wait+0x5b/0x90
      [478381.213280]  [<ffffffff8177d616>] bit_wait_io+0x36/0x50
      [478381.214063]  [<ffffffff8177d325>] __wait_on_bit+0x65/0x90
      [478381.214961]  [<ffffffff8177d5e0>] ? bit_wait_io_timeout+0x80/0x80
      [478381.215872]  [<ffffffff8177d47c>] out_of_line_wait_on_bit+0x7c/0x90
      [478381.216806]  [<ffffffff810a89f0>] ? wake_atomic_t_function+0x40/0x40
      [478381.217773]  [<ffffffff811f03aa>] __wait_on_buffer+0x2a/0x30
      [478381.218641]  [<ffffffff8123c557>] ext4_bread+0x57/0x70
      [478381.219425]  [<ffffffff8124498c>] __ext4_read_dirblock+0x3c/0x380
      [478381.220467]  [<ffffffff8124665d>] ext4_dx_find_entry+0x7d/0x170
      [478381.221357]  [<ffffffff8114c49e>] ? find_get_entry+0x1e/0xa0
      [478381.222208]  [<ffffffff81246bd4>] ext4_find_entry+0x484/0x510
      [478381.223090]  [<ffffffff812471a2>] ext4_lookup+0x52/0x160
      [478381.223882]  [<ffffffff811c401d>] lookup_real+0x1d/0x60
      [478381.224675]  [<ffffffff811c4698>] __lookup_hash+0x38/0x50
      [478381.225697]  [<ffffffff817745bd>] lookup_slow+0x45/0xab
      [478381.226941]  [<ffffffff811c690e>] link_path_walk+0x7ae/0x820
      [478381.227880]  [<ffffffff811c6a42>] path_init+0xc2/0x430
      [478381.228677]  [<ffffffff813e6e26>] ? security_file_alloc+0x16/0x20
      [478381.229776]  [<ffffffff811c8c57>] path_openat+0x77/0x620
      [478381.230767]  [<ffffffff81185c6e>] ? page_add_file_rmap+0x2e/0x70
      [478381.232019]  [<ffffffff811cb253>] do_filp_open+0x43/0xa0
      [478381.233016]  [<ffffffff8108c4a9>] ? creds_are_invalid+0x29/0x70
      [478381.234072]  [<ffffffff811c0cb0>] do_open_execat+0x70/0x170
      [478381.235039]  [<ffffffff811c1bf8>] do_execveat_common.isra.36+0x1b8/0x6e0
      [478381.236051]  [<ffffffff811c214c>] do_execve+0x2c/0x30
      [478381.236809]  [<ffffffff811ca392>] ? getname+0x12/0x20
      [478381.237564]  [<ffffffff811c23be>] SyS_execve+0x2e/0x40
      [478381.238338]  [<ffffffff81780a1d>] stub_execve+0x6d/0xa0
      [478381.239126] ------------[ cut here ]------------
      [478381.239915] ------------[ cut here ]------------
      [478381.240606] INFO: task python2.7:1168375 blocked for more than 120 seconds.
      [478381.242673]       Not tainted 4.0.9-38_fbk5_hotfix1_2936_g85409c6 #1
      [478381.243653] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [478381.244902] python2.7       D ffff88005cf8fb98     0 1168375 1168248 0x00000080
      [478381.244904]  ffff88005cf8fb98 ffff88016c1f0980 ffffffff81c134c0 ffff88016c1f11a0
      [478381.246023]  ffff88005cf8ffd8 ffff880466cd0cbc ffff88016c1f0980 00000000ffffffff
      [478381.247138]  ffff880466cd0cc0 ffff88005cf8fbb8 ffffffff8177cea7 ffff88005cf8fcc8
      [478381.248252] Call Trace:
      [478381.248630]  [<ffffffff8177cea7>] schedule+0x37/0x90
      [478381.249382]  [<ffffffff8177d08e>] schedule_preempt_disabled+0xe/0x10
      [478381.250465]  [<ffffffff8177e892>] __mutex_lock_slowpath+0x92/0x100
      [478381.251409]  [<ffffffff8177e91b>] mutex_lock+0x1b/0x2f
      [478381.252199]  [<ffffffff817745ae>] lookup_slow+0x36/0xab
      [478381.253023]  [<ffffffff811c690e>] link_path_walk+0x7ae/0x820
      [478381.253877]  [<ffffffff811aeb41>] ? try_charge+0xc1/0x700
      [478381.254690]  [<ffffffff811c6a42>] path_init+0xc2/0x430
      [478381.255525]  [<ffffffff813e6e26>] ? security_file_alloc+0x16/0x20
      [478381.256450]  [<ffffffff811c8c57>] path_openat+0x77/0x620
      [478381.257256]  [<ffffffff8115b2fb>] ? lru_cache_add_active_or_unevictable+0x2b/0xa0
      [478381.258390]  [<ffffffff8117b623>] ? handle_mm_fault+0x13f3/0x1720
      [478381.259309]  [<ffffffff811cb253>] do_filp_open+0x43/0xa0
      [478381.260139]  [<ffffffff811d7ae2>] ? __alloc_fd+0x42/0x120
      [478381.260962]  [<ffffffff811b95ac>] do_sys_open+0x13c/0x230
      [478381.261779]  [<ffffffff81011393>] ? syscall_trace_enter_phase1+0x113/0x170
      [478381.262851]  [<ffffffff811b96c2>] SyS_open+0x22/0x30
      [478381.263598]  [<ffffffff81780532>] system_call_fastpath+0x12/0x17
      [478381.264551] ------------[ cut here ]------------
      [478381.265377] ------------[ cut here ]------------
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      b8269db4
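
      The boost itself is simple; below is a minimal sketch with
      illustrative types (not CFQ's real data structures): REQ_PRIO IO from
      an idle-class queue is temporarily promoted so it cannot be starved
      while holding filesystem resources.

        enum prio_class_sketch { CLASS_RT, CLASS_BE, CLASS_IDLE };

        struct cfqq_sketch {
            enum prio_class_sketch ioprio_class;
            enum prio_class_sketch saved_class;    /* restored after the boost */
            int prio_boosted;
        };

        static void maybe_boost_prio(struct cfqq_sketch *cfqq, int rq_is_prio)
        {
            if (rq_is_prio && cfqq->ioprio_class == CLASS_IDLE &&
                !cfqq->prio_boosted) {
                cfqq->saved_class  = cfqq->ioprio_class;
                cfqq->ioprio_class = CLASS_BE;    /* temporary promotion */
                cfqq->prio_boosted = 1;
            }
        }

        static void unboost_prio(struct cfqq_sketch *cfqq)
        {
            if (cfqq->prio_boosted) {
                cfqq->ioprio_class = cfqq->saved_class;
                cfqq->prio_boosted = 0;
            }
        }

        int main(void)
        {
            struct cfqq_sketch cfqq = { .ioprio_class = CLASS_IDLE };

            maybe_boost_prio(&cfqq, 1);    /* REQ_PRIO IO from an idle task */
            unboost_prio(&cfqq);           /* boost dropped again afterwards */
            return cfqq.ioprio_class == CLASS_IDLE ? 0 : 1;
        }
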
  21. 09 Jun 2016 (1 commit)
  22. 08 Jun 2016 (10 commits)
  23. 14 Apr 2016 (1 commit)
  24. 13 Apr 2016 (1 commit)
  25. 05 Apr 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a
      *long* time ago with the promise that one day it would be possible to
      implement the page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever
      will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion whether a
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too
      much breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle
      using the script below. For some reason, coccinelle doesn't patch
      header files, so I've run spatch on them manually.
      
      The only adjustment after coccinelle is a revert of the changes to
      the PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach. I'll
      fix them manually in a separate patch. Comments and documentation
      will also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf