1. 21 Apr 2020 (5 commits)
    • block: pass a hd_struct to delete_partition · cddae808
      Committed by Christoph Hellwig
      All callers have the hd_struct at hand, so pass it instead of performing
      another lookup.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
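      A minimal sketch of what the signature change implies; the exact
      prototypes below are assumptions based on the description, not copied
      from the patch:

          /* Before: delete_partition() had to look the partition up again. */
          void delete_partition(struct gendisk *disk, int partno);

          /* After: callers already hold the hd_struct, so pass it directly
           * and skip the second lookup. */
          void delete_partition(struct gendisk *disk, struct hd_struct *part);

          /* A caller changes from something like: */
          delete_partition(disk, part->partno);
          /* to: */
          delete_partition(disk, part);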
    • block: refactor blkpg_ioctl · fa9156ae
      Committed by Christoph Hellwig
      Split each sub-command out into a separate helper, and move those helpers
      to block/partitions/core.c instead of having a lot of partition
      manipulation logic open coded in block/ioctl.c.
      
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
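      A hedged sketch of the refactor's shape: blkpg_ioctl() becomes a thin
      dispatcher and each sub-command gets its own helper in
      block/partitions/core.c (the bdev_*_partition() helper names are
      assumptions for illustration):

          /* block/ioctl.c: one helper per sub-command */
          static int blkpg_ioctl(struct block_device *bdev,
                                 struct blkpg_ioctl_arg __user *arg)
          {
                  struct blkpg_ioctl_arg a;
                  struct blkpg_partition p;

                  if (!capable(CAP_SYS_ADMIN))
                          return -EACCES;
                  if (copy_from_user(&a, arg, sizeof(a)))
                          return -EFAULT;
                  if (copy_from_user(&p, a.data, sizeof(p)))
                          return -EFAULT;

                  switch (a.op) {
                  case BLKPG_ADD_PARTITION:
                          /* helpers live in block/partitions/core.c */
                          return bdev_add_partition(bdev, p.pno, p.start, p.length);
                  case BLKPG_DEL_PARTITION:
                          return bdev_del_partition(bdev, p.pno);
                  case BLKPG_RESIZE_PARTITION:
                          return bdev_resize_partition(bdev, p.pno, p.start, p.length);
                  default:
                          return -EINVAL;
                  }
          }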
    • blk-mq: Rerun dispatching in the case of budget contention · a0823421
      Committed by Douglas Anderson
      If a thread running blk-mq code tries to get budget and fails, it
      immediately stops doing work and assumes that whenever budget is freed
      up the queues will be kicked and whatever work the thread was trying
      to do will be tried again.
      
      One path where budget is freed and queues are kicked in the normal
      case can be seen in scsi_finish_command().  Specifically:
      - scsi_finish_command()
        - scsi_device_unbusy()
          - # Decrement "device_busy", AKA release budget
        - scsi_io_completion()
          - scsi_end_request()
            - blk_mq_run_hw_queues()
      
      The above is all well and good.  The problem comes up when a thread
      claims the budget but then releases it without actually dispatching
      any work.  Since we didn't schedule any work we'll never run the path
      of finishing work / kicking the queues.
      
      This isn't actually a problem very often, which is why the issue has
      existed for a while without anyone noticing.  Specifically, we only
      get into this situation when we unexpectedly find that we aren't going
      to do any work.  Code that later receives new work kicks the queues.
      All good, right?
      
      The problem shows up, however, if timing is just wrong and we hit a
      race.  To see this race let's think about the case where we only have
      a budget of 1 (only one thread can hold budget).  Now imagine that a
      thread got budget and then decided not to dispatch work.  It's about
      to call put_budget() but then the thread gets context switched out for
      a long, long time.  While in this state, any and all kicks of the
      queue (like when we receive new work) will be no-ops because
      nobody can get budget.  Finally the thread holding budget gets to run
      again and returns.  All the normal kicks will have been no-ops and we
      have an I/O stall.
      
      As you can see from the above, you need just the right timing to see
      the race.  To start with, it only happens if we thought we had work,
      actually managed to get the budget, but then turned out not to have
      any work after all.  That's already pretty rare.  Even then, there's
      usually a very small amount of time between realizing that there's no
      work and putting the budget.  During this small amount of time new
      work has to come in and the queue kick has to make it all the way to
      trying to get the budget and fail.  It's pretty unlikely.
      
      One case where this could have failed is illustrated by an example of
      threads running blk_mq_do_dispatch_sched():
      
      * Threads A and B both run has_work() at the same time with the same
        "hctx".  Imagine has_work() is exact.  There's no lock, so it's OK
        if Thread A and B both get back true.
      * Thread B gets interrupted for a long time right after it decides
        that there is work.  Maybe its CPU gets an interrupt and the
        interrupt handler is slow.
      * Thread A runs, gets budget, and dispatches work.
      * Thread A's work finishes and budget is released.
      * Thread B finally runs again and gets budget.
      * Since Thread A already took care of the work and no new work has
        come in, Thread B will get NULL from dispatch_request().  I believe
        this is specifically why dispatch_request() is allowed to return
        NULL in the first place if has_work() must be exact.
      * Thread B will now be holding the budget and is about to call
        put_budget(), but hasn't called it yet.
      * Thread B gets interrupted for a long time (again).  Dang interrupts.
      * Now Thread C (maybe with a different "hctx" but the same queue)
        comes along and runs blk_mq_do_dispatch_sched().
      * Thread C won't do anything because it can't get budget.
      * Finally Thread B will run again and put the budget without kicking
        any queues.
      
      Even though the example above is with blk_mq_do_dispatch_sched() I
      believe the race is possible any time someone is holding budget but
      doesn't do work.
      
      Unfortunately, the unlikely has become more likely if you happen to be
      using the BFQ I/O scheduler.  BFQ, by design, sometimes returns "true"
      for has_work() but then NULL for dispatch_request() and stays in this
      state for a while (currently up to 9 ms).  Suddenly you only need one
      race to hit, not two races in a row.  With my current setup this is
      easy to reproduce in reboot tests and traces have actually shown that
      we hit a race similar to the one described above.
      
      Note that we only need to fix blk_mq_do_dispatch_sched() and
      blk_mq_do_dispatch_ctx() and not the other places that put budget.  In
      other cases we know that we have work to do on at least one "hctx" and
      code already exists to kick that "hctx"'s queue.  When that work
      finally finishes all the queues will be kicked using the normal flow.
      
      One last note is that (at least in the SCSI case) budget is shared by
      all "hctx"s that have the same queue.  Thus we need to make sure to
      kick the whole queue, not just re-run dispatching on a single "hctx".
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
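      A hedged sketch of the shape of the fix in blk_mq_do_dispatch_sched():
      when we claimed budget but dispatch_request() handed back NULL, put
      the budget and then kick the whole queue after a short delay (delayed,
      so a scheduler like BFQ that keeps answering has_work() == true but
      dispatching NULL doesn't make us spin).  The delay constant is an
      assumption:

          #define BLK_MQ_BUDGET_DELAY     3       /* ms; assumed value */

          /* inside the dispatch loop of blk_mq_do_dispatch_sched() */
          if (!blk_mq_get_dispatch_budget(hctx))
                  break;

          rq = e->type->ops.dispatch_request(hctx);
          if (!rq) {
                  blk_mq_put_dispatch_budget(hctx);
                  /*
                   * We're releasing budget without having dispatched
                   * anything.  Holding it may have starved other "hctx"s
                   * sharing the same queue, and nobody is guaranteed to
                   * kick the queue for us, so kick it ourselves; the
                   * whole queue, since budget is shared across "hctx"s.
                   */
                  blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
                  break;
          }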
    • blk-mq: Add blk_mq_delay_run_hw_queues() API call · b9151e7b
      Committed by Douglas Anderson
      We have:
      * blk_mq_run_hw_queue()
      * blk_mq_delay_run_hw_queue()
      * blk_mq_run_hw_queues()
      
      ...but not blk_mq_delay_run_hw_queues(), presumably because nobody
      needed it before now.  Since we need it for a later patch in this
      series, add it.
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
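      Presumably the new helper just mirrors blk_mq_run_hw_queues() with a
      delay, along these lines (a sketch, not the verbatim patch):

          /*
           * blk_mq_delay_run_hw_queues - Run all hardware queues
           * asynchronously after a delay.
           * @q:     the request queue
           * @msecs: delay in milliseconds before running the queues
           */
          void blk_mq_delay_run_hw_queues(struct request_queue *q,
                                          unsigned long msecs)
          {
                  struct blk_mq_hw_ctx *hctx;
                  int i;

                  queue_for_each_hw_ctx(q, hctx, i) {
                          if (blk_mq_hctx_stopped(hctx))
                                  continue;
                          blk_mq_delay_run_hw_queue(hctx, msecs);
                  }
          }
          EXPORT_SYMBOL(blk_mq_delay_run_hw_queues);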
    • blk-mq: In blk_mq_dispatch_rq_list() "no budget" is a reason to kick · ab3cee37
      Committed by Douglas Anderson
      In blk_mq_dispatch_rq_list(), if blk_mq_sched_needs_restart() returns
      true and the driver returns BLK_STS_RESOURCE then we'll kick the
      queue.  However, there's another case where we might need to kick it.
      If we were unable to get budget we can be in much the same state as
      when the driver returns BLK_STS_RESOURCE, so we should treat it the
      same.
      
      It should be noted that even if we add a whole bunch of extra kicking
      to the queue in other patches, this patch is still important.
      Specifically any kicking that happened before we re-spliced leftover
      requests into 'hctx->dispatch' wouldn't have found any work, so we
      really need to make sure we kick ourselves after we've done the
      splicing.
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
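      A hedged sketch of the change inside blk_mq_dispatch_rq_list(): track
      whether we bailed out for lack of budget, and treat that the same as
      BLK_STS_RESOURCE when deciding whether to kick after the re-splice
      (the no_budget_avail variable name is an assumption):

          bool no_budget_avail = false;

          /* in the dispatch loop: */
          if (!blk_mq_get_dispatch_budget(hctx)) {
                  no_budget_avail = true;  /* remember why we stopped */
                  break;
          }

          /* after leftover requests are re-spliced into hctx->dispatch: */
          needs_restart = blk_mq_sched_needs_restart(hctx);
          if (!needs_restart ||
              (no_tag && list_empty_careful(&hctx->dispatch_wait.entry)))
                  blk_mq_run_hw_queue(hctx, true);
          else if (needs_restart &&
                   (ret == BLK_STS_RESOURCE || no_budget_avail))
                  /* "no budget" now also triggers the delayed kick */
                  blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);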
  2. 17 Apr 2020 (1 commit)
  3. 16 Apr 2020 (1 commit)
  4. 10 Apr 2020 (1 commit)
  5. 08 Apr 2020 (1 commit)
  6. 07 Apr 2020 (1 commit)
  7. 02 Apr 2020 (2 commits)
    • blkcg: don't offline parent blkcg first · 4308a434
      Committed by Tejun Heo
      blkcg->cgwb_refcnt is used to delay blkcg offlining so that blkgs
      don't get offlined while there are active cgwbs on them.  However, it
      ends up making offlining unordered, sometimes causing parents to be
      offlined before children.
      
      Let's fix this by making child blkcgs pin the parents' online states.
      
      Note that pin/unpin names are chosen over get/put intentionally
      because css uses get/put online for something different.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
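      A hedged sketch of the idea (the function bodies are assumptions based
      on the description): a child pins its parent when it comes online, and
      the final unpin walks up the tree so destruction always proceeds from
      child to parent:

          /* When a blkcg comes online, pin the parent so it can't be
           * offlined while this child is still around. */
          static int blkcg_css_online(struct cgroup_subsys_state *css)
          {
                  struct blkcg *parent = blkcg_parent(css_to_blkcg(css));

                  if (parent)
                          blkcg_pin_online(parent);
                  return 0;
          }

          /* Dropping the last pin releases this level, then lets go of
           * the pin this level holds on its own parent. */
          void blkcg_unpin_online(struct blkcg *blkcg)
          {
                  do {
                          if (!refcount_dec_and_test(&blkcg->online_pin))
                                  break;
                          blkcg_destroy_blkgs(blkcg);
                          blkcg = blkcg_parent(blkcg);
                  } while (blkcg);
          }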
    • blkcg: rename blkcg->cgwb_refcnt to ->online_pin and always use it · d866dbf6
      Committed by Tejun Heo
      blkcg->cgwb_refcnt is used to delay blkcg offlining so that blkgs
      don't get offlined while there are active cgwbs on them.  However, it
      ends up making offlining unordered, sometimes causing parents to be
      offlined before children.
      
      To fix it, we want child blkcgs to pin the parents' online states,
      turning the refcnt into a more generic online pinning mechanism.
      
      In preparation:
      
      * blkcg->cgwb_refcnt -> blkcg->online_pin
      * blkcg_cgwb_get/put() -> blkcg_pin/unpin_online()
      * Take them out of CONFIG_CGROUP_WRITEBACK
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
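      After the rename the helpers are presumably thin refcount wrappers
      along these lines (a sketch under the naming listed above):

          /* include/linux/blk-cgroup.h, sketch */
          static inline void blkcg_pin_online(struct blkcg *blkcg)
          {
                  refcount_inc(&blkcg->online_pin);
          }

          static inline void blkcg_unpin_online(struct blkcg *blkcg)
          {
                  /* dropping the last pin lets the blkgs be destroyed */
                  if (refcount_dec_and_test(&blkcg->online_pin))
                          blkcg_destroy_blkgs(blkcg);
          }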
  8. 30 Mar 2020 (1 commit)
  9. 28 Mar 2020 (5 commits)
  10. 27 Mar 2020 (1 commit)
  11. 25 Mar 2020 (12 commits)
  12. 24 Mar 2020 (9 commits)