1. 16 June 2019 (2 commits)
    • blkcg: perpcu_ref init/exit should be done from blkg_alloc/free() · ef069b97
      Committed by Tejun Heo
      blkg allocation is performed as a separate step from the rest of blkg
      creation so that GFP_KERNEL allocations can be used when creating
      blkgs from configuration file writes; otherwise, user actions may
      fail due to failures of opportunistic GFP_NOWAIT allocations.
      
      While making blkgs use percpu_ref, 7fcf2b03 ("blkcg: change blkg
      reference counting to use percpu_ref") incorrectly added an
      unconditional, opportunistic percpu_ref_init() to blkg_create(),
      breaking this guarantee.
      
      This patch moves percpu_ref_init() to blkg_alloc() so that it uses the
      @gfp_mask that blkg_alloc() is called with.  Also, percpu_ref_exit()
      is moved to blkg_free() for consistency.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: 7fcf2b03 ("blkcg: change blkg reference counting to use percpu_ref")
      Cc: Dennis Zhou <dennis@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
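      A minimal sketch of the shape of this fix, assuming names close to those
      in blk-cgroup.c (the error handling is illustrative, not the verbatim
      patch): percpu_ref_init() already takes a gfp mask, so calling it from
      blkg_alloc() inherits whatever allocation context the caller allowed,
      and percpu_ref_exit() pairs with it in blkg_free().

          static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg,
                                             struct request_queue *q,
                                             gfp_t gfp_mask)
          {
                  struct blkcg_gq *blkg;

                  blkg = kzalloc_node(sizeof(*blkg), gfp_mask, q->node);
                  if (!blkg)
                          return NULL;

                  /* was an unconditional GFP_NOWAIT init in blkg_create();
                   * may now sleep when the caller passes GFP_KERNEL */
                  if (percpu_ref_init(&blkg->refcnt, blkg_release, 0, gfp_mask)) {
                          kfree(blkg);
                          return NULL;
                  }
                  return blkg;
          }

          static void blkg_free(struct blkcg_gq *blkg)
          {
                  if (!blkg)
                          return;
                  percpu_ref_exit(&blkg->refcnt);  /* paired with the init above */
                  kfree(blkg);
          }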
    • blkcg: update blkcg_print_stat() to handle larger outputs · f539da82
      Committed by Tejun Heo
      Depending on the number of devices, blkcg stats can go over the
      default seqfile buf size.  seqfile normally retries with a larger
      buffer, but since the ->pd_stat() addition, blkcg_print_stat() doesn't
      tell seqfile that an overflow has happened, so the output gets printed
      truncated.  Fix it by calling seq_commit() with -1 on possible
      overflows.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: 903d23f0 ("blk-cgroup: allow controllers to output their own stats")
      Cc: stable@vger.kernel.org # v4.19+
      Cc: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
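      A hedged sketch of the described fix (the helper and stat names here
      are assumptions): write into the buffer returned by seq_get_buf() with
      scnprintf(), and when the chunk may have been truncated, report -1
      through seq_commit() so seq_file retries with a larger buffer.

          static void blkcg_print_one_stat(struct seq_file *sf, const char *dname,
                                           u64 rbytes, u64 wbytes)
          {
                  char *buf;
                  size_t size = seq_get_buf(sf, &buf);  /* space left in the buffer */
                  size_t off = 0;

                  off += scnprintf(buf + off, size - off,
                                   "%s rbytes=%llu wbytes=%llu\n",
                                   dname, rbytes, wbytes);

                  if (off < size - 1)
                          seq_commit(sf, off);  /* everything fit */
                  else
                          seq_commit(sf, -1);   /* possible overflow: seqfile retries */
          }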
  2. 05 June 2019 (1 commit)
  3. 01 May 2019 (1 commit)
  4. 21 March 2019 (1 commit)
    • blkcg: Fix kernel-doc warnings · 537d71b3
      Committed by Bart Van Assche
      Avoid the following warnings when building with W=1:
      
      block/blk-cgroup.c:1755: warning: Function parameter or member 'q' not described in 'blkcg_schedule_throttle'
      block/blk-cgroup.c:1755: warning: Function parameter or member 'use_memdelay' not described in 'blkcg_schedule_throttle'
      block/blk-cgroup.c:1779: warning: Function parameter or member 'blkg' not described in 'blkcg_add_delay'
      block/blk-cgroup.c:1779: warning: Function parameter or member 'now' not described in 'blkcg_add_delay'
      block/blk-cgroup.c:1779: warning: Function parameter or member 'delta' not described in 'blkcg_add_delay'
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
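      The fix is a matter of documenting every parameter; a sketch (the
      description wording is assumed, not quoted from the patch):

          /**
           * blkcg_schedule_throttle - mark the current task for throttling
           * @q: the request queue the IO was submitted on
           * @use_memdelay: whether to also charge the delay as memory delay
           */
          void blkcg_schedule_throttle(struct request_queue *q, bool use_memdelay);

          /**
           * blkcg_add_delay - add delay to a blkg
           * @blkg: blkg of interest
           * @now: the current time in nanoseconds
           * @delta: nanoseconds of delay to add
           */
          void blkcg_add_delay(struct blkcg_gq *blkg, u64 now, u64 delta);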
  5. 10 February 2019 (1 commit)
  6. 21 December 2018 (1 commit)
  7. 20 December 2018 (1 commit)
    • block: save irq state in blkg_lookup_create() · 3a762de5
      Committed by Ming Lei
      blkg_lookup_create() may be called from pool_map(), where the irq
      state has already been saved, so blkg_lookup_create() has to save and
      restore the irq state itself rather than unconditionally re-enabling
      interrupts.
      
      Otherwise, the following lockdep warning can be triggered:
      
      [  104.258537] ================================
      [  104.259129] WARNING: inconsistent lock state
      [  104.259725] 4.20.0-rc6+ #545 Not tainted
      [  104.260268] --------------------------------
      [  104.260865] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
      [  104.261727] swapper/49/0 [HC0[0]:SC1[1]:HE0:SE0] takes:
      [  104.262444] 00000000db365b5d (&(&pool->lock)->rlock#3){+.?.}, at: thin_endio+0xcf/0x2a3 [dm_thin_pool]
      [  104.263747] {SOFTIRQ-ON-W} state was registered at:
      [  104.264417]   _raw_spin_unlock_irq+0x29/0x4c
      [  104.265014]   blkg_lookup_create+0xdc/0xe6
      [  104.265609]   bio_associate_blkg_from_css+0xd3/0x13f
      [  104.266312]   bio_associate_blkg+0x15a/0x1bb
      [  104.266913]   pool_map+0xe8/0x103 [dm_thin_pool]
      [  104.267572]   __map_bio+0x98/0x29c [dm_mod]
      [  104.268162]   __split_and_process_non_flush+0x29e/0x306 [dm_mod]
      [  104.269003]   __split_and_process_bio+0x16a/0x25b [dm_mod]
      [  104.269971]   __dm_make_request.isra.14+0xdc/0x124 [dm_mod]
      [  104.270973]   generic_make_request+0x3f5/0x68b
      [  104.271676]   process_prepared_mapping+0x166/0x1ef [dm_thin_pool]
      [  104.272531]   schedule_zero+0x239/0x273 [dm_thin_pool]
      [  104.273245]   process_cell+0x60c/0x6f1 [dm_thin_pool]
      [  104.273967]   do_worker+0x60c/0xca8 [dm_thin_pool]
      [  104.274635]   process_one_work+0x4eb/0x834
      [  104.275203]   worker_thread+0x318/0x484
      [  104.275740]   kthread+0x1d1/0x1e1
      [  104.276203]   ret_from_fork+0x3a/0x50
      [  104.276714] irq event stamp: 170003
      [  104.277201] hardirqs last  enabled at (170002): [<ffffffff81bcc33e>] _raw_spin_unlock_irqrestore+0x44/0x6b
      [  104.278535] hardirqs last disabled at (170003): [<ffffffff81bcc1ad>] _raw_spin_lock_irqsave+0x20/0x55
      [  104.280273] softirqs last  enabled at (169978): [<ffffffff810d13d4>] irq_enter+0x4c/0x73
      [  104.281617] softirqs last disabled at (169979): [<ffffffff810d1479>] irq_exit+0x7e/0x11d
      [  104.282744]
      [  104.282744] other info that might help us debug this:
      [  104.283640]  Possible unsafe locking scenario:
      [  104.283640]
      [  104.284452]        CPU0
      [  104.284803]        ----
      [  104.285150]   lock(&(&pool->lock)->rlock#3);
      [  104.285762]   <Interrupt>
      [  104.286130]     lock(&(&pool->lock)->rlock#3);
      [  104.286750]
      [  104.286750]  *** DEADLOCK ***
      [  104.286750]
      [  104.287564] no locks held by swapper/49/0.
      [  104.288129]
      [  104.288129] stack backtrace:
      [  104.288738] CPU: 49 PID: 0 Comm: swapper/49 Not tainted 4.20.0-rc6+ #545
      [  104.289700] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.10.2-2.fc27 04/01/2014
      [  104.290858] Call Trace:
      [  104.291204]  <IRQ>
      [  104.291502]  dump_stack+0x9a/0xe6
      [  104.291968]  mark_lock+0x56c/0x7a6
      [  104.292442]  ? check_usage_backwards+0x209/0x209
      [  104.293086]  __lock_acquire+0x400/0x15bf
      [  104.293662]  ? check_chain_key+0x150/0x1aa
      [  104.294236]  lock_acquire+0x1a6/0x1e3
      [  104.294768]  ? thin_endio+0xcf/0x2a3 [dm_thin_pool]
      [  104.295444]  ? _raw_spin_unlock_irqrestore+0x44/0x6b
      [  104.296143]  ? process_prepared_discard_fail+0x36/0x36 [dm_thin_pool]
      [  104.297031]  _raw_spin_lock_irqsave+0x46/0x55
      [  104.297659]  ? thin_endio+0xcf/0x2a3 [dm_thin_pool]
      [  104.298335]  thin_endio+0xcf/0x2a3 [dm_thin_pool]
      [  104.298997]  ? process_prepared_discard_fail+0x36/0x36 [dm_thin_pool]
      [  104.299886]  ? check_flags+0x20a/0x20a
      [  104.300408]  ? lock_acquire+0x1a6/0x1e3
      [  104.300954]  ? process_prepared_discard_fail+0x36/0x36 [dm_thin_pool]
      [  104.301865]  clone_endio+0x1bb/0x22d [dm_mod]
      [  104.302491]  ? disable_write_zeroes+0x20/0x20 [dm_mod]
      [  104.303200]  ? bio_disassociate_blkg+0xc6/0x15f
      [  104.303836]  ? bio_endio+0x2b2/0x2da
      [  104.304349]  clone_endio+0x1f3/0x22d [dm_mod]
      [  104.304978]  ? disable_write_zeroes+0x20/0x20 [dm_mod]
      [  104.305709]  ? bio_disassociate_blkg+0xc6/0x15f
      [  104.306333]  ? bio_endio+0x2b2/0x2da
      [  104.306853]  clone_endio+0x1f3/0x22d [dm_mod]
      [  104.307476]  ? disable_write_zeroes+0x20/0x20 [dm_mod]
      [  104.308185]  ? bio_disassociate_blkg+0xc6/0x15f
      [  104.308817]  ? bio_endio+0x2b2/0x2da
      [  104.309319]  blk_update_request+0x2de/0x4cc
      [  104.309927]  blk_mq_end_request+0x2a/0x183
      [  104.310498]  blk_done_softirq+0x16a/0x1a6
      [  104.311051]  ? blk_softirq_cpu_dead+0xe2/0xe2
      [  104.311653]  ? __lock_is_held+0x2a/0x87
      [  104.312186]  __do_softirq+0x250/0x4e8
      [  104.312705]  irq_exit+0x7e/0x11d
      [  104.313157]  call_function_single_interrupt+0xf/0x20
      [  104.313860]  </IRQ>
      [  104.314163] RIP: 0010:native_safe_halt+0x2/0x3
      [  104.314792] Code: 63 02 df f0 83 44 24 fc 00 48 89 df e8 cc 3f 7a ff 48 8b 03 a8 08 74 0b 65 81 25 9d 31 45 7e ff ff ff 7f 5b 5d 41 5c c3 fb f4 <c3> f4 c3 0f 1f 44 00 00 41 56 41 55 41 54 55 53 e8 a2 0d 5c ff e8
      [  104.317339] RSP: 0018:ffff888106c9fdc0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff04
      [  104.318390] RAX: 1ffff11020d92100 RBX: 0000000000000000 RCX: ffffffff81159ac7
      [  104.319366] RDX: 1ffffffff05d5e69 RSI: 0000000000000007 RDI: ffff888106c90d1c
      [  104.320339] RBP: 0000000000000000 R08: dffffc0000000000 R09: 0000000000000001
      [  104.321313] R10: ffffed1025d57ba0 R11: ffffed1025d57b9f R12: 1ffff11020d93fbf
      [  104.322328] R13: 0000000000000031 R14: ffff888106c90040 R15: 0000000000000000
      [  104.323307]  ? lockdep_hardirqs_on+0x26b/0x278
      [  104.323927]  default_idle+0xd9/0x1a8
      [  104.324427]  do_idle+0x162/0x2b2
      [  104.324891]  ? arch_cpu_idle_exit+0x28/0x28
      [  104.325467]  ? mark_held_locks+0x28/0x7f
      [  104.326031]  ? _raw_spin_unlock_irqrestore+0x44/0x6b
      [  104.326719]  cpu_startup_entry+0x1d/0x1f
      [  104.327261]  start_secondary+0x2cb/0x308
      [  104.327806]  ? set_cpu_sibling_map+0x8a3/0x8a3
      [  104.328421]  secondary_startup_64+0xa4/0xb0
      
      Fixes: b978962a ("blkcg: update blkg_lookup_create() to do locking")
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
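      The shape of the fix, sketched under the assumption that the lock is
      the queue_lock embedded in the request_queue: switch from the
      unconditional spin_lock_irq()/spin_unlock_irq() pair to the
      irqsave/irqrestore variants, so a caller that already runs with irqs
      disabled, such as pool_map(), gets its irq state back intact.

          struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
                                              struct request_queue *q)
          {
                  struct blkcg_gq *blkg = blkg_lookup(blkcg, q);

                  if (unlikely(!blkg)) {
                          unsigned long flags;

                          /* was spin_lock_irq(), which re-enables irqs on unlock */
                          spin_lock_irqsave(&q->queue_lock, flags);
                          blkg = __blkg_lookup_create(blkcg, q);
                          spin_unlock_irqrestore(&q->queue_lock, flags);
                  }
                  return blkg;
          }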
  8. 13 December 2018 (1 commit)
  9. 08 December 2018 (4 commits)
  10. 16 November 2018 (6 commits)
  11. 08 November 2018 (2 commits)
  12. 02 November 2018 (1 commit)
  13. 22 September 2018 (4 commits)
  14. 12 September 2018 (1 commit)
  15. 01 September 2018 (2 commits)
    • blkcg: delay blkg destruction until after writeback has finished · 59b57717
      Committed by Dennis Zhou (Facebook)
      Currently, blkcg destruction relies on a sequence of events:
        1. Destruction starts. blkcg_css_offline() is called and blkgs
           release their reference to the blkcg. This immediately destroys
           the cgwbs (writeback).
        2. With blkgs giving up their reference, the blkcg ref count should
           become zero and eventually call blkcg_css_free() which finally
           frees the blkcg.
      
      Jiufei Xue reported that there is a race between blkcg_bio_issue_check()
      and cgroup_rmdir(). To remedy this, blkg destruction becomes contingent
      on the completion of all writeback associated with the blkcg. A count of
      the number of cgwbs is maintained and once that goes to zero, blkg
      destruction can follow. This should prevent premature blkg destruction
      related to writeback.
      
      The new process for blkcg cleanup is as follows:
        1. Destruction starts. blkcg_css_offline() is called which offlines
           writeback. Blkg destruction is delayed on the cgwb_refcnt count to
           avoid punting potentially large amounts of outstanding writeback
           to root while maintaining any ongoing policies. Here, the base
           cgwb_refcnt is put back.
        2. When the cgwb_refcnt becomes zero, blkcg_destroy_blkgs() is called
           and handles destruction of blkgs. This is where the css reference
           held by each blkg is released.
        3. Once the blkcg ref count goes to zero, blkcg_css_free() is called.
           This finally frees the blkg.
      
      It seems that, in the past, blk-throttle didn't do the most
      understandable things when taking data from a blkg while associating
      with current, so the later simplification and unification of what
      blk-throttle does is what surfaced this.
      
      Fixes: 08e18eab ("block: add bi_blkg to the bio for cgroups")
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Cc: Jiufei Xue <jiufei.xue@linux.alibaba.com>
      Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
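      A sketch of the refcount plumbing described above (treat helper names
      and placement as assumptions): each cgwb holds a reference on its
      blkcg, and the last put, rather than css offlining, is what triggers
      blkg destruction.

          static inline void blkcg_cgwb_get(struct blkcg *blkcg)
          {
                  refcount_inc(&blkcg->cgwb_refcnt);
          }

          static inline void blkcg_cgwb_put(struct blkcg *blkcg)
          {
                  /* the base ref is put back in blkcg_css_offline(); blkgs are
                   * only destroyed once the last writeback context is gone */
                  if (refcount_dec_and_test(&blkcg->cgwb_refcnt))
                          blkcg_destroy_blkgs(blkcg);
          }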
    • Revert "blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()" · 6b065462
      Committed by Dennis Zhou (Facebook)
      This reverts commit 4c699480.
      
      Destroying blkgs is tricky because of the nature of the relationship: a
      blkg should go away when either its blkcg or its request_queue goes
      away. However, blkgs pin the blkcg to ensure they remain valid. To
      break this cycle, when a blkcg is offlined, its blkgs put back their
      css ref. This eventually lets css_free() get called, which frees the
      blkcg.
      
      The above commit (4c699480) breaks this order of events by trying to
      destroy blkgs in css_free(). As the blkgs still hold references to the
      blkcg, css_free() is never called.
      
      The race between blkcg_bio_issue_check() and cgroup_rmdir() will be
      addressed in the following patch by delaying destruction of a blkg until
      all writeback associated with the blkcg has been finished.
      
      Fixes: 4c699480 ("blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()")
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
      Cc: Jiufei Xue <jiufei.xue@linux.alibaba.com>
      Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  16. 01 August 2018 (1 commit)
  17. 18 July 2018 (1 commit)
  18. 09 July 2018 (3 commits)
    • block: introduce blk-iolatency io controller · d7067512
      Committed by Josef Bacik
      Current IO controllers for the block layer are less than ideal for our
      use case.  The io.max controller is great at hard limiting, but it is
      not work conserving.  This patch introduces io.latency.  You provide a
      latency target for your group and we monitor the io in short windows to
      make sure we are not exceeding those latency targets.  This makes use of
      the rq-qos infrastructure and works much like the wbt stuff.  There are
      a few differences from wbt:
      
       - It's bio based, so the latency covers the whole block layer in addition to
         the actual io.
       - We will throttle all IO types that come in here if we need to.
       - We use the mean latency over the 100ms window.  This is because writes can
         be particularly fast, which could give us a false sense of the impact of
         other workloads on our protected workload.
       - By default there's no throttling: we set the queue_depth to INT_MAX so that
         we can have as many outstanding bios as we're allowed to.  Only at
         throttle time do we pay attention to the actual queue depth.
       - We backcharge cgroups for root cg issued IO and induce artificial
         delays in order to deal with cases like metadata only or swap heavy
         workloads.
      
      In testing this has worked out relatively well.  Protected workloads
      will throttle noisy workloads down to 1 io at a time if they are doing
      normal IO on their own, or induce up to a 1 second delay per syscall if
      they are doing a lot of root issued IO (metadata/swap IO).
      
      Our testing has revolved mostly around our production web servers where
      we have hhvm (the web server application) in a protected group and
      everything else in another group.  We see slightly higher requests per
      second (RPS) on the test tier vs the control tier, and much more stable
      RPS across all machines in the test tier vs the control tier.
      
      Another test we run is a slow memory allocator in the unprotected group.
      Before this would eventually push us into swap and cause the whole box
      to die and not recover at all.  With these patches we see slight RPS
      drops (usually 10-15%) before the memory consumer is properly killed and
      things recover within seconds.
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
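      A much-simplified sketch of the control loop described above (the
      structure and scaling policy are assumptions, not blk-iolatency's
      actual code): each 100ms window compares the mean completion latency
      against the group's target and shrinks or regrows the allowed queue
      depth.

          struct iolat_window {
                  u64 total_lat_ns;   /* sum of completion latencies in the window */
                  u64 nr_samples;     /* number of completions in the window */
                  u64 target_ns;      /* configured latency target */
                  unsigned int qd;    /* currently allowed queue depth */
          };

          static void iolat_window_tick(struct iolat_window *w)
          {
                  u64 mean = w->nr_samples ? w->total_lat_ns / w->nr_samples : 0;

                  if (mean > w->target_ns)
                          w->qd = max(1U, w->qd >> 1);   /* missed target: clamp down */
                  else if (w->qd < INT_MAX)
                          w->qd += (w->qd >> 2) + 1;     /* on target: open back up */

                  w->total_lat_ns = 0;                   /* start the next window */
                  w->nr_samples = 0;
          }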
    • blkcg: add generic throttling mechanism · d09d8df3
      Committed by Josef Bacik
      Since IO can be issued from literally anywhere it's almost impossible to
      do throttling without having some sort of adverse effect somewhere else
      in the system because of locking or other dependencies.  The best way to
      solve this is to do the throttling when we know we aren't holding any
      other kernel resources.  Do this by tracking throttling on a per-blkg
      basis and, if throttling is required, flagging the task so that it
      checks before it returns to user space and possibly sleeps there.
      
      This is to address the case where a process is doing work that is
      generating IO that can't be throttled, whether that is directly with a
      lot of REQ_META IO, or indirectly by allocating so much memory that it
      is swamping the disk with REQ_SWAP.  We can't use task_work_add() as we
      don't want to induce a memory allocation in the IO path, so simply
      saving the request queue in the task and flagging it to do the
      notify_resume thing achieves the same result without the overhead of a
      memory allocation.
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
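      A sketch of the mechanism (the task_struct fields are named after the
      description above; treat the exact shape as an assumption): stash the
      request queue on the task, raise the notify-resume flag, and let the
      throttling sleep happen on the way back to user space, where no kernel
      resources are held.

          void blkcg_schedule_throttle(struct request_queue *q, bool use_memdelay)
          {
                  if (unlikely(current->flags & PF_KTHREAD))
                          return;                /* kthreads never return to user space */

                  current->throttle_queue = q;   /* no allocation needed */
                  if (use_memdelay)
                          current->use_memdelay = true;
                  set_notify_resume(current);    /* throttle check runs on exit to user */
          }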
    • blk-cgroup: allow controllers to output their own stats · 903d23f0
      Committed by Josef Bacik
      blk-iolatency has a few stats that it would like to print out; instead
      of adding a bunch of crap to the generic code, just provide a helper
      so that controllers can add stuff to the stat line if they want to.
      
      Hide it behind a boot option since it changes the output of io.stat from
      normal, and these stats are only interesting to developers.
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
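      A sketch of what the helper hook could look like for a policy (the
      callback wiring is close to the description, but pd_to_lat() and the
      stat fields are hypothetical): the policy appends its own key=value
      pairs to the device's io.stat line and returns how many bytes it wrote.

          static size_t iolatency_pd_stat(struct blkg_policy_data *pd,
                                          char *buf, size_t size)
          {
                  struct iolatency_grp *iolat = pd_to_lat(pd);  /* hypothetical */

                  /* appended to the io.stat line by blkcg_print_stat() */
                  return scnprintf(buf, size, " depth=%u avg_lat=%llu",
                                   iolat->rq_depth, iolat->avg_lat_usec);
          }

          static struct blkcg_policy blkcg_policy_iolatency = {
                  .pd_stat_fn = iolatency_pd_stat,
          };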
  19. 19 April 2018 (2 commits)
  20. 18 April 2018 (1 commit)
  21. 17 March 2018 (1 commit)
    • blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir() · 4c699480
      Committed by Joseph Qi
      We've triggered a WARNING in blk_throtl_bio() when throttling writeback
      io: it complains that blkg->refcnt is already 0 when blkg_get() is
      called, and the kernel then crashes with an invalid page request.
      After investigating this issue, we found it is caused by a race
      between blkcg_bio_issue_check() and cgroup_rmdir(), which is described
      below:
      
      writeback kworker               cgroup_rmdir
                                        cgroup_destroy_locked
                                          kill_css
                                            css_killed_ref_fn
                                              css_killed_work_fn
                                                offline_css
                                                  blkcg_css_offline
        blkcg_bio_issue_check
          rcu_read_lock
          blkg_lookup
                                                    spin_trylock(q->queue_lock)
                                                    blkg_destroy
                                                    spin_unlock(q->queue_lock)
          blk_throtl_bio
          spin_lock_irq(q->queue_lock)
          ...
          spin_unlock_irq(q->queue_lock)
        rcu_read_unlock
      
      Since RCU can only prevent the blkg from being released while it is in
      use, blkg->refcnt can drop to 0 during blkg_destroy(), which schedules
      the blkg release. The subsequent blkg_get() in blk_throtl_bio() then
      triggers the WARNING, and the corresponding blkg_put() schedules the
      blkg release a second time, resulting in a double free.
      This race was introduced by commit ae118896 ("blkcg: consolidate blkg
      creation in blkcg_bio_issue_check()"). Before that commit, the code
      would look up first and then look up/create again under queue_lock.
      Since reviving that logic would be a bit drastic, fix it instead by
      only offlining the pd during blkcg_css_offline() and moving the rest
      of the destruction (especially blkg_put()) into blkcg_css_free(),
      which should be the right way as discussed.
      
      Fixes: ae118896 ("blkcg: consolidate blkg creation in blkcg_bio_issue_check()")
      Reported-by: Jiufei Xue <jiufei.xue@linux.alibaba.com>
      Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
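      A sketch of the offline/free split this describes (the pd-offlining
      helper name is hypothetical): only policy data is torn down at offline
      time, while the blkgs, and therefore their blkcg references, survive
      until the css itself is freed.

          static void blkcg_css_offline(struct cgroup_subsys_state *css)
          {
                  struct blkcg *blkcg = css_to_blkcg(css);

                  /* offline policy data only; blkg_put() must not happen yet */
                  blkcg_offline_all_pds(blkcg);     /* hypothetical helper */
          }

          static void blkcg_css_free(struct cgroup_subsys_state *css)
          {
                  struct blkcg *blkcg = css_to_blkcg(css);

                  blkcg_destroy_all_blkgs(blkcg);   /* the final blkg_put() moved here */
                  kfree(blkcg);
          }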
  22. 27 February 2018 (1 commit)
  23. 05 November 2017 (1 commit)