1. 07 March 2012 (4 commits)
    • blkcg: update blkg get functions take blkio_cgroup as parameter · 0a5a7d0e
      Committed by Tejun Heo
      In both blkg get functions - throtl_get_tg() and cfq_get_cfqg() -
      instead of obtaining the blkcg of %current explicitly, let the
      caller specify the blkcg to use as a parameter and make both
      functions hold on to that blkcg.
      
      This is part of block cgroup interface cleanup and will help making
      blkcg API more modular.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
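
      A minimal sketch of the resulting interface shape, assuming the
      era's blk-throttle internals; throtl_find_alloc_tg() is a stand-in
      lookup helper, not necessarily the real one:

        struct throtl_grp *throtl_get_tg(struct throtl_data *td,
                                         struct blkio_cgroup *blkcg)
        {
                /* The caller resolves @blkcg (e.g. from %current) and
                 * passes it in; the function no longer looks it up
                 * itself and simply holds on to the given blkcg. */
                return throtl_find_alloc_tg(td, blkcg);
        }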
    • blkcg: move rcu_read_lock() outside of blkio_group get functions · 2a7f1244
      Committed by Tejun Heo
      rcu_read_lock() in throtl_get_tg() and cfq_get_cfqg() holds onto
      @blkcg while looking up the blkg.  For API cleanup, the next patch
      will make the caller responsible for determining the @blkcg to look
      up the blkg from and let the caller specify it as a parameter.
      Move RCU read locking out to the callers to prepare for the change.
      
      -v2: Originally this patch was described as a fix for an RCU read
           locking bug around @blkg, which Vivek pointed out to be
           incorrect.  It stemmed from misunderstanding the role of the
           RCU locking as protecting @blkg rather than @blkcg.  Patch
           description updated.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
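
      A sketch of the caller-side locking after this change; the function
      is illustrative, not the actual kernel code (the @blkcg parameter
      arrives only in the next patch):

        static struct throtl_grp *lookup_tg(struct request_queue *q)
        {
                struct throtl_grp *tg;

                rcu_read_lock();              /* moved out of throtl_get_tg() */
                tg = throtl_get_tg(q->td);    /* lookup now runs under the
                                                 caller's RCU read lock */
                rcu_read_unlock();

                return tg;
        }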
    • blkcg: shoot down blkio_groups on elevator switch · 72e06c25
      Committed by Tejun Heo
      An elevator switch may involve changes to blkcg policies.  Implement
      shooting down of blkio_groups on such a switch.
      
      Combined with the previous bypass updates, the end goal is updating
      blkcg core such that it can ensure that the blkcgs being affected
      become quiescent and don't have any per-blkg data hanging around
      before commencing any policy updates.  Until queues are made aware
      of the policies that apply to them, as an interim step, all
      per-policy blkg data will be shot down.
      
      * blk-throtl doesn't need this change as it can't be disabled for a
        live queue; however, update it anyway as the scheduled blkg
        unification requires this behavior change.  This means that
        blk-throtl configuration will be unnecessarily lost over elevator
        switch.  This oddity will be removed after blkcg learns to associate
        individual policies with request_queues.
      
      * blk-throtl doesn't shoot down root_tg.  This is to ease the
        transition.  Unified blkg will always have a persistent root
        group, and not shooting down root_tg for now eases the transition
        to that point by avoiding having to update td->root_tg; it is
        safe as blk-throtl can never be disabled.
      
      -v2: Vivek pointed out that group list is not guaranteed to be empty
           on return from clear function if it raced cgroup removal and
           lost.  Fix it by waiting a bit and retrying.  This kludge will
           soon be removed once locking is updated such that blkg is never
           in limbo state between blkcg and request_queue locks.
      
           blk-throtl no longer shoots down root_tg to avoid breaking
           td->root_tg.
      
           Also, nest queue_lock inside blkio_list_lock, not the other way
           around, to avoid introducing a possible deadlock via the blkcg
           lock.
      
      -v3: blkcg_clear_queue() repositioned and renamed to
           blkg_destroy_all() to increase consistency with later changes.
           cfq_clear_queue() updated to check q->elevator before
           dereferencing it to avoid NULL dereference on not fully
           initialized queues (used by later change).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
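
      A simplified sketch of the -v2 retry kludge described above;
      clear_queue_blkgs() stands in for the per-policy clear callbacks
      and is hypothetical:

        void blkg_destroy_all(struct request_queue *q)
        {
                bool done;

                do {
                        spin_lock_irq(q->queue_lock);
                        /* ask each registered policy to shoot down its
                         * per-queue blkg data (blk-throtl leaves root_tg) */
                        done = clear_queue_blkgs(q);
                        spin_unlock_irq(q->queue_lock);

                        /* a blkg that raced cgroup removal and lost may
                         * still be in limbo; wait a bit and retry */
                        if (!done)
                                msleep(10);
                } while (!done);
        }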
    • block: extend queue bypassing to cover blkcg policies · 6ecf23af
      Committed by Tejun Heo
      Extend queue bypassing such that a dying queue is always bypassing
      and blk-throttle is drained on bypass.  With blkcg policies updated
      to test blk_queue_bypass() instead of blk_queue_dead(), this
      ensures that no bio or request is held by or going through blkcg
      policies on a bypassing queue.
      
      This will be used to implement blkg cleanup on elevator switches and
      policy changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
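
      A sketch of the test change in the policy paths, assuming a dying
      queue is always bypassing as described; the wrapper is illustrative:

        static bool blk_throtl_should_run(struct request_queue *q)
        {
                /* checking bypass instead of dead covers both cases:
                 * dying queues are always bypassing, and elevator
                 * switches / policy changes also turn bypass on */
                return !blk_queue_bypass(q);
        }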
  2. 14 December 2011 (1 commit)
  3. 25 October 2011 (1 commit)
  4. 19 October 2011 (4 commits)
    • block: fix request_queue lifetime handling by making blk_queue_cleanup() properly shutdown · c9a929dd
      Committed by Tejun Heo
      request_queue is refcounted but actually depends on lifetime
      management by the queue owner - on blk_cleanup_queue(), the block
      layer expects that no request is passing through the request_queue
      and that no new one will.
      
      This is fundamentally broken.  The queue owner (e.g. the SCSI
      layer) doesn't have a way to know whether there are other active
      users before calling blk_cleanup_queue(), and other users (e.g.
      bsg) don't have any guarantee that the queue is and will stay valid
      while they hold a reference.
      
      With a delay added in blk_queue_bio() before queue_lock is grabbed,
      the following oops can easily be triggered when a device is removed
      with in-flight IOs.
      
       sd 0:0:1:0: [sdb] Stopping disk
       ata1.01: disabled
       general protection fault: 0000 [#1] PREEMPT SMP
       CPU 2
       Modules linked in:
      
       Pid: 648, comm: test_rawio Not tainted 3.1.0-rc3-work+ #56 Bochs Bochs
       RIP: 0010:[<ffffffff8137d651>]  [<ffffffff8137d651>] elv_rqhash_find+0x61/0x100
       ...
       Process test_rawio (pid: 648, threadinfo ffff880019efa000, task ffff880019ef8a80)
       ...
       Call Trace:
        [<ffffffff8137d774>] elv_merge+0x84/0xe0
        [<ffffffff81385b54>] blk_queue_bio+0xf4/0x400
        [<ffffffff813838ea>] generic_make_request+0xca/0x100
        [<ffffffff81383994>] submit_bio+0x74/0x100
        [<ffffffff811c53ec>] dio_bio_submit+0xbc/0xc0
        [<ffffffff811c610e>] __blockdev_direct_IO+0x92e/0xb40
        [<ffffffff811c39f7>] blkdev_direct_IO+0x57/0x60
        [<ffffffff8113b1c5>] generic_file_aio_read+0x6d5/0x760
        [<ffffffff8118c1ca>] do_sync_read+0xda/0x120
        [<ffffffff8118ce55>] vfs_read+0xc5/0x180
        [<ffffffff8118cfaa>] sys_pread64+0x9a/0xb0
        [<ffffffff81afaf6b>] system_call_fastpath+0x16/0x1b
      
      This happens because blk_queue_cleanup() destroys the queue and
      elevator whether IOs are in progress or not, and DEAD tests are
      sprinkled across the request processing path without proper
      synchronization.
      
      A similar problem exists for blk-throtl.  On queue cleanup,
      blk-throtl is shut down whether it has requests in it or not.
      Depending on timing, it either oopses or throttled bios are lost,
      putting tasks which are waiting for bio completion into an eternal
      D state.
      
      The way it should work is having the usual clear distinction
      between shutdown and release.  Shutdown drains all currently
      pending requests, marks the queue dead, and performs partial
      teardown of the now unnecessary parts of the queue.  Even after
      shutdown is complete, reference holders are still allowed to issue
      requests to the queue although they will be immediately failed.
      The rest of the teardown happens on release.
      
      This patch makes the following changes to make blk_queue_cleanup()
      behave as proper shutdown.
      
      * QUEUE_FLAG_DEAD is now set while holding both q->exit_mutex and
        queue_lock.
      
      * Unsynchronized DEAD check in generic_make_request_checks() removed.
        This couldn't make any meaningful difference as the queue could die
        after the check.
      
      * blk_drain_queue() updated such that it can drain all requests and is
        now called during cleanup.
      
      * blk_throtl updated such that it checks DEAD on grabbing
        queue_lock, drains all throttled bios during cleanup and frees
        td when the queue is released.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
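
      A condensed sketch of the shutdown half described above; the real
      blk_cleanup_queue() does more, and q->exit_mutex follows the commit
      text:

        void blk_cleanup_queue(struct request_queue *q)
        {
                mutex_lock(&q->exit_mutex);
                spin_lock_irq(q->queue_lock);
                queue_flag_set(QUEUE_FLAG_DEAD, q);  /* under both locks */
                spin_unlock_irq(q->queue_lock);
                mutex_unlock(&q->exit_mutex);

                /* drain every pending and throttled request */
                blk_drain_queue(q);

                /* partial teardown only: reference holders may keep
                 * issuing requests, which now fail immediately; the
                 * rest of the teardown happens on release */
                blk_put_queue(q);
        }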
    • block: reorganize throtl_get_tg() and blk_throtl_bio() · bc16a4f9
      Committed by Tejun Heo
      blk_throtl_bio() and throtl_get_tg() have a rather unusual
      interface.
      
      * throtl_get_tg() returns a pointer to a valid tg or
        ERR_PTR(-ENODEV), and drops queue_lock in the latter case.  A
        different locking context depending on the return value is
        error-prone, and the DEAD state is scheduled to be protected by
        queue_lock anyway.  Move the DEAD check inside queue_lock and
        return a valid tg or NULL.
      
      * blk_throtl_bio() indicates its status both with its return value
        and with the in/out param **@bio.  The former indicates whether
        the queue was found to be dead during throtl processing; the
        latter, whether the bio is throttled.
      
        There's no point in returning the DEAD check result from
        blk_throtl_bio().  The queue can die after blk_throtl_bio() has
        finished but before make_request_fn() grabs the queue lock.

        Make it take *@bio instead and return a boolean result indicating
        whether the bio is throttled or not.
      
      This patch doesn't cause any visible functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
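
      A sketch of the reorganized entry point; the dispatch test and
      queueing helper below are hypothetical stand-ins:

        bool blk_throtl_bio(struct request_queue *q, struct bio *bio)
        {
                bool throttled = false;

                spin_lock_irq(q->queue_lock);
                /* DEAD is checked under queue_lock; throtl_get_tg() now
                 * returns a valid tg or NULL, never ERR_PTR(-ENODEV) */
                if (!blk_queue_dead(q) && !tg_may_dispatch_now(q, bio)) {
                        throtl_queue_bio(q, bio);
                        throttled = true;
                }
                spin_unlock_irq(q->queue_lock);

                /* caller passes the bio on iff this returns false */
                return throttled;
        }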
    • block: drop unnecessary blk_get/put_queue() in scsi_cmd_ioctl() and blk_get_tg() · 315fceee
      Committed by Tejun Heo
      blk_get/put_queue() in scsi_cmd_ioctl() and throtl_get_tg() are
      completely bogus.  The caller must have a reference to the queue on
      entry and taking an extra reference doesn't change anything.
      
      For scsi_cmd_ioctl(), the only effect is that it ends up checking
      QUEUE_FLAG_DEAD on entry; however, this is bogus as the queue can
      die right after blk_get_queue().  A dead queue should be and is
      handled in the request issue path (it's somewhat broken now but
      that's a separate problem and doesn't affect this one much).
      
      throtl_get_tg() incorrectly assumes that q is RCU-freed.  Also, it
      doesn't check the return value of blk_get_queue().  If the queue is
      already dead, it ends up doing an extra put.
      
      Drop them.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: move blk_throtl prototypes to block/blk.h · bc9fcbf9
      Committed by Tejun Heo
      The blk_throtl interface is block-internal and there's no reason to
      have its prototypes in linux/blkdev.h.  Move them to block/blk.h.
      
      This patch doesn't introduce any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 01 August 2011 (1 commit)
  6. 14 June 2011 (2 commits)
  7. 13 June 2011 (1 commit)
  8. 21 May 2011 (8 commits)
  9. 16 May 2011 (1 commit)
    • blk-throttle: Use task_subsys_state() to determine a task's blkio_cgroup · 70087dc3
      Committed by Vivek Goyal
      Currently we first map the task to a cgroup and then the cgroup to
      the blkio_cgroup.  There is a more direct way to get to the
      blkio_cgroup from a task, using task_subsys_state().  Use that.
      
      The real reason for the fix is that it also avoids a race in
      generic cgroup code.  During remount/umount, rebind_subsystems() is
      called and it can do the following without waiting for an RCU grace
      period:
      
      cgrp->subsys[i] = NULL;
      
      That means if somebody got hold of the cgroup under RCU and then
      tried to do cgroup->subsys[] to get to the blkio_cgroup, it would
      get NULL, which is wrong.  I was running into this race condition
      with LTP running on an upstream-derived kernel and that led to a
      crash.
      
      So ideally we should also fix the generic cgroup code to wait for
      an RCU grace period before setting the pointer to NULL.  Li Zefan
      is not very keen on introducing synchronize_rcu() there as he
      thinks it will slow down mount/remount/umount operations.
      
      So, for the time being at least, fix the kernel crash by taking a
      more direct route to the blkio_cgroup.
      
      One tester had reported a crash while running LTP on a derived
      kernel; with this fix the crash is no longer seen and the test has
      been running for over 6 days.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
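
      A sketch of the direct lookup, using the cgroup primitives of that
      era; simplified, but close in shape to what the commit describes:

        static struct blkio_cgroup *task_blkio_cgroup(struct task_struct *tsk)
        {
                /* go straight from the task to the blkio_cgroup,
                 * skipping the task -> cgroup -> cgroup->subsys[] hop
                 * that can observe the NULL written by
                 * rebind_subsystems() during remount/umount */
                return container_of(task_subsys_state(tsk, blkio_subsys_id),
                                    struct blkio_cgroup, css);
        }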
  10. 06 April 2011 (1 commit)
  11. 31 March 2011 (1 commit)
  12. 23 March 2011 (1 commit)
  13. 10 March 2011 (2 commits)
  14. 08 March 2011 (2 commits)
  15. 03 March 2011 (1 commit)
    • block: Move blk_throtl_exit() call to blk_cleanup_queue() · da527770
      Committed by Vivek Goyal
      Move blk_throtl_exit() into blk_cleanup_queue(), as
      blk_throtl_exit() is written in such a way that it needs the queue
      lock.  In blk_release_queue() there is no guarantee that
      ->queue_lock is still around.
      
      Initially blk_throtl_exit() was in blk_cleanup_queue() but Ingo reported
      one problem.
      
        https://lkml.org/lkml/2010/10/23/86
      
        And a quick fix moved blk_throtl_exit() to blk_release_queue().
      
              commit 7ad58c02
              Author: Jens Axboe <jaxboe@fusionio.com>
              Date:   Sat Oct 23 20:40:26 2010 +0200
      
              block: fix use-after-free bug in blk throttle code
      
      This patch reverts the above change and does not try to shut down
      the throtl work in blk_sync_queue().  By avoiding the call to
      throtl_shutdown_timer_wq() from blk_sync_queue(), we should also
      avoid the problem reported by Ingo.
      
      blk_sync_queue() seems to be used only by the md driver, and it
      seems to use it to make sure q->unplug_fn is not called, as md
      registers its own unplug functions and is about to free up the data
      structures used by unplug_fn().  Block throttle does not call back
      into unplug_fn() or into md, so there is no need to cancel blk
      throttle work.
      
      In fact I think cancelling block throttle work is bad: it might
      happen that some bios are throttled and scheduled to be dispatched
      later with the help of the pending work, and if the work is
      cancelled, these bios might never be dispatched.
      
      The block layer also uses blk_sync_queue() at blk_cleanup_queue()
      and blk_release_queue() time.  That should be safe as we are also
      calling blk_throtl_exit(), which should make sure all the
      throttling-related data structures are cleaned up.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
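
      A sketch of the resulting teardown order; simplified from the
      description above:

        void blk_cleanup_queue(struct request_queue *q)
        {
                blk_sync_queue(q);    /* no longer cancels throttle work,
                                         so throttled bios are not lost */
                blk_throtl_exit(q);   /* needs a live ->queue_lock, so it
                                         runs here rather than in
                                         blk_release_queue() */
                blk_put_queue(q);     /* release happens on the last put */
        }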
  16. 02 March 2011 (1 commit)
    • blk-throttle: Do not use kblockd workqueue for throtl work · 450adcbe
      Committed by Vivek Goyal
      o Dominik Klein reported a system hang issue while doing some blkio
        throttling testing.
      
        https://lkml.org/lkml/2011/2/24/173
      
      o Some tracing revealed that CFQ was not dispatching any more jobs
        as queue unplug was not happening.  And queue unplug was not
        happening because the unplug work was not being called, as there
        was one throttling work item on the same cpu which had not
        finished yet.  And the throttling work had not finished as it was
        trying to dispatch a bio to CFQ but all the request descriptors
        were consumed, so it was put to sleep.
      
      o So basically it is a cyclic dependency between the CFQ unplug
        work and the throtl dispatch work.  Tejun suggested using a
        separate workqueue for such cases.
      
      o This patch uses a separate workqueue for throttle-related work
        and does not rely on the kblockd workqueue anymore.
      
      Cc: stable@kernel.org
      Reported-by: Dominik Klein <dk@in-telegence.net>
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
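
      A sketch of the dedicated workqueue setup; the kthrotld name
      matches the commit's intent, and the exact flags are assumptions:

        static struct workqueue_struct *kthrotld_workqueue;

        static int __init throtl_init(void)
        {
                /* separate from kblockd, so a sleeping throttle dispatch
                 * work item can never block the unplug work behind it */
                kthrotld_workqueue = alloc_workqueue("kthrotld",
                                                     WQ_MEM_RECLAIM, 0);
                if (!kthrotld_workqueue)
                        panic("Failed to create kthrotld\n");
                return 0;
        }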
  17. 19 January 2011 (1 commit)
  18. 02 December 2010 (2 commits)
    • blk-throttle: Correct the placement of smp_rmb() · 04a6b516
      Committed by Vivek Goyal
      o I was discussing which variables are being updated without the
        spin lock and why we need barriers, and Oleg pointed out that the
        location of smp_rmb() should be between the read of
        td->limits_changed and the read of tg->limits_changed.  This
        patch fixes it.
      
      o The following is one possible sequence of events.  Say cpu0 is
        executing throtl_update_blkio_group_read_bps() and cpu1 is
        executing throtl_process_limit_change().
      
       cpu0                                                cpu1
      
       tg->limits_changed = true;
       smp_mb__before_atomic_inc();
       atomic_inc(&td->limits_changed);
      
                                           if (!atomic_read(&td->limits_changed))
                                                   return;
      
                                           if (tg->limits_changed)
                                                   do_something;
      
       If cpu0 has updated tg->limits_changed and td->limits_changed, we
       want to make sure that if the update to td->limits_changed is
       visible on cpu1, then the update to tg->limits_changed is also
       visible.

       Oleg pointed out that to ensure this, we need to insert an
       smp_rmb() between the td->limits_changed read and the
       tg->limits_changed read.
      
      o I had erroneously put smp_rmb() before atomic_read(&td->limits_changed).
        This patch fixes it.
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
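
      The corrected read side as a sketch; field and function names
      follow the commit, with the bodies reduced to the ordering that
      matters (throtl_update_limits() is a stand-in for "do_something"):

        static void throtl_process_limit_change(struct throtl_data *td,
                                                struct throtl_grp *tg)
        {
                if (!atomic_read(&td->limits_changed))
                        return;

                /* pairs with smp_mb__before_atomic_inc() on the update
                 * side: the rmb sits between the two reads, not before
                 * the first one, so seeing td->limits_changed set
                 * guarantees the write to tg->limits_changed is visible */
                smp_rmb();

                if (tg->limits_changed)
                        throtl_update_limits(tg);
        }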
    • blk-throttle: Trim/adjust slice_end once a bio has been dispatched · d1ae8ffd
      Committed by Vivek Goyal
      o During some testing I did the following and noticed that
        throttling stops working.
      
              - Put a very low limit on a cgroup, say 1 byte per second.
              - Start some reads; this will set slice_end to a very high
                value.
              - Change the limit to a higher value, say 1MB/s.
              - Now IO unthrottles and finishes as expected.
              - Try to do the read again, but IO is not limited to 1MB/s
                as expected.
      
      o What is happening:
              - Initially the low limit sets slice_end to a very high
                value.
              - While updating the limit, slice_end is not truncated.
              - The very high value of slice_end keeps the existing slice
                valid for a very long time, and a new slice does not
                start.
              - tg_may_dispatch() is called in blk_throtl_bio(), and
                trim_slice() is not called in this path.  So slice_start
                is some old value and practically we are able to do a
                huge amount of IO.
      
      o There are many ways it can be fixed.  I have fixed it by
        adjusting/cleaning up slice_end in trim_slice().  Generally we
        extend slices if a bio is big and can't be dispatched in one
        slice.  After a bio has been dispatched, readjust slice_end to
        make sure we don't end up with huge values.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
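
      A simplified sketch of the trimming; the era's throtl_trim_slice()
      also adjusts the dispatched-byte counters, which is omitted here:

        static void throtl_trim_slice(struct throtl_grp *tg, bool rw)
        {
                unsigned long nr_slices;

                nr_slices = (jiffies - tg->slice_start[rw]) / throtl_slice;
                if (!nr_slices)
                        return;

                /* advance slice_start past the fully elapsed slices and
                 * pull slice_end back, so a far-future value left over
                 * from an old, very low limit cannot keep a stale slice
                 * alive (and throttling disabled) indefinitely */
                tg->slice_start[rw] += nr_slices * throtl_slice;
                tg->slice_end[rw] = jiffies + throtl_slice;
        }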
  19. 16 November 2010 (1 commit)
  20. 02 October 2010 (2 commits)
  21. 01 October 2010 (2 commits)
    • blkio-throttle: Fix link failure on i386 · 3aad5d3e
      Committed by Vivek Goyal
      o Randy Dunlap reported the following linux-next failure.  This
        patch fixes it.
      
      on i386:
      
      blk-throttle.c:(.text+0x1abb8): undefined reference to `__udivdi3'
      blk-throttle.c:(.text+0x1b1dc): undefined reference to `__udivdi3'
      
      o The bytes_per_second interface is 64-bit and I was continuing to
        do 64-bit division even on 32-bit platforms without the help of
        the special macros/functions, hence the failure.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Reported-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
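
      The failure pattern and its standard fix, sketched: on 32-bit, a
      plain u64 division is lowered to the libgcc helper __udivdi3, which
      the kernel does not provide, so div_u64() from <linux/math64.h> is
      used instead.  The function and values below are illustrative:

        #include <linux/math64.h>

        static u64 bytes_allowed(u64 bps, unsigned long jiffy_elapsed)
        {
                /* BAD on i386: 64-bit '/' emits a call to __udivdi3 */
                /* return bps * jiffy_elapsed / HZ; */

                /* OK everywhere: div_u64() divides u64 by a u32 divisor */
                return div_u64(bps * jiffy_elapsed, HZ);
        }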
    • blkio: Recalculate the throttled bio dispatch time upon throttle limit change · fe071437
      Committed by Vivek Goyal
      o Currently any cgroup throttle limit changes are processed
        asynchronously and the change does not take effect till a new bio
        is dispatched from the same group.
      
      o It might happen that a user sets a ridiculously low limit on
        throttling, say 1 byte per second on reads.  In such cases simple
        operations like mounting a disk can wait for a very long time.
      
      o Once a bio is throttled, there is no easy way to come out of that
        wait even if the user increases the read limit later.
      
      o This patch fixes it.  Now if a user changes the cgroup limits, we
        recalculate the bio dispatch time according to the new limits.
      
      o We can't take the queue lock under blkcg_lock, hence after the
        change I wake up the dispatch thread again, which recalculates
        the time.  So there are some variables being synchronized across
        two threads without a lock and I had to make use of barriers.
        Hoping I have used the barriers correctly; any review of the
        memory barrier code especially will help.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
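
      A sketch of the update side; tg_of_blkg() and the exact
      throtl_schedule_delayed_work() signature are assumptions, and the
      barrier pairing matches the smp_rmb() fix above:

        static void throtl_update_blkio_group_read_bps(struct blkio_group *blkg,
                                                       u64 read_bps)
        {
                struct throtl_grp *tg = tg_of_blkg(blkg);

                tg->bps[READ] = read_bps;

                /* publish tg->limits_changed before td->limits_changed;
                 * the dispatch thread pairs this with an smp_rmb()
                 * between its two reads */
                tg->limits_changed = true;
                smp_mb__before_atomic_inc();
                atomic_inc(&tg->td->limits_changed);

                /* wake the dispatch work so an already-throttled bio
                 * gets its dispatch time recalculated under the new
                 * limit */
                throtl_schedule_delayed_work(tg->td, 0);
        }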