1. 20 March 2012 (2 commits)
    • blkcg: don't use percpu for merged stats · 5fe224d2
      Committed by Tejun Heo
      With the recent plug merge updates, merged-stat updates are no longer
      issued for plug merges and now happen only while holding queue_lock.
      As stats_lock is scheduled to be removed, there's no reason to use
      percpu for merged stats.  Don't use percpu for merged stats.
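
      A minimal sketch of the resulting update path (illustrative names, not
      the actual patch): with queue_lock already held, a plain counter is
      enough once the percpu machinery is gone.

              /* sketch: merged-stat update with q->queue_lock held */
              struct blkio_group_stats {
                      u64 merged[2];          /* read/write merge counts */
              };

              static void blkio_update_merged_stats(struct blkio_group_stats *stats,
                                                    int rw)
              {
                      /* caller holds q->queue_lock; plain increment is safe */
                      stats->merged[rw]++;
              }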
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: alloc per cpu stats from worker thread in a delayed manner · 1cd9e039
      Committed by Vivek Goyal
      The current per-cpu stat allocation assumes the GFP_KERNEL allocation
      flag, but in the IO path there are times when we want GFP_NOIO
      semantics.  As there is no way to pass allocation flags to
      alloc_percpu(), this patch delays the allocation of stats using a
      worker thread.
      
      v2 -> Tejun suggested the following changes; the patch was updated
      accordingly.
      	- move alloc_node location in structure
      	- reduce the size of names of some of the fields
      	- Reduce the scope of locking of alloc_list_lock
      	- Simplified stat_alloc_fn() by allocating stats for all
      	  policies in one go and then assigning these to a group.
      
      v3 -> Andrew suggested putting some comments in the code, and raised
            concerns about trying to allocate infinitely in case of
            allocation failure.  I have changed the logic to sleep for 10ms
            before retrying.  That should take care of non-preemptible UP
            kernels.
      
      v4 -> Tejun had more suggestions.
      	- drop list_for_each_entry_all()
      	- instead of msleep() use queue_delayed_work()
      	- Some cleanups related to more compact coding.
      
      v5 -> Tejun suggested more cleanups, leading to more compact code.
      
      tj: - Relocated pcpu_stats into blkio_stat_alloc_fn().
          - Minor comment update.
          - This also fixes suspicious RCU usage warning caused by invoking
            cgroup_path() from blkg_alloc() without holding RCU read lock.
            Now that blkg_alloc() doesn't require sleepable context, RCU
            read lock from blkg_lookup_create() is maintained throughout
            blkg_alloc().
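
      A hedged sketch of the resulting pattern (names illustrative, not the
      actual patch): the GFP_KERNEL percpu allocation is deferred to a
      workqueue, and the work requeues itself on failure instead of
      sleeping.

              /* sketch: delayed percpu stat allocation via a workqueue */
              static LIST_HEAD(alloc_list);            /* blkgs awaiting stats */
              static DEFINE_SPINLOCK(alloc_list_lock);
              static void blkio_stat_alloc_fn(struct work_struct *work);
              static DECLARE_DELAYED_WORK(stat_alloc_work, blkio_stat_alloc_fn);

              static void blkio_stat_alloc_fn(struct work_struct *work)
              {
                      struct blkio_group_stats_cpu __percpu *stats;

                      stats = alloc_percpu(struct blkio_group_stats_cpu);
                      if (!stats) {
                              /* retry in 10ms; no msleep() loop needed */
                              queue_delayed_work(system_wq, &stat_alloc_work,
                                                 msecs_to_jiffies(10));
                              return;
                      }

                      spin_lock_irq(&alloc_list_lock);
                      /*
                       * Pop a pending blkg off alloc_list, attach the stats,
                       * and requeue the work if more groups are waiting.
                       */
                      spin_unlock_irq(&alloc_list_lock);
              }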
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 07 March 2012 (20 commits)
    • block: make block cgroup policies follow bio task association · 4f85cb96
      Committed by Tejun Heo
      Implement bio_blkio_cgroup(), which returns the blkcg associated with
      the bio if one exists or %current's blkcg otherwise, and use it in
      blk-throttle and cfq-iosched propio.  This makes both cgroup policies
      honor the bio's task association instead of always assuming %current.
      
      As nobody is using bio_set_task() yet, this doesn't introduce any
      behavior change.
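
      A sketch of the described fallback (the bio field name here is an
      assumption for illustration, not taken from the patch):

              struct blkio_cgroup *bio_blkio_cgroup(struct bio *bio)
              {
                      /* use the blkcg recorded on the bio, if any */
                      if (bio && bio->bi_css)
                              return container_of(bio->bi_css,
                                                  struct blkio_cgroup, css);
                      /* otherwise fall back to the submitting task */
                      return task_blkio_cgroup(current);
              }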
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: drop unnecessary RCU locking · c875f4d0
      Committed by Tejun Heo
      Now that blkg additions / removals are always done under both q and
      blkcg locks, the only places RCU locking is necessary are
      blkg_lookup[_create]() for lookup w/o blkcg lock.  This patch drops
      the unnecessary RCU locking, replacing it with plain blkcg locking as
      necessary.
      
      * blkiocg_pre_destroy() already performs proper locking and doesn't
        need RCU.  Dropped.
      
      * blkio_read_blkg_stats() now uses blkcg->lock instead of RCU read
        lock.  This isn't a hot path.
      
      * Now unnecessary synchronize_rcu() from queue exit paths removed.
        This makes q->nr_blkgs unnecessary.  Dropped.
      
      * RCU annotation on blkg->q removed.
      
      -v2: Vivek pointed out that blkg_lookup_create() still needs to be
           called under rcu_read_lock().  Updated.
      
      -v3: After the update, stats_lock locking in blkio_read_blkg_stats()
           shouldn't be using _irq variant as it otherwise ends up enabling
           irq while blkcg->lock is locked.  Fixed.
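
      The two resulting conventions, roughly (sketch):

              /* lookup without blkcg lock: RCU still required */
              rcu_read_lock();
              blkg = blkg_lookup(blkcg, q);
              /* ... use blkg under RCU ... */
              rcu_read_unlock();

              /* cold stat-read path: plain blkcg lock instead of RCU */
              spin_lock_irq(&blkcg->lock);
              /* walk blkcg's blkg list and print each group's stats */
              spin_unlock_irq(&blkcg->lock);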
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: use double locking instead of RCU for blkg synchronization · 9f13ef67
      Committed by Tejun Heo
      blkgs are chained from both blkcgs and request_queues and thus
      subjected to two locks - blkcg->lock and q->queue_lock.  As both blkcg
      and q can go away anytime, locking during removal is tricky.  It's
      currently solved by wrapping removal inside RCU, which makes the
      synchronization complex.  There are three locks to worry about - the
      outer RCU, q lock and blkcg lock, and it leads to nasty subtle
      complications like conditional synchronize_rcu() on queue exit paths.
      
      For all other paths, blkcg lock is naturally nested inside q lock and
      the only exception is blkcg removal path, which is a very cold path
      and can be implemented as clumsy but conceptually-simple reverse
      double lock dancing.
      
      This patch updates blkg removal path such that blkgs are removed while
      holding both q and blkcg locks, which is trivial for request queue
      exit path - blkg_destroy_all().  The blkcg removal path,
      blkiocg_pre_destroy(), implements reverse double lock dancing
      essentially identical to ioc_release_fn().
      
      This simplifies blkg locking - no half-dead blkgs to worry about.  Now
      unnecessary RCU annotations will be removed by the next patch.
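
      A sketch of the reverse double lock dance on the blkcg removal path
      (illustrative; cf. ioc_release_fn()):

              spin_lock_irq(&blkcg->lock);
              while (!hlist_empty(&blkcg->blkg_list)) {
                      struct blkio_group *blkg =
                              hlist_entry(blkcg->blkg_list.first,
                                          struct blkio_group, blkcg_node);
                      struct request_queue *q = blkg->q;

                      if (spin_trylock(q->queue_lock)) {
                              blkg_destroy(blkg);     /* both locks held */
                              spin_unlock(q->queue_lock);
                      } else {
                              /* wrong lock order; back off and retry */
                              spin_unlock_irq(&blkcg->lock);
                              cpu_relax();
                              spin_lock_irq(&blkcg->lock);
                      }
              }
              spin_unlock_irq(&blkcg->lock);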
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: unify blkg's for blkcg policies · e8989fae
      Committed by Tejun Heo
      Currently, blkg is per cgroup-queue-policy combination.  This is
      unnatural and leads to various convolutions in partially used
      duplicate fields in blkg, config / stat access, and general management
      of blkgs.
      
      This patch makes blkg's per cgroup-queue and lets them serve all
      policies.  blkgs are now created and destroyed by blkcg core proper.
      This will allow further consolidation of common management logic into
      blkcg core and API with better defined semantics and layering.
      
      As a transitional step to untangle blkg management, elvswitch and
      policy [de]registration, all blkgs except the root blkg are being shot
      down during elvswitch and bypass.  This patch adds blkg_root_update()
      to update root blkg in place on policy change.  This is hacky and racy
      but should be good enough as interim step until we get locking
      simplified and switch over to proper in-place update for all blkgs.
      
      -v2: Root blkgs need to be updated on elvswitch too and blkg_alloc()
           comment wasn't updated according to the function change.  Fixed.
           Both pointed out by Vivek.
      
      -v3: v2 updated blkg_destroy_all() to invoke update_root_blkg_pd() for
           all policies.  This freed root pd during elvswitch before the
           last queue finished exiting and led to oops.  Directly invoke
           update_root_blkg_pd() only on BLKIO_POLICY_PROP from
           cfq_exit_queue().  This also is closer to what will be done with
           proper in-place blkg update.  Reported by Vivek.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: let blkcg core manage per-queue blkg list and counter · 03aa264a
      Committed by Tejun Heo
      With the previous patch moving blkg list heads and counters to
      request_queue and blkg, the logic to manage them in both policies is
      almost identical and can be moved to blkcg core.
      
      This patch moves the blkg link logic into blkg_lookup_create(),
      implements common blkg unlink code in blkg_destroy(), and updates
      blkg_destroy_all() so that it's policy specific and can skip the root
      group.  The updated blkg_destroy_all() is now used both to clear the
      queue for bypassing and elevator switching, and to release all blkgs
      on queue exit.
      
      This patch introduces a race window where policy [de]registration may
      race against queue blkg clearing.  This can only be a problem on cfq
      unload and shouldn't be a real problem in practice (and we have many
      other places where this race already exists).  Future patches will
      remove these unlikely races.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: move per-queue blkg list heads and counters to queue and blkg · 4eef3049
      Committed by Tejun Heo
      Currently, specific policy implementations are responsible for
      maintaining list and number of blkgs.  This duplicates code
      unnecessarily, and hinders factoring common code and providing blkcg
      API with better defined semantics.
      
      After this patch, request_queue hosts list heads and counters and blkg
      has list nodes for both policies.  This patch only relocates the
      necessary fields and the next patch will actually move management code
      into blkcg core.
      
      Note that request_queue->blkg_list[] and ->nr_blkgs[] are hardcoded to
      have 2 elements.  This is to avoid include dependency and will be
      removed by the next patch.
      
      This patch doesn't introduce any behavior change.
      
      -v2: Now unnecessary conditional on CONFIG_BLK_CGROUP_MODULE removed
           as pointed out by Vivek.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: don't use blkg->plid in stat related functions · c1768268
      Committed by Tejun Heo
      blkg is scheduled to be unified for all policies and thus there won't
      be one-to-one mapping from blkg to policy.  Update stat related
      functions to take explicit @pol or @plid arguments and not use
      blkg->plid.
      
      This is painful for now, but most of the specific stat interface
      functions will be replaced with a handful of generic helpers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: make blkg->pd an array and move configuration and stats into it · 549d3aa8
      Committed by Tejun Heo
      To prepare for unifying blkgs for different policies, make blkg->pd an
      array with BLKIO_NR_POLICIES elements and move blkg->conf, ->stats,
      and ->stats_cpu into blkg_policy_data.
      
      This patch doesn't introduce any functional difference.
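
      A sketch of the relocated layout (member names follow the description
      above; other details assumed):

              struct blkg_policy_data {
                      struct blkio_group_conf conf;   /* moved from blkg */
                      struct blkio_group_stats stats; /* moved from blkg */
                      struct blkio_group_stats_cpu __percpu *stats_cpu;
                      /* policy-private data follows */
              };

              struct blkio_group {
                      /* ... */
                      struct blkg_policy_data *pd[BLKIO_NR_POLICIES];
              };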
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: move refcnt to blkcg core · 1adaf3dd
      Committed by Tejun Heo
      Currently, blkcg policy implementations manage the blkg refcnt,
      duplicating mostly identical code in both policies.  This patch moves
      the refcnt to blkg and lets blkcg core handle refcnt and freeing of
      blkgs (see the sketch after the notes below).
      
      * cfq blkgs now also get freed via RCU.
      
      * cfq blkgs lose RB_EMPTY_ROOT() sanity check on blkg free.  If
        necessary, we can add blkio_exit_group_fn() to resurrect this.
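
      A sketch of the core-managed refcounting with RCU freeing (helper
      internals assumed; blkg_free_rcu() is hypothetical):

              static inline void blkg_get(struct blkio_group *blkg)
              {
                      atomic_inc(&blkg->refcnt);
              }

              static inline void blkg_put(struct blkio_group *blkg)
              {
                      /* last ref gone: free via RCU so readers stay safe */
                      if (atomic_dec_and_test(&blkg->refcnt))
                              call_rcu(&blkg->rcu_head, blkg_free_rcu);
              }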
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: let blkcg core handle policy private data allocation · 0381411e
      Committed by Tejun Heo
      Currently, blkg's are embedded in the blkcg policy private data
      structure and thus allocated and freed by policies.  This leads to
      duplicate code in policies, hinders implementing common parts in
      blkcg core with strong semantics, and forces duplicate blkg's for the
      same cgroup-q association.
      
      This patch introduces struct blkg_policy_data, a separate data
      structure chained from blkg.  Each policy specifies the amount of
      private data it needs in its blkio_policy_type->pdata_size, and blkcg
      core takes care of allocating it along with the blkg, which can be
      accessed using blkg_to_pdata().  blkg can be determined from pdata
      using pdata_to_blkg().  The blkio_alloc_group_fn() method is
      accordingly updated to blkio_init_group_fn().
      
      For consistency, tg_of_blkg() and cfqg_of_blkg() are replaced with
      blkg_to_tg() and blkg_to_cfqg() respectively, and functions to map in
      the reverse direction are added.
      
      Except that policy specific data now lives in a separate data
      structure from blkg, this patch doesn't introduce any functional
      difference.
      
      This will be used to unify blkg's for different policies.
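
      A sketch of the mapping helpers described above (layout assumed):

              struct blkg_policy_data {
                      struct blkio_group *blkg;   /* set by blkcg core */
                      /* ... */
                      char pdata[];               /* pdata_size bytes */
              };

              static inline void *blkg_to_pdata(struct blkio_group *blkg)
              {
                      return blkg ? blkg->pd->pdata : NULL;
              }

              static inline struct blkio_group *pdata_to_blkg(void *pdata)
              {
                      struct blkg_policy_data *pd =
                              container_of(pdata, struct blkg_policy_data,
                                           pdata);
                      return pd->blkg;
              }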
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: add blkcg_{init|drain|exit}_queue() · 5efd6113
      Committed by Tejun Heo
      Currently block core calls directly into blk-throttle for init, drain
      and exit.  This patch adds blkcg_{init|drain|exit}_queue(), which
      wrap the blk-throttle functions.  This is to give more control and
      visibility to the blkcg core layer for proper layering.  Further
      patches will add logic common to blkcg policies to these functions.
      
      While at it, collapse blk_throtl_release() into blk_throtl_exit().
      There's no reason to keep them separate.
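
      The wrappers are thin by design; roughly (sketch):

              int blkcg_init_queue(struct request_queue *q)
              {
                      return blk_throtl_init(q);
              }

              void blkcg_drain_queue(struct request_queue *q)
              {
                      blk_throtl_drain(q);
              }

              void blkcg_exit_queue(struct request_queue *q)
              {
                      blk_throtl_exit(q);
              }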
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: let blkio_group point to blkio_cgroup directly · 7ee9c562
      Committed by Tejun Heo
      Currently, blkg points to the associated blkcg via its css_id.  This
      unnecessarily complicates dereferencing blkcg.  Let blkg hold a
      reference to the associated blkcg and point directly to it and disable
      css_id on blkio_subsys.
      
      This change requires splitting blkiocg_destroy() into
      blkiocg_pre_destroy() and blkiocg_destroy() so that all blkg's can be
      destroyed and all the blkcg references held by them dropped during
      cgroup removal.
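
      A sketch of the direct association (illustrative):

              struct blkio_group {
                      /* ... */
                      struct blkio_cgroup *blkcg; /* replaces css_id lookup */
              };

              /* on blkg creation: pin the blkcg and point directly at it */
              css_get(&blkcg->css);
              blkg->blkcg = blkcg;

              /* on blkg destruction: drop the reference */
              css_put(&blkg->blkcg->css);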
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: kill the mind-bending blkg->dev · 7a4dd281
      Committed by Tejun Heo
      blkg->dev is dev_t recording the device number of the block device for
      the associated request_queue.  It is used to identify the associated
      block device when printing out configuration or stats.
      
      This is redundant to begin with.  A blkg is an association between a
      cgroup and a request_queue and it of course is possible to reach
      request_queue from blkg and synchronization conventions are in place
      for safe q dereferencing, so this shouldn't be necessary from the
      beginning.  Furthermore, it's initialized by sscanf()ing the device
      name of backing_dev_info.  The mind boggles.
      
      Anyways, if blkg is visible under rcu lock, we *know* that the
      associated request_queue hasn't gone away yet and its bdi is
      registered and alive - blkg can't be created for request_queue which
      hasn't been fully initialized and it can't go away before blkg is
      removed.
      
      Let stat and conf read functions get device name from
      blkg->q->backing_dev_info.dev and pass it down to printing functions
      and remove blkg->dev.
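
      A sketch of the replacement lookup (illustrative):

              static const char *blkg_dev_name(struct blkio_group *blkg)
              {
                      /* a visible blkg implies a live, registered bdi */
                      if (blkg->q->backing_dev_info.dev)
                              return dev_name(blkg->q->backing_dev_info.dev);
                      return NULL;
              }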
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: kill blkio_policy_node · 4bfd482e
      Committed by Tejun Heo
      Now that blkcg configuration lives in blkg's, blkio_policy_node is no
      longer necessary.  Kill it.
      
      blkio_policy_parse_and_set() now fails if invoked for missing device
      and functions to print out configurations are updated to print from
      blkg's.
      
      cftype_blkg_same_policy() is dropped along with other policy functions
      for consistency.  Its one line is open coded in the only user -
      blkio_read_blkg_stats().
      
      -v2: Update to reflect the retry-on-bypass logic change of the
           previous patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: don't allow or retain configuration of missing devices · e56da7e2
      Committed by Tejun Heo
      blkcg is very peculiar in that it allows setting and remembering
      configurations for non-existent devices by maintaining separate data
      structures for configuration.
      
      This behavior is completely out of the usual norms and outright
      confusing; furthermore, it uses the dev_t number to match the
      configuration to devices, which is unpredictable to begin with and
      becomes completely unusable if EXT_DEVT is fully used.
      
      It is wholly unnecessary - we already have a fully functional
      userland mechanism to program devices being hotplugged, which has
      full access to device identification, connection topology and
      filesystem information.
      
      Add a new struct blkio_group_conf, which contains all blkcg
      configurations, to blkio_group, and let blkio_group carry all
      configurations.  A blkio_group can be created iff the associated
      device exists and is removed when the associated device goes away.
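
      A sketch of the consolidated configuration (field details assumed):

              struct blkio_group_conf {
                      unsigned int weight;    /* propio; 0 = blkcg default */
                      u64 bps[2];             /* throttle bytes/s, rd/wr */
                      unsigned int iops[2];   /* throttle ios/s, rd/wr */
              };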
      
      Note that, after this patch, all newly created blkg's will always have
      the default configuration (unlimited for throttling and blkcg's weight
      for propio).
      
      This patch makes blkio_policy_node meaningless but doesn't remove it.
      The next patch will.
      
      -v2: Updated to retry after short sleep if blkg lookup/creation failed
           due to the queue being temporarily bypassed as indicated by
           -EBUSY return.  Pointed out by Vivek.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: factor out blkio_group creation · cd1604fa
      Committed by Tejun Heo
      Currently both blk-throttle and cfq-iosched implement their own
      blkio_group creation code in throtl_get_tg() and cfq_get_cfqg().  This
      patch factors out the common code into blkg_lookup_create(), which
      returns ERR_PTR value so that transitional failures due to queue
      bypass can be distinguished from other failures.
      
      * New blkio_policy_ops methods blkio_alloc_group_fn() and
        blkio_link_group_fn() added.  Both are transitional and will be
        removed once the blkg management code is fully moved into
        blk-cgroup.c.
      
      * blkio_alloc_group_fn() allocates the policy-specific blkg, which is
        usually a larger data structure with blkg as the first entry, and
        initializes it.  Note that initialization of blkg proper, including
        percpu stats, is the responsibility of blk-cgroup proper.
      
        Note that default config (weight, bps...) initialization is done
        from this method; otherwise, we end up violating locking order
        between blkcg and q locks via blkcg_get_CONF() functions.
      
      * blkio_link_group_fn() is called under queue_lock and responsible for
        linking the blkg to the queue.  blkcg side is handled by blk-cgroup
        proper.
      
      * The common blkg creation function is named blkg_lookup_create() and
        blkiocg_lookup_group() is renamed to blkg_lookup() for consistency.
        Also, throtl / cfq related functions are similarly [re]named for
        consistency.
      
      This simplifies blkcg policy implementations and enables further
      cleanup.
      
      -v2: Vivek noticed that blkg_lookup_create() incorrectly tested
           blk_queue_dead() instead of blk_queue_bypass(), leading a user
           of the function to end up creating a new blkg on a bypassing
           queue.  This is a bug introduced while relocating bypass patches
           before this one.  Fixed.
      
      -v3: ERR_PTR patch folded into this one.  @for_root added to
           blkg_lookup_create() to allow creating root group on a bypassed
           queue during elevator switch.
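
      A sketch of the caller pattern the ERR_PTR return enables
      (illustrative):

              blkg = blkg_lookup_create(blkcg, q, plid, false);
              if (IS_ERR(blkg)) {
                      if (PTR_ERR(blkg) == -EBUSY) {
                              /* queue is bypassing: transient, retry */
                      } else {
                              /* hard failure, e.g. -ENOMEM: give up */
                      }
              }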
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: add blkio_policy[] array and allow one policy per policy ID · 035d10b2
      Committed by Tejun Heo
      Block cgroup policies are maintained in a linked list and,
      theoretically, multiple policies sharing the same policy ID are
      allowed.
      
      This patch temporarily restricts one policy per plid and adds
      blkio_policy[] array which indexes registered policy types by plid.
      Both the restriction and blkio_policy[] array are transitional and
      will be removed once API cleanup is complete.
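
      A sketch of the registry and the temporary restriction (locking
      details assumed):

              static struct blkio_policy_type *blkio_policy[BLKIO_NR_POLICIES];

              void blkio_policy_register(struct blkio_policy_type *pol)
              {
                      spin_lock(&blkio_list_lock);
                      BUG_ON(blkio_policy[pol->plid]); /* one per plid */
                      blkio_policy[pol->plid] = pol;
                      list_add_tail(&pol->list, &blkio_list);
                      spin_unlock(&blkio_list_lock);
              }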
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: use q and plid instead of opaque void * for blkio_group association · ca32aefc
      Committed by Tejun Heo
      blkio_group is an association between a block cgroup and a queue for
      a given policy.  Using an opaque void * for the association makes
      things confusing and hinders factoring of common code.  Use
      request_queue * and, if necessary, the policy id instead.
      
      This will help block cgroup API cleanup.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: shoot down blkio_groups on elevator switch · 72e06c25
      Committed by Tejun Heo
      Elevator switch may involve changes to blkcg policies.  Implement
      shoot down of blkio_groups.
      
      Combined with the previous bypass updates, the end goal is updating
      blkcg core such that it can ensure that blkcg's being affected become
      quiescent and don't have any per-blkg data hanging around before
      commencing any policy updates.  Until queues are made aware of the
      policies that apply to them, as an interim step, all per-policy blkg
      data will be shot down.
      
      * blk-throtl doesn't need this change as it can't be disabled for a
        live queue; however, update it anyway as the scheduled blkg
        unification requires this behavior change.  This means that
        blk-throtl configuration will be unnecessarily lost over elevator
        switch.  This oddity will be removed after blkcg learns to associate
        individual policies with request_queues.
      
      * blk-throtl doesn't shoot down root_tg.  This is to ease transition.
        A unified blkg will always have a persistent root group, and not
        shooting down root_tg for now eases the transition to that point by
        avoiding having to update td->root_tg; it is safe as blk-throtl can
        never be disabled.
      
      -v2: Vivek pointed out that the group list is not guaranteed to be
           empty on return from the clear function if it raced cgroup
           removal and lost.  Fix it by waiting a bit and retrying.  This
           kludge will soon be removed once locking is updated such that a
           blkg is never in a limbo state between the blkcg and
           request_queue locks.
      
           blk-throtl no longer shoots down root_tg to avoid breaking
           td->root_tg.
      
           Also, nest queue_lock inside blkio_list_lock, not the other way
           around, to avoid introducing a possible deadlock via the blkcg
           lock.
      
      -v3: blkcg_clear_queue() repositioned and renamed to
           blkg_destroy_all() to increase consistency with later changes.
           cfq_clear_queue() updated to check q->elevator before
           dereferencing it to avoid NULL dereference on not fully
           initialized queues (used by later change).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blkcg: make CONFIG_BLK_CGROUP bool · 32e380ae
      Committed by Tejun Heo
      Block cgroup core can be built as module; however, it isn't too useful
      as blk-throttle can only be built-in and cfq-iosched is usually the
      default built-in scheduler.  Scheduled blkcg cleanup requires calling
      into blkcg from block core.  To simplify that, disallow building blkcg
      as module by making CONFIG_BLK_CGROUP bool.
      
      If building blkcg core as module really matters, which I doubt, we can
      revisit it after blkcg API cleanup.
      
      -v2: Vivek pointed out that IOSCHED_CFQ was incorrectly updated to
           depend on BLK_CGROUP.  Fixed.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  3. 24 October 2011 (1 commit)
  4. 23 May 2011 (1 commit)
  5. 21 May 2011 (3 commits)
  6. 16 May 2011 (1 commit)
    • blk-throttle: Use task_subsys_state() to determine a task's blkio_cgroup · 70087dc3
      Committed by Vivek Goyal
      Currently we first map the task to its cgroup and then the cgroup to
      the blkio_cgroup.  There is a more direct way to get to the
      blkio_cgroup from a task, using task_subsys_state().  Use that.
      
      The real reason for the fix is that it also avoids a race in the
      generic cgroup code.  During remount/umount rebind_subsystems() is
      called, and it can do the following without any rcu protection:
      
      cgrp->subsys[i] = NULL;
      
      That means that if somebody got hold of the cgroup under rcu and then
      tried to do cgroup->subsys[] to get to the blkio_cgroup, it would get
      NULL, which is wrong.  I was running into this race condition with
      ltp running on an upstream-derived kernel, and it led to a crash.
      
      So ideally we should also fix the generic cgroup code to wait for an
      rcu grace period before setting the pointer to NULL.  Li Zefan is not
      very keen on introducing synchronize_wait() as he thinks it will slow
      down mount/remount/umount operations.
      
      So, for the time being at least, fix the kernel crash by taking a
      more direct route to the blkio_cgroup.
      
      One tester had reported a crash while running LTP on a derived
      kernel; with this fix the crash is no longer seen, and the test has
      been running for over 6 days.
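
      The direct route is essentially (sketch):

              struct blkio_cgroup *task_blkio_cgroup(struct task_struct *tsk)
              {
                      return container_of(task_subsys_state(tsk,
                                                            blkio_subsys_id),
                                          struct blkio_cgroup, css);
              }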
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  7. 12 March 2011 (1 commit)
  8. 09 March 2011 (1 commit)
  9. 02 October 2010 (1 commit)
  10. 01 October 2010 (1 commit)
    • blkio: Recalculate the throttled bio dispatch time upon throttle limit change · fe071437
      Committed by Vivek Goyal
      o Currently any cgroup throttle limit changes are processed
        asynchronously and the change does not take effect till a new bio
        is dispatched from the same group.
      
      o It might happen that a user sets a ridiculously low limit on
        throttling.  Say 1 byte per second on reads.  In such cases simple
        operations like mounting a disk can wait for a very long time.
      
      o Once a bio is throttled, there is no easy way to come out of that
        wait even if the user increases the read limit later.
      
      o This patch fixes that.  Now if a user changes the cgroup limits, we
        recalculate the bio dispatch time according to the new limits.
      
      o Can't take the queue lock under blkcg_lock, hence after the change
        I wake up the dispatch thread again, which recalculates the time.
        So there are some variables being synchronized across two threads
        without a lock, and I had to make use of barriers.  Hoping I have
        used the barriers correctly; any review of the memory barrier code
        especially will help.
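
      A sketch of the barrier pairing being described (names illustrative):

              /* writer, under blkcg_lock: publish limits, then the flag */
              tg->bps[READ] = new_read_bps;
              smp_wmb();
              tg->limits_changed = true;
              /* ... wake up the dispatch thread ... */

              /* reader, dispatch thread under queue_lock */
              if (tg->limits_changed) {
                      smp_rmb();      /* flag seen -> new limits seen */
                      tg->limits_changed = false;
                      /* recalculate the bio dispatch time */
              }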
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  11. 16 September 2010 (3 commits)
  12. 27 April 2010 (2 commits)
    • blk-cgroup: config options re-arrangement · afc24d49
      Committed by Vivek Goyal
      This patch fixes a few usability and configurability issues.
      
      o All the cgroup-based controller options are configurable from the
        "General Setup/Control Group Support/" menu; blkio is the only
        exception.  Hence make this option visible in the above menu and
        configurable from there, to bring it in line with the rest of the
        cgroup-based controllers.
      
      o Get rid of CONFIG_DEBUG_CFQ_IOSCHED.
      
        This option currently does two things.
      
        - Enables printing of cgroup paths in blktrace
        - Enables CONFIG_DEBUG_BLK_CGROUP, which in turn displays additional
          stat files in the cgroup.
      
        If we are using group scheduling, blktrace data is not of much use
        if cgroup information is not present.  To get this data, one
        currently also has to enable CONFIG_DEBUG_CFQ_IOSCHED, which in
        turn brings in the overhead of all the additional debug stat files,
        which is not desired.
      
        Hence, this patch moves printing of cgroup paths under
        CONFIG_CFQ_GROUP_IOSCHED.
      
        This allows us to get rid of CONFIG_DEBUG_CFQ_IOSCHED completely. Now all
        the debug stat files are controlled only by CONFIG_DEBUG_BLK_CGROUP which
        can be enabled through config menu.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Acked-by: Divyesh Shah <dpshah@google.com>
      Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • blkio: Fix another BUG_ON() crash due to cfqq movement across groups · e5ff082e
      Committed by Vivek Goyal
      o Once in a while, I was hitting a BUG_ON() in the blkio code.
        empty_time was assuming that upon slice expiry, the group can't
        already be marked empty (except for forced dispatch).
      
        But this assumption is broken if cfqq can move (group_isolation=0) across
        groups after receiving a request.
      
        I think most likely in this case we got a request in a cfqq and
        accounted the rq in one group; later, while adding the cfqq to the
        tree, we moved the queue to a different group which was already
        marked empty, and after dispatch from the slice we found the group
        already marked empty and raised the alarm.
      
        This patch does not error out if the group is already marked empty.
        This can introduce some empty_time stat error, but only in the case
        of group_isolation=0.  That is better than crashing.  In the case
        of group_isolation=1 we should still get the same stats as before
        this patch.
      
      [  222.308546] ------------[ cut here ]------------
      [  222.309311] kernel BUG at block/blk-cgroup.c:236!
      [  222.309311] invalid opcode: 0000 [#1] SMP
      [  222.309311] last sysfs file: /sys/devices/virtual/block/dm-3/queue/scheduler
      [  222.309311] CPU 1
      [  222.309311] Modules linked in: dm_round_robin dm_multipath qla2xxx scsi_transport_fc dm_zero dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
      [  222.309311]
      [  222.309311] Pid: 4780, comm: fio Not tainted 2.6.34-rc4-blkio-config #68 0A98h/HP xw8600 Workstation
      [  222.309311] RIP: 0010:[<ffffffff8121ad88>]  [<ffffffff8121ad88>] blkiocg_set_start_empty_time+0x50/0x83
      [  222.309311] RSP: 0018:ffff8800ba6e79f8  EFLAGS: 00010002
      [  222.309311] RAX: 0000000000000082 RBX: ffff8800a13b7990 RCX: ffff8800a13b7808
      [  222.309311] RDX: 0000000000002121 RSI: 0000000000000082 RDI: ffff8800a13b7a30
      [  222.309311] RBP: ffff8800ba6e7a18 R08: 0000000000000000 R09: 0000000000000001
      [  222.309311] R10: 000000000002f8c8 R11: ffff8800ba6e7ad8 R12: ffff8800a13b78ff
      [  222.309311] R13: ffff8800a13b7990 R14: 0000000000000001 R15: ffff8800a13b7808
      [  222.309311] FS:  00007f3beec476f0(0000) GS:ffff880001e40000(0000) knlGS:0000000000000000
      [  222.309311] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  222.309311] CR2: 000000000040e7f0 CR3: 00000000a12d5000 CR4: 00000000000006e0
      [  222.309311] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  222.309311] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [  222.309311] Process fio (pid: 4780, threadinfo ffff8800ba6e6000, task ffff8800b3d6bf00)
      [  222.309311] Stack:
      [  222.309311]  0000000000000001 ffff8800bab17a48 ffff8800bab17a48 ffff8800a13b7800
      [  222.309311] <0> ffff8800ba6e7a68 ffffffff8121da35 ffff880000000001 00ff8800ba5c5698
      [  222.309311] <0> ffff8800ba6e7a68 ffff8800a13b7800 0000000000000000 ffff8800bab17a48
      [  222.309311] Call Trace:
      [  222.309311]  [<ffffffff8121da35>] __cfq_slice_expired+0x2af/0x3ec
      [  222.309311]  [<ffffffff8121fd7b>] cfq_dispatch_requests+0x2c8/0x8e8
      [  222.309311]  [<ffffffff8120f1cd>] ? spin_unlock_irqrestore+0xe/0x10
      [  222.309311]  [<ffffffff8120fb1a>] ? blk_insert_cloned_request+0x70/0x7b
      [  222.309311]  [<ffffffff81210461>] blk_peek_request+0x191/0x1a7
      [  222.309311]  [<ffffffffa0002799>] dm_request_fn+0x38/0x14c [dm_mod]
      [  222.309311]  [<ffffffff810ae61f>] ? sync_page_killable+0x0/0x35
      [  222.309311]  [<ffffffff81210fd4>] __generic_unplug_device+0x32/0x37
      [  222.309311]  [<ffffffff81211274>] generic_unplug_device+0x2e/0x3c
      [  222.309311]  [<ffffffffa00011a6>] dm_unplug_all+0x42/0x5b [dm_mod]
      [  222.309311]  [<ffffffff8120ca37>] blk_unplug+0x29/0x2d
      [  222.309311]  [<ffffffff8120ca4d>] blk_backing_dev_unplug+0x12/0x14
      [  222.309311]  [<ffffffff81109a7a>] block_sync_page+0x35/0x39
      [  222.309311]  [<ffffffff810ae616>] sync_page+0x41/0x4a
      [  222.309311]  [<ffffffff810ae62d>] sync_page_killable+0xe/0x35
      [  222.309311]  [<ffffffff8158aa59>] __wait_on_bit_lock+0x46/0x8f
      [  222.309311]  [<ffffffff810ae4f5>] __lock_page_killable+0x66/0x6d
      [  222.309311]  [<ffffffff81056f9c>] ? wake_bit_function+0x0/0x33
      [  222.309311]  [<ffffffff810ae528>] lock_page_killable+0x2c/0x2e
      [  222.309311]  [<ffffffff810afbc5>] generic_file_aio_read+0x361/0x4f0
      [  222.309311]  [<ffffffff810ea044>] do_sync_read+0xcb/0x108
      [  222.309311]  [<ffffffff811e42f7>] ? security_file_permission+0x16/0x18
      [  222.309311]  [<ffffffff810ea6ab>] vfs_read+0xab/0x108
      [  222.309311]  [<ffffffff810ea7c8>] sys_read+0x4a/0x6e
      [  222.309311]  [<ffffffff81002b5b>] system_call_fastpath+0x16/0x1b
      [  222.309311] Code: 58 01 00 00 00 48 89 c6 75 0a 48 83 bb 60 01 00 00 00 74 09 48 8d bb a0 00 00 00 eb 35 41 fe cc 74 0d f6 83 c0 01 00 00 04 74 04 <0f> 0b eb fe 48 89 75 e8 e8 be e0 de ff 66 83 8b c0 01 00 00 04
      [  222.309311] RIP  [<ffffffff8121ad88>] blkiocg_set_start_empty_time+0x50/0x83
      [  222.309311]  RSP <ffff8800ba6e79f8>
      [  222.309311] ---[ end trace 32b4f71dffc15712 ]---
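
      The fix amounts to replacing the BUG_ON() shown in the trace with an
      early return; roughly (sketch):

              /* in blkiocg_set_start_empty_time(), under blkg->stats_lock */
              if (blkio_blkg_empty(stats)) {
                      /* a migrated cfqq may have marked us empty already */
                      spin_unlock_irqrestore(&blkg->stats_lock, flags);
                      return;
              }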
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Acked-by: Divyesh Shah <dpshah@google.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  13. 16 April 2010 (1 commit)
  14. 14 April 2010 (1 commit)
    • block: Update to io-controller stats · a11cdaa7
      Committed by Divyesh Shah
      Changelog from v1:
      o Call blkiocg_update_idle_time_stats() at cfq_rq_enqueued() instead of at
        dispatch time.
      
      Changelog from original patchset: (in response to Vivek Goyal's comments)
      o group blkiocg_update_blkio_group_dequeue_stats() with other DEBUG functions
      o rename blkiocg_update_set_active_queue_stats() to
        blkiocg_update_avg_queue_size_stats()
      o s/request/io/ in blkiocg_update_request_add_stats() and
        blkiocg_update_request_remove_stats()
      o Call cfq_del_timer() at request dispatch() instead of
        blkiocg_update_idle_time_stats()
      
      Signed-off-by: Divyesh Shah <dpshah@google.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  15. 13 April 2010 (1 commit)
    • io-controller: Add a new interface "weight_device" for IO-Controller · 34d0f179
      Committed by Gui Jianfeng
      Currently, the IO Controller makes use of blkio.weight to assign a
      weight for all devices.  Here a new user interface,
      "blkio.weight_device", is introduced to assign different weights to
      different devices.  blkio.weight becomes the default value for
      devices which are not configured via "blkio.weight_device".
      
      You can use the following format to assign a specific weight to a
      given device:
      #echo "major:minor weight" > blkio.weight_device
      
      major:minor represents the device number.
      
      And you can remove the weight for a given device as follows:
      #echo "major:minor 0" > blkio.weight_device
      
      V1->V2 changes:
      - use user interface "weight_device" instead of "policy" suggested by Vivek
      - rename some struct suggested by Vivek
      - rebase to 2.6-block "for-linus" branch
      - remove a useless list_empty check pointed out by Li Zefan
      - some trivial typo fix
      
      V2->V3 changes:
      - Move policy_*_node() functions up to get rid of forward declarations
      - rename related functions by adding prefix "blkio_"
      Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>