1. 30 Jun, 2013 (1 commit)
    • cgroup: CGRP_ROOT_SUBSYS_BOUND should also be ignored when mounting an existing hierarchy · c7ba8287
      Tejun Heo committed
      0ce6cba3 ("cgroup: CGRP_ROOT_SUBSYS_BOUND should be ignored when
      comparing mount options") only updated the remount path but
      CGRP_ROOT_SUBSYS_BOUND should also be ignored when comparing options
      while mounting an existing hierarchy.  As option mismatch triggers a
      warning but doesn't fail the mount without sane_behavior, this only
      triggers a spurious warning message.
      
      Fix it by only comparing CGRP_ROOT_OPTION_MASK bits when comparing new
      and existing root options.
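
      A minimal sketch of the masked comparison being described, using the
      flag names from this series (the exact cgroup_mount() call site is
      assumed):

          /* only user-settable option bits take part in the match check;
           * internal state such as CGRP_ROOT_SUBSYS_BOUND is ignored */
          if ((opts.flags ^ root->flags) & CGRP_ROOT_OPTION_MASK)
                  pr_warn("cgroup: new mount options do not match the existing superblock\n");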
      Signed-off-by: Tejun Heo <tj@kernel.org>
      c7ba8287
  2. 28 Jun, 2013 (2 commits)
    • cgroup: CGRP_ROOT_SUBSYS_BOUND should be ignored when comparing mount options · 0ce6cba3
      Tejun Heo committed
      1672d040 ("cgroup: fix cgroupfs_root early destruction path")
      introduced CGRP_ROOT_SUBSYS_BOUND which is used to mark completion of
      subsys binding on a new root; however, this broke remounts.
      cgroup_remount() doesn't allow changing root options via remount and
      CGRP_ROOT_SUBSYS_BOUND, which is set on all fully initialized roots,
      makes the function reject all remounts.
      
      Fix it by putting the options part in the lower 16 bits of root->flags
      and masking the comparisons.  While at it, make cgroup_remount() emit
      an error message explaining why it's rejecting a remount request, so
      that it's less of a mystery.
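
      Roughly, the resulting layout and remount check look like this (bit
      positions and the message wording are assumed):

          /* mount options occupy the low 16 bits of root->flags; internal
           * state bits such as CGRP_ROOT_SUBSYS_BOUND live above them */
          #define CGRP_ROOT_OPTION_MASK	((1 << 16) - 1)

          /* cgroup_remount(): only compare bits the user can actually set */
          if ((opts.flags ^ root->flags) & CGRP_ROOT_OPTION_MASK) {
                  pr_err("cgroup: option or name mismatch on remount\n");
                  ret = -EINVAL;
          }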
      Signed-off-by: Tejun Heo <tj@kernel.org>
      0ce6cba3
    • cgroup: fix deadlock on cgroup_mutex via drop_parsed_module_refcounts() · e2bd416f
      Tejun Heo committed
      eb178d06 ("cgroup: grab cgroup_mutex in
      drop_parsed_module_refcounts()") made drop_parsed_module_refcounts()
      grab cgroup_mutex to make lockdep assertion in for_each_subsys()
      happy.  Unfortunately, cgroup_remount() calls the function while
      holding cgroup_mutex in its failure path leading to the following
      deadlock.
      
      # mount -t cgroup -o remount,memory,blkio cgroup blkio
      
       cgroup: option changes via remount are deprecated (pid=525 comm=mount)
      
       =============================================
       [ INFO: possible recursive locking detected ]
       3.10.0-rc4-work+ #1 Not tainted
       ---------------------------------------------
       mount/525 is trying to acquire lock:
        (cgroup_mutex){+.+.+.}, at: [<ffffffff8110a3e1>] drop_parsed_module_refcounts+0x21/0xb0
      
       but task is already holding lock:
        (cgroup_mutex){+.+.+.}, at: [<ffffffff8110e4e1>] cgroup_remount+0x51/0x200
      
       other info that might help us debug this:
        Possible unsafe locking scenario:
      
      	CPU0
      	----
         lock(cgroup_mutex);
         lock(cgroup_mutex);
      
        *** DEADLOCK ***
      
        May be due to missing lock nesting notation
      
       4 locks held by mount/525:
        #0:  (&type->s_umount_key#30){+.+...}, at: [<ffffffff811e9a0d>] do_mount+0x2bd/0xa30
        #1:  (&sb->s_type->i_mutex_key#9){+.+.+.}, at: [<ffffffff8110e4d3>] cgroup_remount+0x43/0x200
        #2:  (cgroup_mutex){+.+.+.}, at: [<ffffffff8110e4e1>] cgroup_remount+0x51/0x200
        #3:  (cgroup_root_mutex){+.+.+.}, at: [<ffffffff8110e4ef>] cgroup_remount+0x5f/0x200
      
       stack backtrace:
       CPU: 2 PID: 525 Comm: mount Not tainted 3.10.0-rc4-work+ #1
       Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
        ffffffff829651f0 ffff88000ec2fc28 ffffffff81c24bb1 ffff88000ec2fce8
        ffffffff810f420d 0000000000000006 0000000000000001 0000000000000056
        ffff8800153b4640 ffff880000000000 ffffffff81c2e468 ffff8800153b4640
       Call Trace:
        [<ffffffff81c24bb1>] dump_stack+0x19/0x1b
        [<ffffffff810f420d>] __lock_acquire+0x15dd/0x1e60
        [<ffffffff810f531c>] lock_acquire+0x9c/0x1f0
        [<ffffffff81c2a805>] mutex_lock_nested+0x65/0x410
        [<ffffffff8110a3e1>] drop_parsed_module_refcounts+0x21/0xb0
        [<ffffffff8110e63e>] cgroup_remount+0x1ae/0x200
        [<ffffffff811c9bb2>] do_remount_sb+0x82/0x190
        [<ffffffff811e9d41>] do_mount+0x5f1/0xa30
        [<ffffffff811ea203>] SyS_mount+0x83/0xc0
        [<ffffffff81c2fb82>] system_call_fastpath+0x16/0x1b
      
      Fix it by moving the drop_parsed_module_refcounts() invocation outside
      cgroup_mutex.
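
      In sketch form, the failure path now drops the module references only
      after the locks are released (the do_remount_work() helper below is a
      hypothetical stand-in for the body of cgroup_remount()):

          mutex_lock(&cgroup_mutex);
          mutex_lock(&cgroup_root_mutex);
          ret = do_remount_work(sb, &opts);
          mutex_unlock(&cgroup_root_mutex);
          mutex_unlock(&cgroup_mutex);
          if (ret)
                  drop_parsed_module_refcounts(opts.subsys_mask);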
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e2bd416f
  3. 27 Jun, 2013 (4 commits)
    • cgroup: always use RCU accessors for protected accesses · a4ea1cc9
      Tejun Heo committed
      kernel/cgroup.c still has places where an RCU pointer is set and
      accessed directly without going through RCU_INIT_POINTER() or
      rcu_dereference_protected().  They're all properly protected accesses
      so nothing is broken but it leads to spurious sparse RCU address space
      warnings.
      
      Substitute direct accesses with RCU_INIT_POINTER() and
      rcu_dereference_protected().  Note that %true is specified as the
      extra condition for all dereference updates.  This isn't ideal as all
      it does is suppress warnings without actually policing
      synchronization rules; however, most are scheduled to be removed
      pretty soon along with css_id itself, so no reason to be more
      elaborate.
      
      Combined with the previous changes, this removes all RCU related
      sparse warnings from cgroup.
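
      The two accessor forms in question, as used here (field names assumed):

          /* publishing a pointer that is otherwise protected by cgroup_mutex */
          RCU_INIT_POINTER(cgrp->subsys[ss->subsys_id], css);

          /* reading it back under the same protection; the %true condition
           * only silences sparse, it doesn't document the locking rule */
          css = rcu_dereference_protected(cgrp->subsys[ss->subsys_id], true);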
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Acked-by: Li Zefan <lizefan@huawei.com>
      a4ea1cc9
    • cgroup: fix RCU accesses around task->cgroups · a8ad805c
      Tejun Heo committed
      There are several places in kernel/cgroup.c where task->cgroups is
      accessed and modified without going through proper RCU accessors.
      None is broken as they're all lock protected accesses; however, this
      still triggers sparse RCU address space warnings.
      
      * Consistently use task_css_set() for task->cgroups dereferencing.
      
      * Use RCU_INIT_POINTER() to clear task->cgroups to &init_css_set on
        exit.
      
      * Remove unnecessary rcu_dereference_raw() from cset->subsys[]
        dereference in cgroup_exit().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Acked-by: Li Zefan <lizefan@huawei.com>
      a8ad805c
    • cgroup: grab cgroup_mutex in drop_parsed_module_refcounts() · eb178d06
      Tejun Heo committed
      This isn't strictly necessary as all subsystems specified in
      @subsys_mask are guaranteed to be pinned; however, it does spuriously
      trigger a lockdep warning.  Let's grab cgroup_mutex around it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      eb178d06
    • cgroup: fix cgroupfs_root early destruction path · 1672d040
      Tejun Heo committed
      cgroupfs_root used to have ->actual_subsys_mask in addition to
      ->subsys_mask.  a8a648c4 ("cgroup: remove
      cgroup->actual_subsys_mask") removed it noting that the subsys_mask is
      essentially temporary and doesn't belong in cgroupfs_root; however,
      the patch made it impossible to tell whether a cgroupfs_root actually
      has the subsystems bound or just has the bits set, leading to the
      following BUG when trying to mount with subsystems which are already
      mounted elsewhere.
      
       kernel BUG at kernel/cgroup.c:1038!
       invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
       ...
       CPU: 1 PID: 7973 Comm: mount Tainted: G        W    3.10.0-rc7-next-20130625-sasha-00011-g1c1dc0e #1105
       task: ffff880fc0ae8000 ti: ffff880fc0b9a000 task.ti: ffff880fc0b9a000
       RIP: 0010:[<ffffffff81249b29>]  [<ffffffff81249b29>] rebind_subsystems+0x409/0x5f0
       ...
       Call Trace:
        [<ffffffff8124bd4f>] cgroup_kill_sb+0xff/0x210
        [<ffffffff813d21af>] deactivate_locked_super+0x4f/0x90
        [<ffffffff8124f3b3>] cgroup_mount+0x673/0x6e0
        [<ffffffff81257169>] cpuset_mount+0xd9/0x110
        [<ffffffff813d2580>] mount_fs+0xb0/0x2d0
        [<ffffffff81404afd>] vfs_kern_mount+0xbd/0x180
        [<ffffffff814070b5>] do_new_mount+0x145/0x2c0
        [<ffffffff814085d6>] do_mount+0x356/0x3c0
        [<ffffffff8140873d>] SyS_mount+0xfd/0x140
        [<ffffffff854eb600>] tracesys+0xdd/0xe2
      
      We still want rebind_subsystems() to take added/removed masks, so
      let's fix it by marking whether a cgroupfs_root has finished binding
      or not.  Also, document what's going on around ->subsys_mask
      initialization so that similar mistakes aren't repeated.
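
      A sketch of the marking (the flag name is confirmed by the follow-up
      commits; the exact check sites are assumed):

          /* set only once rebind_subsystems() has actually bound everything */
          root->flags |= CGRP_ROOT_SUBSYS_BOUND;

          /* early destruction / kill_sb: only unbind if binding completed */
          if (root->flags & CGRP_ROOT_SUBSYS_BOUND)
                  rebind_subsystems(root, 0, root->subsys_mask);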
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Acked-by: Li Zefan <lizefan@huawei.com>
      1672d040
  4. 26 Jun, 2013 (3 commits)
    • cgroup: reserve ID 0 for dummy_root and 1 for unified hierarchy · fc76df70
      Tejun Heo committed
      Before 1a574231 ("cgroup: make hierarchy_id use cyclic idr"),
      hierarchy IDs were allocated from 0.  As the dummy hierarchy was
      always the one first initialized, it got assigned 0 and all other
      hierarchies from 1.  The patch accidentally changed the minimum
      usable ID to 2.
      
      Let's restore ID 0 for dummy_root and while at it reserve 1 for
      unified hierarchy.
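
      A sketch of the resulting allocation scheme (idr name and call site
      assumed):

          /* 0: dummy_root, 1: reserved for the unified hierarchy,
           * dynamically created hierarchies start at 2 */
          id = idr_alloc_cyclic(&cgroup_hierarchy_idr, root, 2, 0, GFP_KERNEL);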
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Cc: stable@vger.kernel.org
      fc76df70
    • cgroup: implement for_each_[builtin_]subsys() · 30159ec7
      Tejun Heo committed
      There are quite a few places where all loaded [builtin] subsys are
      iterated.  Implement for_each_[builtin_]subsys() and replace manual
      iterations with those to simplify those places a bit.  The new
      iterators automatically skip NULL subsystems.  This shouldn't cause
      any functional difference.
      
      Iteration loops which scan all subsystems and then skip modular
      ones explicitly are converted to use for_each_builtin_subsys().
      
      While at it, reorder variable declarations and adjust whitespaces a
      bit in the affected functions.
      
      v2: Add lockdep_assert_held() in for_each_subsys() and add comments
          about synchronization as suggested by Li.
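
      A simplified sketch of the iterator shape (the real macro also asserts
      cgroup_mutex via lockdep, as noted in the v2 remark above):

          /* walk every registered subsystem slot, skipping NULL entries */
          #define for_each_subsys(ss, i)                                  \
                  for ((i) = 0; (i) < CGROUP_SUBSYS_COUNT; (i)++)         \
                          if (!((ss) = cgroup_subsys[(i)])) { } else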
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      30159ec7
    • cgroup: move init_css_set initialization inside cgroup_mutex · 82fe9b0d
      Tejun Heo committed
      cgroup_init() was doing init_css_set initialization outside
      cgroup_mutex, which is fine but we want to add lockdep annotation on
      subsystem iterations and cgroup_init() will trigger it spuriously.
      Move init_css_set initialization inside cgroup_mutex.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      82fe9b0d
  5. 25 Jun, 2013 (4 commits)
    • cgroup: s/for_each_subsys()/for_each_root_subsys()/ · 5549c497
      Tejun Heo committed
      for_each_subsys() walks over subsystems attached to a hierarchy and
      we're gonna add iterators which walk over all available subsystems.
      Rename for_each_subsys() to for_each_root_subsys() so that it's more
      appropriately named and for_each_subsys() can be used to iterate all
      subsystems.
      
      While at it, remove unnecessary underbar prefix from macro arguments,
      put them inside parentheses, and adjust indentation for the two
      for_each_*() macros.
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      5549c497
    • cgroup: clean up find_css_set() and friends · b326f9d0
      Tejun Heo committed
      find_css_set() passes uninitialized on-stack template[] array to
      find_existing_css_set() which sets the entries for all subsystems.
      Passing around an uninitialized array is a bit icky and we want to
      introduce an iterator which only iterates loaded subsystems.  Let's
      initialize it on definition.
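
      i.e. something along the lines of (element type assumed):

          struct cgroup_subsys_state *template[CGROUP_SUBSYS_COUNT] = { };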
      
      While at it, also make the following cosmetic cleanups.
      
      * Convert to proper /** comments.
      
      * Reorder variable declarations.
      
      * Replace comment on synchronization with lockdep_assert_held().
      
      This patch doesn't make any functional differences.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      b326f9d0
    • cgroup: remove cgroup->actual_subsys_mask · a8a648c4
      Tejun Heo committed
      cgroup curiously has two subsystem masks, ->subsys_mask and
      ->actual_subsys_mask.  The latter only exists because the new target
      subsys_mask is passed into rebind_subsystems() via @root->subsys_mask.
      rebind_subsystems() needs to know what the current mask is to decide
      how to reach the target mask so ->actual_subsys_mask is used as the
      temp location to remember the current state.
      
      Adding a temporary field to a permanent data structure is rather silly
      and can be misleading.  Update rebind_subsystems() to take @added_mask
      and @removed_mask instead and remove @root->actual_subsys_mask.
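
      In sketch form, the caller now computes the deltas itself (variable
      names assumed):

          unsigned long added_mask = opts.subsys_mask & ~root->subsys_mask;
          unsigned long removed_mask = root->subsys_mask & ~opts.subsys_mask;

          ret = rebind_subsystems(root, added_mask, removed_mask);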
      
      This patch shouldn't introduce any behavior changes.
      
      v2: Comment and description updated as suggested by Li.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      a8a648c4
    • cgroup: prefix global variables with "cgroup_" · 9871bf95
      Tejun Heo committed
      Global variable names in kernel/cgroup.c are asking for trouble -
      subsys, roots, rootnode and so on.  Rename them to have "cgroup_"
      prefix.
      
      * s/subsys/cgroup_subsys/
      
      * s/rootnode/cgroup_dummy_root/
      
      * s/dummytop/cgroup_dummy_top/
      
      * s/roots/cgroup_roots/
      
      * s/root_count/cgroup_root_count/
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      9871bf95
  6. 19 Jun, 2013 (7 commits)
  7. 18 Jun, 2013 (1 commit)
    • cgroup: disallow rename(2) if sane_behavior · 6db8e85c
      Tejun Heo committed
      cgroup's rename(2) isn't a proper migration implementation - it can't
      move the cgroup to a different parent in the hierarchy.  All it can do
      is swapping the name string for that cgroup.  This isn't useful and
      can mislead users to think that cgroup supports proper cgroup-level
      migration.  Disallow rename(2) if sane_behavior.
      
      v2: Fail with -EPERM instead of -EINVAL so that it matches the vfs
          return value when ->rename is not implemented as suggested by Li.
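
      i.e., roughly (helper names assumed):

          /* in cgroup_rename(), before doing any work */
          if (cgroup_sane_behavior(cgrp))
                  return -EPERM;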
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      6db8e85c
  8. 14 Jun, 2013 (10 commits)
    • cgroup: use percpu refcnt for cgroup_subsys_states · d3daf28d
      Tejun Heo committed
      A css (cgroup_subsys_state) is how each cgroup is represented to a
      controller.  As such, it can be used in hot paths across the various
      subsystems different controllers are associated with.
      
      One of the common operations is reference counting, which up until now
      has been implemented using a global atomic counter and can have
      significant adverse impact on scalability.  For example, css refcnt
      can be gotten and put multiple times by blkcg for each IO request.
      For high-IOPS configurations which try to do as much per-cpu as
      possible, the global frequent refcnting can be very expensive.
      
      In general, given the various and hugely diverse paths css's end up
      being used from, we need to make it cheap and highly scalable.  In its
      usage, css refcnting isn't very different from module refcnting.
      
      This patch converts css refcnting to use the recently added
      percpu_ref.  css_get/tryget/put() directly maps to the matching
      percpu_ref operations and the deactivation logic is no longer
      necessary as percpu_ref already has refcnt killing.
      
      The only complication is that as the refcnt is per-cpu,
      percpu_ref_kill() in itself doesn't ensure that further tryget
      operations will fail, which we need to guarantee before invoking
      ->css_offline()'s.  This is resolved by collecting kill confirmation
      using percpu_ref_kill_and_confirm() and initiating the offline phase
      of destruction after all css refcnt's are confirmed to be seen as
      killed on all CPUs.  The previous patches already split destruction
      into two phases, so percpu_ref_kill_and_confirm() can be hooked up
      easily.
      
      This patch removes css_refcnt() which is used for rcu dereference
      sanity check in css_id().  While we can add a percpu refcnt API to ask
      the same question, css_id() itself is scheduled to be removed fairly
      soon, so let's not bother with it.  Just drop the sanity check and use
      rcu_dereference_raw() instead.
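
      A sketch of how this maps onto the percpu_ref API (callback and field
      names assumed; error handling elided):

          /* cgroup_create(): one per-cpu refcount per css */
          err = percpu_ref_init(&css->refcnt, css_release);

          /* destruction: switch the ref to "dying" and start ->css_offline()
           * only after every CPU has observed the kill */
          percpu_ref_kill_and_confirm(&css->refcnt, css_killed_ref_fn);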
      
      v2: - init_cgroup_css() was calling percpu_ref_init() without checking
            the return value.  This causes two problems - the obvious lack
            of error handling and percpu_ref_init() being called from
            cgroup_init_subsys() before the allocators are up, which
            triggers warnings but doesn't cause actual problems as the
            refcnt isn't used for roots anyway.  Fix both by moving
            percpu_ref_init() to cgroup_create().
      
          - The base references were put too early by
            percpu_ref_kill_and_confirm() and cgroup_offline_fn() put the
            refs one extra time.  This wasn't noticeable because css's go
            through another RCU grace period before being freed.  Update
            cgroup_destroy_locked() to grab an extra reference before
            killing the refcnts.  This problem was noticed by Kent.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Kent Overstreet <koverstreet@google.com>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: "Alasdair G. Kergon" <agk@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Glauber Costa <glommer@gmail.com>
      d3daf28d
    • cgroup: split cgroup destruction into two steps · ea15f8cc
      Tejun Heo committed
      Split cgroup_destroy_locked() into two steps and put the latter half
      into cgroup_offline_fn() which is executed from a work item.  The
      latter half is responsible for offlining the css's, removing the
      cgroup from internal lists, and propagating release notification to
      the parent.  The separation is to allow using percpu refcnt for css.
      
      Note that this allows for other cgroup operations to happen between
      the first and second halves of destruction, including creating a new
      cgroup with the same name.  As the target cgroup is marked DEAD in the
      first half and cgroup internals don't care about the names of cgroups,
      this should be fine.  A comment explaining this will be added by the
      next patch which implements the actual percpu refcnting.
      
      As RCU freeing is guaranteed to happen after the second step of
      destruction, we can use the same work item for both.  This patch
      renames cgroup->free_work to ->destroy_work and uses it for both
      purposes.  INIT_WORK() is now performed right before queueing the work
      item.
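
      i.e., roughly (using the names from the description above):

          /* first half: mark the cgroup dead, then punt the rest to a work item */
          INIT_WORK(&cgrp->destroy_work, cgroup_offline_fn);
          schedule_work(&cgrp->destroy_work);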
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      ea15f8cc
    • cgroup: reorder the operations in cgroup_destroy_locked() · 455050d2
      Tejun Heo committed
      This patch reorders the operations in cgroup_destroy_locked() such
      that the userland visible parts happen before css offlining and
      removal from the ->sibling list.  This will be used to make css use
      percpu refcnt.
      
      While at it, split out CGRP_DEAD related comment from the refcnt
      deactivation one and correct / clarify how different guarantees are
      met.
      
      While this patch changes the specific order of operations, it
      shouldn't cause any noticeable behavior difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      455050d2
    • cgroup: remove cgroup->count and use · 6f3d828f
      Tejun Heo committed
      cgroup->count tracks the number of css_sets associated with the cgroup
      and is used only to verify that no css_set is associated when the cgroup
      is being destroyed.  It's superfluous as the destruction path can
      simply check whether cgroup->cset_links is empty instead.
      
      Drop cgroup->count and check ->cset_links directly from
      cgroup_destroy_locked().
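
      The check then becomes, in sketch form (locking as described in the
      related commit below):

          bool empty;

          read_lock(&css_set_lock);
          empty = list_empty(&cgrp->cset_links);
          read_unlock(&css_set_lock);
          if (!empty)
                  return -EBUSY;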
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      6f3d828f
    • cgroup: drop unnecessary RCU dancing from __put_css_set() · ddd69148
      Tejun Heo committed
      __put_css_set() does RCU read access on @cgrp across dropping
      @cgrp->count so that it can continue accessing @cgrp even if the count
      reached zero and destruction of the cgroup commenced.  Given that both
      sides - __css_put() and cgroup_destroy_locked() - are cold paths, this
      is unnecessary.  Just making cgroup_destroy_locked() grab css_set_lock
      while checking @cgrp->count is enough.
      
      Remove the RCU read locking from __put_css_set() and make
      cgroup_destroy_locked() read-lock css_set_lock when checking
      @cgrp->count.  This will also allow removing @cgrp->count.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      ddd69148
    • cgroup: rename CGRP_REMOVED to CGRP_DEAD · 54766d4a
      Tejun Heo committed
      We will add another flag indicating that the cgroup is in the process
      of being killed.  REMOVING / REMOVED is more difficult to distinguish
      and cgroup_is_removing()/cgroup_is_removed() are a bit awkward.  Also,
      later percpu_ref usage will involve "kill"ing the refcnt.
      
       s/CGRP_REMOVED/CGRP_DEAD/
       s/cgroup_is_removed()/cgroup_is_dead()/
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      54766d4a
    • cgroup: use kzalloc() instead of kmalloc() · f4f4be2b
      Tejun Heo committed
      There's no point in using kmalloc() instead of the clearing variant
      for trivial stuff.  We can live dangerously elsewhere.  Use kzalloc()
      instead and drop 0 inits.
      
      While at it, do trivial code reorganization in cgroup_file_open().
      
      This patch doesn't introduce any functional changes.
      
      v2: I was caught in the very distant past where list_del() didn't
          poison and the initial version converted list_del()s to
          list_del_init()s too.  Li and Kent took me out of the stasis
          chamber.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Kent Overstreet <koverstreet@google.com>
      Acked-by: Li Zefan <lizefan@huawei.com>
      f4f4be2b
    • cgroup: bring some sanity to naming around cg_cgroup_link · 69d0206c
      Tejun Heo committed
      cgroups and css_sets are mapped M:N and this M:N mapping is
      represented by struct cg_cgroup_link which forms linked lists on both
      sides.  The naming around this mapping is already confusing and struct
      cg_cgroup_link exacerbates the situation quite a bit.
      
      From the cgroup side, it starts at ->css_sets and runs through
      ->cgrp_link_list.  From the css_set side, it starts at ->cg_links and
      runs through ->cg_link_list.  This is rather backwards, as
      cgrp_link_list is used to iterate css_sets and cg_link_list cgroups.
      Also, this is the only place which is still using the confusing "cg"
      for css_sets.  This patch cleans it up a bit.
      
      * s/cgroup->css_sets/cgroup->cset_links/
        s/css_set->cg_links/css_set->cgrp_links/
        s/cgroup_iter->cg_link/cgroup_iter->cset_link/
      
      * s/cg_cgroup_link/cgrp_cset_link/
      
      * s/cgrp_cset_link->cg/cgrp_cset_link->cset/
        s/cgrp_cset_link->cgrp_link_list/cgrp_cset_link->cset_link/
        s/cgrp_cset_link->cg_link_list/cgrp_cset_link->cgrp_link/
      
      * s/init_css_set_link/init_cgrp_cset_link/
        s/free_cg_links/free_cgrp_cset_links/
        s/allocate_cg_links/allocate_cgrp_cset_links/
      
      * s/cgl[12]/link[12]/ in compare_css_sets()
      
      * s/saved_link/tmp_link/, s/tmp/tmp_links/, and a couple of similar
        adjustments.
      
      * Comment and whitespace adjustments.
      
      After the changes, we have
      
      	list_for_each_entry(link, &cont->cset_links, cset_link) {
      		struct css_set *cset = link->cset;
      
      instead of
      
      	list_for_each_entry(link, &cont->css_sets, cgrp_link_list) {
      		struct css_set *cset = link->cg;
      
      This patch is purely cosmetic.
      
      v2: Fix broken sentences in the patch description.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      69d0206c
    • cgroup: consistently use @cset for struct css_set variables · 5abb8855
      Tejun Heo committed
      cgroup.c uses @cg for most struct css_set variables, which in itself
      could be a bit confusing, but made much worse by the fact that there
      are places which use @cg for struct cgroup variables.
      compare_css_sets() epitomizes this confusion - @[old_]cg are struct
      css_set while @cg[12] are struct cgroup.
      
      It's not like the whole deal with cgroup, css_set and cg_cgroup_link
      isn't already confusing enough.  Let's give it some sanity by
      uniformly using @cset for all struct css_set variables.
      
      * s/cg/cset/ for all css_set variables.
      
      * s/oldcg/old_cset/ s/oldcgrp/old_cgrp/.  The same for the ones
        prefixed with "new".
      
      * s/cg/cgrp/ for cgroup variables in compare_css_sets().
      
      * s/css/cset/ for the cgroup variable in task_cgroup_from_root().
      
      * Whitespace adjustments.
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      5abb8855
    • cgroup: remove now unused css_depth() · 3fc3db9a
      Tejun Heo committed
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      3fc3db9a
  9. 06 Jun, 2013 (3 commits)
    • cgroup: clean up the cftype array for the base cgroup files · d5c56ced
      Tejun Heo committed
      * Rename it from files[] (really?) to cgroup_base_files[].
      
      * Drop CGROUP_FILE_GENERIC_PREFIX which was defined as "cgroup." and
        used inconsistently.  Just use "cgroup." directly.
      
      * Collect insane files at the end.  Note that only the insane ones are
        missing "cgroup." prefix.
      
      This patch doesn't introduce any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      d5c56ced
    • cgroup: mark "notify_on_release" and "release_agent" cgroup files insane · cc5943a7
      Tejun Heo committed
      The empty cgroup notification mechanism currently implemented in
      cgroup is tragically outdated.  Forking and execing a userland process
      stopped being a viable notification mechanism more than a decade ago.
      We're gonna have a saner mechanism.  Let's make it clear that this
      abomination is going away.
      
      Mark "notify_on_release" and "release_agent" with CFTYPE_INSANE.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      cc5943a7
    • cgroup: mark "tasks" cgroup file as insane · f12dc020
      Tejun Heo committed
      Some resources controlled by cgroup aren't per-task, and cgroup core
      allowing threads of a single thread_group to be in different cgroups
      forced memcg to explicitly find the group leader and use it.  This is
      gonna be nasty when transitioning to unified hierarchy and in general
      we don't want and won't support granularity finer than processes.
      
      Mark "tasks" with CFTYPE_INSANE.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: cgroups@vger.kernel.org
      Cc: Vivek Goyal <vgoyal@redhat.com>
      f12dc020
  10. 29 May, 2013 (1 commit)
    • cgroup: warn about mismatching options of a new mount of an existing hierarchy · 2a0ff3fb
      Jeff Liu committed
      With the new __DEVEL__sane_behavior mount option, if the live root
      cgroup was mounted without xattr support, a new mount with the xattr
      option is rejected, which is fine by design.  However, if the root
      cgroup was not mounted with __DEVEL__sane_behavior, a new mount with
      the xattr option succeeds even though the EA functionality doesn't
      work as expected; setting attributes under either cgroup then fails
      with ENOTSUPP.  e.g.
      
      setfattr: /cgroup2/test: Operation not supported
      
      Instead of staying silent in this case, it's better to log a warning.
      That helps the user understand what is going on behind the scenes, and
      it is essentially an improvement that doesn't break backward
      compatibility.

      With this fix, the mount attempt above keeps working as usual, but the
      following line can be found in the system log:
      
      [ ...] cgroup: new mount options do not match the existing superblock
      
      tj: minor formatting / message updates.
      Signed-off-by: Jie Liu <jeff.liu@oracle.com>
      Reported-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      2a0ff3fb
  11. 24 May, 2013 (4 commits)
    • cgroup: update iterators to use cgroup_next_sibling() · 75501a6d
      Tejun Heo committed
      This patch converts cgroup_for_each_child(),
      cgroup_next_descendant_pre/post() and thus
      cgroup_for_each_descendant_pre/post() to use cgroup_next_sibling()
      instead of manually dereferencing ->sibling.next.
      
      The only reason the iterators couldn't allow dropping RCU read lock
      while iteration is in progress was because they couldn't determine the
      next sibling safely once RCU read lock is dropped.  Using
      cgroup_next_sibling() removes that problem and enables all iterators
      to allow dropping RCU read lock in the middle.  Comments are updated
      accordingly.
      
      This makes the iterators easier to use and will simplify controllers.
      
      Note that @cgroup argument is renamed to @cgrp in
      cgroup_for_each_child() because it conflicts with "struct cgroup" used
      in the new macro body.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      75501a6d
    • cgroup: add cgroup->serial_nr and implement cgroup_next_sibling() · 53fa5261
      Tejun Heo committed
      Currently, there's no easy way to find out the next sibling cgroup
      unless it's known that the current cgroup is accessed from the
      parent's children list in a single RCU critical section.  This in turn
      forces all iterators to require whole iteration to be enclosed in a
      single RCU critical section, which sometimes is too restrictive.  This
      patch implements cgroup_next_sibling() which can reliably determine
      the next sibling regardless of the state of the current cgroup as long
      as it's accessible.
      
      It currently is impossible to determine the next sibling after
      dropping RCU read lock because the cgroup being iterated could be
      removed anytime and if RCU read lock is dropped, nothing guarantees
      its ->sibling.next pointer is accessible.  A removed cgroup would
      continue to point to its next sibling for RCU accesses but stop
      receiving updates from the sibling.  IOW, the next sibling could be
      removed and then complete its grace period while RCU read lock is
      dropped, making it unsafe to dereference ->sibling.next after dropping
      and re-acquiring RCU read lock.
      
      This can be solved by adding a way to traverse to the next sibling
      without dereferencing ->sibling.next.  This patch adds a monotonically
      increasing cgroup serial number, cgroup->serial_nr, which guarantees
      that all cgroup->children lists are kept in increasing serial_nr
      order.  A new function, cgroup_next_sibling(), is implemented, which,
      if CGRP_REMOVED is not set on the current cgroup, follows
      ->sibling.next; otherwise, traverses the parent's ->children list
      until it sees a sibling with higher ->serial_nr.
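
      A sketch of the traversal just described (field names from this and the
      neighbouring commits; list/RCU helpers as in the kernel):

          struct cgroup *cgroup_next_sibling(struct cgroup *pos)
          {
                  struct cgroup *next;

                  if (likely(!cgroup_is_removed(pos))) {
                          /* still on the parent's ->children list: follow it */
                          next = list_entry_rcu(pos->sibling.next,
                                                struct cgroup, sibling);
                          if (&next->sibling != &pos->parent->children)
                                  return next;
                          return NULL;
                  }

                  /* pos was removed: find the first sibling with a higher
                   * serial number on the parent's list */
                  list_for_each_entry_rcu(next, &pos->parent->children, sibling)
                          if (next->serial_nr > pos->serial_nr)
                                  return next;
                  return NULL;
          }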
      
      This allows the function to always return the next sibling regardless
      of the state of the current cgroup without adding overhead in the fast
      path.
      
      Further patches will update the iterators to use cgroup_next_sibling()
      so that they allow dropping RCU read lock and blocking while iteration
      is in progress which in turn will be used to simplify controllers.
      
      v2: Typo fix as per Serge.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Serge E. Hallyn <serge.hallyn@ubuntu.com>
      53fa5261
    • cgroup: make cgroup_is_removed() static · bdc7119f
      Tejun Heo committed
      cgroup_is_removed() no longer has external users and it shouldn't grow
      any - controllers should deal with cgroup_subsys_state on/offline
      state instead of cgroup removal state.  Make it static.
      
      While at it, make it return bool.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      bdc7119f
    • cgroup: fix a subtle bug in descendant pre-order walk · 7805d000
      Tejun Heo committed
      When cgroup_next_descendant_pre() initiates a walk, it checks whether
      the subtree root has any children and, if not, returns NULL.
      Later code assumes that the subtree isn't empty.  This is broken
      because the subtree may become empty in between, which can lead to the
      traversal escaping the subtree by walking to the sibling of the
      subtree root.
      
      There's no reason to have the early exit path.  Remove it along with
      the later assumption that the subtree isn't empty.  This simplifies
      the code a bit and fixes the subtle bug.
      
      While at it, fix the comment of cgroup_for_each_descendant_pre() which
      was incorrectly referring to ->css_offline() instead of
      ->css_online().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: stable@vger.kernel.org
      7805d000