1. 19 Aug 2015, 1 commit
    • cgroup: don't print subsystems for the default hierarchy · d98817d4
      Committed by Tejun Heo
      It doesn't make sense to print subsystems on mount option or
      /proc/PID/cgroup for the default hierarchy.
      
      * cgroup.controllers file at the root of the default hierarchy lists
        the currently attached controllers.
      
      * The default hierarchy is the catch-all for unmounted subsystems.
      
      * The default hierarchy doesn't accept any mount options.
      
      Suppress subsystem printing on mount options and /proc/PID/cgroup for
      the default hierarchy.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: cgroups@vger.kernel.org
  2. 06 Aug 2015, 1 commit
  3. 01 Jul 2015, 1 commit
  4. 19 Jun 2015, 2 commits
    • cgroup: require write perm on common ancestor when moving processes on the default hierarchy · 187fe840
      Committed by Tejun Heo
      On traditional hierarchies, if a task has write access to "tasks" or
      "cgroup.procs" file of a cgroup and its euid agrees with the target,
      it can move the target to the cgroup; however, consider the following
      scenario.  The owner of each cgroup is in the parentheses.
      
       R (root) - 0 (root) - 00 (user1) - 000 (user1)
                |                       \ 001 (user1)
                \ 1 (root) - 10 (user1)
      
      The subtrees of 00 and 10 are delegated to user1; however, while both
      subtrees may belong to the same user, it is clear that the two
      subtrees are to be isolated - they're under completely separate
      resource limits imposed by 0 and 1, respectively.  Note that 0 and 1
      aren't strictly necessary but added to ease illustrating the issue.
      
      If user1 is allowed to move processes between the two subtrees, the
      intention of the hierarchy - keeping a given group of processes under
      a subtree with certain resource restrictions while delegating
      management of the subtree - can be circumvented by user1.
      
      This happens because the migration permission check doesn't
      consider the hierarchical nature of cgroups.  To fix the issue,
      this patch adds an extra permission requirement when userland
      tries to migrate a process in the default hierarchy - the issuing
      task must have write access to the "cgroup.procs" file of the
      common ancestor of the source and destination cgroups in addition
      to the destination's.
      
      Conceptually, the issuer must be able to move the target process from
      the source cgroup to the common ancestor of source and destination
      cgroups and then to the destination.  As long as delegation is done in
      a proper top-down way, this guarantees that a delegatee can't smuggle
      processes across disjoint delegation domains.
      
      The next patch will add documentation on the delegation model on the
      default hierarchy.
      
      v2: Fixed missing !ret test.  Spotted by Li Zefan.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Li Zefan <lizefan@huawei.com>
    • cgroup: separate out cgroup_procs_write_permission() from __cgroup_procs_write() · dedf22e9
      Committed by Tejun Heo
      Separate out task / process migration permission check from
      __cgroup_procs_write() into cgroup_procs_write_permission().
      
      * Permission check is moved right above the actual migration and no
        longer performed while holding rcu_read_lock().
        cgroup_procs_write_permission() uses get_task_cred() / put_cred()
        instead of __task_cred().  Also, !root trying to migrate kthreadd or
        PF_NO_SETAFFINITY tasks will now fail with -EINVAL rather than
        -EACCES which should be fine.
      
      * The same permission check is now performed even when moving self by
        specifying 0 as pid.  This always succeeds so there's no functional
        difference.  We'll add more permission checks later and the benefits
        of keeping both cases consistent outweigh the minute overhead of
        doing perm checks on pid 0 case.
      Signed-off-by: Tejun Heo <tj@kernel.org>
  5. 10 Jun 2015, 1 commit
  6. 08 Jun 2015, 2 commits
  7. 27 May 2015, 3 commits
    • cgroup: simplify threadgroup locking · b5ba75b5
      Committed by Tejun Heo
      Now that threadgroup locking is made global, code paths around it can
      be simplified.
      
      * lock-verify-unlock-retry dancing removed from __cgroup_procs_write().
      
      * Race protection against de_thread() removed from
        cgroup_update_dfl_csses().
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem · d59cfc09
      Committed by Tejun Heo
      The cgroup side of threadgroup locking uses signal_struct->group_rwsem
      to synchronize against threadgroup changes.  This per-process rwsem
      adds small overhead to thread creation, exit and exec paths, forces
      cgroup code paths to do lock-verify-unlock-retry dance in a couple
      places and makes it impossible to atomically perform operations across
      multiple processes.
      
      This patch replaces signal_struct->group_rwsem with a global
      percpu_rwsem cgroup_threadgroup_rwsem which is cheaper on the reader
      side and contained in cgroups proper.  This patch converts one-to-one.
      
      This does make the writer side heavier and lowers the
      granularity; however, cgroup process migration is a fairly cold
      path, we do want to optimize thread operations over it, and
      cgroup migration operations don't take enough time for the lower
      granularity to matter.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
    • sched, cgroup: reorganize threadgroup locking · 7d7efec3
      Committed by Tejun Heo
      threadgroup_change_begin/end() are used to mark the beginning and end
      of threadgroup modifying operations to allow code paths which require
      a threadgroup to stay stable across blocking operations to synchronize
      against those sections using threadgroup_lock/unlock().
      
      It's currently implemented as a general mechanism in sched.h
      using a per-signal_struct rwsem; however, this never grew
      non-cgroup use cases and becomes a noop if !CONFIG_CGROUPS.  It
      turns out that cgroups is going to be better served with a
      different synchronization scheme, and it is a bit silly to keep
      cgroup-specific details as a general mechanism.
      
      What's general here is identifying the places where threadgroups are
      modified.  This patch restructures threadgroup locking so that
      threadgroup_change_begin/end() become a place where subsystems which
      need to synchronize against threadgroup changes can hook into.
      
      cgroup_threadgroup_change_begin/end() which operate on the
      per-signal_struct rwsem are created and threadgroup_lock/unlock() are
      moved to cgroup.c and made static.
      
      This is pure reorganization which doesn't cause any functional
      changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
  8. 19 May 2015, 1 commit
  9. 23 Apr 2015, 1 commit
  10. 16 Apr 2015, 2 commits
  11. 03 Mar 2015, 2 commits
  12. 14 Feb 2015, 1 commit
    • kernfs: remove KERNFS_STATIC_NAME · dfeb0750
      Committed by Tejun Heo
      When a new kernfs node is created, KERNFS_STATIC_NAME is used to avoid
      making a separate copy of its name.  It's currently only used for sysfs
      attributes whose filenames are required to stay accessible and unchanged.
      There are rare exceptions where these names are allocated and formatted
      dynamically but for the vast majority of cases they're consts in the
      rodata section.
      
      Now that kernfs is converted to use kstrdup_const() and kfree_const(),
      there's little point in keeping KERNFS_STATIC_NAME around.  Remove it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Andrzej Hajda <a.hajda@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 13 Feb 2015, 1 commit
    • cgroup: release css->id after css_free · 01e58659
      Committed by Vladimir Davydov
      Currently, we release css->id in css_release_work_fn, right before calling
      css_free callback, so that when css_free is called, the id may have
      already been reused for a new cgroup.
      
      I am going to use css->id to create unique names for per memcg kmem
      caches.  Since kmem caches are destroyed only on css_free, I need css->id
      to be freed after css_free was called to avoid name clashes.  This patch
      therefore moves css->id removal to css_free_work_fn.  To prevent
      css_from_id from returning a pointer to a stale css, it makes
      css_release_work_fn replace the css ptr at css_idr:css->id with NULL.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 22 Jan 2015, 1 commit
    • cgroup: prevent mount hang due to memory controller lifetime · 3c606d35
      Committed by Johannes Weiner
      Since b2052564 ("mm: memcontrol: continue cache reclaim from
      offlined groups"), re-mounting the memory controller after using it is
      very likely to hang.
      
      The cgroup core assumes that any remaining references after
      deleting a cgroup are temporary in nature, and synchronously
      waits for them, but the above-mentioned commit has left-over page
      cache pin its css until it is reclaimed naturally.  That being
      said, swap entries and charged kernel memory have been doing the
      same indefinite pinning forever; the bug is just more likely to
      trigger with left-over page cache.
      
      Reparenting kernel memory is highly impractical, which leaves changing
      the cgroup assumptions to reflect this: once a controller has been
      mounted and used, it has internal state that is independent from mount
      and cgroup lifetime.  It can be unmounted and remounted, but it can't
      be reconfigured during subsequent mounts.
      
      Don't offline the controller root as long as there are any children,
      dead or alive.  A remount will no longer wait for these old references
      to drain, it will simply mount the persistent controller state again.
      Reported-by: "Suzuki K. Poulose" <Suzuki.Poulose@arm.com>
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  15. 18 Nov 2014, 6 commits
    • cgroup: implement cgroup_get_e_css() · eeecbd19
      Committed by Tejun Heo
      Implement cgroup_get_e_css() which finds and gets the effective css
      for the specified cgroup and subsystem combination.  This function
      always returns a valid pinned css.  This will be used by cgroup
      writeback support.
      
      While at it, add comment to cgroup_e_css() to explain why that
      function is different from cgroup_get_e_css() and has to test
      cgrp->child_subsys_mask instead of cgroup_css(cgrp, ss).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
    • cgroup: add cgroup_subsys->css_e_css_changed() · 56c807ba
      Committed by Tejun Heo
      Add a new cgroup_subsys operation ->css_e_css_changed().  This is
      invoked if any of the effective csses seen from the css's cgroup may
      have changed.  This will be used to implement cgroup writeback
      support.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
    • cgroup: add cgroup_subsys->css_released() · 7d172cc8
      Committed by Tejun Heo
      Add a new cgroup subsys callback css_released().  This is called when
      the reference count of the css (cgroup_subsys_state) reaches
      zero, before the RCU-scheduled free.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
    • cgroup: fix the async css offline wait logic in cgroup_subtree_control_write() · db6e3053
      Committed by Tejun Heo
      When a subsystem is offlined, its entry on @cgrp->subsys[] is cleared
      asynchronously.  If cgroup_subtree_control_write() is requested to
      enable the subsystem again before the entry is cleared, it has to wait
      for the previous offlining to finish and clear the @cgrp->subsys[]
      entry before trying to enable the subsystem again.
      
      This is currently done while verifying the input enable / disable
      parameters.  This used to be correct but f63070d3 ("cgroup: make
      interface files visible iff enabled on cgroup->subtree_control")
      breaks it.  The commit is one of the commits implementing subsystem
      dependency.
      
      Through subsystem dependency, some subsystems may be enabled and
      disabled implicitly in addition to the explicitly requested ones.  The
      actual subsystems to be enabled and disabled are determined during
      @css_enable/disable calculation.  The current offline wait logic skips
      the ones which are already implicitly enabled and then waits for
      subsystems in @enable; however, this misses the subsystems which may
      be implicitly enabled through dependency from @enable.  If such
      an implicitly enabled subsystem hasn't finished offlining yet,
      the function ends up trying to create a css when its
      @cgrp->subsys[] slot is already occupied, triggering BUG_ON() in
      init_and_link_css().
      
      Fix it by moving the wait logic after @css_enable is calculated and
      waiting for all the subsystems in @css_enable.  This fixes the above
      bug as the mask contains all subsystems which are to be enabled
      including the ones enabled through dependencies.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: f63070d3 ("cgroup: make interface files visible iff enabled on cgroup->subtree_control")
      Acked-by: Zefan Li <lizefan@huawei.com>
    • cgroup: restructure child_subsys_mask handling in cgroup_subtree_control_write() · 755bf5ee
      Committed by Tejun Heo
      Make cgroup_subtree_control_write() first calculate new
      subtree_control (new_sc), child_subsys_mask (new_ss) and
      css_enable/disable masks before applying them to the cgroup.  Also,
      store the original subtree_control (old_sc) and child_subsys_mask
        (old_ss) and use them to restore the original state after failure.
      
      This patch shouldn't cause any behavior changes.  This prepares for a
      fix for a bug in the async css offline wait logic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
    • cgroup: separate out cgroup_calc_child_subsys_mask() from cgroup_refresh_child_subsys_mask() · 0f060deb
      Committed by Tejun Heo
      cgroup_refresh_child_subsys_mask() calculates and updates the
      effective @cgrp->child_subsys_mask according to the current
      @cgrp->subtree_control.  Separate out the calculation part into
      cgroup_calc_child_subsys_mask().  This will be used to fix a bug in
      the async css offline wait logic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
  16. 26 Sep 2014, 1 commit
    • Revert "cgroup: remove redundant variable in cgroup_mount()" · e756c7b6
      Committed by Zefan Li
      This reverts commit 0c7bf3e8.
      
      If there are child cgroups in the cgroupfs and then we umount it,
      the superblock will be destroyed but the cgroup_root will be kept
      around. When we mount it again, cgroup_mount() will find this
      cgroup_root and allocate a new sb for it.
      
      So with that commit we would be trapped in an endless loop in the
      case described above, because kernfs_pin_sb() keeps returning NULL.
      
      Currently I don't see how we can avoid using both pinned_sb and
      new_sb, so just revert it.
      
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Reported-by: Andrey Wagin <avagin@gmail.com>
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  17. 25 Sep 2014, 1 commit
    • percpu_ref: add PERCPU_REF_INIT_* flags · 2aad2a86
      Committed by Tejun Heo
      With the recent addition of percpu_ref_reinit(), percpu_ref now can be
      used as a persistent switch which can be turned on and off repeatedly
      where turning off maps to killing the ref and waiting for it to drain;
      however, there currently isn't a way to initialize a percpu_ref in its
      off (killed and drained) state, which can be inconvenient for certain
      persistent switch use cases.
      
      Similarly, percpu_ref_switch_to_atomic/percpu() allow dynamic
      selection of operation mode; however, currently a newly initialized
      percpu_ref is always in percpu mode making it impossible to avoid the
      latency overhead of switching to atomic mode.
      
      This patch adds @flags to percpu_ref_init() and implements the
      following flags.
      
      * PERCPU_REF_INIT_ATOMIC	: start ref in atomic mode
      * PERCPU_REF_INIT_DEAD		: start ref killed and drained
      
      These flags should be able to serve the above two use cases.
      
      v2: target_core_tpg.c conversion was missing.  Fixed.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
  18. 21 Sep 2014, 2 commits
  19. 19 Sep 2014, 4 commits
    • cgroup: remove CGRP_RELEASABLE flag · a25eb52e
      Committed by Zefan Li
      We call put_css_set() after setting the CGRP_RELEASABLE flag in
      cgroup_task_migrate(), but in other places we call it without
      setting the flag.  I don't see the necessity of this flag.
      
      Moreover, once the flag is set it will never be cleared unless
      the notify_on_release control file is written to, so the output
      of debug.releasable can be quite confusing.
      
        # mount -t cgroup -o debug xxx /cgroup
        # mkdir /cgroup/child
        # cat /cgroup/child/debug.releasable
        0   <-- shows 0 though the cgroup is empty
        # echo $$ > /cgroup/child/tasks
        # cat /cgroup/child/debug.releasable
        0
        # echo $$ > /cgroup/tasks && echo $$ > /cgroup/child/tasks
        # cat /cgroup/child/debug.releasable
        1   <-- shows 1 though the cgroup is not empty
      
      This patch removes the flag, and now debug.releasable shows if the
      cgroup is empty or not.
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cgroup: simplify proc_cgroup_show() · 006f4ac4
      Committed by Zefan Li
      Use the ONE macro instead of REG, and we can simplify proc_cgroup_show().
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cgroup: use a per-cgroup work for release agent · 971ff493
      Committed by Zefan Li
      Instead of using a global work to schedule the release agent on
      removable cgroups, we now use a per-cgroup work to do this, which
      makes the code much simpler.
      
      v2: use a dedicated work instead of reusing css->destroy_work. (Tejun)
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cgroup: fix unbalanced locking · eb4aec84
      Committed by Zefan Li
      cgroup_pidlist_start() holds cgrp->pidlist_mutex and then calls
      pidlist_array_load(), and cgroup_pidlist_stop() releases the mutex.
      
      It is wrong to release the mutex in the failure path of
      pidlist_array_load(), because cgroup_pidlist_stop() will be
      called whether cgroup_pidlist_start() returns an error or not.
      
      Fixes: 4bac00d1
      Cc: <stable@vger.kernel.org> # 3.14+
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
  20. 18 Sep 2014, 3 commits
  21. 08 Sep 2014, 1 commit
    • percpu-refcount: add @gfp to percpu_ref_init() · a34375ef
      Committed by Tejun Heo
      Percpu allocator now supports allocation mask.  Add @gfp to
      percpu_ref_init() so that !GFP_KERNEL allocation masks can be used
      with percpu_refs too.
      
      This patch doesn't make any functional difference.
      
      v2: blk-mq conversion was missing.  Updated.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Cc: Jens Axboe <axboe@kernel.dk>
  22. 05 Sep 2014, 2 commits
    • cgroup: check cgroup liveliness before unbreaking kernfs · aa32362f
      Committed by Li Zefan
      When cgroup_kn_lock_live() is called through some kernfs operation and
      another thread is calling cgroup_rmdir(), we'll trigger the warning in
      cgroup_get().
      
      ------------[ cut here ]------------
      WARNING: CPU: 1 PID: 1228 at kernel/cgroup.c:1034 cgroup_get+0x89/0xa0()
      ...
      Call Trace:
       [<c16ee73d>] dump_stack+0x41/0x52
       [<c10468ef>] warn_slowpath_common+0x7f/0xa0
       [<c104692d>] warn_slowpath_null+0x1d/0x20
       [<c10bb999>] cgroup_get+0x89/0xa0
       [<c10bbe58>] cgroup_kn_lock_live+0x28/0x70
       [<c10be3c1>] __cgroup_procs_write.isra.26+0x51/0x230
       [<c10be5b2>] cgroup_tasks_write+0x12/0x20
       [<c10bb7b0>] cgroup_file_write+0x40/0x130
       [<c11aee71>] kernfs_fop_write+0xd1/0x160
       [<c1148e58>] vfs_write+0x98/0x1e0
       [<c114934d>] SyS_write+0x4d/0xa0
       [<c16f656b>] sysenter_do_call+0x12/0x12
      ---[ end trace 6f2e0c38c2108a74 ]---
      
      Fix this by calling css_tryget() instead of cgroup_get().
      
      v2:
      - move cgroup_tryget() right below cgroup_get() definition. (Tejun)
      
      Cc: <stable@vger.kernel.org> # 3.15+
      Reported-by: Toralf Förster <toralf.foerster@gmx.de>
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cgroup: delay the clearing of cgrp->kn->priv · a4189487
      Committed by Li Zefan
      Run these two scripts concurrently:
      
          for ((; ;))
          {
              mkdir /cgroup/sub
              rmdir /cgroup/sub
          }
      
          for ((; ;))
          {
              echo $$ > /cgroup/sub/cgroup.procs
              echo $$ > /cgroup/cgroup.procs
          }
      
      A kernel bug will be triggered:
      
      BUG: unable to handle kernel NULL pointer dereference at 00000038
      IP: [<c10bbd69>] cgroup_put+0x9/0x80
      ...
      Call Trace:
       [<c10bbe19>] cgroup_kn_unlock+0x39/0x50
       [<c10bbe91>] cgroup_kn_lock_live+0x61/0x70
       [<c10be3c1>] __cgroup_procs_write.isra.26+0x51/0x230
       [<c10be5b2>] cgroup_tasks_write+0x12/0x20
       [<c10bb7b0>] cgroup_file_write+0x40/0x130
       [<c11aee71>] kernfs_fop_write+0xd1/0x160
       [<c1148e58>] vfs_write+0x98/0x1e0
       [<c114934d>] SyS_write+0x4d/0xa0
       [<c16f656b>] sysenter_do_call+0x12/0x12
      
      We clear cgrp->kn->priv at the end of cgroup_rmdir(), but another
      concurrent thread can access kn->priv after the clearing.
      
      We should move the clearing to css_release_work_fn(). At that time
      no one is holding a reference to the cgroup and no one can gain a new
      reference to access it.
      
      v2:
      - move RCU_INIT_POINTER() into the else block. (Tejun)
      - remove the cgroup_parent() check. (Tejun)
      - update the comment in css_tryget_online_from_dir().
      
      Cc: <stable@vger.kernel.org> # 3.15+
      Reported-by: Toralf Förster <toralf.foerster@gmx.de>
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>