1. 29 Oct 2015, 1 commit
    • cgroup: fix race condition around termination check in css_task_iter_next() · d5745675
      Committed by Tejun Heo
      css_task_iter_next() checked @it->cur_task before grabbing
      css_set_lock and assumed that the result wouldn't change afterwards;
      however, tasks could leave the cgroup being iterated, terminating the
      iterator before css_set_lock is acquired.  If this happens,
      css_task_iter_next() tries to calculate the current task from a NULL
      cg_list pointer, leading to the following oops.
      
       BUG: unable to handle kernel paging request at fffffffffffff7d0
       IP: [<ffffffff810d5f22>] css_task_iter_next+0x42/0x80
       ...
       CPU: 4 PID: 6391 Comm: JobQDisp2 Not tainted 4.0.9-22_fbk4_rc3_81616_ge8d9cb6 #1
       Hardware name: Quanta Freedom/Winterfell, BIOS F03_3B08 03/04/2014
       task: ffff880868e46400 ti: ffff88083404c000 task.ti: ffff88083404c000
       RIP: 0010:[<ffffffff810d5f22>]  [<ffffffff810d5f22>] css_task_iter_next+0x42/0x80
       RSP: 0018:ffff88083404fd28  EFLAGS: 00010246
       RAX: 0000000000000000 RBX: ffff88083404fd68 RCX: ffff8804697fb8b0
       RDX: fffffffffffff7c0 RSI: ffff8803b7dff800 RDI: ffffffff822c0278
       RBP: ffff88083404fd38 R08: 0000000000017160 R09: ffff88046f4070c0
       R10: ffffffff810d61f7 R11: 0000000000000293 R12: ffff880863bf8400
       R13: ffff88046b87fd80 R14: 0000000000000000 R15: ffff88083404fe58
       FS:  00007fa0567e2700(0000) GS:ffff88046f900000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: fffffffffffff7d0 CR3: 0000000469568000 CR4: 00000000001406e0
       Stack:
        0000000000000246 0000000000000000 ffff88083404fde8 ffffffff810d6248
        ffff88083404fd68 0000000000000000 ffff8803b7dff800 000001ef000001ee
        0000000000000000 0000000000000000 ffff880863bf8568 0000000000000000
       Call Trace:
        [<ffffffff810d6248>] cgroup_pidlist_start+0x258/0x550
        [<ffffffff810cf66d>] cgroup_seqfile_start+0x1d/0x20
        [<ffffffff8121f8ef>] kernfs_seq_start+0x5f/0xa0
        [<ffffffff811cab76>] seq_read+0x166/0x380
        [<ffffffff812200fd>] kernfs_fop_read+0x11d/0x180
        [<ffffffff811a7398>] __vfs_read+0x18/0x50
        [<ffffffff811a745d>] vfs_read+0x8d/0x150
        [<ffffffff811a756f>] SyS_read+0x4f/0xb0
        [<ffffffff818d4772>] system_call_fastpath+0x12/0x17
      
      Fix it by moving the termination condition check inside css_set_lock.
      @it->cur_task is now cleared after being put and @it->task_pos is
      tested for termination instead of @it->cset_pos as they indicate the
      same condition and @it->task_pos is what's being dereferenced.
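      A hedged sketch of the resulting shape of the function, reconstructed
      from the description above (helper and field names follow this and the
      related commits below; details may differ from the actual patch):

       struct task_struct *css_task_iter_next(struct css_task_iter *it)
       {
               if (it->cur_task) {
                       put_task_struct(it->cur_task);
                       it->cur_task = NULL;
               }

               spin_lock_bh(&css_set_lock);

               /* termination is now checked on @it->task_pos, under the lock */
               if (it->task_pos) {
                       it->cur_task = list_entry(it->task_pos,
                                                 struct task_struct, cg_list);
                       get_task_struct(it->cur_task);
                       css_task_iter_advance(it);
               }

               spin_unlock_bh(&css_set_lock);

               return it->cur_task;
       }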
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Calvin Owens <calvinowens@fb.com>
      Fixes: ed27b9f7 ("cgroup: don't hold css_set_rwsem across css task iteration")
      Acked-by: Zefan Li <lizefan@huawei.com>
      d5745675
  2. 16 Oct 2015, 16 commits
    • cgroup: drop cgroup__DEVEL__legacy_files_on_dfl · e4b7037c
      Committed by Tejun Heo
      Now that interfaces for the major three controllers - cpu, memory, io
      - are shaping up, there's no reason to have an option to force legacy
      files to show up on the unified hierarchy for testing.  Drop it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      e4b7037c
    • cgroup: replace error handling in cgroup_init() with WARN_ON()s · 035f4f51
      Committed by Tejun Heo
      The init sequence shouldn't fail short of bugs, and even when it
      does, it's better to continue with the rest of initialization.  We
      were also silently ignoring /proc/cgroups creation failure.
      
      Drop the explicit error handling and wrap sysfs_create_mount_point(),
      register_filesystem() and proc_create() with WARN_ON()s.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      035f4f51
    • cgroup: add cgroup_subsys->free() method and use it to fix pids controller · afcf6c8b
      Committed by Tejun Heo
      The pids controller is completely broken in that it uncharges when a
      task exits, allowing zombies to escape resource control.  With the
      recent updates, cgroup core now maintains the cgroup association
      until the task is freed, and the pids controller can be fixed by
      uncharging on free instead of exit.

      This patch adds a cgroup_subsys->free() method and updates the pids
      controller to use it instead of ->exit() for uncharging.
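      A hedged sketch of the new hook as the pids controller might use it;
      the helpers (css_pids(), task_css(), pids_uncharge()) and pids_cgrp_id
      are taken from the existing controller code and are assumptions here,
      not quotes from the patch:

       /* uncharge when the task is actually freed, not when it exits */
       static void pids_free(struct task_struct *task)
       {
               struct pids_cgroup *pids = css_pids(task_css(task, pids_cgrp_id));

               pids_uncharge(pids, 1);
       }

       struct cgroup_subsys pids_cgrp_subsys = {
               /* ... other callbacks ... */
               .free           = pids_free,
       };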
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Aleksa Sarai <cyphar@cyphar.com>
      afcf6c8b
    • cgroup: keep zombies associated with their original cgroups · 2e91fa7f
      Committed by Tejun Heo
      cgroup_exit() is called when a task exits; it disassociates the
      exiting task from its cgroups and half-attaches it to the root
      cgroup.  This is unnecessary and undesirable.
      
      No controller actually needs an exiting task to be disassociated from
      non-root cgroups.  Both the cpu and perf_event controllers update the
      association to the root cgroup from their exit callbacks just to stay
      consistent with the cgroup core behavior.
      
      Also, this disassociation makes it difficult to track resources held
      by zombies or determine where the zombies came from.  Currently, pids
      controller is completely broken as it uncharges on exit and zombies
      always escape the resource restriction.  With cgroup association being
      reset on exit, fixing it is pretty painful.
      
      There's no reason to reset cgroup membership on exit.  The zombie can
      be removed from its css_set so that it doesn't show up on
      "cgroup.procs" and thus can't be migrated or interfere with cgroup
      removal.  It can still pin and point to the css_set so that its cgroup
      membership is maintained.  This patch makes cgroup core keep zombies
      associated with their cgroups at the time of exit.
      
      * Previous patches decoupled populated_cnt tracking from css_set
        lifetime, so a dying task can be simply unlinked from its css_set
        while pinning and pointing to the css_set.  This keeps css_set
        association from task side alive while hiding it from "cgroup.procs"
        and populated_cnt tracking.  The css_set reference is dropped when
        the task_struct is freed.
      
      * ->exit() callback no longer needs the css arguments as the
        associated css never changes once PF_EXITING is set.  Removed.
      
      * cpu and perf_events controllers no longer need ->exit() callbacks.
        There's no reason to explicitly switch away on exit.  The final
        schedule out is enough.  The callbacks are removed.
      
      * On traditional hierarchies, nothing changes.  "/proc/PID/cgroup"
        still reports "/" for all zombies.  On the default hierarchy,
        "/proc/PID/cgroup" keeps reporting the cgroup that the task belonged
        to at the time of exit.  If the cgroup gets removed before the task
        is reaped, " (deleted)" is appended.
      
      v2: Build breakage due to missing dummy cgroup_free() when
          !CONFIG_CGROUPS fixed.
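      For reference, a hedged sketch of the ->exit() signature change the
      bullets above describe (the exact prototypes are an assumption based
      on this description):

       /* before: old and new css of the exiting task were passed in */
       void (*exit)(struct cgroup_subsys_state *css,
                    struct cgroup_subsys_state *old_css,
                    struct task_struct *task);

       /* after: the association no longer changes once PF_EXITING is set */
       void (*exit)(struct task_struct *task);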
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      2e91fa7f
    • cgroup: make css_set_rwsem a spinlock and rename it to css_set_lock · f0d9a5f1
      Committed by Tejun Heo
      css_set_rwsem is the inner lock protecting css_sets and is accessed
      from hot paths such as fork and exit.  Internally, it has no reason
      to be an rwsem or even a mutex.  There are no internal blocking
      operations while holding it.  It was an rwsem because css task
      iteration used to expose it to external iterator users.  As the
      previous patch updated css task iteration such that the locking is
      not leaked to its users, there's no reason to keep it an rwsem.
      
      This patch converts css_set_rwsem to a spinlock and renames it to
      css_set_lock.  It uses bh-safe operations as a planned usage needs to
      access it from RCU callback context.
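      A minimal sketch of the resulting locking pattern (the bh-safe
      spin_lock API per the note above; the caller is hypothetical):

       static DEFINE_SPINLOCK(css_set_lock);

       static void touch_css_sets(void)
       {
               spin_lock_bh(&css_set_lock);
               /* ... short, non-blocking css_set updates ... */
               spin_unlock_bh(&css_set_lock);
       }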
      Signed-off-by: Tejun Heo <tj@kernel.org>
      f0d9a5f1
    • cgroup: don't hold css_set_rwsem across css task iteration · ed27b9f7
      Committed by Tejun Heo
      css_sets are synchronized through css_set_rwsem but the locking scheme
      is kinda bizarre.  The hot paths - fork and exit - have to write lock
      the rwsem making the rw part pointless; furthermore, many readers
      already hold cgroup_mutex.
      
      One of the readers is css task iteration.  It read locks the rwsem
      over the entire duration of iteration.  This leads to silly locking
      behavior.  When cpuset tries to migrate processes of a cgroup to a
      different NUMA node, css_set_rwsem is held across the entire migration
      attempt which can take a long time locking out forking, exiting and
      other cgroup operations.
      
      This patch updates css task iteration so that it locks css_set_rwsem
      only while the iterator is being advanced.  css task iteration
      involves two levels - css_set and task iteration.  As css_sets in use
      are practically immutable, simply pinning the current one is enough
      for resuming iteration afterwards.  Task iteration is tricky as tasks
      may leave their css_set while iteration is in progress.  This is
      solved by keeping track of active iterators and advancing them if
      their next task leaves its css_set.
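      A hedged usage sketch of the iterator after this change; do_something()
      is a hypothetical callback, and the point is that no cgroup-internal
      lock is held while the caller processes each task:

       static void for_each_css_task(struct cgroup_subsys_state *css,
                                     void (*do_something)(struct task_struct *))
       {
               struct css_task_iter it;
               struct task_struct *task;

               css_task_iter_start(css, &it);
               while ((task = css_task_iter_next(&it)))
                       do_something(task);     /* css_set lock not held here */
               css_task_iter_end(&it);
       }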
      
      v2: put_task_struct() in css_task_iter_next() moved outside
          css_set_rwsem.  A later patch will add cgroup operations to
          task_struct free path which may grab the same lock and this avoids
          deadlock possibilities.
      
          css_set_move_task() updated to use list_for_each_entry_safe() when
          walking task_iters and advancing them.  This is necessary as
          advancing an iter may remove it from the list.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ed27b9f7
    • cgroup: reorganize css_task_iter functions · ecb9d535
      Committed by Tejun Heo
      * Rename css_advance_task_iter() to css_task_iter_advance_css_set()
        and make it also clear it->task_pos at the end of the iteration.
      
      * Factor out css_task_iter_advance() from css_task_iter_next().  The
        new function whines if called on a terminated iterator.
      
      Except for the termination check, this is pure reorganization and
      doesn't introduce any behavior changes.  This will help the planned
      locking update for css_task_iter.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ecb9d535
    • cgroup: factor out css_set_move_task() · f6d7d049
      Committed by Tejun Heo
      A task is associated with and disassociated from its css_set in three
      places - during migration, after a new task is created, and when a
      task exits.  The first is handled by cgroup_task_migrate() and the
      latter two are open-coded.
      
      These are similar operations and spreading them over multiple places
      makes it harder to follow and update.  This patch collects all task
      css_set [dis]association operations into css_set_move_task().
      
      While css_set_move_task() may check whether populated state needs to
      be updated when not strictly necessary, the behavior is essentially
      equivalent before and after this patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      f6d7d049
    • cgroup: keep css_set and task lists in chronological order · 389b9c1b
      Committed by Tejun Heo
      css task iteration will be updated to not leak cgroup internal locking
      to iterator users.  In preparation, update css_set and task lists to
      be in chronological order.
      
      For tasks, as the migration path already uses list_splice_tail_init(),
      only cgroup_enable_task_cg_lists() and cgroup_post_fork() need
      updating.  For css_sets, link_css_set() is the only place which needs
      to be updated.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      389b9c1b
    • cgroup: make cgroup_destroy_locked() test cgroup_is_populated() · 91486f61
      Committed by Tejun Heo
      cgroup_destroy_locked() currently tests whether any css_sets are
      associated to reject removal if the cgroup contains tasks.  This works
      because a css_set's refcnt converges with the number of tasks linked
      to it and thus there's no css_set linked to a cgroup if it doesn't
      have any live tasks.
      
      To help tracking resource usage of zombie tasks, putting the ref of
      css_set will be separated from disassociating the task from the
      css_set which means that a cgroup may have css_sets linked to it even
      when it doesn't have any live tasks.
      
      This patch updates cgroup_destroy_locked() so that it tests
      cgroup_is_populated(), which counts the number of populated css_sets,
      instead of whether cgrp->cset_links is empty to determine whether the
      cgroup is populated or not.  This ensures that rmdirs won't be
      incorrectly rejected for cgroups which only contain zombie tasks.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      91486f61
    • cgroup: make css_sets pin the associated cgroups · 2ceb231b
      Committed by Tejun Heo
      Currently, css_sets don't pin the associated cgroups.  This is okay as
      a cgroup with css_sets associated is not allowed to be removed;
      however, to help resource tracking for zombie tasks, this is scheduled
      to change such that a cgroup can be removed even when it has css_sets
      associated as long as none of them are populated.
      
      To ensure that a cgroup doesn't go away while css_sets are still
      associated with it, make each associated css_set hold a reference on
      the cgroup if non-root.
      
      v2: Root cgroups are special and shouldn't be ref'd by css_sets.
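      A minimal sketch of the pinning rule, as it might look where a css_set
      is linked to a cgroup (a sketch under the description above, not the
      literal patch):

       static void link_css_set(struct list_head *tmp_links,
                                struct css_set *cset, struct cgroup *cgrp)
       {
               /* ... link @cset and @cgrp ... */

               /* each css_set pins every non-root cgroup it is linked to */
               if (cgroup_parent(cgrp))
                       cgroup_get(cgrp);
       }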
      Signed-off-by: Tejun Heo <tj@kernel.org>
      2ceb231b
    • cgroup: relocate cgroup_[try]get/put() · 052c3f3a
      Committed by Tejun Heo
      Relocate cgroup_get(), cgroup_tryget() and cgroup_put() upwards.  This
      is pure code reorganization to prepare for future changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      052c3f3a
    • cgroup: move check_for_release() invocation · ad2ed2b3
      Committed by Tejun Heo
      To trigger the release agent when the last task leaves the cgroup,
      check_for_release() is called from put_css_set_locked(); however,
      unlinking a css_set is being decoupled from a task leaving the
      cgroup, and the correct condition to test is cgroup->nr_populated
      dropping to zero, which check_for_release() has already been updated
      to test.
      
      This patch moves check_for_release() invocation from
      put_css_set_locked() to cgroup_update_populated().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ad2ed2b3
    • cgroup: replace cgroup_has_tasks() with cgroup_is_populated() · 27bd4dbb
      Committed by Tejun Heo
      Currently, cgroup_has_tasks() tests whether the target cgroup has any
      css_set linked to it.  This works because a css_set's refcnt converges
      with the number of tasks linked to it and thus there's no css_set
      linked to a cgroup if it doesn't have any live tasks.
      
      To help tracking resource usage of zombie tasks, putting the ref of
      css_set will be separated from disassociating the task from the
      css_set which means that a cgroup may have css_sets linked to it even
      when it doesn't have any live tasks.
      
      This patch replaces cgroup_has_tasks() with cgroup_is_populated(),
      which instead tests cgroup->nr_populated, a local count of the
      number of populated css_sets.  Unlike cgroup_has_tasks(),
      cgroup_is_populated() is recursive - if any of the descendants is
      populated, the cgroup is populated too.  While this changes the
      meaning of the test, all the existing users are okay with the change.
      
      While at it, replace the open-coded ->populated_cnt test in
      cgroup_events_show() with cgroup_is_populated().
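      A hedged sketch of the new test; the field name follows the
      ->populated_cnt reference in the paragraph above:

       static inline bool cgroup_is_populated(struct cgroup *cgrp)
       {
               /* non-zero when the cgroup or any descendant has live tasks */
               return cgrp->populated_cnt;
       }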
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      27bd4dbb
    • cgroup: make cgroup->nr_populated count the number of populated css_sets · 0de0942d
      Committed by Tejun Heo
      Currently, cgroup->nr_populated counts whether the cgroup has any
      css_sets linked to it and the number of children which have non-zero
      ->nr_populated.  This works because a css_set's refcnt converges with
      the number of tasks linked to it and thus there's no css_set linked to
      a cgroup if it doesn't have any live tasks.
      
      To help tracking resource usage of zombie tasks, putting the ref of
      css_set will be separated from disassociating the task from the
      css_set which means that a cgroup may have css_sets linked to it even
      when it doesn't have any live tasks.
      
      This patch updates cgroup->nr_populated so that for the cgroup itself
      it counts the number of css_sets which have tasks associated with them
      so that empty css_sets don't skew the populated test.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      0de0942d
    • cgroup: remove an unused parameter from cgroup_task_migrate() · b309e5b7
      Committed by Tejun Heo
      cgroup_task_migrate() no longer uses @old_cgrp.  Remove it.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b309e5b7
  3. 26 Sep 2015, 1 commit
    • cgroup: fix too early usage of static_branch_disable() · a3e72739
      Committed by Tejun Heo
      49d1dc4b ("cgroup: implement static_key based
      cgroup_subsys_enabled() and cgroup_subsys_on_dfl()") converted the
      cgroup enabled test to use a static_key; however, cgroup_disable() is
      called before the static_key subsystem itself is initialized and thus
      leads to the following warning when the "cgroup_disable=" parameter
      is specified.
      
       WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:99 static_key_slow_dec+0x44/0x60()
       static_key_slow_dec used before call to jump_label_init
       ...
       Call Trace:
        [<ffffffff813b18c2>] dump_stack+0x44/0x62
        [<ffffffff8108dd52>] warn_slowpath_common+0x82/0xc0
        [<ffffffff8108ddec>] warn_slowpath_fmt+0x5c/0x80
        [<ffffffff8119c054>] static_key_slow_dec+0x44/0x60
        [<ffffffff81d826b6>] cgroup_disable+0xaf/0xd6
        [<ffffffff81d5f9de>] unknown_bootoption+0x8c/0x194
        [<ffffffff810b0c03>] parse_args+0x273/0x4a0
        [<ffffffff81d5fd67>] start_kernel+0x205/0x4b8
       ...
      
      Fix it by making cgroup_disable() record the subsystems to disable
      in cgroup_disable_mask and moving the actual application to
      cgroup_init(), which is late enough and where the enabled state is
      first used.
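      A hedged sketch of the approach; the static-key array name is taken
      from the related static_key commits below and, like the mask type,
      is an assumption here:

       static unsigned long cgroup_disable_mask __initdata;

       static int __init cgroup_disable(char *str)
       {
               /* only record which subsystems to disable; static keys are
                * not ready yet, so don't touch them here */
               /* ... parse @str and set the matching bits ... */
               return 1;
       }
       __setup("cgroup_disable=", cgroup_disable);

       int __init cgroup_init(void)
       {
               struct cgroup_subsys *ss;
               int ssid;

               /* ... */
               for_each_subsys(ss, ssid)
                       if (cgroup_disable_mask & (1 << ssid))
                               static_branch_disable(
                                       cgroup_subsys_enabled_key[ssid]);
               /* ... */
               return 0;
       }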
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Andrey Wagin <avagin@gmail.com>
      Link: http://lkml.kernel.org/g/CANaxB-yFuS4SA2znSvcKrO9L_CbHciHYW+o9bN8sZJ8eR9FxYA@mail.gmail.com
      Fixes: 49d1dc4b
      a3e72739
  4. 23 Sep 2015, 5 commits
    • cgroup: make cgroup_update_dfl_csses() migrate all target processes atomically · 10265075
      Committed by Tejun Heo
      cgroup_update_dfl_csses() is responsible for migrating processes when
      controllers are enabled or disabled on the default hierarchy.  As the
      css association changes for all the processes in the affected cgroups,
      this involves migrating multiple processes.
      
      Up until now, it was implemented by migrating process-by-process until
      the source css_sets are empty; however, this means that if a process
      fails to migrate after some have already succeeded, the recovery is
      very tricky.  This was considered okay as subsystems weren't allowed
      to reject process migration on the default hierarchy; unfortunately,
      enforcing this policy turned out to be problematic for certain types
      of resources - realtime slices for now.
      
      As such, the default hierarchy is gonna allow restricted failures
      during migration and to support that this patch makes
      cgroup_update_dfl_csses() migrate all target processes atomically
      rather than one-by-one.  The preceding patches made subsystems ready
      for multi-process migration and factored out taskset operations making
      this almost trivial.  All tasks of the target processes are put in the
      same taskset and the migration operations are performed once which
      either fails or succeeds for all.
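      A hedged sketch of the resulting flow: build one taskset covering every
      affected task, then issue a single migration.  The taskset helpers come
      from the preceding patch; the list and field names (preloaded_csets,
      mg_preload_node) and the surrounding locking are illustrative
      assumptions, not quotes from the patch.

       static int cgroup_update_dfl_csses(struct cgroup *cgrp)
       {
               struct cgroup_taskset tset = CGROUP_TASKSET_INIT(tset);
               LIST_HEAD(preloaded_csets);
               struct css_set *src_cset;
               int ret;

               /* ... prepare destination css_sets for all source css_sets ... */

               /* collect every task of every affected css_set into one taskset */
               down_write(&css_set_rwsem);
               list_for_each_entry(src_cset, &preloaded_csets, mg_preload_node) {
                       struct task_struct *task, *ntask;

                       list_for_each_entry_safe(task, ntask, &src_cset->tasks, cg_list)
                               cgroup_taskset_add(task, &tset);
               }
               up_write(&css_set_rwsem);

               /* one migration: it either succeeds or fails for all tasks */
               ret = cgroup_taskset_migrate(&tset, cgrp);
               /* ... cleanup ... */
               return ret;
       }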
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      10265075
    • cgroup: separate out taskset operations from cgroup_migrate() · adaae5dc
      Committed by Tejun Heo
      Currently, cgroup_migrate() implements a large part of the migration
      logic inline, including building the target taskset and actually
      migrating it.  This patch separates out the following taskset
      operations.
      
       CGROUP_TASKSET_INIT()		: taskset initializer
       cgroup_taskset_add()		: add a task to a taskset
       cgroup_taskset_migrate()	: migrate a taskset to the destination cgroup
      
      This will be used to implement atomic multi-process migration in
      cgroup_update_dfl_csses().  This is pure reorganization which doesn't
      introduce any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      adaae5dc
    • cgroup: reorder cgroup_migrate()'s parameters · 9af2ec45
      Committed by Tejun Heo
      cgroup_migrate() has the destination cgroup as the first parameter
      while cgroup_task_migrate() has the destination cset as the last.
      Another migration function is scheduled to be added which can make the
      discrepancy further stand out.  Let's reorder cgroup_migrate()'s
      parameters so that the destination cgroup is the last.
      
      This doesn't cause any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      9af2ec45
    • cgroup, memcg, cpuset: implement cgroup_taskset_for_each_leader() · 4530eddb
      Committed by Tejun Heo
      It wasn't explicitly documented but, when a process is being migrated,
      cpuset and memcg depend on cgroup_taskset_first() returning the
      threadgroup leader; however, this approach is somewhat ghetto and
      would no longer work for the planned multi-process migration.
      
      This patch introduces explicit cgroup_taskset_for_each_leader() which
      iterates over only the threadgroup leaders and replaces
      cgroup_taskset_first() usages for accessing the leader with it.
      
      This prepares both memcg and cpuset for multi-process migration.  This
      patch also updates the documentation for cgroup_taskset_for_each() to
      clarify the iteration rules and removes comments mentioning task
      ordering in tasksets.
      
      v2: A previous patch which added threadgroup leader test was dropped.
          Patch updated accordingly.
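      A hedged usage sketch; the surrounding callback is hypothetical and
      the exact macro arguments are an assumption based on the description
      above:

       static void example_attach(struct cgroup_taskset *tset)
       {
               struct task_struct *leader;

               cgroup_taskset_for_each_leader(leader, tset) {
                       /* only threadgroup leaders are visited here; whole-
                        * process work such as memory migration goes here */
               }
       }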
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      4530eddb
    • cpuset: migrate memory only for threadgroup leaders · 3df9ca0a
      Committed by Tejun Heo
      If the memory_migrate flag is set, cpuset migrates memory according to
      the destination css's nodemask.  The current implementation migrates
      memory whenever any thread of a process is migrated, making the
      behavior somewhat arbitrary.  Let's tie memory operations to the
      threadgroup leader so that memory is migrated only when the leader is
      migrated.
      
      While this is a behavior change, given the inherent fuzziness, this
      change is not too likely to be noticed and allows us to clearly define
      who owns the memory (always the leader) and helps the planned atomic
      multi-process migration.
      
      Note that we're currently migrating memory in migration path proper
      while holding all the locks.  In the long term, this should be moved
      out to an async work item.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      3df9ca0a
  5. 19 Sep 2015, 7 commits
    • cgroup: generalize obtaining the handles of and notifying cgroup files · 6f60eade
      Committed by Tejun Heo
      cgroup core handles creations and removals of cgroup interface files
      as described by cftypes.  There are cases where the handle for a given
      file instance is necessary, for example, to generate a file modified
      event.  Currently, this is handled by explicitly matching the callback
      method pointer and storing the file handle manually in
      cgroup_add_file().  While this simple approach works for cgroup core
      files, it can't for controller interface files.
      
      This patch generalizes cgroup interface file handle handling.  struct
      cgroup_file is defined and each cftype can optionally tell cgroup core
      to store the file handle by setting ->file_offset.  A file handle
      remains accessible as long as the containing css is accessible.
      
      Both "cgroup.procs" and "cgroup.events" are converted to use the new
      generic mechanism instead of hooking directly into cgroup_add_file().
      Also, cgroup_file_notify() which takes a struct cgroup_file and
      generates a file modified event on it is added and replaces explicit
      kernfs_notify() invocations.
      
      This generalizes cgroup file handle handling and allows controllers to
      generate file modified notifications.
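      A hedged sketch of the pieces described above; the events_file member
      and the table name are illustrative assumptions:

       struct cgroup_file {
               /* filled in by cgroup core when ->file_offset is set */
               struct kernfs_node *kn;
       };

       static struct cftype cgroup_dfl_base_files[] = {
               {
                       .name           = "cgroup.events",
                       .file_offset    = offsetof(struct cgroup, events_file),
                       .seq_show       = cgroup_events_show,
               },
               { }     /* terminate */
       };

       /* later, from anywhere that can see the cgroup: */
       cgroup_file_notify(&cgrp->events_file);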
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      6f60eade
    • cgroup: restructure file creation / removal handling · 4df8dc90
      Committed by Tejun Heo
      The file creation / removal path has always been a bit icky and the
      planned notification update requires css during file creation.
      Restructure as follows.
      
      * cgroup_addrm_files() now takes both @css and @cgrp and is only
        called directly by other file handling functions.
      
      * cgroup_populate/clear_dir() are replaced with
        css_populate/clear_dir() taking @css and @cgrp_override.
        @cgrp_override is used only when files need to be created on /
        removed from a cgroup which isn't attached to @css which happens
        during subsystem rebinds.  Subsystem loops are moved to the callers.
      
      * cgroup_add_file() now takes both @css and @cgrp.  @css isn't used
        yet but will be used by the planned notification update.
      
      This patch doesn't cause any behavior changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      4df8dc90
    • cgroup: cosmetic updates to rebind_subsystems() · 1ada4838
      Committed by Tejun Heo
      * Use local variables @scgrp and @dcgrp for @src_root->cgrp and
        @dst_root->cgrp respectively.
      
      * Use initializers to set @src_root and @css in the inner bind loop.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      1ada4838
    • cgroup: make cgroup_addrm_files() clean up after itself on failures · 6732ed85
      Committed by Tejun Heo
      After a file creation failure, cgroup_addrm_files() didn't remove
      the files which had already been created.  When cgroup_populate_dir()
      is the caller, this is fine as the caller performs cleanup; however,
      for other callers, this may leave unactivated dangling files behind.
      As kernfs directory removals are recursive, this doesn't lead to a
      permanent memory leak, but it can, for example, fail future attempts
      to create those files again.
      
      There's no point in keeping around this sort of subtlety and it gets
      in the way of planned updates to file handling.  This patch makes
      cgroup_addrm_files() clean up after itself on failures.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      6732ed85
    • cgroup: relocate cgroup_populate_dir() · ccdca218
      Committed by Tejun Heo
      Move it upwards so that it's right below cgroup_clear_dir() and the
      forward declaration is unnecessary.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      ccdca218
    • cgroup: replace cftype->mode with CFTYPE_WORLD_WRITABLE · 7dbdb199
      Committed by Tejun Heo
      cftype->mode allows controllers to give arbitrary permissions to
      interface knobs.  Except for "cgroup.event_control", the existing uses
      are spurious.
      
      * Some explicitly specify S_IRUGO | S_IWUSR even though that's the
        default.
      
      * "cpuset.memory_pressure" specifies S_IRUGO while also setting a
        write callback which returns -EACCES.  All it needs to do is simply
        not setting a write callback.
      
      "cgroup.event_control" uses cftype->mode to make the file
      world-writable.  It's a misdesigned interface and we don't want
      controllers to be tweaking interface file permissions in general.
      This patch removes cftype->mode and all its spurious uses and
      implements CFTYPE_WORLD_WRITABLE for "cgroup.event_control" which is
      marked as compatibility-only.
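      A hedged sketch of the compatibility-only flag in use; the memcg
      cftype entry is an illustration, not a quote of the patch:

       static struct cftype mem_cgroup_legacy_files[] = {
               {
                       .name   = "cgroup.event_control",  /* legacy only */
                       .flags  = CFTYPE_WORLD_WRITABLE,
                       .write  = memcg_write_event_control,
               },
               /* ... */
               { },    /* terminate */
       };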
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      7dbdb199
    • cgroup: replace "cgroup.populated" with "cgroup.events" · 4a07c222
      Committed by Tejun Heo
      memcg already uses "memory.events" for event reporting and other
      controllers may need event reporting too.  Let's standardize on
      "$SUBSYS.events" interface file for reporting events which don't
      happen too frequently and thus can share event notification.
      
      "cgroup.populated" is replaced with "populated" field in
      "cgroup.events" and documentation is updated accordingly.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      4a07c222
  6. 18 Sep 2015, 3 commits
    • cgroup: replace cgroup_on_dfl() tests in controllers with cgroup_subsys_on_dfl() · 9e10a130
      Committed by Tejun Heo
      cgroup_on_dfl() tests whether the cgroup's root is the default
      hierarchy; however, an individual controller is only interested in
      whether the controller is attached to the default hierarchy and never
      tests a cgroup which doesn't belong to the hierarchy that the
      controller is attached to.
      
      This patch replaces cgroup_on_dfl() tests in controllers with faster
      static_key based cgroup_subsys_on_dfl().  This leaves cgroup core as
      the only user of cgroup_on_dfl() and the function is moved from the
      header file to cgroup.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      9e10a130
    • cgroup: replace cgroup_subsys->disabled tests with cgroup_subsys_enabled() · fc5ed1e9
      Committed by Tejun Heo
      Replace cgroup_subsys->disabled tests in controllers with
      cgroup_subsys_enabled().  cgroup_subsys_enabled() requires literal
      subsys name as its parameter and thus can't be used for cgroup core
      which iterates through controllers.  For cgroup core, introduce and
      use cgroup_ssid_enabled() which uses slower static_key_enabled() test
      and can be indexed by subsys ID.
      
      This leaves cgroup_subsys->disabled unused.  Removed.
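      A hedged sketch of the core-side helper described above:

       static bool cgroup_ssid_enabled(int ssid)
       {
               /* indexable by subsys ID, unlike the literal-name macro */
               return static_key_enabled(cgroup_subsys_enabled_key[ssid]);
       }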
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      fc5ed1e9
    • cgroup: implement static_key based cgroup_subsys_enabled() and cgroup_subsys_on_dfl() · 49d1dc4b
      Committed by Tejun Heo
      Whether a subsys is enabled and attached to the default hierarchy
      seldom changes and may be tested in the hot paths.  This patch
      implements static_key based cgroup_subsys_enabled() and
      cgroup_subsys_on_dfl() tests.
      
      The following patches will update the users and remove duplicate
      mechanisms.
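      A hedged sketch of how such tests can be expressed with static keys
      (token pasting against the subsystem name; treat the exact key names
      as assumptions):

       #define SUBSYS(_x)                                                   \
               extern struct static_key_true _x ## _cgrp_subsys_enabled_key; \
               extern struct static_key_true _x ## _cgrp_subsys_on_dfl_key;
       #include <linux/cgroup_subsys.h>
       #undef SUBSYS

       #define cgroup_subsys_enabled(ss)                                    \
               static_branch_likely(&ss ## _enabled_key)

       #define cgroup_subsys_on_dfl(ss)                                     \
               static_branch_likely(&ss ## _on_dfl_key)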
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      49d1dc4b
  7. 17 Sep 2015, 2 commits
    • cgroup: simplify threadgroup locking · 3014dde7
      Committed by Tejun Heo
      Note: This commit was originally committed as b5ba75b5 but got
            reverted by f9f9e7b7 due to the performance regression from
            the percpu_rwsem write down/up operations added to cgroup task
            migration path.  percpu_rwsem changes which alleviate the
            performance issue are pending for v4.4-rc1 merge window.
            Re-apply.
      
      Now that threadgroup locking is made global, code paths around it can
      be simplified.
      
      * lock-verify-unlock-retry dancing removed from __cgroup_procs_write().
      
      * Race protection against de_thread() removed from
        cgroup_update_dfl_csses().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/g/55F8097A.7000206@de.ibm.com
      3014dde7
    • sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem · 1ed13287
      Committed by Tejun Heo
      Note: This commit was originally committed as d59cfc09 but got
            reverted by 0c986253 due to the performance regression from
            the percpu_rwsem write down/up operations added to cgroup task
            migration path.  percpu_rwsem changes which alleviate the
            performance issue are pending for v4.4-rc1 merge window.
            Re-apply.
      
      The cgroup side of threadgroup locking uses signal_struct->group_rwsem
      to synchronize against threadgroup changes.  This per-process rwsem
      adds small overhead to thread creation, exit and exec paths, forces
      cgroup code paths to do lock-verify-unlock-retry dance in a couple
      places and makes it impossible to atomically perform operations across
      multiple processes.
      
      This patch replaces signal_struct->group_rwsem with a global
      percpu_rwsem cgroup_threadgroup_rwsem which is cheaper on the reader
      side and contained in cgroups proper.  This patch converts one-to-one.
      
      This does make the writer side heavier and lowers the granularity;
      however, cgroup process migration is a fairly cold path, we do want to
      optimize thread operations over it, and cgroup migration operations
      don't take enough time for the lower granularity to matter.
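      A hedged sketch of the one-to-one conversion (reader side around
      threadgroup changes, writer side around a migration attempt):

       struct percpu_rw_semaphore cgroup_threadgroup_rwsem;

       /* cheap reader side, taken in the fork/exit/exec paths */
       static inline void threadgroup_change_begin(struct task_struct *tsk)
       {
               percpu_down_read(&cgroup_threadgroup_rwsem);
       }

       static inline void threadgroup_change_end(struct task_struct *tsk)
       {
               percpu_up_read(&cgroup_threadgroup_rwsem);
       }

       /* heavier writer side, once around a whole migration attempt:
        *   percpu_down_write(&cgroup_threadgroup_rwsem);
        *   ... migrate ...
        *   percpu_up_write(&cgroup_threadgroup_rwsem);
        */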
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/g/55F8097A.7000206@de.ibm.com
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      1ed13287
  8. 16 Sep 2015, 2 commits
    • Revert "sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem" · 0c986253
      Committed by Tejun Heo
      This reverts commit d59cfc09.
      
      d59cfc09 ("sched, cgroup: replace signal_struct->group_rwsem with
      a global percpu_rwsem") and b5ba75b5 ("cgroup: simplify
      threadgroup locking") changed how cgroup synchronizes against task
      fork and exits so that it uses global percpu_rwsem instead of
      per-process rwsem; unfortunately, the write [un]lock paths of
      percpu_rwsem always involve synchronize_rcu_expedited() which turned
      out to be too expensive.
      
      Improvements for percpu_rwsem are scheduled to be merged in the coming
      v4.4-rc1 merge window which alleviates this issue.  For now, revert
      the two commits to restore per-process rwsem.  They will be re-applied
      for the v4.4-rc1 merge window.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/g/55F8097A.7000206@de.ibm.com
      Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: stable@vger.kernel.org # v4.2+
      0c986253
    • Revert "cgroup: simplify threadgroup locking" · f9f9e7b7
      Committed by Tejun Heo
      This reverts commit b5ba75b5.
      
      d59cfc09 ("sched, cgroup: replace signal_struct->group_rwsem with
      a global percpu_rwsem") and b5ba75b5 ("cgroup: simplify
      threadgroup locking") changed how cgroup synchronizes against task
      fork and exits so that it uses global percpu_rwsem instead of
      per-process rwsem; unfortunately, the write [un]lock paths of
      percpu_rwsem always involve synchronize_rcu_expedited() which turned
      out to be too expensive.
      
      Improvements for percpu_rwsem are scheduled to be merged in the coming
      v4.4-rc1 merge window which alleviates this issue.  For now, revert
      the two commits to restore per-process rwsem.  They will be re-applied
      for the v4.4-rc1 merge window.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Link: http://lkml.kernel.org/g/55F8097A.7000206@de.ibm.com
      Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: stable@vger.kernel.org # v4.2+
      f9f9e7b7
  9. 12 Sep 2015, 1 commit
    • sys_membarrier(): system-wide memory barrier (generic, x86) · 5b25b13a
      Committed by Mathieu Desnoyers
      Here is an implementation of a new system call, sys_membarrier(), which
      executes a memory barrier on all threads running on the system.  It is
      implemented by calling synchronize_sched().  It can be used to
      distribute the cost of user-space memory barriers asymmetrically by
      transforming pairs of memory barriers into pairs consisting of
      sys_membarrier() and a compiler barrier.  For synchronization primitives
      that distinguish between read-side and write-side (e.g.  userspace RCU
      [1], rwlocks), the read-side can be accelerated significantly by moving
      the bulk of the memory barrier overhead to the write-side.
      
      The existing applications of which I am aware that would be improved by
      this system call are as follows:
      
      * Through Userspace RCU library (http://urcu.so)
        - DNS server (Knot DNS) https://www.knot-dns.cz/
        - Network sniffer (http://netsniff-ng.org/)
        - Distributed object storage (https://sheepdog.github.io/sheepdog/)
        - User-space tracing (http://lttng.org)
        - Network storage system (https://www.gluster.org/)
        - Virtual routers (https://events.linuxfoundation.org/sites/events/files/slides/DPDK_RCU_0MQ.pdf)
        - Financial software (https://lkml.org/lkml/2015/3/23/189)
      
      Those projects use RCU in userspace to increase read-side speed and
      scalability compared to locking.  Especially in the case of RCU used by
      libraries, sys_membarrier can speed up the read-side by moving the bulk of
      the memory barrier cost to synchronize_rcu().
      
      * Direct users of sys_membarrier
        - core dotnet garbage collector (https://github.com/dotnet/coreclr/issues/198)
      
      Microsoft core dotnet GC developers are planning to use the mprotect()
      side-effect of issuing memory barriers through IPIs as a way to implement
      Windows FlushProcessWriteBuffers() on Linux.  They are referring to
      sys_membarrier in their github thread, specifically stating that
      sys_membarrier() is what they are looking for.
      
      To explain the benefit of this scheme, let's introduce two example threads:
      
      Thread A (non-frequent, e.g. executing liburcu synchronize_rcu())
      Thread B (frequent, e.g. executing liburcu
      rcu_read_lock()/rcu_read_unlock())
      
      In a scheme where all smp_mb() in thread A are ordering memory accesses
      with respect to smp_mb() present in Thread B, we can change each
      smp_mb() within Thread A into calls to sys_membarrier() and each
      smp_mb() within Thread B into compiler barriers "barrier()".
      
      Before the change, we had, for each smp_mb() pairs:
      
      Thread A                    Thread B
      previous mem accesses       previous mem accesses
      smp_mb()                    smp_mb()
      following mem accesses      following mem accesses
      
      After the change, these pairs become:
      
      Thread A                    Thread B
      prev mem accesses           prev mem accesses
      sys_membarrier()            barrier()
      follow mem accesses         follow mem accesses
      
      As we can see, there are two possible scenarios: either Thread B memory
      accesses do not happen concurrently with Thread A accesses (1), or they
      do (2).
      
      1) Non-concurrent Thread A vs Thread B accesses:
      
      Thread A                    Thread B
      prev mem accesses
      sys_membarrier()
      follow mem accesses
                                  prev mem accesses
                                  barrier()
                                  follow mem accesses
      
      In this case, thread B accesses will be weakly ordered. This is OK,
      because at that point, thread A is not particularly interested in
      ordering them with respect to its own accesses.
      
      2) Concurrent Thread A vs Thread B accesses
      
      Thread A                    Thread B
      prev mem accesses           prev mem accesses
      sys_membarrier()            barrier()
      follow mem accesses         follow mem accesses
      
      In this case, thread B accesses, which are ensured to be in program
      order thanks to the compiler barrier, will be "upgraded" to full
      smp_mb() by synchronize_sched().
      
      * Benchmarks
      
      On Intel Xeon E5405 (8 cores)
      (one thread is calling sys_membarrier, the other 7 threads are busy
      looping)
      
       1000 non-expedited sys_membarrier calls in 33s = 33 milliseconds/call.
      
      * User-space user of this system call: Userspace RCU library
      
      Both the signal-based and the sys_membarrier userspace RCU schemes
      permit us to remove the memory barrier from the userspace RCU
      rcu_read_lock() and rcu_read_unlock() primitives, thus significantly
      accelerating them. These memory barriers are replaced by compiler
      barriers on the read-side, and all matching memory barriers on the
      write-side are turned into an invocation of a memory barrier on all
      active threads in the process. By letting the kernel perform this
      synchronization rather than dumbly sending a signal to every thread of
      the process (as we currently do), we diminish the number of unnecessary
      wake-ups and only issue the memory barriers on active threads.
      Non-running threads do not need to execute such a barrier anyway,
      because it is implied by the scheduler context switches.
      
      Results in liburcu:
      
      Operations in 10s, 6 readers, 2 writers:
      
      memory barriers in reader:    1701557485 reads, 2202847 writes
      signal-based scheme:          9830061167 reads,    6700 writes
      sys_membarrier:               9952759104 reads,     425 writes
      sys_membarrier (dyn. check):  7970328887 reads,     425 writes
      
      The dynamic sys_membarrier availability check adds some overhead to
      the read-side compared to the signal-based scheme, but besides that,
      sys_membarrier slightly outperforms the signal-based scheme. However,
      this non-expedited sys_membarrier implementation has a much slower grace
      period than signal and memory barrier schemes.
      
      Besides diminishing the number of wake-ups, one major advantage of the
      membarrier system call over the signal-based scheme is that it does not
      need to reserve a signal. This plays much more nicely with libraries,
      and with processes injected into for tracing purposes, for which we
      cannot expect that signals will be unused by the application.
      
      An expedited version of this system call can be added later on to speed
      up the grace period. Its implementation will likely depend on reading
      the cpu_curr()->mm without holding each CPU's rq lock.
      
      This patch adds the system call to x86 and to asm-generic.
      
      [1] http://urcu.so
      
      membarrier(2) man page:
      
      MEMBARRIER(2)              Linux Programmer's Manual             MEMBARRIER(2)
      
      NAME
             membarrier - issue memory barriers on a set of threads
      
      SYNOPSIS
             #include <linux/membarrier.h>
      
             int membarrier(int cmd, int flags);
      
      DESCRIPTION
             The cmd argument is one of the following:
      
             MEMBARRIER_CMD_QUERY
                    Query  the  set  of  supported commands. It returns a bitmask of
                    supported commands.
      
             MEMBARRIER_CMD_SHARED
                    Execute a memory barrier on all threads running on  the  system.
                    Upon  return from system call, the caller thread is ensured that
                    all running threads have passed through a state where all memory
                    accesses  to  user-space  addresses  match program order between
                    entry to and return from the system  call  (non-running  threads
                     are de facto in such a state). This covers threads from all
                     processes running on the system.  This command returns 0.
      
             The flags argument needs to be 0. For future extensions.
      
              All memory accesses performed  in  program  order  from  each  targeted
              thread are guaranteed to be ordered with respect to sys_membarrier(). If
             we use the semantic "barrier()" to represent a compiler barrier forcing
             memory  accesses  to  be performed in program order across the barrier,
             and smp_mb() to represent explicit memory barriers forcing full  memory
             ordering  across  the barrier, we have the following ordering table for
             each pair of barrier(), sys_membarrier() and smp_mb():
      
             The pair ordering is detailed as (O: ordered, X: not ordered):
      
                                    barrier()   smp_mb() sys_membarrier()
                    barrier()          X           X            O
                    smp_mb()           X           O            O
                    sys_membarrier()   O           O            O
      
      RETURN VALUE
             On success, these system calls return zero.  On error, -1 is  returned,
             and errno is set appropriately. For a given command, with flags
             argument set to 0, this system call is guaranteed to always return the
             same value until reboot.
      
      ERRORS
             ENOSYS System call is not implemented.
      
             EINVAL Invalid arguments.
      
      Linux                             2015-04-15                     MEMBARRIER(2)
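      A hedged user-space usage sketch of the slow-path side described
      above (the fast-path side keeps only compiler barriers); error
      handling is minimal:

       #include <linux/membarrier.h>
       #include <sys/syscall.h>
       #include <unistd.h>
       #include <stdio.h>

       int main(void)
       {
               int cmds = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);

               if (cmds < 0 || !(cmds & MEMBARRIER_CMD_SHARED)) {
                       fprintf(stderr, "MEMBARRIER_CMD_SHARED not supported\n");
                       return 1;
               }

               /* replaces an smp_mb() on the slow path; pairs with
                * barrier() on the fast path as in the tables above */
               syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0);
               return 0;
       }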
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Nicholas Miell <nmiell@comcast.net>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Stephen Hemminger <stephen@networkplumber.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Pranith Kumar <bobby.prani@gmail.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b25b13a
  10. 11 Sep 2015, 2 commits
    • sysctl: fix int -> unsigned long assignments in INT_MIN case · 9a5bc726
      Committed by Ilya Dryomov
      The following
      
          if (val < 0)
              *lvalp = (unsigned long)-val;
      
      is incorrect because the compiler is free to assume -val to be positive
      and use a sign-extend instruction for extending the bit pattern.  This is
      a problem if val == INT_MIN:
      
          # echo -2147483648 >/proc/sys/dev/scsi/logging_level
          # cat /proc/sys/dev/scsi/logging_level
          -18446744071562067968
      
      Cast to unsigned long before negation - that way we first sign-extend and
      then negate an unsigned, which is well defined.  With this:
      
          # cat /proc/sys/dev/scsi/logging_level
          -2147483648
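      A minimal sketch of the corrected conversion in the helper quoted
      above:

          if (val < 0)
              *lvalp = -(unsigned long)val;   /* cast first, then negate */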
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Cc: Mikulas Patocka <mikulas@twibright.com>
      Cc: Robert Xiao <nneonneo@gmail.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9a5bc726
    • kexec: export KERNEL_IMAGE_SIZE to vmcoreinfo · 1303a27c
      Committed by Baoquan He
      On x86_64, since v2.6.26 KERNEL_IMAGE_SIZE was changed to 512M, and
      accordingly MODULES_VADDR was changed to 0xffffffffa0000000.  However,
      in v3.12 Kees Cook introduced kaslr to randomise the location of the
      kernel, and the kernel text mapping address space was enlarged from
      512M to 1G.  That means KERNEL_IMAGE_SIZE is now variable: its value
      is 512M when kaslr support is not compiled in and 1G when it is.
      Accordingly, MODULES_VADDR was changed too, to be:
      
          #define MODULES_VADDR    (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
      
      So when kaslr is compiled in and enabled, the kernel text mapping
      address space and the modules vaddr space need to be adjusted.
      Otherwise makedumpfile will break since the addresses of some symbols
      are not correct.

      Hence KERNEL_IMAGE_SIZE needs to be exported to vmcoreinfo and read
      by makedumpfile to help calculate MODULES_VADDR.
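      A hedged sketch of the export, assuming it lands in the x86_64 arch
      vmcoreinfo hook:

       void arch_crash_save_vmcoreinfo(void)
       {
               /* ... existing exports ... */
               VMCOREINFO_NUMBER(KERNEL_IMAGE_SIZE);
       }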
      Signed-off-by: Baoquan He <bhe@redhat.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1303a27c