  1. 29 Jul 2016, 1 commit
  2. 20 May 2016, 2 commits
    • cpuset: use static key better and convert to new API · 002f2906
      Committed by Vlastimil Babka
      An important function for cpusets is cpuset_node_allowed(), which
      exploits the fact that if there is only a single root cpuset, any
      node must be trivially allowed.  But the check "nr_cpusets() <= 1"
      doesn't use the cpusets_enabled_key static key the way static keys
      are meant to be used, where jump labels eliminate the branching
      overhead entirely.
      
      This patch converts it so that static key is used properly.  It's also
      switched to the new static key API and the checking functions are
      converted to return bool instead of int.  We also provide a new variant
      __cpuset_zone_allowed() which expects that the static key check was
      already done and the key was enabled.  This is needed for
      get_page_from_freelist() where we want to also avoid the relatively
      slower check when ALLOC_CPUSET is not set in alloc_flags.
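
      A minimal sketch of the resulting pattern (condensed; this assumes the
      standard <linux/jump_label.h> machinery rather than quoting the patch
      verbatim):

      	DEFINE_STATIC_KEY_FALSE(cpusets_enabled_key);

      	static inline bool cpusets_enabled(void)
      	{
      		/* compiles to a patched jump, not a load-and-branch */
      		return static_branch_unlikely(&cpusets_enabled_key);
      	}

      	static inline bool cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
      	{
      		if (cpusets_enabled())
      			return __cpuset_zone_allowed(z, gfp_mask);
      		return true;
      	}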
      
      The impact on the page allocator microbenchmark is less than expected
      but the cleanup in itself is worthwhile.
      
                                                   4.6.0-rc2                  4.6.0-rc2
                                             multcheck-v1r20               cpuset-v1r20
        Min      alloc-odr0-1               348.00 (  0.00%)           348.00 (  0.00%)
        Min      alloc-odr0-2               254.00 (  0.00%)           254.00 (  0.00%)
        Min      alloc-odr0-4               213.00 (  0.00%)           213.00 (  0.00%)
        Min      alloc-odr0-8               186.00 (  0.00%)           183.00 (  1.61%)
        Min      alloc-odr0-16              173.00 (  0.00%)           171.00 (  1.16%)
        Min      alloc-odr0-32              166.00 (  0.00%)           163.00 (  1.81%)
        Min      alloc-odr0-64              162.00 (  0.00%)           159.00 (  1.85%)
        Min      alloc-odr0-128             160.00 (  0.00%)           157.00 (  1.88%)
        Min      alloc-odr0-256             169.00 (  0.00%)           166.00 (  1.78%)
        Min      alloc-odr0-512             180.00 (  0.00%)           180.00 (  0.00%)
        Min      alloc-odr0-1024            188.00 (  0.00%)           187.00 (  0.53%)
        Min      alloc-odr0-2048            194.00 (  0.00%)           193.00 (  0.52%)
        Min      alloc-odr0-4096            199.00 (  0.00%)           198.00 (  0.50%)
        Min      alloc-odr0-8192            202.00 (  0.00%)           201.00 (  0.50%)
        Min      alloc-odr0-16384           203.00 (  0.00%)           202.00 (  0.49%)
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Zefan Li <lizefan@huawei.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/nodemask.h: create next_node_in() helper · 0edaf86c
      Committed by Andrew Morton
      Lots of code does
      
      	node = next_node(node, XXX);
      	if (node == MAX_NUMNODES)
      		node = first_node(XXX);
      
      so create next_node_in() to do this and use it in various places.
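
      The helper is essentially the open-coded pattern above, wrapped; a
      sketch of its shape:

      	#define next_node_in(n, src) __next_node_in((n), &(src))

      	int __next_node_in(int node, const nodemask_t *srcp)
      	{
      		int ret = __next_node(node, srcp);

      		if (ret == MAX_NUMNODES)
      			ret = __first_node(srcp);	/* wrap around */
      		return ret;
      	}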
      
      [mhocko@suse.com: use next_node_in() helper]
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Hui Zhu <zhuhui@xiaomi.com>
      Cc: Wang Xiaoqiang <wangxq10@lzu.edu.cn>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 26 Apr 2016, 1 commit
    • cgroup, cpuset: replace cpuset_post_attach_flush() with cgroup_subsys->post_attach callback · 5cf1cacb
      Committed by Tejun Heo
      Since e93ad19d ("cpuset: make mm migration asynchronous"), cpuset
      kicks off asynchronous NUMA node migration if necessary during task
      migration and flushes it from cpuset_post_attach_flush() which is
      called at the end of __cgroup_procs_write().  This is to avoid
      performing migration with cgroup_threadgroup_rwsem write-locked which
      can lead to deadlock through dependency on kworker creation.
      
      memcg has a similar issue with charge moving, so let's convert it to
      an official callback rather than the current one-off cpuset specific
      function.  This patch adds cgroup_subsys->post_attach callback and
      makes cpuset register cpuset_post_attach_flush() as its ->post_attach.
      
      The conversion is mostly one-to-one except that the new callback is
      called under cgroup_mutex.  This is to guarantee that no other
      migration operations are started before ->post_attach callbacks are
      finished.  cgroup_mutex is one of the outermost mutexes in the system
      and has never been, and shouldn't become, a problem.  We could add specialized
      synchronization around __cgroup_procs_write() but I don't think
      there's any noticeable benefit.
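
      In controller terms the conversion amounts to hanging the existing
      flush function off the new callback; a sketch of the registration
      (other fields elided):

      	struct cgroup_subsys cpuset_cgrp_subsys = {
      		.can_attach	= cpuset_can_attach,
      		.attach		= cpuset_attach,
      		.post_attach	= cpuset_post_attach_flush,	/* new */
      	};
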
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: <stable@vger.kernel.org> # 4.4+ prerequisite for the next patch
  4. 23 Feb 2016, 1 commit
  5. 17 Feb 2016, 1 commit
    • cgroup: introduce cgroup namespaces · a79a908f
      Committed by Aditya Kali
      Introduce the ability to create a new cgroup namespace. The newly created
      cgroup namespace remembers the cgroup of the process at the point
      of creation of the cgroup namespace (referred as cgroupns-root).
      The main purpose of cgroup namespace is to virtualize the contents
      of /proc/self/cgroup file. Processes inside a cgroup namespace
      are only able to see paths relative to their namespace root
      (unless they are moved outside of their cgroupns-root, at which point
       they will see a relative path from their cgroupns-root).
      For a correctly set up container this enables container tools
      (like libcontainer, lxc, lmctfy, etc.) to create completely virtualized
      containers without leaking the system-level cgroup hierarchy to the task.
      This patch only implements the 'unshare' part of the cgroupns.
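
      A minimal userspace sketch of that unshare step (CLONE_NEWCGROUP is
      the flag this series introduces; requires CAP_SYS_ADMIN):

      	#define _GNU_SOURCE
      	#include <sched.h>
      	#include <stdio.h>

      	int main(void)
      	{
      		/* pin the current cgroup as this namespace's root */
      		if (unshare(CLONE_NEWCGROUP) == -1) {
      			perror("unshare(CLONE_NEWCGROUP)");
      			return 1;
      		}
      		/* /proc/self/cgroup now shows paths relative to cgroupns-root */
      		return 0;
      	}
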
      Signed-off-by: Aditya Kali <adityakali@google.com>
      Signed-off-by: Serge Hallyn <serge.hallyn@canonical.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  6. 22 Jan 2016, 1 commit
    • cpuset: make mm migration asynchronous · e93ad19d
      Committed by Tejun Heo
      If "cpuset.memory_migrate" is set, when a process is moved from one
      cpuset to another with a different memory node mask, pages used by
      the process are migrated to the new set of nodes.  This was performed
      synchronously in the ->attach() callback, which is synchronized
      against process management.  Recently, the synchronization was changed
      from per-process rwsem to global percpu rwsem for simplicity and
      optimization.
      
      Combined with the synchronous mm migration, this led to deadlocks
      because mm migration could schedule a work item which may in turn try
      to create a new worker blocking on the process management lock held
      from cgroup process migration path.
      
      Such a heavy operation shouldn't be performed synchronously from that
      deep inside cgroup migration in the first place.  This patch punts the
      actual migration to an ordered workqueue and updates cgroup process
      migration and cpuset config update paths to flush the workqueue after
      all locks are released.  This way, the operations still seem
      synchronous to userland without entangling mm migration with process
      management synchronization.  CPU hotplug can also invoke mm migration
      but there's no reason for it to wait for mm migrations and thus
      doesn't synchronize against their completions.
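
      A sketch of the punt-and-flush shape (structure and names approximate
      the description above rather than quoting the patch; the work function
      that performs the actual migration and frees the item is elided):

      	static struct workqueue_struct *cpuset_migrate_mm_wq;	/* ordered */

      	struct cpuset_migrate_mm_work {
      		struct work_struct	work;
      		struct mm_struct	*mm;
      		nodemask_t		from, to;
      	};

      	static void cpuset_migrate_mm(struct mm_struct *mm,
      				      const nodemask_t *from, const nodemask_t *to)
      	{
      		struct cpuset_migrate_mm_work *mwork;

      		mwork = kzalloc(sizeof(*mwork), GFP_KERNEL);
      		if (!mwork) {
      			mmput(mm);
      			return;
      		}
      		mwork->mm = mm;
      		mwork->from = *from;
      		mwork->to = *to;
      		INIT_WORK(&mwork->work, cpuset_migrate_mm_workfn);
      		queue_work(cpuset_migrate_mm_wq, &mwork->work);
      	}

      	/* called after all locks have been released */
      	void cpuset_post_attach_flush(void)
      	{
      		flush_workqueue(cpuset_migrate_mm_wq);
      	}
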
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-and-tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: stable@vger.kernel.org # v4.4+
  7. 03 Dec 2015, 1 commit
    • cgroup: fix handling of multi-destination migration from subtree_control enabling · 1f7dd3e5
      Committed by Tejun Heo
      Consider the following v2 hierarchy.
      
        P0 (+memory) --- P1 (-memory) --- A
                                       \- B
             
      P0 has memory enabled in its subtree_control while P1 doesn't.  If
      both A and B contain processes, they would belong to the memory css of
      P1.  Now if memory is enabled on P1's subtree_control, memory csses
      should be created on both A and B and A's processes should be moved to
      the former and B's processes to the latter.  IOW, enabling controllers
      can cause atomic migrations into different csses.
      
      The core cgroup migration logic has been updated accordingly but the
      controller migration methods haven't and still assume that all tasks
      migrate to a single target css; furthermore, the methods were fed the
      css in which subtree_control was updated, which is the parent of the
      target csses.  The pids controller depends on the migration methods to
      move charges, and this made the controller attribute charges to the
      wrong csses, often triggering the following warning by driving a
      counter negative.
      
       WARNING: CPU: 1 PID: 1 at kernel/cgroup_pids.c:97 pids_cancel.constprop.6+0x31/0x40()
       Modules linked in:
       CPU: 1 PID: 1 Comm: systemd Not tainted 4.4.0-rc1+ #29
       ...
        ffffffff81f65382 ffff88007c043b90 ffffffff81551ffc 0000000000000000
        ffff88007c043bc8 ffffffff810de202 ffff88007a752000 ffff88007a29ab00
        ffff88007c043c80 ffff88007a1d8400 0000000000000001 ffff88007c043bd8
       Call Trace:
        [<ffffffff81551ffc>] dump_stack+0x4e/0x82
        [<ffffffff810de202>] warn_slowpath_common+0x82/0xc0
        [<ffffffff810de2fa>] warn_slowpath_null+0x1a/0x20
        [<ffffffff8118e031>] pids_cancel.constprop.6+0x31/0x40
        [<ffffffff8118e0fd>] pids_can_attach+0x6d/0xf0
        [<ffffffff81188a4c>] cgroup_taskset_migrate+0x6c/0x330
        [<ffffffff81188e05>] cgroup_migrate+0xf5/0x190
        [<ffffffff81189016>] cgroup_attach_task+0x176/0x200
        [<ffffffff8118949d>] __cgroup_procs_write+0x2ad/0x460
        [<ffffffff81189684>] cgroup_procs_write+0x14/0x20
        [<ffffffff811854e5>] cgroup_file_write+0x35/0x1c0
        [<ffffffff812e26f1>] kernfs_fop_write+0x141/0x190
        [<ffffffff81265f88>] __vfs_write+0x28/0xe0
        [<ffffffff812666fc>] vfs_write+0xac/0x1a0
        [<ffffffff81267019>] SyS_write+0x49/0xb0
        [<ffffffff81bcef32>] entry_SYSCALL_64_fastpath+0x12/0x76
      
      This patch fixes the bug by removing the @css parameter from the three
      migration methods, ->can_attach(), ->cancel_attach() and ->attach(), and
      updating the cgroup_taskset iteration helpers to also return the
      destination css in addition to the task being migrated.  All controllers
      are updated accordingly.
      
      * Controllers which don't care whether there are one or multiple
        target csses can be converted trivially.  cpu, io, freezer, perf,
        netclassid and netprio fall in this category.
      
      * cpuset's current implementation assumes that there's single source
        and destination and thus doesn't support v2 hierarchy already.  The
        only change made by this patchset is how that single destination css
        is obtained.
      
      * memory migration path already doesn't do anything on v2.  How the
        single destination css is obtained is updated and the prep stage of
        mem_cgroup_can_attach() is reordered to accommodate the change.
      
      * pids is the only controller which was affected by this bug.  It now
        correctly handles multi-destination migrations and no longer causes
        counter underflow from incorrect accounting.
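
      After the change a controller's attach methods take only the taskset,
      and the iterator hands back the destination css per task; a sketch
      with a hypothetical controller "foo":

      	static int foo_can_attach(struct cgroup_taskset *tset)
      	{
      		struct task_struct *task;
      		struct cgroup_subsys_state *dst_css;

      		cgroup_taskset_for_each(task, dst_css, tset) {
      			/* dst_css may differ between tasks in the set */
      		}
      		return 0;
      	}
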
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-and-tested-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Cc: Aleksa Sarai <cyphar@cyphar.com>
  8. 26 Nov 2015, 1 commit
    • cpuset: Replace all instances of time_t with time64_t · d2b43658
      Committed by Arnd Bergmann
      The following patch replaces all instances of time_t with time64_t i.e.
      change the type used for representing time from 32-bit to 64-bit. All
      32-bit kernels to date use a signed 32-bit time_t type, which can only
      represent time until January 2038. Since embedded systems running 32-bit
      Linux are going to survive beyond that date, we have to change all
      current uses, in a backwards compatible way.
      
      The patch also changes the function get_seconds() that returns a 32-bit
      integer to ktime_get_seconds() that returns seconds as 64-bit integer.
      
      The patch changes the type of ticks from time_t to u32. We keep ticks as
      32-bits as the function uses 32-bit arithmetic which would prove less
      expensive than 64-bit arithmetic and the function is expected to be
      called at least once every 32 seconds.
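
      The substitution, sketched (the helper is illustrative, not the
      patch's exact code):

      	static u32 fmeter_ticks(struct fmeter *fmp)	/* illustrative */
      	{
      		/* was: time_t now = get_seconds(); */
      		time64_t now = ktime_get_seconds();	/* 64-bit, 2038-safe */

      		/* callers run at least every 32s, so 32-bit math suffices */
      		return (u32)(now - fmp->time);
      	}
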
      Signed-off-by: Heena Sirwani <heenasirwani@gmail.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  9. 06 Nov 2015, 1 commit
  10. 16 Oct 2015, 1 commit
    • cgroup: replace cgroup_has_tasks() with cgroup_is_populated() · 27bd4dbb
      Committed by Tejun Heo
      Currently, cgroup_has_tasks() tests whether the target cgroup has any
      css_set linked to it.  This works because a css_set's refcnt converges
      with the number of tasks linked to it and thus there's no css_set
      linked to a cgroup if it doesn't have any live tasks.
      
      To help tracking resource usage of zombie tasks, putting the ref of
      css_set will be separated from disassociating the task from the
      css_set which means that a cgroup may have css_sets linked to it even
      when it doesn't have any live tasks.
      
      This patch replaces cgroup_has_tasks() with cgroup_is_populated(),
      which tests cgroup->nr_populated instead, a count of the populated
      css_sets.  Unlike cgroup_has_tasks(),
      cgroup_is_populated() is recursive - if any of the descendants is
      populated, the cgroup is populated too.  While this changes the
      meaning of the test, all the existing users are okay with the change.
      
      While at it, replace the open-coded ->populated_cnt test in
      cgroup_events_show() with cgroup_is_populated().
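
      As a sketch, the new predicate (field name per the description above;
      the accounting that keeps it current lives elsewhere):

      	static inline bool cgroup_is_populated(struct cgroup *cgrp)
      	{
      		/* non-zero if this cgroup or any descendant has tasks */
      		return cgrp->nr_populated;
      	}
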
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
  11. 23 Sep 2015, 2 commits
    • cgroup, memcg, cpuset: implement cgroup_taskset_for_each_leader() · 4530eddb
      Committed by Tejun Heo
      It wasn't explicitly documented but, when a process is being migrated,
      cpuset and memcg depend on cgroup_taskset_first() returning the
      threadgroup leader; however, this approach is fragile and
      would no longer work for the planned multi-process migration.
      
      This patch introduces explicit cgroup_taskset_for_each_leader() which
      iterates over only the threadgroup leaders and replaces
      cgroup_taskset_first() usages for accessing the leader with it.
      
      This prepares both memcg and cpuset for multi-process migration.  This
      patch also updates the documentation for cgroup_taskset_for_each() to
      clarify the iteration rules and removes comments mentioning task
      ordering in tasksets.
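
      A sketch of the new iterator on a taskset (function name hypothetical):

      	static void foo_attach_leaders(struct cgroup_taskset *tset)
      	{
      		struct task_struct *leader;

      		cgroup_taskset_for_each_leader(leader, tset) {
      			/* each threadgroup leader is visited exactly once */
      		}
      	}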
      
      v2: A previous patch which added threadgroup leader test was dropped.
          Patch updated accordingly.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
    • cpuset: migrate memory only for threadgroup leaders · 3df9ca0a
      Committed by Tejun Heo
      If memory_migrate flag is set, cpuset migrates memory according to the
      destination css's nodemask.  The current implementation migrates memory
      whenever any thread of a process is migrated making the behavior
      somewhat arbitrary.  Let's tie memory operations to the threadgroup
      leader so that memory is migrated only when the leader is migrated.
      
      While this is a behavior change, given the inherent fuzziness, this
      change is not too likely to be noticed and allows us to clearly define
      who owns the memory (always the leader) and helps the planned atomic
      multi-process migration.
      
      Note that we're currently migrating memory in migration path proper
      while holding all the locks.  In the long term, this should be moved
      out to an async work item.
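
      In the attach path this boils down to gating the mm work on the
      leader; an approximate sketch:

      	cgroup_taskset_for_each_leader(leader, tset) {
      		struct mm_struct *mm = get_task_mm(leader);

      		if (mm) {
      			mpol_rebind_mm(mm, &cpuset_attach_nodemask_to);
      			if (is_memory_migrate(cs))
      				cpuset_migrate_mm(mm, &cpuset_attach_nodemask_from,
      						  &cpuset_attach_nodemask_to);
      			mmput(mm);
      		}
      	}
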
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
  12. 19 Sep 2015, 1 commit
    • cgroup: replace cftype->mode with CFTYPE_WORLD_WRITABLE · 7dbdb199
      Committed by Tejun Heo
      cftype->mode allows controllers to give arbitrary permissions to
      interface knobs.  Except for "cgroup.event_control", the existing uses
      are spurious.
      
      * Some explicitly specify S_IRUGO | S_IWUSR even though that's the
        default.
      
      * "cpuset.memory_pressure" specifies S_IRUGO while also setting a
        write callback which returns -EACCES.  All it needs to do is simply
        not set a write callback.
      
      "cgroup.event_control" uses cftype->mode to make the file
      world-writable.  It's a misdesigned interface and we don't want
      controllers to be tweaking interface file permissions in general.
      This patch removes cftype->mode and all its spurious uses and
      implements CFTYPE_WORLD_WRITABLE for "cgroup.event_control" which is
      marked as compatibility-only.
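
      Sketch of the flag-based replacement (entry trimmed to the relevant
      fields; the handler name is assumed):

      	static struct cftype mem_cgroup_legacy_files[] = {
      		{
      			.name = "cgroup.event_control",	/* compatibility only */
      			.flags = CFTYPE_WORLD_WRITABLE,
      			.write = memcg_write_event_control,
      		},
      		{ }	/* terminator */
      	};
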
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
  13. 18 Sep 2015, 1 commit
    • cgroup: replace cgroup_on_dfl() tests in controllers with cgroup_subsys_on_dfl() · 9e10a130
      Committed by Tejun Heo
      cgroup_on_dfl() tests whether the cgroup's root is the default
      hierarchy; however, an individual controller is only interested in
      whether the controller is attached to the default hierarchy and never
      tests a cgroup which doesn't belong to the hierarchy that the
      controller is attached to.
      
      This patch replaces cgroup_on_dfl() tests in controllers with faster
      static_key based cgroup_subsys_on_dfl().  This leaves cgroup core as
      the only user of cgroup_on_dfl() and the function is moved from the
      header file to cgroup.c.
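
      The call-site conversion, sketched (the guarded action is a
      placeholder):

      	/* before: resolves the cgroup's root each time */
      	if (cgroup_on_dfl(cs->css.cgroup))
      		handle_dfl_case();

      	/* after: static_key-backed, per-subsystem test */
      	if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
      		handle_dfl_case();
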
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Zefan Li <lizefan@huawei.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
  14. 10 Aug 2015, 1 commit
    • cpuset: use trialcs->mems_allowed as a temp variable · 24ee3cf8
      Committed by Alban Crequy
      The comment says trialcs->mems_allowed is used as a temp variable, but
      the code didn't match.  Change the code to match the comment.
      
      This fixes an issue when writing to cpuset.mems while a sub-directory
      exists: one needs to write several times for the information to persist:
      
      | root@alban:/sys/fs/cgroup/cpuset# mkdir footest9
      | root@alban:/sys/fs/cgroup/cpuset# cd footest9
      | root@alban:/sys/fs/cgroup/cpuset/footest9# mkdir aa
      | root@alban:/sys/fs/cgroup/cpuset/footest9# cat cpuset.mems
      |
      | root@alban:/sys/fs/cgroup/cpuset/footest9# echo 0 > cpuset.mems
      | root@alban:/sys/fs/cgroup/cpuset/footest9# cat cpuset.mems
      |
      | root@alban:/sys/fs/cgroup/cpuset/footest9# echo 0 > cpuset.mems
      | root@alban:/sys/fs/cgroup/cpuset/footest9# cat cpuset.mems
      | 0
      | root@alban:/sys/fs/cgroup/cpuset/footest9# cat aa/cpuset.mems
      |
      | root@alban:/sys/fs/cgroup/cpuset/footest9# echo 0 > aa/cpuset.mems
      | root@alban:/sys/fs/cgroup/cpuset/footest9# cat aa/cpuset.mems
      | 0
      | root@alban:/sys/fs/cgroup/cpuset/footest9#
      
      This should help fix the following issue in Docker:
      https://github.com/opencontainers/runc/issues/133
      In some conditions, a Docker container needs to be started twice in
      order to work.
      Signed-off-by: Alban Crequy <alban@endocode.com>
      Tested-by: Iago López Galeiras <iago@endocode.com>
      Cc: <stable@vger.kernel.org> # 3.17+
      Acked-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  15. 15 Apr 2015, 1 commit
  16. 20 Mar 2015, 1 commit
    • cpusets, isolcpus: exclude isolcpus from load balancing in cpusets · 47b8ea71
      Committed by Rik van Riel
      Ensure that cpus specified with the isolcpus= boot commandline
      option stay outside of the load balancing in the kernel scheduler.
      
      Operations like load balancing can introduce unwanted latencies,
      which is exactly what the isolcpus= commandline is there to prevent.
      
      Previously, simply creating a new cpuset, without even touching the
      cpuset.cpus field inside the new cpuset, would undo the effects of
      isolcpus=, by creating a scheduler domain spanning the whole system,
      and setting up load balancing inside that domain. The cpuset root
      cpuset.cpus file is read-only, so there was not even a way to undo
      that effect.
      
      This does not impact the majority of cpusets users, since isolcpus=
      is a fairly specialized feature used for realtime purposes.
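
      For reference, the isolation is requested at boot; the CPU list below
      is illustrative:

      	# on the kernel command line
      	isolcpus=2-5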
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: cgroups@vger.kernel.org
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Tested-by: David Rientjes <rientjes@google.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  17. 03 Mar 2015, 3 commits
  18. 14 Feb 2015, 1 commit
  19. 13 Feb 2015, 1 commit
  20. 28 Oct 2014, 2 commits
  21. 27 Oct 2014, 3 commits
  22. 25 Sep 2014, 1 commit
    • cpuset: PF_SPREAD_PAGE and PF_SPREAD_SLAB should be atomic flags · 2ad654bc
      Committed by Zefan Li
      When we change cpuset.memory_spread_{page,slab}, cpuset will flip the
      PF_SPREAD_{PAGE,SLAB} bit of tsk->flags for each task in that cpuset.
      This should be done using atomic bitops, but currently we don't,
      which is broken.
      
      Tetsuo reported a hard-to-reproduce kernel crash on RHEL6, which happened
      when one thread tried to clear PF_USED_MATH while at the same time another
      thread tried to flip PF_SPREAD_PAGE/PF_SPREAD_SLAB. They both operate on
      the same task.
      
      Here's the full report:
      https://lkml.org/lkml/2014/9/19/230
      
      To fix this, we make PF_SPREAD_PAGE and PF_SPREAD_SLAB atomic flags.
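
      A sketch of the fix: the spread bits move to a separate word that is
      only touched with atomic bitops (bit values and helper names assumed
      to mirror the description):

      	#define PFA_SPREAD_PAGE	1	/* on task_struct->atomic_flags */
      	#define PFA_SPREAD_SLAB	2

      	static inline void task_set_spread_page(struct task_struct *p)
      	{
      		set_bit(PFA_SPREAD_PAGE, &p->atomic_flags);
      	}

      	static inline bool task_spread_page(struct task_struct *p)
      	{
      		return test_bit(PFA_SPREAD_PAGE, &p->atomic_flags);
      	}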
      
      v4:
      - updated mm/slab.c. (Fengguang Wu)
      - updated Documentation.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Miao Xie <miaox@cn.fujitsu.com>
      Cc: Kees Cook <keescook@chromium.org>
      Fixes: 950592f7 ("cpusets: update tasks' page/slab spread flags in time")
      Cc: <stable@vger.kernel.org> # 2.6.31+
      Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Zefan Li <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  23. 19 Sep 2014, 1 commit
  24. 30 Jul 2014, 1 commit
  25. 15 Jul 2014, 1 commit
    • cgroup: rename cgroup_subsys->base_cftypes to ->legacy_cftypes · 5577964e
      Committed by Tejun Heo
      Currently, cgroup_subsys->base_cftypes is used for both the unified
      default hierarchy and legacy ones and subsystems can mark each file
      with either CFTYPE_ONLY_ON_DFL or CFTYPE_INSANE if it has to appear
      only on one of them.  This is quite hairy and error-prone.  Also, we
      may end up exposing interface files to the default hierarchy without
      thinking it through.
      
      cgroup_subsys will grow two separate cftype arrays and apply each only
      on the hierarchies of the matching type.  This will allow organizing
      cftypes in a lot clearer way and encourage subsystems to scrutinize
      the interface which is being exposed in the new default hierarchy.
      
      In preparation, this patch renames cgroup_subsys->base_cftypes to
      cgroup_subsys->legacy_cftypes.  This patch is a pure rename.
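
      In a controller definition the change is mechanical; a sketch ("files"
      stands in for the controller's cftype array):

      	struct cgroup_subsys cpuset_cgrp_subsys = {
      		/* was: .base_cftypes = files, */
      		.legacy_cftypes	= files,
      	};
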
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Acked-by: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Aristeu Rozanski <aris@redhat.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  26. 10 Jul 2014, 8 commits
    • cpuset: export effective masks to userspace · afd1a8b3
      Committed by Li Zefan
      cpuset.cpus and cpuset.mems are the configured masks, and we need
      to export effective masks to userspace, so users know the real
      cpus_allowed and mems_allowed that apply to the tasks in a cpuset.
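
      The new read-only files sit alongside the configured ones; the values
      below are illustrative:

      	# cat cpuset.cpus
      	0-7
      	# cat cpuset.effective_cpus
      	0-3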
      
      v2:
      - export those masks unconditionally, suggested by Tejun.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cpuset: allow writing offlined masks to cpuset.cpus/mems · 5d8ba82c
      Committed by Li Zefan
      As the configured masks won't be limited by their parent, and the top
      cpuset's masks won't change when hotplug happens, it's natural to
      allow writing offlined masks to the configured masks.
      
      If on default hierarchy:
      
      	# echo 0 > /sys/devices/system/cpu/cpu1/online
      	# mkdir /cpuset/sub
      	# echo 1 > /cpuset/sub/cpuset.cpus
      	# cat /cpuset/sub/cpuset.cpus
      	1
      
      If on legacy hierarchy:
      
      	# echo 0 > /sys/devices/system/cpu/cpu1/online
      	# mkdir /cpuset/sub
      	# echo 1 > /cpuset/sub/cpuset.cpus
      	-bash: echo: write error: Invalid argument
      
      Note the checks don't need to be gated by cgroup_on_dfl, because we've
      initialized top_cpuset.{cpus,mems}_allowed accordingly in cpuset_bind().
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cpuset: enable onlined cpu/node in effective masks · be4c9dd7
      Committed by Li Zefan
      First, offline cpu1:
      
        # echo 0-1 > cpuset.cpus
        # echo 0 > /sys/devices/system/cpu/cpu1/online
        # cat cpuset.cpus
        0-1
        # cat cpuset.effective_cpus
        0
      
      Then online it:
      
        # echo 1 > /sys/devices/system/cpu/cpu1/online
        # cat cpuset.cpus
        0-1
        # cat cpuset.effective_cpus
        0-1
      
      And cpuset will bring it back to the effective mask.
      
      The implementation is quite straightforward. Instead of calculating the
      offlined cpus/mems and doing updates, we just set the new effective_mask
      to online_mask & configured_mask.
      
      This is a behavior change for default hierarchy, so legacy hierarchy
      won't be affected.
      
      v2:
      - made the refactoring of cpuset_hotplug_update_tasks() a separate
        patch, as suggested by Tejun.
      - make hotplug_update_tasks_insane() use @new_cpus and @new_mems as
        hotplug_update_tasks_sane() does.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cpuset: refactor cpuset_hotplug_update_tasks() · 390a36aa
      Committed by Li Zefan
      We mix the handling of the default hierarchy and the legacy hierarchy in
      the same function, and it's quite messy, so split it into two functions.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cpuset: make cs->{cpus, mems}_allowed as user-configured masks · 7e88291b
      Committed by Li Zefan
      Now that we've used effective cpumasks to enforce the hierarchical
      behavior, we can use cs->{cpus,mems}_allowed as the configured masks.
      
      Configured masks can be changed by writing cpuset.cpus and cpuset.mems
      only. The new behaviors are:
      
      - They won't be changed by hotplug anymore.
      - They won't be limited by its parent's masks.
      
      This is a behavior change, but it won't take effect unless mounted with
      sane_behavior.
      
      v2:
      - Add comments to explain the differences between configured masks and
      effective masks.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cpuset: apply cs->effective_{cpus,mems} · ae1c8023
      Committed by Li Zefan
      Now we can use cs->effective_{cpus,mems} as effective masks. It's
      used whenever:
      
      - we update tasks' cpus_allowed/mems_allowed,
      - we want to retrieve tasks_cs(tsk)'s cpus_allowed/mems_allowed.
      
      They actually replace effective_{cpu,node}mask_cpuset().
      
      effective_mask == configured_mask & parent effective_mask, except when
      the result is empty, in which case it inherits the parent effective_mask.
      The result equals the mask computed from effective_{cpu,node}mask_cpuset().
      
      This won't affect the original legacy hierarchy, because in this case we
      make sure the effective masks are always the same with user-configured
      masks.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cpuset: initialize top_cpuset's configured masks at mount · 39bd0d15
      Committed by Li Zefan
      Since we now have to support different behaviors for the default
      hierarchy and the legacy hierarchy, top_cpuset's configured masks need
      to be initialized accordingly.
      
      Suppose we've offlined cpu1.
      
      On default hierarchy:
      
      	# mount -t cgroup -o __DEVEL__sane_behavior xxx /cpuset
      	# cat /cpuset/cpuset.cpus
      	0-15
      
      On legacy hierarchy:
      
      	# mount -t cgroup xxx /cpuset
      	# cat /cpuset/cpuset.cpus
      	0,2-15
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • cpuset: use effective cpumask to build sched domains · 8b5f1c52
      Committed by Li Zefan
      We're going to have separate user-configured masks and effective ones.
      
      Eventually configured masks can only be changed by writing cpuset.cpus
      and cpuset.mems, and they won't be restricted by the parent cpuset, while
      effective masks reflect cpu/memory hotplug and hierarchical restriction;
      these are the real masks that apply to the tasks in the cpuset.
      
      We calculate effective mask this way:
        - top cpuset's effective_mask == online_mask, otherwise
        - cpuset's effective_mask == configured_mask & parent effective_mask,
          if the result is empty, it inherits parent effective mask.
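
      A sketch of that computation with cpumask helpers (function name
      hypothetical; the empty-intersection fallback is folded in here for
      brevity):

      	static void compute_effective_cpus(struct cpumask *new_cpus,
      					   struct cpuset *cs, struct cpuset *parent)
      	{
      		cpumask_and(new_cpus, cs->cpus_allowed, parent->effective_cpus);
      		/* empty intersection: inherit the parent's effective mask */
      		if (cpumask_empty(new_cpus))
      			cpumask_copy(new_cpus, parent->effective_cpus);
      	}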
      
      Those behavior changes are for default hierarchy only. For legacy
      hierarchy, effective_mask and configured_mask are the same, so we won't
      break old interfaces.
      
      We should partition sched domains according to effective_cpus, which
      is the real cpulist that takes effects on tasks in the cpuset.
      
      This won't introduce behavior change.
      
      v2:
      - Add a comment for the call of rebuild_sched_domains(), suggested
      by Tejun.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>