1. 08 Feb 2008, 2 commits
  2. 26 Jan 2008, 1 commit
    • cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus() · 86ef5c9a
      Committed by Gautham R Shenoy
      Replace all lock_cpu_hotplug/unlock_cpu_hotplug from the kernel and use
      get_online_cpus and put_online_cpus instead as it highlights the
      refcount semantics in these operations.
      
      The new API guarantees protection against the cpu-hotplug operation, but
      it doesn't guarantee serialized access to any of the local data
structures.  Hence the changes need to be reviewed.
      
      In case of pseries_add_processor/pseries_remove_processor, use
      cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
      cpu_present_map there.
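
      A minimal sketch of the conversion this describes; do_something() and the
      surrounding loop are illustrative, not from the patch:

        /* before: coarse hotplug lock */
        lock_cpu_hotplug();
        for_each_online_cpu(cpu)
                do_something(cpu);
        unlock_cpu_hotplug();

        /* after: refcounted read-side protection from <linux/cpu.h> */
        get_online_cpus();
        for_each_online_cpu(cpu)
                do_something(cpu);      /* cpus cannot be unplugged here... */
        put_online_cpus();              /* ...but local data still needs its own locking */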
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 20 Oct 2007, 6 commits
    • hotplug cpu: migrate a task within its cpuset · 470fd646
      Committed by Cliff Wickman
      When a cpu is disabled, move_task_off_dead_cpu() is called for tasks that have
      been running on that cpu.
      
      Currently, such a task is migrated:
       1) to any cpu on the same node as the disabled cpu, which is both online
          and among that task's cpus_allowed
       2) to any cpu which is both online and among that task's cpus_allowed
      
      It is typical of a multithreaded application running on a large NUMA system to
      have its tasks confined to a cpuset so as to cluster them near the memory that
      they share.  Furthermore, it is typical to explicitly place such a task on a
      specific cpu in that cpuset.  And in that case the task's cpus_allowed
      includes only a single cpu.
      
      This patch adds a preference to migrate such a task to some cpu within
      its cpuset (and sets its cpus_allowed to its entire cpuset).
      
      With this patch, the task is migrated:
       1) to any cpu on the same node as the disabled cpu, which is both online
          and among that task's cpus_allowed
       2) to any online cpu within the task's cpuset
       3) to any cpu which is both online and among that task's cpus_allowed
      
      In order to do this, move_task_off_dead_cpu() must make a call to
      cpuset_cpus_allowed_locked(), a new subset of cpuset_cpus_allowed(), that will
      not block.  (name change - per Oleg's suggestion)
      
      Calls are made to cpuset_lock() and cpuset_unlock() in migration_call() to hold
      the cpuset mutex across the whole migrate_live_tasks() and
      migrate_dead_tasks() procedure.
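
      A rough sketch of the resulting selection order; apart from
      cpuset_cpus_allowed_locked(), the helper names and exact control flow are
      illustrative:

        /* inside move_task_off_dead_cpu(dead_cpu, p), with cpuset_lock() held */
        cpumask_t mask;

        /* 1) same node as the dead cpu, online, and in p->cpus_allowed */
        mask = node_to_cpumask(cpu_to_node(dead_cpu));
        cpus_and(mask, mask, p->cpus_allowed);
        cpus_and(mask, mask, cpu_online_map);

        /* 2) else any online cpu in p's cpuset (the non-blocking variant is
         *    safe here because the caller holds the cpuset mutex) */
        if (cpus_empty(mask)) {
                mask = cpuset_cpus_allowed_locked(p);
                cpus_and(mask, mask, cpu_online_map);
        }

        /* 3) else any online cpu in p->cpus_allowed */
        if (cpus_empty(mask))
                cpus_and(mask, p->cpus_allowed, cpu_online_map);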
      
      [akpm@linux-foundation.org: build fix]
      [pj@sgi.com: Fix indentation and spacing]
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Fix cpusets update_cpumask · 8707d8b8
      Committed by Paul Menage
      Cause writes to cpuset "cpus" file to update cpus_allowed for member tasks:
      
      - collect batches of tasks under tasklist_lock and then call
        set_cpus_allowed() on them outside the lock (since this can sleep).
      
      - add a simple generic priority heap type to allow efficient collection
        of batches of tasks to be processed without duplicating or missing any
        tasks in subsequent batches.
      
      - make "cpus" file update a no-op if the mask hasn't changed
      
      - fix race between update_cpumask() and sched_setaffinity() by making
        sched_setaffinity() post-check that it's not running on any cpus outside
        cpuset_cpus_allowed() (see the sketch after this list).
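
      A hedged sketch of that post-check, as the race fix might look in
      sched_setaffinity() (simplified control flow, not the literal diff):

        again:
                cpus_allowed = cpuset_cpus_allowed(p);
                cpus_and(new_mask, new_mask, cpus_allowed);
                retval = set_cpus_allowed(p, new_mask);

                if (!retval) {
                        /* re-read the cpuset mask: update_cpumask() may have
                         * shrunk it while we were setting the affinity */
                        cpus_allowed = cpuset_cpus_allowed(p);
                        if (!cpus_subset(new_mask, cpus_allowed)) {
                                new_mask = cpus_allowed;
                                goto again;
                        }
                }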
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Paul Menage <menage@google.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Serge Hallyn <serue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpusets: decrustify cpuset mask update code · 020958b6
      Committed by Paul Jackson
      Decrustify the kernel/cpuset.c 'cpus' and 'mems' updating code.
      
      Other than subtle improvements in the consistency of identifying
      white space at the beginning and end of passed-in masks, this
      doesn't make any visible difference in behaviour.  But it's
      one or two hundred kernel text bytes smaller, and easier to
      understand.
      
      [akpm@linux-foundation.org: coding-style fix]
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Reviewed-by: Paul Menage <menage@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpuset sched_load_balance flag · 029190c5
      Committed by Paul Jackson
      Add a new per-cpuset flag called 'sched_load_balance'.
      
      When enabled in a cpuset (the default), it tells the kernel scheduler
      to provide the normal load balancing on the CPUs in that cpuset,
      sometimes moving tasks from one CPU to a second CPU if the second CPU
      is less loaded and if that task is allowed to run there.
      
      When disabled (write "0" to the file), it tells the kernel scheduler
      that load balancing is not required for the CPUs in that cpuset.
      
      Even if this flag is disabled for some cpuset, the kernel may still
      have to load balance some or all of the CPUs in that cpuset, if some
      overlapping cpuset has its sched_load_balance flag enabled.
      
      If there are some CPUs that are not in any cpuset whose sched_load_balance
      flag is enabled, the kernel scheduler will not load balance tasks to those
      CPUs.
      
      Moreover the kernel will partition the 'sched domains' (non-overlapping
      sets of CPUs over which load balancing is attempted) into the finest
      granularity partition that it can find, while still keeping any two CPUs
      that are in the same sched_load_balance enabled cpuset in the same element
      of the partition.
      
      This serves two purposes:
       1) It provides a mechanism for real time isolation of some CPUs, and
       2) it can be used to improve performance on systems with many CPUs
          by supporting configurations in which load balancing is not done
          across all CPUs at once, but rather only done in several smaller
          disjoint sets of CPUs.
      
      This mechanism replaces the earlier overloading of the per-cpuset
      flag 'cpu_exclusive'; that overloading was removed in an earlier
      patch: cpuset-remove-sched-domain-hooks-from-cpusets
      
      See further the Documentation and comments in the code itself.
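
      A small userspace sketch of flipping the flag; it assumes the cpuset
      filesystem is mounted at /dev/cpuset and that a cpuset named "rt" already
      exists (both illustrative):

        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
                int fd = open("/dev/cpuset/rt/sched_load_balance", O_WRONLY);

                if (fd < 0)
                        return 1;
                write(fd, "0", 1);      /* "0" disables balancing; "1" (default) enables it */
                close(fd);
                return 0;
        }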
      
      [akpm@linux-foundation.org: don't be weird]
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Task Control Groups: make cpusets a client of cgroups · 8793d854
      Committed by Paul Menage
      Remove the filesystem support logic from the cpuset system and make cpusets
      a cgroup subsystem.
      
      The "cpuset" filesystem becomes a dummy filesystem; attempts to mount it get
      passed through to the cgroup filesystem with the appropriate options to
      emulate the old cpuset filesystem behaviour.
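
      A hedged userspace sketch of that pass-through; the mount point is
      illustrative and error handling is omitted:

        #include <sys/mount.h>

        int main(void)
        {
                /* legacy mount, still accepted after this change ... */
                mount("cpuset", "/dev/cpuset", "cpuset", 0, NULL);

                /* ... and intended to behave like mounting cgroups with the
                 * cpuset subsystem enabled, roughly: */
                /* mount("cpuset", "/dev/cpuset", "cgroup", 0, "cpuset"); */
                return 0;
        }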
      Signed-off-by: Paul Menage <menage@google.com>
      Cc: Serge E. Hallyn <serue@us.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Kirill Korotaev <dev@openvz.org>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpuset: zero malloc - revert the old cpuset fix · 55a230aa
      Committed by Paul Jackson
      The cpuset code to present a list of tasks using a cpuset to user space could
      write to an array that it had kmalloc'd, after a kmalloc request of zero size.
      
      The problem was that the code didn't check for writes past the allocated end
      of the array until -after- the first write.
      
      This is a race condition that is likely rare -- it would only show up if a
      cpuset went from being empty to having a task in it, during the brief time
      between the allocation and the first write.
      
      Prior to roughly 2.6.22 kernels, this was also a benign problem, because a
      zero kmalloc returned a few usable bytes anyway, and no harm was done with the
      bogus write.
      
      With the 2.6.22 kernel changes that issue a warning if code tries to write
      to the location returned from a zero-size allocation, this problem is no
      longer benign.  This cpuset code would occasionally trigger that warning.
      
      The fix is trivial -- check before storing into the array, not after, whether
      the array is big enough to hold the store.
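
      The shape of the fix, with illustrative names (the real code fills a pid
      array for the cpuset 'tasks' listing):

        /* before (buggy): store first, check afterwards */
        pids[n++] = p->pid;
        if (n == npids)
                break;

        /* after (fixed): check first, then store */
        if (n == npids)
                break;
        pids[n++] = p->pid;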
      
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: "Serge E. Hallyn" <serue@us.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Kirill Korotaev <dev@openvz.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 19 Oct 2007, 1 commit
  5. 17 Oct 2007, 4 commits
    • oom: compare cpuset mems_allowed instead of exclusive ancestors · bbe373f2
      Committed by David Rientjes
      Instead of testing for overlap in the memory nodes of the nearest
      exclusive ancestor of both current and the candidate task, it is better to
      simply test for intersection between the tasks' mems_allowed in their task
      descriptors.  This does not require taking callback_mutex since it is only
      used as a hint in the badness scoring.
      
      Tasks that do not have an intersection in their mems_allowed with the current
      task are not explicitly restricted from being OOM killed because it is quite
      possible that the candidate task has allocated memory there before and has
      since changed its mems_allowed.
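
      A sketch of the new hint in the badness scoring; the weighting shown is
      illustrative, only the nodes_intersects() test is the point:

        /* prefer not to kill tasks whose memory we probably cannot use */
        if (!nodes_intersects(p->mems_allowed, current->mems_allowed))
                points /= 8;    /* discourage, but do not exclude outright */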
      
      Cc: Andrea Arcangeli <andrea@suse.de>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpuset: remove sched domain hooks from cpusets · 607717a6
      Committed by Paul Jackson
      Remove the cpuset hooks that defined sched domains depending on the setting
      of the 'cpu_exclusive' flag.
      
      The cpu_exclusive flag can only be set on a child if it is set on the
      parent.
      
      This made that flag painfully unsuitable for use as a flag defining a
      partitioning of a system.
      
      It was entirely unobvious to a cpuset user what partitioning of sched
      domains they would be causing when they set that one cpu_exclusive bit on
      one cpuset, because it depended on what CPUs were in the remainder of that
      cpuset's siblings and child cpusets, after subtracting out other
      cpu_exclusive cpusets.
      
      Furthermore, there was no way on production systems to query the
      result.
      
      Using the cpu_exclusive flag for this was simply wrong from the get go.
      
      Fortunately, it was sufficiently borked that so far as I know, almost no
      successful use has been made of this.  One real time group did use it to
      effectively isolate CPUs from any load balancing efforts.  They are willing
      to adapt to alternative mechanisms for this, such as some way to manipulate
      the list of isolated CPUs on a running system.  They can do without this
      present cpu_exclusive based mechanism while we develop an alternative.
      
      There is a real risk, to the best of my understanding, of users
      accidentally setting up partitioned scheduler domains, inhibiting desired
      load balancing across all their CPUs, due to the nonobvious (from the
      cpuset perspective) side effects of the cpu_exclusive flag.
      
      Furthermore, since there was no way on a running system to see what one was
      doing with sched domains, this change will be invisible to any code using them.
      Unless they have real insight into the scheduler load balancing choices, they
      will be unable to detect that this change has been made in the kernel's
      behaviour.
      
      Initial discussion on lkml of this patch has generated much comment.  My
      (probably controversial) take on that discussion is that it has reached a
      rough consensus that the current cpuset cpu_exclusive mechanism for
      defining sched domains is borked.  There is no consensus on the
      replacement.  But since we can remove this mechanism, and since its
      continued presence risks causing unwanted partitioning of the scheduler's
      load balancing, we should remove it while we can, as we proceed to work on the
      replacement scheduler domain mechanisms.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Cc: Dinakar Guniguntala <dino@in.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Group short-lived and reclaimable kernel allocations · e12ba74d
      Committed by Mel Gorman
      This patch marks a number of allocations that are either short-lived, such as
      network buffers, or reclaimable, such as inode allocations.  When something
      like updatedb is called, long-lived and unmovable kernel allocations tend to
      be spread throughout the address space which increases fragmentation.
      
      This patch groups these allocations together as much as possible by adding a
      new MIGRATE_TYPE.  The MIGRATE_RECLAIMABLE type is for allocations that can be
      reclaimed on demand, but not moved; i.e. they can be migrated by deleting
      them and re-reading the information from elsewhere.
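
      A hedged example of how such an allocation might be marked; the cache name
      and struct are illustrative.  A slab cache created with
      SLAB_RECLAIM_ACCOUNT is treated as reclaimable, so its pages can be grouped
      with the new MIGRATE_RECLAIMABLE allocations:

        struct kmem_cache *foo_cache;

        foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo), 0,
                                      SLAB_RECLAIM_ACCOUNT, NULL);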
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Memoryless nodes: Use N_HIGH_MEMORY for cpusets · 0e1e7c7a
      Committed by Christoph Lameter
      cpusets try to ensure that any node added to a cpuset's mems_allowed is
      on-line and contains memory.  The assumption was that online nodes contained
      memory.  Thus, it is possible to add memoryless nodes to a cpuset and then add
      tasks to this cpuset.  This results in a continuous series of oom-kills and
      an apparent system hang.
      
      Change cpusets to use node_states[N_HIGH_MEMORY] [a.k.a.  node_memory_map] in
      place of node_online_map when vetting memories.  Return error if admin
      attempts to write a non-empty mems_allowed node mask containing only
      memoryless-nodes.
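
      A sketch of the vetting change; trialcs stands for the candidate cpuset
      being validated and is illustrative here:

        /* nodes proposed for mems_allowed must actually have memory, i.e. be
         * in node_states[N_HIGH_MEMORY], not merely be online */
        if (!nodes_subset(trialcs.mems_allowed, node_states[N_HIGH_MEMORY]))
                return -EINVAL;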
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 18 Jul 2007, 1 commit
    • usermodehelper: Tidy up waiting · 86313c48
      Committed by Jeremy Fitzhardinge
      Rather than using a tri-state integer for the wait flag in
      call_usermodehelper_exec, define a proper enum, and use that.  I've
      preserved the integer values so that any callers I've missed should
      still work OK.
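
      Roughly the shape of the new enum (values preserved from the old tri-state
      integer, as noted above):

        enum umh_wait {
                UMH_NO_WAIT   = -1,     /* don't wait at all */
                UMH_WAIT_EXEC = 0,      /* wait for the exec, but not the process */
                UMH_WAIT_PROC = 1,      /* wait for the process to complete */
        };

        int call_usermodehelper_exec(struct subprocess_info *info,
                                     enum umh_wait wait);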
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: Joel Becker <joel.becker@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: David Howells <dhowells@redhat.com>
  7. 17 Jul 2007, 1 commit
  8. 16 Jul 2007, 1 commit
  9. 17 Jun 2007, 1 commit
    • cpuset: zero malloc - fix for old cpusets · 3e903e7b
      Committed by Paul Jackson
      The cpuset code to present a list of tasks using a cpuset to user space could
      write to an array that it had kmalloc'd, after a kmalloc request of zero size.
      
      The problem was that the code didn't check for writes past the allocated end
      of the array until -after- the first write.
      
      This is a race condition that is likely rare -- it would only show up if a
      cpuset went from being empty to having a task in it, during the brief time
      between the allocation and the first write.
      
      Prior to roughly 2.6.22 kernels, this was also a benign problem, because a
      zero kmalloc returned a few usable bytes anyway, and no harm was done with the
      bogus write.
      
      With the 2.6.22 kernel changes that issue a warning if code tries to write
      to the location returned from a zero-size allocation, this problem is no
      longer benign.  This cpuset code would occasionally trigger that warning.
      
      The fix is trivial -- check before storing into the array, not after, whether
      the array is big enough to hold the store.
      
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: "Serge E. Hallyn" <serue@us.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Kirill Korotaev <dev@openvz.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 10 May 2007, 1 commit
  11. 09 May 2007, 3 commits
  12. 08 May 2007, 1 commit
  13. 13 Feb 2007, 2 commits
  14. 31 Dec 2006, 1 commit
  15. 14 Dec 2006, 1 commit
    • [PATCH] cpuset: rework cpuset_zone_allowed api · 02a0e53d
      Committed by Paul Jackson
      Elaborate the API for calling cpuset_zone_allowed(), so that users have to
      explicitly choose between the two variants:
      
        cpuset_zone_allowed_hardwall()
        cpuset_zone_allowed_softwall()
      
      Until now, whether or not you got the hardwall flavor depended solely on
      whether or not you or'd in the __GFP_HARDWALL gfp flag to the gfp_mask
      argument.
      
      If you didn't specify __GFP_HARDWALL, you implicitly got the softwall
      version.
      
      Unfortunately, this meant that users would end up with the softwall version
      without thinking about it.  Since only the softwall version might sleep,
      this led to bugs with possible sleeping in interrupt context on more than
      one occasion.
      
      The hardwall version requires that the current task's mems_allowed allows
      the node of the specified zone (or that you're in interrupt, or that
      __GFP_THISNODE is set, or that you're on a one-cpuset system).
      
      The softwall version, depending on the gfp_mask, might allow a node if it
      was allowed in the nearest enclosing cpuset marked mem_exclusive (which
      requires taking the cpuset lock 'callback_mutex' to evaluate).
      
      This patch removes the cpuset_zone_allowed() call, and forces the caller to
      explicitly choose between the hardwall and the softwall case.
      
      If the caller wants the gfp_mask to determine this choice, they should (1)
      be sure they can sleep or that __GFP_HARDWALL is set, and (2) invoke the
      cpuset_zone_allowed_softwall() routine.
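
      Two illustrative call sites, following the rule above; 'zone' and
      'gfp_mask' are whatever the caller already has in hand:

        /* atomic context, or when only current's own mems_allowed matters: */
        if (!cpuset_zone_allowed_hardwall(zone, gfp_mask))
                continue;       /* skip this zone */

        /* sleepable path, letting gfp_mask pick hard vs. soft wall: */
        if (!cpuset_zone_allowed_softwall(zone, gfp_mask))
                continue;       /* skip this zone */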
      
      This adds another 100 or 200 bytes to the kernel text space, due to the few
      lines of nearly duplicate code at the top of both cpuset_zone_allowed_*
      routines.  It should save a few instructions executed for the calls that
      turned into calls of cpuset_zone_allowed_hardwall, thanks to not having to
      set (before the call) then check (within the call) the __GFP_HARDWALL flag.
      
      For the most critical call, from get_page_from_freelist(), the same
      instructions are executed as before -- the old cpuset_zone_allowed()
      routine it used to call is the same code as the
      cpuset_zone_allowed_softwall() routine that it calls now.
      
      Not a perfect win, but it seems worth it, to reduce the chance of hitting a
      sleeping-with-irqs-off complaint again.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  16. 09 Dec 2006, 1 commit
  17. 08 Dec 2006, 4 commits
  18. 11 Oct 2006, 1 commit
  19. 01 Oct 2006, 1 commit
  20. 30 Sep 2006, 4 commits
    • [PATCH] cpuset: fix obscure attach_task vs exiting race · 181b6480
      Committed by Paul Jackson
      Fix obscure race condition in kernel/cpuset.c attach_task() code.
      
      There is basically zero chance of anyone accidentally being harmed by this
      race.
      
      It requires a special 'micro-stress' load and special timing-loop hacks
      in the kernel to hit in less than an hour, and even then you'd have to hit
      it hundreds or thousands of times, followed by some unusual and senseless
      cpuset configuration requests, including removing the top cpuset, to cause
      any visible harmful effects.
      
      One could, with perhaps a few days or weeks of such effort, get the
      reference count on the top cpuset below zero, and manage to crash the
      kernel by asking to remove the top cpuset.
      
      I found it by code inspection.
      
      The race was introduced when 'the_top_cpuset_hack' was introduced, and one
      piece of code was not updated.  An old check for a possibly null task
      cpuset pointer needed to be changed to a check for a task marked
      PF_EXITING.  The pointer can't be null anymore, thanks to
      the_top_cpuset_hack (documented in kernel/cpuset.c).  But the task could
      have gone into PF_EXITING state after it was found in the task_list scan.
      
      If a task is PF_EXITING in this code, it is possible that its task->cpuset
      pointer is pointing to the top cpuset due to the_top_cpuset_hack, rather
      than because the top_cpuset was that task's last valid cpuset.  In that
      case, the wrong cpuset reference counter would be decremented.
      
      The fix is trivial.  Instead of failing the system call if the task's cpuset
      pointer is null here, fail it if the task is in PF_EXITING state.
      
      The code for 'the_top_cpuset_hack' that changes an exiting task's cpuset to
      the top_cpuset is done without locking, so could happen at any time.  But it
      is done during the exit handling, after the PF_EXITING flag is set.  So if
      we verify that a task is still not PF_EXITING after we copy out its cpuset
      pointer (into 'oldcs', below), we know that 'oldcs' is not one of these
      hack references to the top_cpuset.
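
      A rough sketch of the fixed ordering in attach_task(); simplified from the
      description above, not the literal diff:

        task_lock(tsk);
        oldcs = tsk->cpuset;
        /* the pointer can no longer be NULL, but if the task is already
         * exiting, oldcs may be a the_top_cpuset_hack reference to top_cpuset
         * rather than the task's real cpuset -- fail instead of using it */
        if (tsk->flags & PF_EXITING) {
                task_unlock(tsk);
                return -ESRCH;
        }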
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] cpuset: hotunplug cpus and mems in all cpusets · b1aac8bb
      Committed by Paul Jackson
      The cpuset code handling hot unplug of CPUs or Memory Nodes was incorrect -
      it could remove a CPU or Node from the top cpuset, while leaving it still
      in some child cpusets.
      
      One basic rule of cpusets is that each cpuset's cpus and mems are subsets of
      its parent's.  The cpuset hot unplug code violated this rule.
      
      So the cpuset hotunplug handler must walk down the tree, removing any
      removed CPU or Node from all cpusets.
      
      However, it is not allowed to make a cpuset's cpus or mems become empty.
      They can only transition from empty to non-empty, not back.
      
      So if the last CPU or Node would be removed from a cpuset by the above
      walk, we scan back up the cpuset hierarchy, finding the nearest ancestor
      that still has something online, and copy its CPU or Memory placement.
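
      A rough sketch of that walk; the recursion and helper below are
      illustrative, using the pre-cgroup cpuset child/sibling lists:

        static void prune_offlined_cpus_mems(struct cpuset *cur)
        {
                struct cpuset *c;

                list_for_each_entry(c, &cur->children, sibling) {
                        cpus_and(c->cpus_allowed, c->cpus_allowed, cpu_online_map);
                        nodes_and(c->mems_allowed, c->mems_allowed, node_online_map);
                        if (cpus_empty(c->cpus_allowed) ||
                            nodes_empty(c->mems_allowed))
                                /* never leave a cpuset empty: copy placement
                                 * from the nearest still-populated ancestor */
                                inherit_placement_from_ancestor(c);
                        prune_offlined_cpus_mems(c);
                }
        }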
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Cc: Nathan Lynch <ntl@pobox.com>
      Cc: Anton Blanchard <anton@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] cpuset: top_cpuset tracks hotplug changes to node_online_map · 38837fc7
      Committed by Paul Jackson
      Change the list of memory nodes allowed to tasks in the top (root) cpuset
      to dynamically track which memory nodes are online, using a call to a cpuset
      hook from the memory hotplug code.  Make this top 'mems' file read-only.
      
      On systems that have cpusets configured in their kernel, but that aren't
      actively using cpusets (for some distros, this covers the majority of
      systems) all tasks end up in the top cpuset.
      
      If that system does support memory hotplug, then these tasks cannot make
      use of memory nodes that are added after system boot, because the memory
      nodes are not allowed in the top cpuset.  This is a surprising regression
      over earlier kernels that didn't have cpusets enabled.
      
      One key motivation for this change is to remain consistent with the
      behaviour for the top_cpuset's 'cpus', which is also read-only, and which
      automatically tracks the cpu_online_map.
      
      This change also has the minor benefit that it fixes a long standing,
      little noticed, minor bug in cpusets.  The cpuset performance tweak to
      short circuit the cpuset_zone_allowed() check on systems with just a single
      cpuset (see 'number_of_cpusets', in linux/cpuset.h) meant that simply
      changing the 'mems' of the top_cpuset had no effect, even though the change
      (the write system call) appeared to succeed.  With the following change,
      that write to the 'mems' file fails -EACCES, and the 'mems' file stubbornly
      refuses to be changed via user space writes.  Thus no one should be misled
      into thinking they've changed the top_cpuset's 'mems' when in fact they
      haven't.
      
      In order to keep the behaviour of cpusets consistent between systems
      actively making use of them and systems not using them, this patch changes
      the behaviour of the 'mems' file in the top (root) cpuset, making it read
      only, and making it automatically track the value of node_online_map.  Thus
      tasks in the top cpuset will have automatic use of hot plugged memory nodes
      allowed by their cpuset.
      
      [akpm@osdl.org: build fix]
      [bunk@stusta.de: build fix]
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] pidspace: is_init() · f400e198
      Committed by Sukadev Bhattiprolu
      This is an updated version of Eric Biederman's is_init() patch.
      (http://lkml.org/lkml/2006/2/6/280).  It applies cleanly to 2.6.18-rc3 and
      replaces a few more instances of ->pid == 1 with is_init().
      
      Further, is_init() checks pid and thus removes dependency on Eric's other
      patches for now.
      
      Eric's original description:
      
      	There are a lot of places in the kernel where we test for init
      	because we give it special properties.  Most  significantly init
      	must not die.  This results in code all over the kernel testing
      	->pid == 1.
      
      	Introduce is_init to capture this case.
      
      	With multiple pid spaces for all of the cases affected we are
      	looking for only the first process on the system, not some other
      	process that has pid == 1.
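
      The helper this introduces is, roughly, just a named wrapper around the
      pid test (the call-site example below is illustrative):

        static inline int is_init(struct task_struct *tsk)
        {
                return tsk->pid == 1;
        }

        /* call sites change from */
        if (current->pid == 1)
                do_special_init_handling();
        /* to */
        if (is_init(current))
                do_special_init_handling();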
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Serge Hallyn <serue@us.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Cc: <lxc-devel@lists.sourceforge.net>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  21. 27 Sep 2006, 1 commit
  22. 26 Sep 2006, 1 commit