1. 28 Apr 2008, 1 commit
  2. 20 Apr 2008, 3 commits
  3. 06 Mar 2008, 1 commit
  4. 09 Feb 2008, 1 commit
    •
      proc: seqfile convert proc_pid_status to properly handle pid namespaces · df5f8314
      Committed by Eric W. Biederman
      Currently we may look up the pid in the wrong pid namespace.  So convert
      proc_pid_status to seq_file, which ensures that the proper pid namespace
      is passed in.
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: another build fix]
      [akpm@linux-foundation.org: s390 build fix]
      [akpm@linux-foundation.org: fix task_name() output]
      [akpm@linux-foundation.org: fix nommu build]
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Andrew Morgan <morgan@kernel.org>
      Cc: Serge Hallyn <serue@us.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      df5f8314
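      A hedged sketch of the handler shape this conversion yields (condensed,
      with a hypothetical abbreviated name; the real proc_pid_status() prints
      many more fields):

          static int pid_status_show(struct seq_file *m, struct pid_namespace *ns,
                                     struct pid *pid, struct task_struct *task)
          {
                  /* pid_nr_ns() translates the pid into the namespace of the
                   * /proc mount being read, avoiding the wrong-namespace lookup */
                  seq_printf(m, "Pid:\t%d\n", pid_nr_ns(pid, ns));
                  return 0;
          }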
  5. 08 Feb 2008, 5 commits
  6. 26 Jan 2008, 1 commit
    •
      cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus() · 86ef5c9a
      Committed by Gautham R Shenoy
      Replace all uses of lock_cpu_hotplug/unlock_cpu_hotplug in the kernel
      with get_online_cpus and put_online_cpus, as the new names highlight
      the refcount semantics of these operations.
      
      The new API guarantees protection against the cpu-hotplug operation, but
      it doesn't guarantee serialized access to any of the local data
      structures. Hence the changes need to be reviewed.
      
      In case of pseries_add_processor/pseries_remove_processor, use
      cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
      cpu_present_map there.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      86ef5c9a
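      A minimal usage sketch of the new API (the function is illustrative; any
      code walking the online cpus looks similar). Note what is and is not
      promised: the refcount only pins the set of online cpus, it does not
      serialize access to your own data.

          static void walk_online_cpus(void)
          {
                  int cpu;

                  get_online_cpus();      /* hold off cpu-hotplug operations */
                  for_each_online_cpu(cpu)
                          printk(KERN_INFO "cpu %d is online\n", cpu);
                  put_online_cpus();      /* drop the refcount */
          }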
  7. 20 10月, 2007 6 次提交
    •
      hotplug cpu: migrate a task within its cpuset · 470fd646
      Committed by Cliff Wickman
      When a cpu is disabled, move_task_off_dead_cpu() is called for tasks that have
      been running on that cpu.
      
      Currently, such a task is migrated:
       1) to any cpu on the same node as the disabled cpu, which is both online
          and among that task's cpus_allowed
       2) to any cpu which is both online and among that task's cpus_allowed
      
      It is typical of a multithreaded application running on a large NUMA system to
      have its tasks confined to a cpuset so as to cluster them near the memory that
      they share.  Furthermore, it is typical to explicitly place such a task on a
      specific cpu in that cpuset.  And in that case the task's cpus_allowed
      includes only a single cpu.
      
      This patch inserts a preference to migrate such a task to some cpu within
      its cpuset (and sets its cpus_allowed to its entire cpuset).
      
      With this patch, the task is migrated:
       1) to any cpu on the same node as the disabled cpu, which is both online
          and among that task's cpus_allowed
       2) to any online cpu within the task's cpuset
       3) to any cpu which is both online and among that task's cpus_allowed
      
      In order to do this, move_task_off_dead_cpu() must make a call to
      cpuset_cpus_allowed_locked(), a new subset of cpuset_cpus_allowed(), that will
      not block.  (name change - per Oleg's suggestion)
      
      Calls are made to cpuset_lock() and cpuset_unlock() in migration_call() to set
      the cpuset mutex during the whole migrate_live_tasks() and
      migrate_dead_tasks() procedure.
      
      [akpm@linux-foundation.org: build fix]
      [pj@sgi.com: Fix indentation and spacing]
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      470fd646
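      A hypothetical condensed helper illustrating the new preference order
      (2.6.23-era cpumask API; the real move_task_off_dead_cpu() also widens
      cpus_allowed to the whole cpuset and handles retry and locking):

          static int pick_dest_cpu(struct task_struct *p, int dead_cpu)
          {
                  cpumask_t mask;
                  int dest;

                  /* 1) an online cpu on the dead cpu's node, in cpus_allowed */
                  mask = node_to_cpumask(cpu_to_node(dead_cpu));
                  cpus_and(mask, mask, p->cpus_allowed);
                  dest = any_online_cpu(mask);
                  if (dest < NR_CPUS)
                          return dest;

                  /* 2) any online cpu in the task's cpuset; the _locked
                   * variant is used because this path must not block */
                  mask = cpuset_cpus_allowed_locked(p);
                  dest = any_online_cpu(mask);
                  if (dest < NR_CPUS)
                          return dest;

                  /* 3) any online cpu in cpus_allowed at all */
                  return any_online_cpu(p->cpus_allowed);
          }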
    •
      Fix cpusets update_cpumask · 8707d8b8
      Committed by Paul Menage
      Cause writes to the cpuset "cpus" file to update cpus_allowed for member tasks:
      
      - collect batches of tasks under tasklist_lock and then call
        set_cpus_allowed() on them outside the lock (since this can sleep).
      
      - add a simple generic priority heap type to allow efficient collection
        of batches of tasks to be processed without duplicating or missing any
        tasks in subsequent batches.
      
      - make "cpus" file update a no-op if the mask hasn't changed
      
      - fix race between update_cpumask() and sched_setaffinity() by making
        sched_setaffinity() post-check that it's not running on any cpus outside
        cpuset_cpus_allowed().
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Paul Menage <menage@google.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Serge Hallyn <serue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8707d8b8
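      A condensed sketch of the race fix in sched_setaffinity() described
      above (hypothetical wrapper name; 2.6.23-era cpumask API):

          static long set_affinity_checked(struct task_struct *p, cpumask_t new_mask)
          {
                  cpumask_t cpus_allowed;
                  long retval;

          again:
                  cpus_allowed = cpuset_cpus_allowed(p);
                  cpus_and(new_mask, new_mask, cpus_allowed);
                  retval = set_cpus_allowed(p, new_mask);
                  if (!retval) {
                          /* post-check: update_cpumask() may have shrunk the
                           * cpuset between our set and now; retry with the
                           * narrowed mask */
                          cpus_allowed = cpuset_cpus_allowed(p);
                          if (!cpus_subset(new_mask, cpus_allowed)) {
                                  new_mask = cpus_allowed;
                                  goto again;
                          }
                  }
                  return retval;
          }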
    •
      cpusets: decrustify cpuset mask update code · 020958b6
      Committed by Paul Jackson
      Decrustify the kernel/cpuset.c 'cpus' and 'mems' updating code.
      
      Other than subtle improvements in the consistency with which
      whitespace at the beginning and end of the passed-in masks is
      handled, this doesn't make any visible difference in behaviour.
      But it's one or two hundred kernel text bytes smaller, and easier
      to understand.
      
      [akpm@linux-foundation.org: coding-style fix]
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Reviewed-by: Paul Menage <menage@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      020958b6
    •
      cpuset sched_load_balance flag · 029190c5
      Committed by Paul Jackson
      Add a new per-cpuset flag called 'sched_load_balance'.
      
      When enabled in a cpuset (the default), it tells the kernel scheduler
      that it should provide the normal load balancing on the CPUs in that
      cpuset, sometimes moving a task from one CPU to a second CPU if the
      second CPU is less loaded and the task is allowed to run there.
      
      When disabled (write "0" to the file), it tells the kernel scheduler
      that load balancing is not required for the CPUs in that cpuset.
      
      Even if this flag is disabled for some cpuset, the kernel may still have
      to load balance some or all of the CPUs in that cpuset, if some
      overlapping cpuset has its sched_load_balance flag enabled.
      
      If there are some CPUs that are not in any cpuset whose sched_load_balance
      flag is enabled, the kernel scheduler will not load balance tasks to those
      CPUs.
      
      Moreover the kernel will partition the 'sched domains' (non-overlapping
      sets of CPUs over which load balancing is attempted) into the finest
      granularity partition that it can find, while still keeping any two CPUs
      that are in the same sched_load_balance-enabled cpuset in the same element
      of the partition.
      
      This serves two purposes:
       1) It provides a mechanism for real time isolation of some CPUs, and
       2) it can be used to improve performance on systems with many CPUs
          by supporting configurations in which load balancing is not done
          across all CPUs at once, but rather only done in several smaller
          disjoint sets of CPUs.
      
      This mechanism replaces the earlier overloading of the per-cpuset
      flag 'cpu_exclusive'; that overloading was removed in an earlier
      patch: cpuset-remove-sched-domain-hooks-from-cpusets
      
      See further the Documentation and comments in the code itself.
      
      [akpm@linux-foundation.org: don't be weird]
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      029190c5
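      A hedged userspace sketch of flipping the flag; the path assumes the
      cpuset filesystem is mounted at /dev/cpuset, and the helper name is
      illustrative:

          #include <fcntl.h>
          #include <unistd.h>

          /* write "0" to a cpuset's sched_load_balance file, e.g.
           * disable_balance("/dev/cpuset/rt/sched_load_balance") */
          static int disable_balance(const char *flag_path)
          {
                  int fd = open(flag_path, O_WRONLY);
                  ssize_t n;

                  if (fd < 0)
                          return -1;
                  n = write(fd, "0", 1);
                  close(fd);
                  return n == 1 ? 0 : -1;
          }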
    •
      Task Control Groups: make cpusets a client of cgroups · 8793d854
      Committed by Paul Menage
      Remove the filesystem support logic from the cpusets system and make
      cpusets a cgroup subsystem.
      
      The "cpuset" filesystem becomes a dummy filesystem; attempts to mount it get
      passed through to the cgroup filesystem with the appropriate options to
      emulate the old cpuset filesystem behaviour.
      Signed-off-by: Paul Menage <menage@google.com>
      Cc: Serge E. Hallyn <serue@us.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Kirill Korotaev <dev@openvz.org>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8793d854
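      The rough shape of the registration this implies, assuming the
      2.6.24-era cgroup API field names; the handlers are the existing cpuset
      operations, defined elsewhere in kernel/cpuset.c:

          struct cgroup_subsys cpuset_subsys = {
                  .name = "cpuset",
                  .create = cpuset_create,          /* cpuset allocation */
                  .destroy = cpuset_destroy,
                  .can_attach = cpuset_can_attach,  /* task-attach veto */
                  .attach = cpuset_attach,
                  .populate = cpuset_populate,      /* per-cpuset control files */
                  .subsys_id = cpuset_subsys_id,
                  .early_init = 1,
          };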
    •
      cpuset: zero malloc - revert the old cpuset fix · 55a230aa
      Committed by Paul Jackson
      The cpuset code to present a list of tasks using a cpuset to user space could
      write to an array that it had kmalloc'd, after a kmalloc request of zero size.
      
      The problem was that the code didn't check for writes past the allocated end
      of the array until -after- the first write.
      
      This is a race condition that is likely rare -- it would only show up if a
      cpuset went from being empty to having a task in it, during the brief time
      between the allocation and the first write.
      
      Prior to roughly 2.6.22 kernels, this was also a benign problem, because a
      zero kmalloc returned a few usable bytes anyway, and no harm was done with the
      bogus write.
      
      With the 2.6.22 kernel changes that issue a warning if code tries to write
      to the location returned from a zero-size allocation, this problem is no
      longer benign.  This cpuset code would occasionally trigger that warning.
      
      The fix is trivial -- check before storing into the array, not after, whether
      the array is big enough to hold the store.
      
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: "Serge E. Hallyn" <serue@us.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Kirill Korotaev <dev@openvz.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      55a230aa
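      A sketch of the pattern being fixed (hypothetical buffer-filling
      helper): with a zero-size allocation, max is 0, so the bound must be
      checked before the first store, never after it.

          static int fill_pids(pid_t *array, int max,
                               struct task_struct **tasks, int n)
          {
                  int i, count = 0;

                  for (i = 0; i < n; i++) {
                          if (count >= max)   /* check BEFORE the store */
                                  break;
                          array[count++] = tasks[i]->pid;
                  }
                  return count;
          }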
  8. 19 Oct 2007, 1 commit
  9. 17 Oct 2007, 4 commits
    •
      oom: compare cpuset mems_allowed instead of exclusive ancestors · bbe373f2
      Committed by David Rientjes
      Instead of testing for overlap in the memory nodes of the nearest
      exclusive ancestor of both current and the candidate task, it is better to
      simply test for intersection between the task's mems_allowed in their task
      descriptors.  This does not require taking callback_mutex since it is only
      used as a hint in the badness scoring.
      
      Tasks that do not have an intersection in their mems_allowed with the current
      task are not explicitly restricted from being OOM killed because it is quite
      possible that the candidate task has allocated memory there before and has
      since changed its mems_allowed.
      
      Cc: Andrea Arcangeli <andrea@suse.de>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bbe373f2
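      The hint reduces to a direct nodemask intersection test; a condensed
      sketch (hypothetical helper name) might look like:

          /* no callback_mutex needed: mems_allowed is read as a hint only */
          static int mems_intersect(const struct task_struct *p)
          {
                  return nodes_intersects(p->mems_allowed, current->mems_allowed);
          }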
    •
      cpuset: remove sched domain hooks from cpusets · 607717a6
      Committed by Paul Jackson
      Remove the cpuset hooks that defined sched domains depending on the setting
      of the 'cpu_exclusive' flag.
      
      The cpu_exclusive flag can only be set on a child if it is set on the
      parent.
      
      This made that flag painfully unsuitable for use as a flag defining a
      partitioning of a system.
      
      It was entirely unobvious to a cpuset user what partitioning of sched
      domains they would be causing when they set that one cpu_exclusive bit on
      one cpuset, because it depended on what CPUs were in the remainder of that
      cpuset's siblings and child cpusets, after subtracting out other
      cpu_exclusive cpusets.
      
      Furthermore, there was no way on production systems to query the
      result.
      
      Using the cpu_exclusive flag for this was simply wrong from the get go.
      
      Fortunately, it was sufficiently borked that, so far as I know, almost no
      successful use has been made of this.  One real-time group did use it to
      effectively isolate CPUs from any load balancing efforts.  They are willing
      to adapt to alternative mechanisms for this, such as some way to manipulate
      the list of isolated CPUs on a running system.  They can do without the
      present cpu_exclusive based mechanism while we develop an alternative.
      
      There is a real risk, to the best of my understanding, of users
      accidentally setting up partitioned scheduler domains, inhibiting desired
      load balancing across all their CPUs, due to the nonobvious (from the
      cpuset perspective) side effects of the cpu_exclusive flag.
      
      Furthermore, since there was no way on a running system to see what one
      was doing with sched domains, this change will be invisible to any code
      using it.  Unless they have real insight into the scheduler's load
      balancing choices, users will be unable to detect that this change has
      been made in the kernel's behaviour.
      
      Initial discussion on lkml of this patch has generated much comment.  My
      (probably controversial) take on that discussion is that it has reached a
      rough consensus that the current cpuset cpu_exclusive mechanism for
      defining sched domains is borked.  There is no consensus on the
      replacement.  But since we can remove this mechanism, and since its
      continued presence risks causing unwanted partitioning of the scheduler's
      load balancing, we should remove it while we can, as we proceed to work on
      the replacement scheduler domain mechanisms.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Cc: Dinakar Guniguntala <dino@in.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      607717a6
    •
      Group short-lived and reclaimable kernel allocations · e12ba74d
      Committed by Mel Gorman
      This patch marks a number of allocations that are either short-lived,
      such as network buffers, or reclaimable, such as inode allocations.
      When something like updatedb is called, long-lived and unmovable kernel
      allocations tend to be spread throughout the address space, which
      increases fragmentation.
      
      This patch groups these allocations together as much as possible by adding
      a new migrate type.  The MIGRATE_RECLAIMABLE type is for allocations that
      can be reclaimed on demand, but not moved; i.e., they can be "migrated" by
      deleting them and re-reading the information from elsewhere.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e12ba74d
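      A hedged sketch of how an allocation opts in, assuming the
      __GFP_RECLAIMABLE flag this series introduces (helper name is
      illustrative; slab caches created with SLAB_RECLAIM_ACCOUNT are tagged
      automatically):

          /* reclaimable-but-unmovable: grouped into MIGRATE_RECLAIMABLE
           * pageblocks, away from unmovable kernel allocations */
          static void *alloc_reclaimable(size_t size)
          {
                  return kmalloc(size, GFP_KERNEL | __GFP_RECLAIMABLE);
          }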
    •
      Memoryless nodes: Use N_HIGH_MEMORY for cpusets · 0e1e7c7a
      Committed by Christoph Lameter
      cpusets try to ensure that any node added to a cpuset's mems_allowed is
      on-line and contains memory.  The assumption was that online nodes contained
      memory.  Thus, it is possible to add memoryless nodes to a cpuset and then add
      tasks to this cpuset.  This results in a continuous series of oom-kills
      and an apparent system hang.
      
      Change cpusets to use node_states[N_HIGH_MEMORY] [a.k.a.  node_memory_map] in
      place of node_online_map when vetting memories.  Return error if admin
      attempts to write a non-empty mems_allowed node mask containing only
      memoryless-nodes.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@skynet.ie>
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0e1e7c7a
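      A condensed sketch of the vetting rule described above (hypothetical
      helper name): every non-empty mask written to mems_allowed must include
      at least one node that actually has memory, not merely one that is
      online.

          static int mems_allowed_ok(const nodemask_t *new_mems)
          {
                  /* reject a non-empty mask made up only of memoryless nodes */
                  return nodes_empty(*new_mems) ||
                         nodes_intersects(*new_mems, node_states[N_HIGH_MEMORY]);
          }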
  10. 18 Jul 2007, 1 commit
    •
      usermodehelper: Tidy up waiting · 86313c48
      Committed by Jeremy Fitzhardinge
      Rather than using a tri-state integer for the wait flag in
      call_usermodehelper_exec, define a proper enum, and use that.  I've
      preserved the integer values so that any callers I've missed should
      still work OK.
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: Joel Becker <joel.becker@oracle.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: David Howells <dhowells@redhat.com>
      86313c48
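      The resulting enum has this shape (integer values preserved so that any
      missed callers keep working, per the note above):

          enum umh_wait {
                  UMH_NO_WAIT = -1,    /* don't wait at all */
                  UMH_WAIT_EXEC = 0,   /* wait for the exec, but not the process */
                  UMH_WAIT_PROC = 1,   /* wait for the process to complete */
          };

      so a caller reads as, e.g., call_usermodehelper(path, argv, envp,
      UMH_WAIT_PROC) instead of passing a bare 1.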
  11. 17 Jul 2007, 1 commit
  12. 16 Jul 2007, 1 commit
  13. 17 Jun 2007, 1 commit
    •
      cpuset: zero malloc - fix for old cpusets · 3e903e7b
      Committed by Paul Jackson
      The cpuset code to present a list of tasks using a cpuset to user space could
      write to an array that it had kmalloc'd, after a kmalloc request of zero size.
      
      The problem was that the code didn't check for writes past the allocated end
      of the array until -after- the first write.
      
      This is a race condition that is likely rare -- it would only show up if a
      cpuset went from being empty to having a task in it, during the brief time
      between the allocation and the first write.
      
      Prior to roughly 2.6.22 kernels, this was also a benign problem, because a
      zero kmalloc returned a few usable bytes anyway, and no harm was done with the
      bogus write.
      
      With the 2.6.22 kernel changes that issue a warning if code tries to write
      to the location returned from a zero-size allocation, this problem is no
      longer benign.  This cpuset code would occasionally trigger that warning.
      
      The fix is trivial -- check before storing into the array, not after, whether
      the array is big enough to hold the store.
      
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: "Serge E. Hallyn" <serue@us.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Kirill Korotaev <dev@openvz.org>
      Cc: Paul Menage <menage@google.com>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3e903e7b
  14. 10 May 2007, 1 commit
  15. 09 May 2007, 3 commits
  16. 08 May 2007, 1 commit
  17. 13 Feb 2007, 2 commits
  18. 31 Dec 2006, 1 commit
  19. 14 Dec 2006, 1 commit
    •
      [PATCH] cpuset: rework cpuset_zone_allowed api · 02a0e53d
      Committed by Paul Jackson
      Elaborate the API for calling cpuset_zone_allowed(), so that users have to
      explicitly choose between the two variants:
      
        cpuset_zone_allowed_hardwall()
        cpuset_zone_allowed_softwall()
      
      Until now, whether or not you got the hardwall flavor depended solely on
      whether or not you or'd in the __GFP_HARDWALL gfp flag to the gfp_mask
      argument.
      
      If you didn't specify __GFP_HARDWALL, you implicitly got the softwall
      version.
      
      Unfortunately, this meant that users would end up with the softwall version
      without thinking about it.  Since only the softwall version might sleep,
      this led to bugs with possible sleeping in interrupt context on more than
      one occasion.
      
      The hardwall version requires that the current task's mems_allowed allows
      the node of the specified zone (or that you're in interrupt, or that
      __GFP_THISNODE is set, or that you're on a single-cpuset system).
      
      The softwall version, depending on the gfp_mask, might allow a node if it
      was allowed in the nearest enclosing cpuset marked mem_exclusive (which
      requires taking the cpuset lock 'callback_mutex' to evaluate).
      
      This patch removes the cpuset_zone_allowed() call, and forces the caller to
      explicitly choose between the hardwall and the softwall case.
      
      If the caller wants the gfp_mask to determine this choice, they should (1)
      be sure they can sleep or that __GFP_HARDWALL is set, and (2) invoke the
      cpuset_zone_allowed_softwall() routine.
      
      This adds another 100 or 200 bytes to the kernel text space, due to the few
      lines of nearly duplicate code at the top of both cpuset_zone_allowed_*
      routines.  It should save a few instructions executed for the calls that
      turned into calls of cpuset_zone_allowed_hardwall, thanks to not having to
      set (before the call) then check (within the call) the __GFP_HARDWALL flag.
      
      For the most critical call, from get_page_from_freelist(), the same
      instructions are executed as before -- the old cpuset_zone_allowed()
      routine it used to call is the same code as the
      cpuset_zone_allowed_softwall() routine that it calls now.
      
      Not a perfect win, but it seems worth it, to reduce the chance of hitting
      a sleeping-with-irqs-off complaint again.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      02a0e53d
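      A hedged sketch of the choice the split forces on callers (the
      zone-scanning helper is hypothetical; the two cpuset_zone_allowed_*
      functions are the ones named above):

          static struct zone *first_allowed_zone(struct zone **zones,
                                                 gfp_t gfp_mask, int can_sleep)
          {
                  int i;

                  for (i = 0; zones[i] != NULL; i++) {
                          if (can_sleep) {
                                  /* softwall: may take callback_mutex, may sleep */
                                  if (cpuset_zone_allowed_softwall(zones[i], gfp_mask))
                                          return zones[i];
                          } else {
                                  /* hardwall: never sleeps, interrupt-safe */
                                  if (cpuset_zone_allowed_hardwall(zones[i], gfp_mask))
                                          return zones[i];
                          }
                  }
                  return NULL;
          }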
  20. 09 Dec 2006, 1 commit
  21. 08 Dec 2006, 3 commits