  1. 22 Oct 2007, 1 commit
  2. 20 Oct 2007, 11 commits
    • kernel/sched.c: remove bogus comment from account_user_time · 6888c1ec
      Committed by Michael Neuling
      hardirq_offset is no longer needed.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
    • Fix misspellings of "system", "controller", "interrupt" and "necessary". · 3a4fa0a2
      Committed by Robert P. J. Day
      Fix the various misspellings of "system", "controller", "interrupt" and
      "[un]necessary".
      Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
    • Hook up group scheduler with control groups · 68318b8e
      Committed by Srivatsa Vaddagiri
      Enable "cgroup" (formerly containers) based fair group scheduling.  This
      will let administrators create arbitrary groups of tasks (using the "cgroup"
      pseudo filesystem) and control their cpu bandwidth usage.
      
      [akpm@linux-foundation.org: fix cpp condition]
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Dhaval Giani <dhaval@linux.vnet.ibm.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hotplug cpu: migrate a task within its cpuset · 470fd646
      Committed by Cliff Wickman
      When a cpu is disabled, move_task_off_dead_cpu() is called for tasks that have
      been running on that cpu.
      
      Currently, such a task is migrated:
       1) to any cpu on the same node as the disabled cpu, which is both online
          and among that task's cpus_allowed
       2) to any cpu which is both online and among that task's cpus_allowed
      
      It is typical of a multithreaded application running on a large NUMA system to
      have its tasks confined to a cpuset so as to cluster them near the memory that
      they share.  Furthermore, it is typical to explicitly place such a task on a
      specific cpu in that cpuset.  And in that case the task's cpus_allowed
      includes only a single cpu.
      
      This patch adds a preference: migrate such a task to some cpu within its
      cpuset (and set its cpus_allowed to its entire cpuset).
      
      With this patch, the task is migrated (a sketch of this fallback order
      follows this entry):
       1) to any cpu on the same node as the disabled cpu, which is both online
          and among that task's cpus_allowed
       2) to any online cpu within the task's cpuset
       3) to any cpu which is both online and among that task's cpus_allowed
      
      In order to do this, move_task_off_dead_cpu() must call
      cpuset_cpus_allowed_locked(), a new non-blocking variant of
      cpuset_cpus_allowed().  (The name change is per Oleg's suggestion.)
      
      Calls are made to cpuset_lock() and cpuset_unlock() in migration_call() to
      hold the cpuset mutex during the whole migrate_live_tasks() and
      migrate_dead_tasks() procedure.
      
      [akpm@linux-foundation.org: build fix]
      [pj@sgi.com: Fix indentation and spacing]
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
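      
      A hedged sketch of this fallback order, using the helper names from the
      text above (the function shape is illustrative, not the literal patch):
      
          /* pick a destination for a task whose cpu went offline */
          static int pick_dest_cpu(int dead_cpu, struct task_struct *p)
          {
                  cpumask_t mask;
                  int dest_cpu;
          
                  /* 1) an online cpu on the dead cpu's node, in cpus_allowed */
                  mask = node_to_cpumask(cpu_to_node(dead_cpu));
                  cpus_and(mask, mask, p->cpus_allowed);
                  dest_cpu = any_online_cpu(mask);
                  if (dest_cpu < NR_CPUS)
                          return dest_cpu;
          
                  /* 2) any online cpu in the task's cpuset, via the new
                     non-blocking cpuset_cpus_allowed_locked() helper */
                  mask = cpuset_cpus_allowed_locked(p);
                  dest_cpu = any_online_cpu(mask);
                  if (dest_cpu < NR_CPUS)
                          return dest_cpu;
          
                  /* 3) any cpu that is both online and in cpus_allowed */
                  return any_online_cpu(p->cpus_allowed);
          }
      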
    • Use helpers to obtain task pid in printks · ba25f9dc
      Committed by Pavel Emelyanov
      The task_struct->pid member is going to be deprecated, so start
      using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in
      the kernel.
      
      The first thing to start with is the pid printed to dmesg - in this case
      we may safely use task_pid_nr(); a before/after sketch follows this
      entry.  Besides, printks account for more (much more) than half of all
      the explicit pid usage.
      
      [akpm@linux-foundation.org: git-drm went and changed lots of stuff]
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Dave Airlie <airlied@linux.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
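      
      A minimal before/after sketch of the printk conversion described above
      (the call site itself is hypothetical):
      
          /* before: open-coded field access, soon to be deprecated */
          printk(KERN_INFO "killing %d (%s)\n", tsk->pid, tsk->comm);
          
          /* after: task_pid_nr() returns the global pid, safe for dmesg */
          printk(KERN_INFO "killing %d (%s)\n", task_pid_nr(tsk), tsk->comm);
      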
    • Fix tsk->exit_state usage · 270f722d
      Committed by Eugene Teo
      tsk->exit_state can only be 0, EXIT_ZOMBIE, or EXIT_DEAD.  A non-zero test
      is therefore the same as tsk->exit_state & (EXIT_ZOMBIE | EXIT_DEAD), so
      just testing tsk->exit_state is sufficient (illustrated after this entry).
      Signed-off-by: Eugene Teo <eugeneteo@kernel.sg>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
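      
      A minimal sketch of the equivalence described above (do_something() is a
      hypothetical placeholder):
      
          /* before: explicit mask test */
          if (tsk->exit_state & (EXIT_ZOMBIE | EXIT_DEAD))
                  do_something(tsk);
          
          /* after: equivalent, since exit_state is 0, EXIT_ZOMBIE, or
             EXIT_DEAD - any non-zero value matches the mask */
          if (tsk->exit_state)
                  do_something(tsk);
      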
    • Fix cpusets update_cpumask · 8707d8b8
      Committed by Paul Menage
      Cause writes to the cpuset "cpus" file to update cpus_allowed for member
      tasks:
      
      - collect batches of tasks under tasklist_lock and then call
        set_cpus_allowed() on them outside the lock (since this can sleep);
        a sketch of this pattern follows this entry.
      
      - add a simple generic priority heap type to allow efficient collection
        of batches of tasks to be processed without duplicating or missing any
        tasks in subsequent batches.
      
      - make "cpus" file update a no-op if the mask hasn't changed
      
      - fix race between update_cpumask() and sched_setaffinity() by making
        sched_setaffinity() post-check that it's not running on any cpus outside
        cpuset_cpus_allowed().
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Paul Menage <menage@google.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Serge Hallyn <serue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
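      
      A hedged sketch of the batching pattern described above, using a plain
      fixed-size array instead of the patch's priority heap (BATCH_SIZE,
      task_in_this_cpuset() and new_mask are illustrative names):
      
          struct task_struct *batch[BATCH_SIZE];
          struct task_struct *p;
          int n = 0;
          
          /* gather candidates under the lock; set_cpus_allowed() can
             sleep, so it must not be called here */
          read_lock(&tasklist_lock);
          for_each_process(p) {
                  if (!task_in_this_cpuset(p))    /* illustrative predicate */
                          continue;
                  get_task_struct(p);             /* keep p alive past unlock */
                  batch[n++] = p;
                  if (n == BATCH_SIZE)
                          break;
          }
          read_unlock(&tasklist_lock);
          
          /* now sleepable: update each task outside the lock */
          while (n--) {
                  set_cpus_allowed(batch[n], new_mask);
                  put_task_struct(batch[n]);
          }
      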
    • cpuset sched_load_balance flag · 029190c5
      Committed by Paul Jackson
      Add a new per-cpuset flag called 'sched_load_balance'.
      
      When enabled in a cpuset (the default), this flag tells the kernel
      scheduler to provide the normal load balancing on the CPUs in that cpuset,
      sometimes moving tasks from one CPU to a second CPU if the second CPU is
      less loaded and the task is allowed to run there.
      
      When disabled (write "0" to the file), it tells the kernel scheduler
      that load balancing is not required for the CPUs in that cpuset.
      
      Note that even if this flag is disabled for some cpuset, the kernel may
      still have to load balance some or all of the CPUs in that cpuset, if some
      overlapping cpuset has its sched_load_balance flag enabled.
      
      If there are some CPUs that are not in any cpuset whose sched_load_balance
      flag is enabled, the kernel scheduler will not load balance tasks to those
      CPUs.
      
      Moreover, the kernel will partition the 'sched domains' (non-overlapping
      sets of CPUs over which load balancing is attempted) into the finest
      granularity partition that it can find, while still keeping any two CPUs
      that are in the same sched_load_balance enabled cpuset in the same element
      of the partition.
      
      This serves two purposes:
       1) It provides a mechanism for real time isolation of some CPUs, and
       2) it can be used to improve performance on systems with many CPUs
          by supporting configurations in which load balancing is not done
          across all CPUs at once, but rather only done in several smaller
          disjoint sets of CPUs.
      
      This mechanism replaces the earlier overloading of the per-cpuset flag
      'cpu_exclusive'; that overloading was removed in an earlier patch:
      cpuset-remove-sched-domain-hooks-from-cpusets
      
      See further the Documentation and comments in the code itself.
      
      [akpm@linux-foundation.org: don't be weird]
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Uninline find_task_by_xxx set of functions · 228ebcbe
      Committed by Pavel Emelyanov
      find_task_by_something is a set of macros used to find a task by pid,
      depending on which kind of pid is given - a global or a virtual one.  All
      of them are wrappers around the most generic one -
      find_task_by_pid_type_ns() - and just substitute some of its arguments
      (a sketch follows this entry).
      
      It turned out that dereferencing the current->nsproxy->pid_ns construction
      and pushing one more argument onto the stack inline causes the kernel text
      size to grow.
      
      This patch moves all this stuff out-of-line into kernel/pid.c.  Together
      with the next patch it saves a bit less than 400 bytes from the .text
      section.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Paul Menage <menage@google.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
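      
      A hedged sketch of the move: the wrapper becomes a real function in
      kernel/pid.c rather than an inline macro (simplified; the exact
      signatures in the tree may differ):
      
          struct task_struct *find_task_by_vpid(pid_t vnr)
          {
                  /* the out-of-line body dereferences current->nsproxy->pid_ns
                     once, instead of at every inlined call site */
                  return find_task_by_pid_type_ns(PIDTYPE_PID, vnr,
                                                  current->nsproxy->pid_ns);
          }
      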
    • pid namespaces: changes to show virtual ids to user · b488893a
      Committed by Pavel Emelyanov
      This is the largest patch in the set.  Make all (I hope) the places where
      a pid is shown to or obtained from userspace operate on virtual pids.
      
      The idea is (illustrated after this entry):
       - all in-kernel data structures must store either struct pid itself
         or the pid's global nr, obtained with the pid_nr() call;
       - when looking up a task from kernel code with a stored id, one
         should use the find_task_by_pid() call, which works with global pids;
       - when showing a pid's numerical value to the user, the virtual one
         should be used; however, when showing a task's pid outside that
         task's namespace, the global one is to be used;
       - when getting a pid from userspace, one needs to treat it as
         a virtual one and use the appropriate task/pid-searching functions.
      
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: nuther build fix]
      [akpm@linux-foundation.org: yet nuther build fix]
      [akpm@linux-foundation.org: remove unneeded casts]
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Alexey Dobriyan <adobriyan@openvz.org>
      Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Paul Menage <menage@google.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
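      
      A hedged sketch of these rules at illustrative call sites (the helper
      names follow the text above; user_nr is a hypothetical input):
      
          /* in-kernel storage: struct pid itself, or the global nr */
          pid_t stored = pid_nr(task_pid(tsk));
          
          /* lookup by a stored global id */
          struct task_struct *t = find_task_by_pid(stored);
          
          /* value shown to userspace: the id in the task's own namespace */
          pid_t shown = task_pid_vnr(tsk);
          
          /* id received from userspace: resolve it as a virtual id */
          struct task_struct *u = find_task_by_vpid(user_nr);
      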
    • Task Control Groups: example CPU accounting subsystem · 62d0df64
      Committed by Paul Menage
      This example demonstrates how to use the generic cgroup subsystem for a
      simple resource tracker that counts, for the processes in a cgroup, the
      total CPU time used and the %CPU used in the last complete 10-second
      interval.
      
      Portions contributed by Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Paul Menage <menage@google.com>
      Cc: Serge E. Hallyn <serue@us.ibm.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Kirill Korotaev <dev@openvz.org>
      Cc: Herbert Poetzl <herbert@13thfloor.at>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 19 Oct 2007, 6 commits
  4. 17 Oct 2007, 7 commits
    • migration_call(CPU_DEAD): use spin_lock_irq() instead of task_rq_lock() · d2da272a
      Committed by Oleg Nesterov
      Change migration_call(CPU_DEAD) to use a direct spin_lock_irq() instead of
      task_rq_lock(rq->idle); rq->idle can't change its task_rq().
      
      This makes the code a bit more symmetrical with migrate_dead_tasks()'s
      path, which uses spin_lock_irq/spin_unlock_irq (a before/after sketch
      follows this entry).
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Cliff Wickman <cpw@sgi.com>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
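      
      A minimal before/after sketch of the locking change (simplified from the
      surrounding CPU_DEAD handling):
      
          /* before: generic helper that re-checks the task's runqueue */
          rq = task_rq_lock(rq->idle, &flags);
          /* ... CPU_DEAD handling ... */
          task_rq_unlock(rq, &flags);
          
          /* after: rq->idle can never migrate, so lock the rq directly */
          spin_lock_irq(&rq->lock);
          /* ... CPU_DEAD handling ... */
          spin_unlock_irq(&rq->lock);
      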
    • do CPU_DEAD migrating under read_lock(tasklist) instead of write_lock_irq(tasklist) · f7b4cddc
      Committed by Oleg Nesterov
      Currently move_task_off_dead_cpu() is called under
      write_lock_irq(tasklist).  This means it can't use task_lock(), which is
      needed to improve migrating to take the task's ->cpuset into account.
      
      Change the code to call move_task_off_dead_cpu() with irqs enabled, and
      change migrate_live_tasks() to use read_lock(tasklist).
      
      This is all preparation for the further changes proposed by Cliff Wickman;
      see http://marc.info/?t=117327786100003
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Cliff Wickman <cpw@sgi.com>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sched: fix new task startup crash · b9dca1e0
      Committed by Srivatsa Vaddagiri
      A child task may be added on a different cpu than the one on which the
      parent is running, in which case task_new_fair() should check whether the
      newly born task's parent entity should be added to the cfs_rq as well.
      
      The patch below fixes the problem in task_new_fair.
      
      This should fix the reported put_prev_task_fair() crashes.
      Reported-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Reported-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: fix improper load balance across sched domain · 908a7c1b
      Committed by Ken Chen
      We recently discovered a nasty performance bug in the kernel CPU load
      balancer, where we were hit by a 50% performance regression.
      
      When tasks are assigned to a subset of CPUs that spans sched_domains
      (either a ccNUMA node or the new multi-core domain) via cpu affinity, the
      kernel fails to perform proper load balancing at these domains, because
      several pieces of logic in find_busiest_group() misidentify the busiest
      sched group within a given domain.  This leads to inadequate load
      balancing and causes the 50% performance hit.
      
      To give you a concrete example, on a dual-core, 2-socket numa system,
      there are 4 logical cpus, organized as:
      
      CPU0 attaching sched-domain:
       domain 0: span 0003  groups: 0001 0002
       domain 1: span 000f  groups: 0003 000c
      CPU1 attaching sched-domain:
       domain 0: span 0003  groups: 0002 0001
       domain 1: span 000f  groups: 0003 000c
      CPU2 attaching sched-domain:
       domain 0: span 000c  groups: 0004 0008
       domain 1: span 000f  groups: 000c 0003
      CPU3 attaching sched-domain:
       domain 0: span 000c  groups: 0008 0004
       domain 1: span 000f  groups: 000c 0003
      
      If I run 2 tasks with CPU affinity set to 0x5, there are situations where
      cpu0 has a run queue length of 2 while cpu2 is idle.  The kernel load
      balancer is unable to balance these two tasks across cpu0 and cpu2,
      because at least three pieces of logic in find_busiest_group() heavily
      bias load balancing towards power-saving mode.  E.g., while determining
      the "busiest" variable, the kernel only sets it when
      "sum_nr_running > group_capacity".  This test is flawed, because
      "sum_nr_running" is not necessarily the same as the number of tasks
      allowed to run within the sched group.  The end result is that the kernel
      "thinks" everything is balanced, but in reality we have an imbalance,
      causing one CPU to be over-subscribed while leaving another idle.  Two
      other pieces of logic in the same function cause a similar effect.  The
      nastiness of this bug is that the kernel cannot get unstuck from this
      unfortunate broken state.  From what we've seen in our environment, the
      kernel stays stuck in the imbalanced state for extended periods of time,
      and it is also very easy to get the kernel into that state (it's pretty
      much 100% reproducible for us).
      
      So we propose the following fix: add additional logic in
      find_busiest_group() to detect intrinsic imbalance within the busiest
      group.  When such a condition is detected, load balancing goes into spread
      mode instead of the default grouping mode (a hedged sketch of the
      detection idea follows this entry).
      Signed-off-by: Ken Chen <kenchen@google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
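      
      A hedged sketch of the detection idea (illustrative, not the literal
      patch): flag the group as internally imbalanced when the per-cpu loads
      inside it diverge widely, so the balancer spreads tasks rather than
      treating the group as a single aggregate.  The threshold and variable
      names here are assumptions.
      
          unsigned long load, max_cpu_load = 0, min_cpu_load = ~0UL;
          int group_imb = 0;
          int i;
          
          for_each_cpu_mask(i, group->cpumask) {
                  load = weighted_cpuload(i);
                  if (load > max_cpu_load)
                          max_cpu_load = load;
                  if (load < min_cpu_load)
                          min_cpu_load = load;
          }
          
          /* a large spread within one group means intrinsic imbalance:
             switch from grouping mode to spread mode */
          if (max_cpu_load - min_cpu_load > SCHED_LOAD_SCALE)
                  group_imb = 1;
      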
    • sched: more robust sd-sysctl entry freeing · cd790076
      Committed by Milton Miller
      It occurred to me this morning that the procname field was dynamically
      allocated and needed to be freed.  I started to put in break statements
      when allocation failed, but it was approaching 50% error-handling code.
      
      I came up with this alternative of looping while entry->mode is set and
      checking proc_handler instead of ->table (a sketch follows this entry).
      Alternatively, the string versions of the domain name and cpu number could
      be stored in the structs.
      
      I verified by compiling with CONFIG_DEBUG_SLAB and checking the allocation
      counts after making a cpuset exclusive and back.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
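      
      A hedged sketch of the freeing approach described above (illustrative,
      not the literal patch): walk the table while ->mode is set; entries
      without a proc_handler are sub-directories, so recurse before freeing the
      dynamically allocated procname.  The function name is an assumption.
      
          static void sd_free_ctl_entry(struct ctl_table *table)
          {
                  struct ctl_table *entry;
          
                  for (entry = table; entry->mode; entry++) {
                          if (entry->proc_handler == NULL && entry->child)
                                  sd_free_ctl_entry(entry->child);
                          kfree(entry->procname);
                  }
                  kfree(table);
          }
      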
    • cpuset: remove sched domain hooks from cpusets · 607717a6
      Committed by Paul Jackson
      Remove the cpuset hooks that defined sched domains depending on the setting
      of the 'cpu_exclusive' flag.
      
      The cpu_exclusive flag can only be set on a child if it is set on the
      parent.
      
      This made that flag painfully unsuitable for use as a flag defining a
      partitioning of a system.
      
      It was entirely unobvious to a cpuset user what partitioning of sched
      domains they would be causing when they set that one cpu_exclusive bit on
      one cpuset, because it depended on what CPUs were in the remainder of that
      cpuset's siblings and child cpusets, after subtracting out other
      cpu_exclusive cpusets.
      
      Furthermore, there was no way on production systems to query the
      result.
      
      Using the cpu_exclusive flag for this was simply wrong from the get go.
      
      Fortunately, it was sufficiently borked that, so far as I know, almost no
      successful use has been made of this.  One real-time group did use it to
      effectively isolate CPUs from any load balancing efforts.  They are willing
      to adapt to alternative mechanisms for this, such as some way to manipulate
      the list of isolated CPUs on a running system.  They can do without this
      present cpu_exclusive based mechanism while we develop an alternative.
      
      There is a real risk, to the best of my understanding, of users
      accidentally setting up partitioned scheduler domains, inhibiting desired
      load balancing across all their CPUs, due to the nonobvious (from the
      cpuset perspective) side effects of the cpu_exclusive flag.
      
      Furthermore, since there was no way on a running system to see what one
      was doing with sched domains, this change will be invisible to any code
      using them.  Unless users have real insight into the scheduler's load
      balancing choices, they will be unable to detect that this change has been
      made in the kernel's behaviour.
      
      Initial discussion on lkml of this patch has generated much comment.  My
      (probably controversial) take on that discussion is that it has reached a
      rough consensus that the current cpuset cpu_exclusive mechanism for
      defining sched domains is borked.  There is no consensus on the
      replacement.  But since we can remove this mechanism, and since its
      continued presence risks causing unwanted partitioning of the scheduler's
      load balancing, we should remove it while we can, as we proceed to work on
      replacement scheduler domain mechanisms.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Cc: Dinakar Guniguntala <dino@in.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Convert cpu_sibling_map to be a per cpu variable · d5a7430d
      Committed by Mike Travis
      Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu
      variable (a sketch follows this entry).  This saves sizeof(cpumask_t)
      bytes for each unused cpu.  Access is mostly from startup and CPU HOTPLUG
      functions.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
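      
      A hedged sketch of the conversion pattern described above (illustrative;
      the per-arch patches differ in detail):
      
          /* before: statically sized by the build-time CPU limit */
          cpumask_t cpu_sibling_map[NR_CPUS] __read_mostly;
          
          /* after: one cpumask_t instance per possible cpu */
          DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);
          EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
          
          /* readers change from cpu_sibling_map[cpu] to: */
          cpumask_t sib = per_cpu(cpu_sibling_map, cpu);
      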
  5. 15 Oct 2007, 15 commits