1. 01 April 2006, 4 commits
  2. 29 March 2006, 1 commit
  3. 28 March 2006, 4 commits
    • [PATCH] sched: fix group power for allnodes_domains · 08069033
      Authored by Siddha, Suresh B
      The current sched-group power calculation for allnodes_domains is wrong.  We
      should really be using the cumulative power of the physical packages in that
      group (similar to the calculation in node_domains).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: new sched domain for representing multi-core · 1e9f28fa
      Authored by Siddha, Suresh B
      Add a new sched domain for representing multi-core with shared caches
      between cores.  Consider a dual-package system, each package containing two
      cores, with the last-level cache shared between the cores within a package.
      If there are two runnable processes, with this patch those two processes
      will be scheduled on different packages.
      
      On such systems, with this patch we have observed an 8% performance
      improvement with the specJBB (2 warehouses) benchmark and a 35% improvement
      with CFP2000 rate (with 2 users).
      
      This new domain will come into play only on multi-core systems with shared
      caches.  On other systems, this sched domain will be removed by the domain
      degeneration code.  This new domain can also be used for implementing a
      power savings policy (see the OLS 2005 CMP kernel scheduler paper for more
      details; I will post another patch for the power savings policy soon).
      
      Most of the arch/* file changes are for cpu_coregroup_map() implementation.
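      As a rough illustration of what cpu_coregroup_map() boils down to on an
      x86-style arch (assuming the existing cpu_core_map sibling map; the real
      per-arch implementations differ in detail):
      
      	/* Sketch: the cpus eligible to share the new core domain are the
      	 * cores in the same physical package, which share the cache. */
      	cpumask_t cpu_coregroup_map(int cpu)
      	{
      		return cpu_core_map[cpu];
      	}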
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Small schedule() optimization · 77e4bfbc
      Authored by Andreas Mohr
      A small schedule() micro-optimization.
      Signed-off-by: Andreas Mohr <andi@lisas.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: fix task interactivity calculation · 013d3868
      Authored by Martin Andersson
      There is a truncation error in kernel/sched.c, triggered when the nice value
      is negative.  The affected code is used in the TASK_INTERACTIVE macro.
      
      The code is:
      #define SCALE(v1,v1_max,v2_max) \
      	(v1) * (v2_max) / (v1_max)
      
      which is used in this way:
      SCALE(TASK_NICE(p), 40, MAX_BONUS)
      
      Comments in the code say:
        * This part scales the interactivity limit depending on niceness.
        *
        * We scale it linearly, offset by the INTERACTIVE_DELTA delta.
        * Here are a few examples of different nice levels:
        *
        *  TASK_INTERACTIVE(-20): [1,1,1,1,1,1,1,1,1,0,0]
        *  TASK_INTERACTIVE(-10): [1,1,1,1,1,1,1,0,0,0,0]
        *  TASK_INTERACTIVE(  0): [1,1,1,1,0,0,0,0,0,0,0]
        *  TASK_INTERACTIVE( 10): [1,1,0,0,0,0,0,0,0,0,0]
        *  TASK_INTERACTIVE( 19): [0,0,0,0,0,0,0,0,0,0,0]
        *
        * (the X axis represents the possible -5 ... 0 ... +5 dynamic
        *  priority range a task can explore, a value of '1' means the
        *  task is rated interactive.)
      
      However, the current code does not scale it linearly and the result differs
      from the given examples.  If the mathematical function "floor" is used when
      the nice value is negative instead of the truncation one gets when using
      integer division, the result conforms to the documentation.
      
      Output of TASK_INTERACTIVE when using the kernel code:
      nice    dynamic priorities
      -20     1     1     1     1     1     1     1     1     1     0     0
      -19     1     1     1     1     1     1     1     1     0     0     0
      -18     1     1     1     1     1     1     1     1     0     0     0
      -17     1     1     1     1     1     1     1     1     0     0     0
      -16     1     1     1     1     1     1     1     1     0     0     0
      -15     1     1     1     1     1     1     1     0     0     0     0
      -14     1     1     1     1     1     1     1     0     0     0     0
      -13     1     1     1     1     1     1     1     0     0     0     0
      -12     1     1     1     1     1     1     1     0     0     0     0
      -11     1     1     1     1     1     1     0     0     0     0     0
      -10     1     1     1     1     1     1     0     0     0     0     0
        -9     1     1     1     1     1     1     0     0     0     0     0
        -8     1     1     1     1     1     1     0     0     0     0     0
        -7     1     1     1     1     1     0     0     0     0     0     0
        -6     1     1     1     1     1     0     0     0     0     0     0
        -5     1     1     1     1     1     0     0     0     0     0     0
        -4     1     1     1     1     1     0     0     0     0     0     0
        -3     1     1     1     1     0     0     0     0     0     0     0
        -2     1     1     1     1     0     0     0     0     0     0     0
        -1     1     1     1     1     0     0     0     0     0     0     0
        0      1     1     1     1     0     0     0     0     0     0     0
        1      1     1     1     1     0     0     0     0     0     0     0
        2      1     1     1     1     0     0     0     0     0     0     0
        3      1     1     1     1     0     0     0     0     0     0     0
        4      1     1     1     0     0     0     0     0     0     0     0
        5      1     1     1     0     0     0     0     0     0     0     0
        6      1     1     1     0     0     0     0     0     0     0     0
        7      1     1     1     0     0     0     0     0     0     0     0
        8      1     1     0     0     0     0     0     0     0     0     0
        9      1     1     0     0     0     0     0     0     0     0     0
      10      1     1     0     0     0     0     0     0     0     0     0
      11      1     1     0     0     0     0     0     0     0     0     0
      12      1     0     0     0     0     0     0     0     0     0     0
      13      1     0     0     0     0     0     0     0     0     0     0
      14      1     0     0     0     0     0     0     0     0     0     0
      15      1     0     0     0     0     0     0     0     0     0     0
      16      0     0     0     0     0     0     0     0     0     0     0
      17      0     0     0     0     0     0     0     0     0     0     0
      18      0     0     0     0     0     0     0     0     0     0     0
      19      0     0     0     0     0     0     0     0     0     0     0
      
      Output of TASK_INTERACTIVE when using "floor"
      nice    dynamic priorities
      -20     1     1     1     1     1     1     1     1     1     0     0
      -19     1     1     1     1     1     1     1     1     1     0     0
      -18     1     1     1     1     1     1     1     1     1     0     0
      -17     1     1     1     1     1     1     1     1     1     0     0
      -16     1     1     1     1     1     1     1     1     0     0     0
      -15     1     1     1     1     1     1     1     1     0     0     0
      -14     1     1     1     1     1     1     1     1     0     0     0
      -13     1     1     1     1     1     1     1     1     0     0     0
      -12     1     1     1     1     1     1     1     0     0     0     0
      -11     1     1     1     1     1     1     1     0     0     0     0
      -10     1     1     1     1     1     1     1     0     0     0     0
        -9     1     1     1     1     1     1     1     0     0     0     0
        -8     1     1     1     1     1     1     0     0     0     0     0
        -7     1     1     1     1     1     1     0     0     0     0     0
        -6     1     1     1     1     1     1     0     0     0     0     0
        -5     1     1     1     1     1     1     0     0     0     0     0
        -4     1     1     1     1     1     0     0     0     0     0     0
        -3     1     1     1     1     1     0     0     0     0     0     0
        -2     1     1     1     1     1     0     0     0     0     0     0
        -1     1     1     1     1     1     0     0     0     0     0     0
         0     1     1     1     1     0     0     0     0     0     0     0
         1     1     1     1     1     0     0     0     0     0     0     0
         2     1     1     1     1     0     0     0     0     0     0     0
         3     1     1     1     1     0     0     0     0     0     0     0
         4     1     1     1     0     0     0     0     0     0     0     0
         5     1     1     1     0     0     0     0     0     0     0     0
         6     1     1     1     0     0     0     0     0     0     0     0
         7     1     1     1     0     0     0     0     0     0     0     0
         8     1     1     0     0     0     0     0     0     0     0     0
         9     1     1     0     0     0     0     0     0     0     0     0
        10     1     1     0     0     0     0     0     0     0     0     0
        11     1     1     0     0     0     0     0     0     0     0     0
        12     1     0     0     0     0     0     0     0     0     0     0
        13     1     0     0     0     0     0     0     0     0     0     0
        14     1     0     0     0     0     0     0     0     0     0     0
        15     1     0     0     0     0     0     0     0     0     0     0
        16     0     0     0     0     0     0     0     0     0     0     0
        17     0     0     0     0     0     0     0     0     0     0     0
        18     0     0     0     0     0     0     0     0     0     0     0
        19     0     0     0     0     0     0     0     0     0     0     0
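      
      For illustration, a minimal userspace sketch (not the patch itself) of one
      way to get floor semantics from integer division; SCALE and MAX_BONUS mirror
      the kernel names above, everything else is hypothetical:
      
      	#include <stdio.h>
      
      	#define MAX_BONUS 10
      
      	/* C integer division truncates toward zero: -19 * 10 / 40 == -4. */
      	#define SCALE_TRUNC(v1, v1_max, v2_max) \
      		((v1) * (v2_max) / (v1_max))
      
      	/* Shift into the non-negative range, divide, then shift back, so
      	 * negative values round toward minus infinity (floor). */
      	#define SCALE_FLOOR(v1, v1_max, v2_max) \
      		(((v1) + (v1_max)) * (v2_max) / (v1_max) - (v2_max))
      
      	int main(void)
      	{
      		int nice;
      
      		for (nice = -20; nice < 20; nice++)
      			printf("%3d  trunc=%3d  floor=%3d\n", nice,
      			       SCALE_TRUNC(nice, 40, MAX_BONUS),
      			       SCALE_FLOOR(nice, 40, MAX_BONUS));
      		return 0;
      	}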
      Signed-off-by: Martin Andersson <martin.andersson@control.lth.se>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Williams <pwil3058@bigpond.net.au>
      Cc: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 27 March 2006, 1 commit
    • [PATCH] kretprobe instance recycled by parent process · c6fd91f0
      Authored by bibo mao
      When kretprobe probes the schedule() function, if the probed process exits
      then schedule() will never return, so some kretprobe instances will never
      be recycled.
      
      With this patch, the parent process recycles the kretprobe instances of the
      probed function, so there is no memory leak of kretprobe instances.
      Signed-off-by: bibo mao <bibo.mao@intel.com>
      Cc: Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 23 March 2006, 2 commits
    • [PATCH] make bug messages more consistent · 91368d73
      Authored by Ingo Molnar
      Consolidate all kernel bug printouts to begin with the "BUG: " string.
      Makes it easier to find them in large bootup logs.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] fix scheduler deadlock · e9028b0f
      Authored by Anton Blanchard
      We have noticed lockups during boot when stress testing kexec on ppc64.
      Two cpus would deadlock in scheduler code trying to grab already taken
      spinlocks.
      
      The double_rq_lock code uses the address of the runqueue to order the
      taking of multiple locks.  This address is a per cpu variable:
      
      	if (rq1 < rq2) {
      		spin_lock(&rq1->lock);
      		spin_lock(&rq2->lock);
      	} else {
      		spin_lock(&rq2->lock);
      		spin_lock(&rq1->lock);
      	}
      
      On the other hand, the code in wake_sleeping_dependent uses the cpu id
      order to grab locks:
      
      	for_each_cpu_mask(i, sibling_map)
      		spin_lock(&cpu_rq(i)->lock);
      
      This means we rely on the address of per cpu data increasing as cpu ids
      increase.  While this will be true for the generic percpu implementation it
      may not be true for arch specific implementations.
      
      One way to solve this is to always take runqueues in cpu id order. To do
      this we add a cpu variable to the runqueue and check it in the
      double runqueue locking functions.
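      
      A rough sketch of what cpu-id-ordered locking could look like (not the
      literal patch; struct and field spellings are approximations of the 2.6 code):
      
      	/* Order the two runqueue locks by the new rq->cpu field rather
      	 * than by pointer address, so the ordering matches the cpu-id
      	 * ordering used in wake_sleeping_dependent(). */
      	static void double_rq_lock(struct runqueue *rq1, struct runqueue *rq2)
      	{
      		if (rq1 == rq2) {
      			spin_lock(&rq1->lock);
      		} else if (rq1->cpu < rq2->cpu) {
      			spin_lock(&rq1->lock);
      			spin_lock(&rq2->lock);
      		} else {
      			spin_lock(&rq2->lock);
      			spin_lock(&rq1->lock);
      		}
      	}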
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  6. 22 March 2006, 1 commit
  7. 12 March 2006, 1 commit
  8. 09 March 2006, 1 commit
  9. 07 March 2006, 1 commit
    • Add early-boot-safety check to cond_resched() · 8ba7b0a1
      Authored by Linus Torvalds
      Just to be safe, we should not trigger a conditional reschedule during
      the early boot sequence.  We've historically done some questionable things
      early on, and the safety warnings in __might_sleep() are generally
      turned off during that period, so there might be problems lurking.
      
      This affects CONFIG_PREEMPT_VOLUNTARY, which takes over might_sleep() to
      cause a voluntary conditional reschedule.
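      
      A minimal sketch of the kind of guard this implies (the exact condition in
      the patch may differ; system_state and SYSTEM_RUNNING are existing kernel
      symbols):
      
      	/* Skip voluntary rescheduling until the system is fully up. */
      	int __sched cond_resched(void)
      	{
      		if (need_resched() && system_state == SYSTEM_RUNNING) {
      			__cond_resched();
      			return 1;
      		}
      		return 0;
      	}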
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  10. 18 February 2006, 1 commit
    • [PATCH] Introduce CONFIG_DEFAULT_MIGRATION_COST · 4bbf39c2
      Authored by Ingo Molnar
      Heiko Carstens <heiko.carstens@de.ibm.com> wrote:
      
        The boot sequence on s390 sometimes takes ages and we spend a very long
        time (up to one or two minutes) in calibrate_migration_costs.  The time
        spent there differs from boot to boot.  Also the calculated costs differ
        a lot.  I've seen differences by up to a factor of 15 (yes, factor not
        percent).  Also, I doubt that making these measurements makes much sense on
        a completely virtualized architecture where you cannot tell how much cpu
        time you will get anyway.
      
      So introduce the CONFIG_DEFAULT_MIGRATION_COST method for an architecture
      to set the scheduler migration costs.  This turns off automatic detection
      of migration costs.  Makes sense on virtual platforms, where migration
      costs are hard to measure accurately.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 15 February 2006, 1 commit
    • [PATCH] sched: revert "filter affine wakeups" · d6077cb8
      Authored by Chen, Kenneth W
      Revert commit d7102e95:
      
          [PATCH] sched: filter affine wakeups
      
      It apparently caused a performance regression of more than 10% on the aim7
      benchmark.  The setup in use is a 16-cpu HP rx8620 with 64GB of memory and
      12 MSA1000s with 144 disks.  Each disk is 72GB with a single ext3 filesystem
      (courtesy of HP, who supplied the benchmark results).
      
      The problem is, for aim7, the wake-up pattern is random, but it still needs
      load balancing action in the wake-up path to achieve best performance.  With
      the above commit, lack of load balancing hurts that workload.
      
      However, for workloads like database transaction processing, the requirement
      is exactly the opposite.  In the wake-up path, best performance is achieved
      with absolutely zero load balancing: we simply wake up the process on the
      CPU it previously ran on.  Worst performance is obtained when we do load
      balancing at wake-up.
      
      There isn't an easy way to auto-detect the workload characteristics.  Ingo's
      earlier patch, which detects an idle CPU and decides whether to load balance
      or not, doesn't perform well with aim7 either, since all CPUs are busy (it
      causes an even bigger performance regression).
      
      Revert commit d7102e95, which causes more
      than 10% performance regression with aim7.
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  12. 11 February 2006, 1 commit
    • [PATCH] sched: remove smpnice · a2000572
      Authored by Nick Piggin
      I don't think the code is quite ready, which is why I asked for Peter's
      additions to also be merged before I acked it (although it turned out that
      it still isn't quite ready with his additions either).
      
      Basically I have had similar observations to Suresh in that it does not
      play nicely with the rest of the balancing infrastructure (and raised
      similar concerns in my review).
      
      The samples (group of 4) I got for "maximum recorded imbalance" on a 2x2
      SMP+HT Xeon are as follows:
      
                  | Following boot | hackbench 20        | hackbench 40
       -----------+----------------+---------------------+---------------------
       2.6.16-rc2 | 30,37,100,112  | 5600,5530,6020,6090 | 6390,7090,8760,8470
       +nosmpnice |  3, 2,  4,  2  |   28, 150, 294, 132 |  348, 348, 294, 347
      
      Hackbench raw performance is down around 15% with smpnice (but that in
      itself isn't a huge deal because it is just a benchmark).  However, the
      samples show that the imbalance passed into move_tasks is increased by
      about a factor of 10-30.  I think this would also go some way to explaining
      latency blips turning up in the balancing code (though I haven't actually
      measured that).
      
      We'll probably have to revert this in the SUSE kernel.
      
      Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Williams <pwil3058@bigpond.net.au>
      Cc: "Martin J. Bligh" <mbligh@aracnet.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  13. 06 February 2006, 2 commits
  14. 02 February 2006, 1 commit
    • [PATCH] sys_sched_getaffinity() & hotplug · 2f7016d9
      Authored by Jack Steiner
      Change sched_getaffinity() so that it returns a bitmap that indicates the
      legally schedulable cpus that a task is allowed to run on.
      
      Without this patch, if CONFIG_HOTPLUG_CPU is enabled, sched_getaffinity()
      unconditionally returns (at least on IA64) a mask with NR_CPUS bits set.
      This conveys no useful information except for a kernel compile option.
      
      This fixes a breakage we observed running recent kernels.  We have MPI jobs
      that use sched_getaffinity() to determine where to place their threads.
      Placing them on non-existent cpus is problematic :-)
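      
      A hedged sketch of the idea (the real change lives in sys_sched_getaffinity();
      treat the exact mask used here as an assumption):
      
      	/* Report only cpus the task may legally run on, rather than the
      	 * raw affinity mask padded out to NR_CPUS bits. */
      	cpumask_t mask;
      
      	cpus_and(mask, p->cpus_allowed, cpu_online_map);
      	/* ...then copy 'mask' out to userspace as before... */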
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Nathan Lynch <ntl@pobox.com>
      Cc: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  15. 01 February 2006, 1 commit
  16. 19 January 2006, 1 commit
  17. 15 January 2006, 2 commits
  18. 13 January 2006, 2 commits
    • [PATCH] sched: filter affine wakeups · d7102e95
      Authored by akpm@osdl.org
      
      From: Nick Piggin <nickpiggin@yahoo.com.au>
      
      Track the last waker CPU, and only consider wakeup-balancing if there's a
      match between current waker CPU and the previous waker CPU.  This ensures
      that there is some correlation between two subsequent wakeup events before
      we move the task.  Should help random-wakeup workloads on large SMP
      systems, by reducing the migration attempts by a factor of nr_cpus.
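      
      A hedged sketch of the mechanism described above (the field and helper names
      are hypothetical, not the patch's actual identifiers):
      
      	/* Hypothetical helper: return 1 only when the cpu doing this wakeup
      	 * matches the cpu that performed the task's previous wakeup. */
      	static int wake_affine_candidate(task_t *p)
      	{
      		int this_cpu = smp_processor_id();
      
      		if (p->last_waker_cpu != this_cpu) {
      			p->last_waker_cpu = this_cpu;
      			return 0;	/* no correlation yet: skip balancing */
      		}
      		return 1;
      	}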
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] scheduler cache-hot-autodetect · 198e2f18
      Authored by akpm@osdl.org
      
      From: Ingo Molnar <mingo@elte.hu>
      
      This is the latest version of the scheduler cache-hot-auto-tune patch.
      
      The first problem was that detection time scaled with O(N^2), which is
      unacceptable on larger SMP and NUMA systems. To solve this:
      
      - I've added a 'domain distance' function, which is used to cache
        measurement results. Each distance is only measured once. This means
        that e.g. on NUMA distances of 0, 1 and 2 might be measured, on HT
        distances 0 and 1, and on SMP distance 0 is measured. The code walks
        the domain tree to determine the distance, so it automatically follows
        whatever hierarchy an architecture sets up. This cuts down on the boot
        time significantly and removes the O(N^2) limit. The only assumption
        is that migration costs can be expressed as a function of domain
        distance - this covers the overwhelming majority of existing systems,
        and is a good guess even for more asymmetric systems.
      
        [ People hacking systems that have asymmetries that break this
          assumption (e.g. different CPU speeds) should experiment a bit with
          the cpu_distance() function. Adding a ->migration_distance factor to
          the domain structure would be one possible solution - but let's first
          see the problem systems, if they exist at all. Let's not overdesign. ]
      
      Another problem was that only a single cache-size was used for measuring
      the cost of migration, and most architectures didn't set that variable
      up. Furthermore, a single cache-size does not fit NUMA hierarchies with
      L3 caches and does not fit HT setups, where different CPUs will often
      have different 'effective cache sizes'. To solve this problem:
      
      - Instead of relying on a single cache-size provided by the platform and
        sticking to it, the code now auto-detects the 'effective migration
        cost' between two measured CPUs, via iterating through a wide range of
        cachesizes. The code searches for the maximum migration cost, which
        occurs when the working set of the test-workload falls just below the
        'effective cache size'. I.e. real-life optimized search is done for
        the maximum migration cost, between two real CPUs.
      
        This, amongst other things, has the positive effect that if e.g. two
        CPUs share an L2/L3 cache, a different (and accurate) migration cost
        will be found than between two CPUs on the same system that don't share
        any caches.
      
      (The reliable measurement of migration costs is tricky - see the source
      for details.)
      
      Furthermore i've added various boot-time options to override/tune
      migration behavior.
      
      Firstly, there's a blanket override for autodetection:
      
      	migration_cost=1000,2000,3000
      
      will override the depth 0/1/2 values with 1msec/2msec/3msec values.
      
      Secondly, there's a global factor that can be used to increase (or
      decrease) the autodetected values:
      
      	migration_factor=120
      
      will increase the autodetected values by 20%. This option is useful to
      tune things in a workload-dependent way - e.g. if a workload is
      cache-insensitive then CPU utilization can be maximized by specifying
      migration_factor=0.
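      
      As a small illustration of how such a percentage factor is applied (the
      variable names here are assumptions, not the patch's identifiers):
      
      	/* migration_factor is a percentage: 100 leaves the autodetected
      	 * cost unchanged, 120 raises it by 20%, 0 makes migration "free". */
      	unsigned long long scaled_cost;
      
      	scaled_cost = autodetected_cost * migration_factor / 100;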
      
      I've tested the autodetection code quite extensively on x86, on three
      systems (a dual Celeron, a dual HT P4, and an 8-way P3/Xeon with 2MB L2
      cache), and the autodetected values look pretty good:
      
      Dual Celeron (128K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 131072, cpu: 467 MHz):
       ---------------------
                 [00]    [01]
       [00]:     -     1.7(1)
       [01]:   1.7(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 1.7 (1784008)
       ---------------------
      
      Here the slow memory subsystem dominates system performance, and even
      though caches are small, the migration cost is 1.7 msecs.
      
      Dual HT P4 (512K L2 cache):
      
       ---------------------
       migration cost matrix (max_cache_size: 524288, cpu: 2379 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]
       [00]:     -     0.4(1)  0.0(0)  0.4(1)
       [01]:   0.4(1)    -     0.4(1)  0.0(0)
       [02]:   0.0(0)  0.4(1)    -     0.4(1)
       [03]:   0.4(1)  0.0(0)  0.4(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (33900) 0.4 (448514)
       ---------------------
      
      Here it can be seen that there is no migration cost between two HT
      siblings (CPU#0/2 and CPU#1/3 are separate physical CPUs). A fast memory
      system makes inter-physical-CPU migration pretty cheap: 0.4 msecs.
      
      8-way P3/Xeon [2MB L2 cache]:
      
       ---------------------
       migration cost matrix (max_cache_size: 2097152, cpu: 700 MHz):
       ---------------------
                 [00]    [01]    [02]    [03]    [04]    [05]    [06]    [07]
       [00]:     -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [01]:  19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [02]:  19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [03]:  19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1) 19.2(1)
       [04]:  19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1) 19.2(1)
       [05]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1) 19.2(1)
       [06]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -    19.2(1)
       [07]:  19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1) 19.2(1)    -
       ---------------------
       cacheflush times [2]: 0.0 (0) 19.2 (19281756)
       ---------------------
      
      This one has huge caches and a relatively slow memory subsystem - so the
      migration cost is 19 msecs.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Cc: <wilder@us.ibm.com>
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  19. 12 January 2006, 2 commits
  20. 10 January 2006, 1 commit
  21. 09 January 2006, 1 commit
  22. 14 November 2005, 2 commits
  23. 10 November 2005, 1 commit
  24. 09 November 2005, 5 commits
    • [PATCH] sched: resched and cpu_idle rework · 64c7c8f8
      Authored by Nick Piggin
      Make some changes to the NEED_RESCHED and POLLING_NRFLAG to reduce
      confusion, and make their semantics rigid.  Improves efficiency of
      resched_task and some cpu_idle routines.
      
      * In resched_task:
      - TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
        and as we hold it during resched_task, there is no need for an
        atomic test and set there. The only other time this should be set is
        when the task's quantum expires, in the timer interrupt - this is
        protected against because the rq lock is irq-safe.
      
      - If TIF_NEED_RESCHED is set, then we don't need to do anything. It
        won't get unset until the task gets schedule()d off.
      
      - If we are running on the same CPU as the task we resched, then set
        TIF_NEED_RESCHED and no further action is required.
      
      - If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
        after TIF_NEED_RESCHED has been set, then we need to send an IPI.
      
      Using these rules, we are able to remove the test and set operation in
      resched_task, and make clear the previously vague semantics of
      POLLING_NRFLAG.
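      
      A condensed sketch of resched_task() following these rules (illustrative
      only; the real function differs in detail):
      
      	/* Assumes the task's runqueue lock is held by the caller. */
      	static void resched_task(task_t *p)
      	{
      		int cpu;
      
      		if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED)))
      			return;		/* already marked, nothing more to do */
      
      		set_tsk_thread_flag(p, TIF_NEED_RESCHED);
      
      		cpu = task_cpu(p);
      		if (cpu == smp_processor_id())
      			return;		/* local cpu: the flag alone is enough */
      
      		/* Only send an IPI if the remote cpu is not polling the flag. */
      		smp_mb();
      		if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
      			smp_send_reschedule(cpu);
      	}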
      
      * In idle routines:
      - Enter cpu_idle with preempt disabled. When the need_resched() condition
        becomes true, explicitly call schedule(). This makes things a bit clearer
        (IMO), but I haven't updated all architectures yet.
      
      - Many do a test and clear of TIF_NEED_RESCHED for some reason. According
        to the resched_task rules, this isn't needed (and actually breaks the
        assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
        held). So remove that. Generally one less locked memory op when switching
        to the idle thread.
      
      - Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the inner
        most polling idle loops. The above resched_task semantics allow it to be
        set until before the last time need_resched() is checked before going into
        a halt requiring interrupt wakeup.
      
        Many idle routines simply never enter such a halt, and so POLLING_NRFLAG
        can be always left set, completely eliminating resched IPIs when rescheduling
        the idle task.
      
        POLLING_NRFLAG width can be increased, to reduce the chance of resched IPIs.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: consider migration thread with smp nice · ede3d0fb
      Authored by Con Kolivas
      The intermittent scheduling of the migration thread at ultra high priority
      makes the smp nice handling see that runqueue as being heavily loaded.  The
      migration thread itself actually handles the balancing so its influence on
      priority balancing should be ignored.
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: correct smp_nice_bias · 6dd4a85b
      Authored by Con Kolivas
      The priority biasing was off by multiplying the total load by the total
      priority bias, and this ruins the ratio of loads between runqueues.  This
      patch should correct the ratios of loads between runqueues to be
      proportional to overall load (second attempt).
      
      From: Dave Kleikamp <shaggy@austin.ibm.com>
      
        This patch fixes a divide-by-zero error that I hit on a two-way i386
        machine.  rq->nr_running is tested to be non-zero, but may change by the
        time it is used in the division.  Saving the value to a local variable
        ensures that the same value that is checked is used in the division.
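      
      A generic sketch of the pattern Dave describes (names are illustrative, not
      the kernel's actual variables):
      
      	/* Snapshot nr_running once, so the value that is tested for zero
      	 * is the same value later used as the divisor. */
      	unsigned long nr_running = rq->nr_running;
      
      	if (nr_running)
      		load = bias_prio * SCHED_LOAD_SCALE / nr_running;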
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: smp nice bias busy queues on idle rebalance · 3b0bd9bc
      Authored by Con Kolivas
      To intensify the 'nice' support across physical cpus on SMP we can bias the
      loads on idle rebalancing. To prevent idle rebalance from trying to pull tasks
      from queues that appear heavily loaded we only bias the load if there is more
      than one task running.
      
      Add some minor micro-optimisations and have only one return from __source_load
      and __target_load functions.
      
      Fix the fact that target_load was not biased by priority when type == 0.
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] sched: account rt tasks in prio_bias() · dad1c65c
      Authored by Con Kolivas
      Real time tasks' effect on prio_bias should be based on their real time
      priority level instead of their static_prio which is based on nice.
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>