1. 10 July 2007 (1 commit)
    • sched: zap the migration init / cache-hot balancing code · 0437e109
      Ingo Molnar authored
      The SMP load-balancer uses the boot-time migration-cost estimation
      code to try to improve the quality of balancing. The reason for
      this code is that the discrete priority queues do not preserve
      the order of scheduling accurately, so the load-balancer skips
      tasks that were running on a CPU 'recently'.
      
      This code is fundamentally fragile: the boot-time migration-cost
      detector doesn't really work on systems with large L3 caches, it
      caused boot delays on large systems, and the whole cache-hot concept
      made the balancing code fairly nondeterministic as well.
      
      (And hey, I wrote most of it, so I can say out loud that it sucks ;-)
      
      Under CFS the same cache-affinity goal can be achieved without
      any cache-hot special case: tasks are sorted in the 'timeline'
      tree and the SMP balancer picks tasks from the left side of the
      tree, so the most cache-cold task is balanced automatically.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
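      A minimal, hedged C sketch of the leftmost-pick idea described above;
      a plain binary search tree stands in for the kernel's timeline rbtree,
      and all names are illustrative rather than CFS's actual code:

      /* Tasks ordered by a "timeline" key: larger = ran more recently. */
      #include <stdio.h>

      struct task {
          const char *name;
          unsigned long key;
          struct task *left, *right;
      };

      static struct task *insert(struct task *root, struct task *t)
      {
          if (!root)
              return t;
          if (t->key < root->key)
              root->left = insert(root->left, t);
          else
              root->right = insert(root->right, t);
          return root;
      }

      /* The balancer pulls from the left edge of the tree: the task that
       * has been off-CPU the longest, i.e. the most cache-cold one. */
      static struct task *pick_most_cache_cold(struct task *root)
      {
          while (root && root->left)
              root = root->left;
          return root;
      }

      int main(void)
      {
          struct task a = { "ran-recently", 300 };
          struct task b = { "idle-a-while", 100 };
          struct task c = { "in-between",   200 };
          struct task *root = NULL;

          root = insert(root, &a);
          root = insert(root, &b);
          root = insert(root, &c);

          printf("migrate: %s\n", pick_most_cache_cold(root)->name);
          return 0;
      }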
  2. 05 June 2007 (2 commits)
  3. 29 May 2007 (2 commits)
    • [SPARC64]: Eliminate NR_CPUS limitations. · 22adb358
      David S. Miller authored
      Cheetah systems can have cpuids as large as 1023, although physical
      systems don't have that many cpus.
      
      Only three limitations existed in the kernel preventing arbitrary
      NR_CPUS values:
      
      1) dcache dirty cpu state stored in page->flags on
         D-cache aliasing platforms.  With some build-time
         calculations and some build-time BUG checks on the
         page->flags layout, this one was easily solved.
      
      2) The cheetah XCALL delivery code could only handle
         a cpumask with up to 32 cpus set.  Some simple looping
         logic clears that up too.
      
      3) thread_info->cpu was a u8, easily changed to a u16.
      
      There are a few spots in the kernel that still put NR_CPUS
      sized arrays on the kernel stack, but that's not a sparc64
      specific problem.
      Signed-off-by: David S. Miller <davem@davemloft.net>
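      A hedged userspace sketch of the batching idea from point 2: walk an
      arbitrarily large CPU mask and hand the delivery routine at most 32
      targets at a time. The helper names and the 32-target limit below are
      illustrative, not the sparc64 code:

      #include <stdio.h>

      #define MAX_CPUS 1024
      #define BATCH    32

      /* Hypothetical low-level sender that can only take up to 32 targets. */
      static void deliver_xcall(const int *cpus, int count)
      {
          printf("xcall to %d cpus, first=%d last=%d\n",
                 count, cpus[0], cpus[count - 1]);
      }

      static void deliver_to_mask(const unsigned long long mask[MAX_CPUS / 64])
      {
          int batch[BATCH], n = 0;

          for (int cpu = 0; cpu < MAX_CPUS; cpu++) {
              if (!(mask[cpu / 64] & (1ULL << (cpu % 64))))
                  continue;
              batch[n++] = cpu;
              if (n == BATCH) {        /* hardware limit reached: flush */
                  deliver_xcall(batch, n);
                  n = 0;
              }
          }
          if (n)                        /* remaining targets */
              deliver_xcall(batch, n);
      }

      int main(void)
      {
          unsigned long long mask[MAX_CPUS / 64] = { 0 };

          for (int cpu = 0; cpu < 100; cpu += 2)   /* mark 50 cpus */
              mask[cpu / 64] |= 1ULL << (cpu % 64);

          deliver_to_mask(mask);
          return 0;
      }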
  4. 14 May 2007 (1 commit)
  5. 09 May 2007 (1 commit)
  6. 03 May 2007 (1 commit)
  7. 26 April 2007 (2 commits)
    • [SPARC64]: Add clocksource/clockevents support. · 112f4871
      David S. Miller authored
      I'd like to thank John Stultz and others for helping
      me along the way.
      
      A lot of cleanups fell out of this.  For example, the get_compare()
      tick_op was totally unused, so it was deleted.  And the most frequently
      used tick_op members were grouped together for cache-friendliness.
      
      The sparc64 TSC is given to the kernel as a one-shot timer.
      
      tick_ops->init_timer() simply turns off the privileged bit in
      the tick register (when possible), and disables the interrupt
      by setting bit 63 in the compare register.  The ->disable_irq()
      op also sets this bit.
      
      tick_ops->add_compare() is changed to:
      
      1) Add the given delta to "tick" not to "compare"
      2) Return a boolean which, if true, means that the tick
         value read after writing the compare value was found
         to have incremented past the initial tick value.  This
         mirrors logic used in the HPET driver's ->next_event()
         method.
      
      Each tick_ops implementation now also provides a name string,
      which we feed into the clocksource and clockevents layers.
      Signed-off-by: David S. Miller <davem@davemloft.net>
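      A rough, runnable sketch of the ->add_compare() contract described
      above. The register-access helpers here are fakes standing in for the
      %tick and compare registers; this is not the sparc64 implementation:

      #include <stdbool.h>
      #include <stdio.h>

      static unsigned long fake_tick;      /* stand-in for the tick register */
      static unsigned long fake_compare;   /* stand-in for the compare register */

      static unsigned long read_tick(void) { return fake_tick; }

      /* Programming the register takes time; simulate the counter advancing. */
      static void write_compare(unsigned long v)
      {
          fake_compare = v;
          fake_tick += 5;
      }

      /* Returns true if the counter raced past the programmed value,
       * i.e. the event may already have been missed. */
      static bool tick_add_compare(unsigned long delta)
      {
          unsigned long orig = read_tick();
          unsigned long target = orig + delta;   /* delta added to "tick", not "compare" */

          write_compare(target);
          return (long)(read_tick() - target) > 0;
      }

      int main(void)
      {
          fake_tick = 1000;
          printf("missed=%d\n", tick_add_compare(100));   /* plenty of slack: 0 */
          fake_tick = 1000;
          printf("missed=%d\n", tick_add_compare(3));     /* too tight: 1 */
          return 0;
      }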
    • [SPARC64]: Unify timer interrupt handler. · 777a4475
      David S. Miller authored
      Things were scattered all over the place, split between
      SMP and non-SMP.
      
      Unify it all so that dyntick support is easier to add.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 12 January 2007 (1 commit)
    • [PATCH] Change cpu_up and co from __devinit to __cpuinit · b282b6f8
      Gautham R Shenoy authored
      Compiling the kernel with CONFIG_HOTPLUG=y, CONFIG_HOTPLUG_CPU=n,
      and CONFIG_RELOCATABLE=y generates the following modpost warnings:
      WARNING: vmlinux - Section mismatch: reference to .init.data: from
      .text between '_cpu_up' (at offset 0xc0141b7d) and 'cpu_up'
      WARNING: vmlinux - Section mismatch: reference to .init.data: from
      .text between '_cpu_up' (at offset 0xc0141b9c) and 'cpu_up'
      WARNING: vmlinux - Section mismatch: reference to .init.text:__cpu_up
      from .text between '_cpu_up' (at offset 0xc0141bd8) and 'cpu_up'
      WARNING: vmlinux - Section mismatch: reference to .init.data: from
      .text between '_cpu_up' (at offset 0xc0141c05) and 'cpu_up'
      WARNING: vmlinux - Section mismatch: reference to .init.data: from
      .text between '_cpu_up' (at offset 0xc0141c26) and 'cpu_up'
      WARNING: vmlinux - Section mismatch: reference to .init.data: from
      .text between '_cpu_up' (at offset 0xc0141c37) and 'cpu_up'
      
      This is because cpu_up, _cpu_up and __cpu_up (on some architectures) are
      defined as __devinit, and __cpu_up calls some __cpuinit functions.
      
      Since __cpuinit maps to __init with this kind of configuration,
      we get a warning about .text referring to .init.data.
      
      This patch solves the problem by converting all of __cpu_up, _cpu_up
      and cpu_up from __devinit to __cpuinit. The approach is justified since
      the callers of cpu_up are either dependent on CONFIG_HOTPLUG_CPU or
      are of __init type.
      
      Thus when CONFIG_HOTPLUG_CPU=y, all these cpu-up functions end up
      in the .text section, and when CONFIG_HOTPLUG_CPU=n, they all end up
      in the .init section.
      
      Tested on an i386 SMP machine running linux-2.6.20-rc3-mm1.
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
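      An abridged sketch of the section-annotation logic behind the fix.
      The macro bodies below are simplified stand-ins for the kernel
      definitions of that era, and the function name is hypothetical:
      with CONFIG_HOTPLUG_CPU=n, __cpuinit collapses to __init, so the
      cpu-up functions must carry the same annotation as the __cpuinit
      code they call.

      #include <stdio.h>

      #define __init __attribute__((__section__(".init.text")))

      #ifdef CONFIG_HOTPLUG_CPU
      #define __cpuinit              /* stays in .text: may run after boot */
      #else
      #define __cpuinit __init       /* discarded with the rest of the init code */
      #endif

      /* Before the patch these were __devinit; now they match their callees. */
      static int __cpuinit hypothetical_cpu_up(unsigned int cpu)
      {
          printf("bringing cpu %u online\n", cpu);
          return 0;
      }

      int main(void)
      {
          return hypothetical_cpu_up(1);
      }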
  9. 18 December 2006 (1 commit)
  10. 09 October 2006 (1 commit)
  11. 24 June 2006 (1 commit)
  12. 11 June 2006 (1 commit)
  13. 31 May 2006 (1 commit)
  14. 11 April 2006 (1 commit)
  15. 10 April 2006 (1 commit)
  16. 01 April 2006 (1 commit)
    • [SPARC64]: Make tsb_sync() mm comparison more precise. · 6f25f398
      David S. Miller authored
      switch_mm() changes the mm state and does a tsb_context_switch()
      first; then we do the CPU register state switch, which changes
      current_thread_info() and current().
      
      So it is safer to check the PGD physical address stored in the
      trap block (which is updated by the tsb_context_switch() in
      switch_mm()) than to check current->active_mm.
      
      Technically we should never run here in between those two
      updates, because interrupts are disabled during the entire
      context switch operation.  But some day we might like to leave
      interrupts enabled during the context switch and this change
      allows that to happen without any surprises.
      Signed-off-by: David S. Miller <davem@davemloft.net>
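      A hedged toy model of the comparison change: decide whether a sync
      targets our address space by comparing the PGD physical address
      recorded per-CPU alongside the MMU state, rather than the later-updated
      current->active_mm. The structure layouts and field names below are
      illustrative only, not the sparc64 trap block:

      #include <stdbool.h>
      #include <stdio.h>

      struct mm_struct  { unsigned long pgd_paddr; };
      struct trap_block { unsigned long pgd_paddr; };  /* per-cpu in reality */

      static struct trap_block this_cpu_tb;

      static bool tsb_sync_targets_us(const struct mm_struct *mm)
      {
          /* Compare against the trap block, not current->active_mm. */
          return this_cpu_tb.pgd_paddr == mm->pgd_paddr;
      }

      int main(void)
      {
          struct mm_struct victim = { .pgd_paddr = 0x4000 };

          this_cpu_tb.pgd_paddr = 0x4000;   /* we are running that mm */
          printf("%d\n", tsb_sync_targets_us(&victim));   /* 1 */

          this_cpu_tb.pgd_paddr = 0x8000;   /* we already switched away */
          printf("%d\n", tsb_sync_targets_us(&victim));   /* 0 */
          return 0;
      }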
  17. 26 March 2006 (1 commit)
  18. 23 March 2006 (1 commit)
    • [PATCH] more for_each_cpu() conversions · 394e3902
      Andrew Morton authored
      When we stop allocating percpu memory for not-possible CPUs, we must not
      touch the percpu data for not-possible CPUs at all.  The correct way to
      do this is to test cpu_possible() or to use for_each_cpu().
      
      This patch is a kernel-wide sweep of all instances of NR_CPUS.  I found
      very few instances of this bug, if any.  But the patch converts lots of
      open-coded tests to use the preferred helper macros.
      
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Acked-by: Kyle McMartin <kyle@parisc-linux.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Christian Zankel <chris@zankel.net>
      Cc: Philippe Elie <phil.el@wanadoo.fr>
      Cc: Nathan Scott <nathans@sgi.com>
      Cc: Jens Axboe <axboe@suse.de>
      Cc: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
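      An illustrative userspace sketch of the conversion pattern, with the
      for_each_possible_cpu() helper mocked up here rather than taken from
      the kernel: never index per-cpu data for CPU ids that are not possible
      on this machine.

      #include <stdio.h>

      #define NR_CPUS 8

      static int  cpu_possible_map[NR_CPUS] = { 1, 1, 1, 1, 0, 0, 0, 0 };
      static long per_cpu_counter[NR_CPUS];

      #define cpu_possible(cpu) (cpu_possible_map[(cpu)])

      /* Mocked-up stand-in for the kernel's for_each_possible_cpu(). */
      #define for_each_possible_cpu(cpu) \
          for ((cpu) = 0; (cpu) < NR_CPUS; (cpu)++) \
              if (!cpu_possible(cpu)) ; else

      int main(void)
      {
          int cpu;
          long total = 0;

          /* Old open-coded pattern walked 0..NR_CPUS-1 unconditionally:
           *     for (cpu = 0; cpu < NR_CPUS; cpu++)
           *         total += per_cpu_counter[cpu];
           * New pattern: only touch possible CPUs. */
          for_each_possible_cpu(cpu)
              total += per_cpu_counter[cpu];

          printf("total = %ld\n", total);
          return 0;
      }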
  19. 20 March 2006 (19 commits)