1. 27 Jan 2009, 2 commits
  2. 16 Jan 2009, 2 commits
    • percpu: add optimized generic percpu accessors · 6dbde353
      Committed by Ingo Molnar
      This is an optimization and a cleanup; it adds the following new
      generic percpu methods:
      
        percpu_read()
        percpu_write()
        percpu_add()
        percpu_sub()
        percpu_and()
        percpu_or()
        percpu_xor()
      
      and implements support for them on x86 (other architectures fall
      back to a default implementation).
      
      The advantage is that, for example, reading a local percpu variable
      no longer requires this sequence:
      
       return __get_cpu_var(var);
      
       ffffffff8102ca2b:	48 8b 14 fd 80 09 74 	mov    -0x7e8bf680(,%rdi,8),%rdx
       ffffffff8102ca32:	81
       ffffffff8102ca33:	48 c7 c0 d8 59 00 00 	mov    $0x59d8,%rax
       ffffffff8102ca3a:	48 8b 04 10          	mov    (%rax,%rdx,1),%rax
      
      We can get a single instruction by using the optimized variants:
      
       return percpu_read(var);
      
       ffffffff8102ca3f:	65 48 8b 05 91 8f fd 	mov    %gs:0x7efd8f91(%rip),%rax
      
      I also cleaned up the x86-specific APIs and made the x86 code use
      these new generic percpu primitives (see the usage sketch after
      this entry).
      
      tj: * fixed generic percpu_sub() definition as Roel Kluin pointed out
          * added percpu_and() for completeness's sake
          * made generic percpu ops atomic against preemption
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      6dbde353
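
      A minimal usage sketch of the accessors introduced above, assuming a
      hypothetical per-CPU counter defined with the usual DEFINE_PER_CPU
      pattern; it is not part of the original commit.

       /*
        * Hedged sketch: a hypothetical per-CPU hit counter exercised
        * through the new generic accessors.  On x86 each call compiles
        * down to a single segment-relative instruction; the generic
        * fallback protects the read-modify-write against preemption.
        */
       #include <linux/percpu.h>

       static DEFINE_PER_CPU(unsigned long, my_hits);  /* hypothetical */

       static void count_hit(void)
       {
               percpu_add(my_hits, 1);       /* local CPU's copy only */
       }

       static unsigned long read_local_hits(void)
       {
               return percpu_read(my_hits);  /* single mov on x86 */
       }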
    • x86: merge 64 and 32 SMP percpu handling · 9939ddaf
      Committed by Tejun Heo
      Now that the pda is allocated as part of the percpu area, percpu
      data no longer needs to be accessed through the pda.  Unify x86_64
      SMP percpu access with the x86_32 scheme.  Other than the segment
      register, operand size and the base of the percpu symbols, the two
      now behave identically.
      
      This patch replaces the now-unnecessary pda->data_offset with a
      dummy field that keeps stack_canary at its place.  It also moves
      per_cpu_offset initialization out of init_gdt() into
      setup_per_cpu_areas().  Note that this change also necessitates
      explicit per_cpu_offset initializations in voyager_smp.c.
      
      With this change, x86_OP_percpu() operations are as efficient on
      x86_64 as on x86_32, and x86_64 can also use the assembly PER_CPU
      macros (an illustrative sketch of offset-based percpu access
      follows this entry).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9939ddaf
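
      The sketch below is a plain userspace simulation of the shared
      scheme described above (a per-CPU offset added to the address of a
      percpu symbol) and is only an illustration: percpu_area, cpu_offset
      and this_cpu_ptr_sim are hypothetical names, not kernel APIs.

       #include <stdio.h>

       #define NCPUS 4

       static long percpu_area[NCPUS][16]; /* one chunk per CPU          */
       static long cpu_offset[NCPUS];      /* like a per-CPU offset table */

       static long *this_cpu_ptr_sim(long *sym, int cpu)
       {
               /* symbol address + that CPU's offset */
               return (long *)((char *)sym + cpu_offset[cpu]);
       }

       int main(void)
       {
               for (int cpu = 0; cpu < NCPUS; cpu++)
                       cpu_offset[cpu] = (char *)percpu_area[cpu] -
                                         (char *)percpu_area[0];

               long *counter = &percpu_area[0][0]; /* "the percpu symbol" */
               for (int cpu = 0; cpu < NCPUS; cpu++)
                       *this_cpu_ptr_sim(counter, cpu) += cpu + 1;

               for (int cpu = 0; cpu < NCPUS; cpu++)
                       printf("cpu%d counter = %ld\n", cpu,
                              percpu_area[cpu][0]);
               return 0;
       }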
  3. 04 Jan 2009, 1 commit
  4. 17 Dec 2008, 1 commit
  5. 13 Dec 2008, 1 commit
    • cpumask: centralize cpu_online_map and cpu_possible_map · 98a79d6a
      Committed by Rusty Russell
      Impact: cleanup
      
      Each SMP arch currently defines these itself.  Move them to a
      central location (a sketch of the centralized definitions follows
      this entry).
      
      Twists:
      1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a
         CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
      
      2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
         Those archs simply have phys_cpu_present_map replaced everywhere.
      
      3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky
         so I just manipulate them both in sync.
      
      4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map'
         declarations.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Mike Travis <travis@sgi.com>
      Cc: ink@jurassic.park.msu.ru
      Cc: rmk@arm.linux.org.uk
      Cc: starvik@axis.com
      Cc: tony.luck@intel.com
      Cc: takata@linux-m32r.org
      Cc: ralf@linux-mips.org
      Cc: grundler@parisc-linux.org
      Cc: paulus@samba.org
      Cc: schwidefsky@de.ibm.com
      Cc: lethal@linux-sh.org
      Cc: wli@holomorphy.com
      Cc: davem@davemloft.net
      Cc: jdike@addtoit.com
      Cc: mingo@redhat.com
      98a79d6a
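
      A hedged sketch of what the centralized definitions might look like
      after the move (kernel/cpu.c style); the exact qualifiers and
      exports in the real tree may differ.

       #include <linux/module.h>
       #include <linux/cpumask.h>

       /* "possible" starts out full only on the archs that ask for it */
       #ifdef CONFIG_INIT_ALL_POSSIBLE
       cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
       #else
       cpumask_t cpu_possible_map __read_mostly;
       #endif
       EXPORT_SYMBOL(cpu_possible_map);

       /* no CPU is online until it has actually been brought up */
       cpumask_t cpu_online_map __read_mostly;
       EXPORT_SYMBOL(cpu_online_map);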
  6. 11 Nov 2008, 1 commit
  7. 03 Nov 2008, 1 commit
  8. 31 Oct 2008, 3 commits
  9. 16 Oct 2008, 1 commit
  10. 09 Sep 2008, 1 commit
    • kernel/cpu.c: create a CPU_STARTING cpu_chain notifier · e545a614
      Committed by Manfred Spraul
      Right now there is no notifier that is called on a new cpu before
      the new cpu begins processing interrupts/softirqs.  Various kernel
      functions need such a notification; e.g. kvm works around its
      absence by calling smp_call_function_single(), and rcu polls
      cpu_online_map.
      
      The patch adds a CPU_STARTING notification.  It also adds a helper
      function that sends the message to all cpu_chain handlers (a
      notifier sketch follows this entry).
      
      Tested on x86-64.
      All other archs are untested. Especially on sparc, I'm not sure if I got
      it right.
      Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e545a614
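
      A hedged sketch of a cpu_chain notifier that reacts to the new
      CPU_STARTING event; my_cpu_callback and my_cpu_nb are hypothetical
      names, and registration details may vary by kernel version.

       #include <linux/cpu.h>
       #include <linux/notifier.h>
       #include <linux/kernel.h>

       static int my_cpu_callback(struct notifier_block *nb,
                                  unsigned long action, void *hcpu)
       {
               unsigned int cpu = (unsigned long)hcpu;

               switch (action) {
               case CPU_STARTING:
                       /* runs on the incoming CPU, before it starts
                        * handling interrupts/softirqs */
                       pr_info("cpu %u starting\n", cpu);
                       break;
               }
               return NOTIFY_OK;
       }

       static struct notifier_block my_cpu_nb = {
               .notifier_call = my_cpu_callback,
       };

       /* registered during init, e.g. register_cpu_notifier(&my_cpu_nb) */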
  11. 26 Jun 2008, 3 commits
  12. 25 May 2008, 2 commits
    • x86: move smp_found_config · bab4b27c
      Committed by Alexey Starikovskiy
      bab4b27c
    • x86: extend e820 ealy_res support 32bit · a4c81cf6
      Committed by Yinghai Lu
      Move the early_res-related code from e820_64.c to e820.c, make
      EBDA detection happen in head32.c, and remove smp_alloc_memory,
      because the trampoline address is fixed now (a conceptual sketch
      of the early_res bookkeeping follows this entry).
      Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
      
       arch/x86/kernel/e820.c              |  214 ++++++++++++++++++++++++++++++++++++
       arch/x86/kernel/e820_64.c           |  196 --------------------------------
       arch/x86/kernel/head32.c            |   76 ++++++++++++
       arch/x86/kernel/setup_32.c          |  109 +++---------------
       arch/x86/kernel/smpboot.c           |   17 --
       arch/x86/kernel/trampoline.c        |    2
       arch/x86/mach-voyager/voyager_smp.c |    9 -
       include/asm-x86/e820.h              |    6 +
       include/asm-x86/e820_64.h           |    9 -
       include/asm-x86/smp.h               |    1
       10 files changed, 320 insertions(+), 319 deletions(-)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a4c81cf6
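
      A conceptual userspace sketch of the kind of bookkeeping early_res
      provides: a small table of {start, end, name} reservations made
      before the normal allocators exist.  All names here
      (early_res_entry, reserve_early_sim) and the example ranges are
      hypothetical; this is not the kernel implementation.

       #include <stdio.h>

       struct early_res_entry {
               unsigned long long start, end;
               const char *name;
       };

       static struct early_res_entry early_res_tbl[32];
       static int nr_early_res;

       static void reserve_early_sim(unsigned long long start,
                                     unsigned long long end,
                                     const char *name)
       {
               if (nr_early_res < 32)
                       early_res_tbl[nr_early_res++] =
                               (struct early_res_entry){ start, end, name };
       }

       int main(void)
       {
               reserve_early_sim(0x9f000, 0xa0000, "EBDA");       /* made-up range */
               reserve_early_sim(0x6f000, 0x70000, "TRAMPOLINE"); /* made-up range */
               for (int i = 0; i < nr_early_res; i++)
                       printf("%#llx-%#llx %s\n", early_res_tbl[i].start,
                              early_res_tbl[i].end, early_res_tbl[i].name);
               return 0;
       }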
  13. 26 Apr 2008, 1 commit
  14. 25 Apr 2008, 2 commits
  15. 20 Apr 2008, 1 commit
  16. 17 Apr 2008, 4 commits
  17. 07 Feb 2008, 1 commit
    • calibrate_delay() must be __cpuinit · 6c81c32f
      Committed by Adrian Bunk
      calibrate_delay() must be __cpuinit, not __{dev,}init.
      
      I've verified that this is correct for all users.
      
      While verifying this, I also did the following cleanups (see the
      annotation sketch after this entry):
      - remove pointless additional prototypes in C files
      - ensure all users #include <linux/delay.h>
      
      This fixes the following section mismatches with CONFIG_HOTPLUG=n,
      CONFIG_HOTPLUG_CPU=y:
      
      WARNING: vmlinux.o(.text+0x1128d): Section mismatch: reference to .init.text.1:calibrate_delay (between 'check_cx686_slop' and 'set_cx86_reorder')
      WARNING: vmlinux.o(.text+0x25102): Section mismatch: reference to .init.text.1:calibrate_delay (between 'smp_callin' and 'cpu_coregroup_map')
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Christian Zankel <chris@zankel.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6c81c32f
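
      A hedged sketch of the annotation this patch settles on: the
      definition (and any prototype) carries __cpuinit, and callers
      include <linux/delay.h> instead of declaring their own prototype.
      The body is elided; this is not the actual implementation.

       #include <linux/init.h>
       #include <linux/delay.h>

       void __cpuinit calibrate_delay(void)
       {
               /* BogoMIPS calibration loop elided */
       }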
  18. 30 Jan 2008, 6 commits
  19. 17 Nov 2007, 1 commit
  20. 28 Oct 2007, 1 commit
  21. 20 Oct 2007, 2 commits
  22. 18 Oct 2007, 2 commits
    • x86: expand /proc/interrupts to include missing vectors, v2 · 38e760a1
      Committed by Joe Korty
      Add missing IRQs and IRQ descriptions to /proc/interrupts.
      
      /proc/interrupts is most useful when it displays every IRQ vector in use by
      the system, not just those somebody thought would be interesting.
      
      This patch adds the following vector displays on the i386 and
      x86_64 platforms, as appropriate:
      
      	rescheduling interrupts
      	TLB flush interrupts
      	function call interrupts
      	thermal event interrupts
      	threshold interrupts
      	spurious interrupts
      
      A threshold interrupt occurs when ECC memory correction is occurring
      at too high a frequency.  Thresholds are used by the ECC hardware
      because occasional ECC failures are part of normal operation, but
      long sequences of ECC failures usually indicate a memory chip that
      is about to fail.
      
      Thermal event interrupts occur when a temperature threshold has been
      exceeded for some CPU chip.  IIRC, a thermal interrupt is also generated
      when the temperature drops back to a normal level.
      
      A spurious interrupt is an interrupt that was raised and then
      lowered by the device before it could be fully processed by the
      APIC.  Hence the APIC sees the interrupt but does not know which
      device it came from.  In this case the APIC hardware will assume a
      vector of 0xff.
      
      Rescheduling, call, and TLB flush interrupts are sent from one CPU
      to another per the needs of the OS.  Typically, their statistics
      are used to discover whether an interrupt flood of the given type
      has been occurring (a small example that dumps /proc/interrupts
      follows this entry).
      
      AK: merged v2 and v4 which had some more tweaks
      AK: replace Local interrupts with Local timer interrupts
      AK: Fixed description of interrupt types.
      
      [ tglx: arch/x86 adaptation ]
      [ mingo: small cleanup ]
      Signed-off-by: Joe Korty <joe.korty@ccur.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Tim Hockin <thockin@hockin.org>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      38e760a1
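
      A small userspace example, assuming nothing beyond a readable
      /proc/interrupts: it simply dumps the file so the newly listed rows
      (rescheduling, TLB flush, function call, thermal, threshold,
      spurious) can be seen alongside the device IRQs.

       #include <stdio.h>

       int main(void)
       {
               FILE *f = fopen("/proc/interrupts", "r");
               char line[512];

               if (!f) {
                       perror("/proc/interrupts");
                       return 1;
               }
               while (fgets(line, sizeof(line), f))
                       fputs(line, stdout);
               fclose(f);
               return 0;
       }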
    • x86: misc. constifications · 121d7bf5
      Committed by Jan Beulich
      Miscellaneous x86 data that can live in .rodata (a tiny
      illustration follows this entry).
      
      [ tglx: arch/x86 adaptation ]
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      121d7bf5
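
      A tiny, hypothetical illustration of the general idea: making a
      file-scope table fully const (both the pointers and the pointed-to
      strings) lets the compiler place it in .rodata rather than .data.
      The table below is made up, not one from the patch.

       /* goes to .rodata: neither the array nor the strings can change */
       static const char *const vendor_names[] = {
               "GenuineIntel", "AuthenticAMD", "CentaurHauls",
       };

      Placement can be checked with, for example, objdump -t on the
      compiled object file.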