1. 29 January 2009 (3 commits)
  2. 27 January 2009 (1 commit)
  3. 21 January 2009 (1 commit)
    • x86: uv cleanup · bdbcdd48
      Committed by Tejun Heo
      Impact: cleanup
      
      Make the following uv related cleanups.
      
      * collect visible uv related definitions and interfaces into uv/uv.h
        and use it.  This cleans up the messy situation where on 64bit uv
        is defined properly, on 32bit generic it's a dummy and on the rest it is
        undefined.  After this cleanup, uv is defined on 64-bit and a dummy on
        32-bit.
      
      * update uv_flush_tlb_others() such that it takes the cpumask of
        to-be-flushed cpus as argument, instead of that minus self, and
        returns the yet-to-be-flushed cpumask, instead of modifying the passed
        in parameter.  This interface change eases a dummy implementation
        of uv_flush_tlb_others() and keeps the uv tlb flush related code
        in tlb_uv proper (see the sketch below).
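      
      A rough sketch of the interface change described in the second bullet
      (parameter names and the exact pre-change signature are approximations,
      not copied from uv/uv.h):
      
      	/* before (approximate): the caller passed the mask minus itself and
      	 * the function modified that mask in place */
      	int uv_flush_tlb_others(cpumask_t *cpumaskp, struct mm_struct *mm,
      				unsigned long va);
      
      	/* after: the caller passes the full mask of cpus to flush and gets
      	 * back the cpus that still need a conventional IPI-based flush */
      	const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
      						  struct mm_struct *mm,
      						  unsigned long va,
      						  unsigned int cpu);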
      Signed-off-by: Tejun Heo <tj@kernel.org>
      bdbcdd48
  4. 18 January 2009 (2 commits)
  5. 16 January 2009 (3 commits)
    • x86: misc clean up after the percpu update · 004aa322
      Committed by Tejun Heo
      Do the following cleanups:
      
      * kill x86_64_init_pda(), which is now equivalent to pda_init()
      
      * use per_cpu_offset() instead of cpu_pda() when initializing
        initial_gs
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      004aa322
    • x86: fold pda into percpu area on SMP · 1a51e3a0
      Committed by Tejun Heo
      [ Based on original patch from Christoph Lameter and Mike Travis. ]
      
      Currently pdas and percpu areas are allocated separately.  %gs points
      to the local pda and the percpu area can be reached using pda->data_offset.
      This patch folds the pda into the percpu area.
      
      Due to a strange gcc requirement, the pda needs to be at the beginning of
      the percpu area so that pda->stack_canary is at %gs:40.  To achieve
      this, a new percpu output section macro - PERCPU_VADDR_PREALLOC() - is
      added and used to reserve a pda-sized chunk at the start of the percpu
      area.
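      
      A minimal compile-time sketch of the constraint that forces the pda to
      sit at the very start of the percpu area (field names are illustrative,
      not the kernel's actual struct x8664_pda layout): gcc's stack protector
      reads the canary as %gs:40, so whatever %gs points at must keep that
      member at byte offset 40.
      
      	struct pda_sketch {
      		void *kernelstack;		/* offset 0, illustrative field */
      		unsigned long pad[4];		/* offsets 8..39 */
      		unsigned long stack_canary;	/* offset 40, must match %gs:40 */
      	};
      	_Static_assert(__builtin_offsetof(struct pda_sketch, stack_canary) == 40,
      		       "stack_canary must live at offset 40 (%gs:40)");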
      
      After this change, for the boot cpu, %gs first points to the pda in the
      data.init area and later, during setup_per_cpu_areas(), gets updated to
      point to the actual pda.  This means that setup_per_cpu_areas() needs
      to reload %gs for CPU0 while clearing the pda area for the other cpus, as
      cpu0 has already modified it by the time control reaches setup_per_cpu_areas().
      
      This patch also removes now unnecessary get_local_pda() and its call
      sites.
      
      A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into
      per cpu area" patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1a51e3a0
    • x86: load pointer to pda into %gs while bringing up a CPU · f32ff538
      Committed by Tejun Heo
      [ Based on original patch from Christoph Lameter and Mike Travis. ]
      
      The CPU startup code in head_64.S loaded the address of a zero page into
      %gs for temporary use until the pda is loaded, but the address of the actual
      pda is already available at that point.  Load the real address directly instead.
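      
      A conceptual C sketch of what the startup path does after this change
      (the startup code itself is assembly in head_64.S; _boot_pda below is a
      hypothetical symbol standing in for the boot cpu's pda):
      
      	#define MSR_GS_BASE	0xc0000101
      
      	extern char _boot_pda[];	/* hypothetical stand-in for the boot cpu's pda */
      
      	static void load_gs_base_sketch(void)
      	{
      		unsigned long addr = (unsigned long)_boot_pda;
      
      		/* wrmsr takes the msr index in ecx and the value split across edx:eax */
      		asm volatile("wrmsr" : : "c" (MSR_GS_BASE),
      			     "a" ((unsigned int)addr),
      			     "d" ((unsigned int)(addr >> 32)));
      	}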
      
      This will help unify percpu and pda handling later on.
      
      This patch is mostly taken from Mike Travis' "x86_64: Fold pda into
      per cpu area" patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      f32ff538
  6. 15 January 2009 (1 commit)
  7. 11 January 2009 (2 commits)
  8. 07 January 2009 (1 commit)
  9. 05 January 2009 (1 commit)
  10. 04 January 2009 (3 commits)
  11. 26 December 2008 (1 commit)
  12. 25 December 2008 (1 commit)
    • tracing/ftrace: don't trace on early stage of a secondary cpu boot, v3 · 0ca59dd9
      Committed by Frederic Weisbecker
      Impact: fix a crash/hard-reboot on certain configs while enabling a cpu at runtime
      
      On some archs, the boot of a secondary cpu can go through an early fragile state.
      On x86-64, the pda is not initialized during the first stage of a cpu boot but
      it is needed to get the cpu number and the current task pointer.  This data
      is needed during tracing.  Since it was dereferenced at this stage, we got a
      crash while tracing a cpu being enabled at runtime.
      
      Some other archs, like ia64, can have this kind of issue too.
      
      Changes in v2:
      
      We dropped the previous solution of a per-arch function called to check the
      current state of a cpu, since that could slow down tracing.
      
      This patch removes the -pg flag from arch/x86/kernel/cpu/common.c, where
      the low level cpu boot functions live, and from start_secondary() and a helper
      function used at this stage.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0ca59dd9
  13. 18 December 2008 (1 commit)
  14. 17 December 2008 (1 commit)
  15. 15 December 2008 (1 commit)
  16. 13 December 2008 (1 commit)
    • cpumask: centralize cpu_online_map and cpu_possible_map · 98a79d6a
      Committed by Rusty Russell
      Impact: cleanup
      
      Each SMP arch defines these itself.  Move them to a central
      location (a simplified sketch of the centralized definitions follows the list below).
      
      Twists:
      1) Some archs (m32r, parisc, s390) set possible_map to all 1, so we add a
         CONFIG_INIT_ALL_POSSIBLE for these rather than break them.
      
      2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
         Those archs simply have phys_cpu_present_map replaced everywhere.
      
      3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky
         so I just manipulate them both in sync.
      
      4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map'
         declarations.
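      
      A simplified sketch of the centralized definitions after this change
      (EXPORT_SYMBOLs and exact attributes omitted; CONFIG_INIT_ALL_POSSIBLE is
      the new option from twist 1):
      
      	/* archs that used to initialize possible_map to all 1 now select
      	 * CONFIG_INIT_ALL_POSSIBLE instead of defining their own map */
      	cpumask_t cpu_possible_map
      	#ifdef CONFIG_INIT_ALL_POSSIBLE
      		= CPU_MASK_ALL
      	#endif
      		;
      
      	cpumask_t cpu_online_map;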
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Mike Travis <travis@sgi.com>
      Cc: ink@jurassic.park.msu.ru
      Cc: rmk@arm.linux.org.uk
      Cc: starvik@axis.com
      Cc: tony.luck@intel.com
      Cc: takata@linux-m32r.org
      Cc: ralf@linux-mips.org
      Cc: grundler@parisc-linux.org
      Cc: paulus@samba.org
      Cc: schwidefsky@de.ibm.com
      Cc: lethal@linux-sh.org
      Cc: wli@holomorphy.com
      Cc: davem@davemloft.net
      Cc: jdike@addtoit.com
      Cc: mingo@redhat.com
      98a79d6a
  17. 05 December 2008 (1 commit)
  18. 18 November 2008 (2 commits)
    • x86: fix wakeup_cpu with numaq/es7000, v2, fix · 54ac14a8
      Committed by Yinghai Lu
      Impact: fix wakeup_secondary_cpu with hotplug
      
      We cannot put that into x86_quirks, because x86_quirks is __initdata.
      So move that to genapic instead, and add update_genapic to x86_quirks
      (see the sketch after the list below).
      
      Later we could even use that stub to:
      
       1. autodetect CONFIG_ES7000_CLUSTERED_APIC
       2. make inquire_remote_apic more correct with respect to the apic_verbosity setting.
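      
      A rough sketch of the resulting shape (struct layouts are abbreviated and
      the genapic member name is illustrative; only update_genapic is named by
      the commit itself):
      
      	struct x86_quirks {
      		/* ... existing __initdata boot-time quirk hooks ... */
      		int (*update_genapic)(void);	/* patch genapic after detection */
      	};
      
      	/* the wakeup hook itself lives in genapic, which is not __initdata */
      	struct genapic {
      		/* ... */
      		int (*wakeup_cpu)(int apicid, unsigned long start_eip);
      	};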
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      54ac14a8
    • x86: fix wakeup_cpu with numaq/es7000, v2 · 569712b2
      Committed by Yinghai Lu
      Impact: fix secondary-CPU wakeup/init path with numaq and es7000
      
      While looking at wakeup_secondary_cpu for WAKE_SECONDARY_VIA_NMI:
      
      |#ifdef WAKE_SECONDARY_VIA_NMI
      |/*
      | * Poke the other CPU in the eye via NMI to wake it up. Remember that the normal
      | * INIT, INIT, STARTUP sequence will reset the chip hard for us, and this
      | * won't ... remember to clear down the APIC, etc later.
      | */
      |static int __devinit
      |wakeup_secondary_cpu(int logical_apicid, unsigned long start_eip)
      |{
      |        unsigned long send_status, accept_status = 0;
      |        int maxlvt;
      |...
      |        if (APIC_INTEGRATED(apic_version[phys_apicid])) {
      |                maxlvt = lapic_get_maxlvt();
      
      I noticed that there is no warning about undefined phys_apicid...
      
      because WAKE_SECONDARY_VIA_NMI and WAKE_SECONDARY_VIA_INIT cannot be
      defined at the same time.  So NUMAQ is using the wrong wakeup_secondary_cpu.
      
      WAKE_SECONDARY_VIA_NMI, WAKE_SECONDARY_VIA_INIT and
      WAKE_SECONDARY_VIA_MIP are variants of a weird and fragile
      preprocessor-driven "HAL" mechanism to specify the kind of secondary-CPU
      wakeup strategy a given x86 kernel will use.
      
      The vast majority of systems want to use INIT for secondary wakeup - NUMAQ
      uses an NMI, (old-style-) ES7000 uses 'MIP' (a firmware driven in-memory
      flag to let secondaries continue).
      
      So convert these mechanisms to x86_quirks and add a
      ->wakeup_secondary_cpu() method to specify the rare exception
      to the sane default.
      
      Extend genapic accordingly as well, for 32-bit.
      
      While looking further, I noticed that the functions in mach_wakecpu.h for numaq
      and es7000 are different from the defaults in mach_wakecpu.h - but smpboot.c
      will only use the default mach_wakecpu.h with smphook.h.
      
      So we need to add mach_wakecpu.h for mach_generic, to properly support
      numaq and es7000, and vectorize the following SMP init methods (gathered
      into per-subarch ops, as sketched after the declarations below):
      
      	int trampoline_phys_low;
      	int trampoline_phys_high;
      	void (*wait_for_init_deassert)(atomic_t *deassert);
      	void (*smp_callin_clear_local_apic)(void);
      	void (*store_NMI_vector)(unsigned short *high, unsigned short *low);
      	void (*restore_NMI_vector)(unsigned short *high, unsigned short *low);
      	void (*inquire_remote_apic)(int apicid);
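      
      A hedged sketch of how those methods end up grouped per subarch (the struct
      name and grouping below are illustrative; the real code spreads these across
      the mach-*/mach_wakecpu.h headers and genapic):
      
      	struct wakecpu_ops_sketch {
      		int trampoline_phys_low;
      		int trampoline_phys_high;
      		void (*wait_for_init_deassert)(atomic_t *deassert);
      		void (*smp_callin_clear_local_apic)(void);
      		void (*store_NMI_vector)(unsigned short *high, unsigned short *low);
      		void (*restore_NMI_vector)(unsigned short *high, unsigned short *low);
      		void (*inquire_remote_apic)(int apicid);
      		/* rare exception to the sane INIT default, per the commit: */
      		int (*wakeup_secondary_cpu)(int apicid, unsigned long start_eip);
      	};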
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      569712b2
  19. 22 October 2008 (1 commit)
  20. 17 October 2008 (1 commit)
    • Make the taint flags reliable · 25ddbb18
      Committed by Andi Kleen
      It's somewhat unlikely that it happens, but right now a race window
      between interrupts, machine checks or oopses could corrupt the tainted
      bitmap because it is modified in a non-atomic fashion.
      
      Convert the taint variable to an unsigned long and use only atomic bit
      operations on it.
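      
      A minimal sketch of the conversion (the variable and helper names here are
      assumptions for illustration, not necessarily the exact kernel symbols):
      
      	/* was: a plain int updated with |=, racy across interrupts/MCEs/oopses */
      	static unsigned long tainted_mask;
      
      	void add_taint_sketch(unsigned int flag)
      	{
      		/* set_bit() is atomic, so concurrent updates cannot clobber
      		 * each other's bits in the taint bitmap */
      		set_bit(flag, &tainted_mask);
      	}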
      
      Unfortunately this means the intvec sysctl functions cannot be used on it
      anymore.
      
      It turned out the taint sysctl handler could actually be simplified a bit
      (since it only increases capabilities) so this patch actually removes
      code.
      
      [akpm@linux-foundation.org: remove unneeded include]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25ddbb18
  21. 16 October 2008 (1 commit)
  22. 13 October 2008 (6 commits)
    • x86: remove additional_cpus · cb48bb59
      Committed by Thomas Gleixner
      Remove the remainder of the additional_cpus logic.  We now just listen to the
      disabled_cpus value, as we have for years.  disabled_cpus is always >=
      0 so there is no need for an extra check.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cb48bb59
    • x86: remove additional_cpus configurability · b8073050
      Committed by Ingo Molnar
      The additional_cpus=<x> parameter is dangerous and broken: for example,
      if we boot with additional_cpus=-2 on a stock dual-core system it will
      crash the box on bootup.
      
      So reduce the maze of code a bit by removing the user-configurability
      angle.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b8073050
    • x86: allow number of additional hotplug CPUs to be set at compile time, V2 · 7f2f49a5
      Committed by Chuck Ebbert
      
      The default number of additional CPU IDs for hotplugging is determined
      by asking ACPI or mptables how many "disabled" CPUs there are in the
      system, but many systems get this wrong so that e.g. a uniprocessor
      machine gets an extra CPU allocated and never switches to single CPU
      mode.
      
      And sometimes CPU hotplugging is enabled only for suspend/hibernate
      anyway, so the additional CPU IDs are not wanted. Allow the number
      to be set to zero at compile time.
      
      Also, force the number of extra CPUs to zero if hotplugging is disabled,
      which allows removing some conditional code.
      
      Tested on uniprocessor x86_64 that ACPI claims has a disabled processor,
      with CPU hotplugging configured.
      
      ("After" has the number of additional CPUs set to 0)
      Before: NR_CPUS: 512, nr_cpu_ids: 2, nr_node_ids 1
      After: NR_CPUS: 512, nr_cpu_ids: 1, nr_node_ids 1
      
      [Changed the name of the option and the prompt according to Ingo's
       suggestion.]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7f2f49a5
    • x86: move prefill_possible_map calling early, fix, V2 · 14adf855
      Committed by Chuck Ebbert
      Commit 4a701737 ("x86: move prefill_possible_map calling early, fix")
      is the wrong fix: prefill_possible_map() needs to be available
      even when CONFIG_HOTPLUG_CPU is not set.  A follow-on patch will do that.
      
      Fix this correctly by making prefill_possible_map() available even when
      CONFIG_HOTPLUG_CPU is not set.  The function is needed so that
      the number of possible CPUs can be determined (a simplified sketch follows).
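      
      A simplified sketch of what prefill_possible_map() does (the real function
      also clamps against NR_CPUS and honors command-line overrides, omitted here):
      
      	void __init prefill_possible_map_sketch(void)
      	{
      		int i, possible = num_processors + disabled_cpus;
      
      		/* mark exactly the detected ids as possible, so nr_cpu_ids
      		 * shrinks to match the machine instead of staying at NR_CPUS */
      		for (i = 0; i < possible; i++)
      			cpu_set(i, cpu_possible_map);
      
      		nr_cpu_ids = possible;
      	}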
      
      Tested on uniprocessor machine with CPU hotplug disabled.
      
      From boot log:
        Before: NR_CPUS: 512, nr_cpu_ids: 512, nr_node_ids 1
        After: NR_CPUS: 512, nr_cpu_ids: 1, nr_node_ids 1
      Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      14adf855
    • x86: smpboot - check if we have ESR register in wakeup_secondary_cpu · 59ef48a5
      Committed by Cyrill Gorcunov
      We should check whether we have an ESR register before reading from it
      (a sketch of the guarded access follows).
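      
      A hedged sketch of the guarded access (only integrated local APICs have an
      ESR; the exact surrounding code in wakeup_secondary_cpu is abbreviated):
      
      	if (APIC_INTEGRATED(apic_version[phys_apicid])) {
      		/* external 82489DX APICs have no ESR, so skip the read there */
      		maxlvt = lapic_get_maxlvt();
      		if (maxlvt > 3)		/* due to the Pentium erratum 3AP */
      			apic_write(APIC_ESR, 0);
      		accept_status = (apic_read(APIC_ESR) & 0xEF);
      	}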
      Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      59ef48a5
    • arch/x86/kernel/smpboot.c: Clarify when irq processing begins. · 0cefa5b9
      Committed by Manfred Spraul
      Secondary cpus start with local interrupts disabled.
      start_secondary() first initializes the new cpu and then enables the
      local interrupts (although interrupts are also enabled within
      smp_callin()).
      
      Right now, the local interrupts are enabled as a side effect of calling
      ipi_call_lock_irq().
      
      The attached patch clarifies when local interrupts are enabled.
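      
      A simplified sketch of the clarified ordering in start_secondary() (the real
      function also handles vector locking and cpu_state bookkeeping, omitted here):
      
      	/* take the ipi call lock without touching the irq flag ... */
      	ipi_call_lock();
      	cpu_set(smp_processor_id(), cpu_online_map);
      	ipi_call_unlock();
      
      	/* ... then enable local interrupts explicitly, in one obvious place */
      	local_irq_enable();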
      Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0cefa5b9
  23. 09 September 2008 (1 commit)
    • kernel/cpu.c: create a CPU_STARTING cpu_chain notifier · e545a614
      Committed by Manfred Spraul
      Right now, there is no notifier that is called on a new cpu before the new
      cpu begins processing interrupts/softirqs.
      Various kernel functions would need that notification; e.g. kvm works around
      it by calling smp_call_function_single(), and rcu polls cpu_online_map.
      
      The patch adds a CPU_STARTING notification. It also adds a helper function
      that sends the message to all cpu_chain handlers.
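      
      A minimal sketch of a cpu_chain callback reacting to the new CPU_STARTING
      event (the handler and its per-cpu setup call are hypothetical; only the
      notification itself comes from this patch):
      
      	static int example_cpu_callback(struct notifier_block *nb,
      					unsigned long action, void *hcpu)
      	{
      		unsigned int cpu = (unsigned long)hcpu;
      
      		switch (action & ~CPU_TASKS_FROZEN) {
      		case CPU_STARTING:
      			/* runs on the new cpu, before it takes interrupts/softirqs */
      			example_prepare_percpu_state(cpu);	/* hypothetical helper */
      			break;
      		}
      		return NOTIFY_OK;
      	}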
      
      Tested on x86-64.
      All other archs are untested. Especially on sparc, I'm not sure if I got
      it right.
      Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e545a614
  24. 25 August 2008 (3 commits)