1. 12 Jan 2006, 6 commits
  2. 13 Dec 2005, 1 commit
  3. 15 Nov 2005, 3 commits
    • [PATCH] x86-64/i386: Intel HT, Multi core detection fixes · 94605eff
      Committed by Siddha, Suresh B
      Fields obtained through cpuid vector 0x1 (ebx[16:23]) and
      vector 0x4 (eax[14:25], eax[26:31]) indicate the maximum values and might not
      always match what is actually available and what the OS sees.  So make sure the
      "siblings" and "cpu cores" values in /proc/cpuinfo reflect the values as seen
      by the OS instead of what the cpuid instruction says.  This also fixes buggy BIOS
      cases (for example, where cpuid on a single-core cpu reports "2" siblings
      even when HT is disabled in the BIOS;
      http://bugzilla.kernel.org/show_bug.cgi?id=4359).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
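      A minimal user-space sketch (C, using GCC's <cpuid.h> built-ins; a
      hypothetical standalone program, not the kernel code from this patch)
      of reading the two fields in question. It reports the maximum values,
      which is exactly why the patch stops publishing them directly:

          /* Query the CPUID maximums for siblings and cores per package. */
          #include <stdio.h>
          #include <cpuid.h>

          int main(void)
          {
              unsigned int eax, ebx, ecx, edx;

              /* Leaf 0x1: ebx[16:23] = maximum logical processors per package */
              __cpuid(1, eax, ebx, ecx, edx);
              unsigned int max_siblings = (ebx >> 16) & 0xff;

              /* Leaf 0x4, subleaf 0: eax[26:31] = (max cores per package) - 1 */
              __cpuid_count(4, 0, eax, ebx, ecx, edx);
              unsigned int max_cores = ((eax >> 26) & 0x3f) + 1;

              printf("cpuid maximums: %u siblings, %u cores per package\n",
                     max_siblings, max_cores);
              return 0;
          }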
    • [PATCH] x86_64: New heuristics to find out hotpluggable CPUs. · 420f8f68
      Committed by Andi Kleen
      With a NR_CPUS==128 kernel with CPU hotplug enabled, we would waste 4MB
      on per-CPU data for all possible CPUs.  The reason was that HOTPLUG
      always set the possible map up to NR_CPUS cpus, and then we had to allocate
      that much (each CPU's per-CPU data is roughly ~32k now).

      The underlying problem is that ACPI didn't tell us how many hotplug CPUs
      the platform supports.  So the old code just assumed all of them, which
      led to this memory wastage.

      This implements some new heuristics (sketched in code below):

       - If the BIOS specified disabled CPUs in the ACPI/mptables, assume they
         can be enabled later (this is bending the ACPI specification a bit,
         but seems like an obvious extension)
       - The user can override it with a new additional_cpus=NUM option
       - Otherwise use half of the available CPUs or 2, whichever is more.
      
      Cc: ashok.raj@intel.com
      Cc: len.brown@intel.com
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
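      A hedged C sketch of the heuristics above; the function and parameter
      names are illustrative, not the kernel symbols the patch actually adds:

          /* Decide how many CPUs to mark "possible" at boot. */
          static int calc_possible_cpus(int enabled, int disabled,
                                        int additional_cpus_arg /* -1 if unset */)
          {
              if (additional_cpus_arg >= 0)
                  /* explicit additional_cpus=NUM on the command line wins */
                  return enabled + additional_cpus_arg;

              if (disabled > 0)
                  /* BIOS listed disabled CPUs: assume they are hotpluggable */
                  return enabled + disabled;

              /* otherwise: half the available CPUs, or 2, whichever is more */
              return enabled + (enabled / 2 > 2 ? enabled / 2 : 2);
          }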
    • [PATCH] x86_64: Unmap NULL during early bootup · f6c2e333
      Committed by Siddha, Suresh B
      We should zap the low mappings as soon as possible so that we can catch
      kernel bugs more effectively.  Previously, early boot had NULL mapped
      and didn't trap on NULL references.

      This patch introduces boot_level4_pgt, which will always have the low
      identity addresses mapped.  During boot, all the processors will use this
      as their level4 pgt.  On the BP, we switch to init_level4_pgt as soon as
      we enter C code, and zap the low mappings as soon as we are done using the
      identity-mapped low addresses.  On APs, we zap the low mappings as soon as
      we jump to C code.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
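      A minimal sketch of the "zap" step, modeled on the idea described in
      this commit (simplified; the real patch also coordinates the switch
      from boot_level4_pgt to init_level4_pgt):

          /* Clear the kernel PGD entry covering virtual address 0 so that
           * NULL and the rest of the low identity range are no longer
           * mapped, then flush the stale translations. */
          void zap_low_mappings(void)
          {
              pgd_t *pgd = pgd_offset_k(0UL);  /* entry that maps VA 0 */
              pgd_clear(pgd);                  /* unmap the low range */
              flush_tlb_all();                 /* drop cached translations */
          }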
  4. 09 Nov 2005, 1 commit
    • [PATCH] sched: disable preempt in idle tasks · 5bfb5d69
      Committed by Nick Piggin
      Run idle threads with preempt disabled.

      Also corrected a bug in arm26's cpu_idle (make it actually call schedule()).
      How did it ever work before?

      Might fix the CPU hotplugging hang which Nigel Cunningham noted.

      We think the bug hits if the idle thread is preempted after checking
      need_resched() and before going to sleep, and the CPU is then offlined.

      After calling stop_machine_run, the CPU eventually returns from preemption
      into the idle thread and goes to sleep.  The CPU will continue executing the
      previous idle loop and never get a chance to call play_dead.

      By disabling preemption until we are ready to explicitly schedule, this bug
      is fixed and the idle threads generally become more robust.
      
      From: alexs <ashepard@u.washington.edu>
      
        PPC build fix
      
      From: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
      
        MIPS build fix
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
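      A hedged sketch of the cpu_idle() shape this commit moves the
      architectures toward (generic form, not any one arch's actual loop):

          /* Idle runs non-preemptible; preemption is re-enabled only
           * around the explicit call into the scheduler. */
          void cpu_idle(void)
          {
              preempt_disable();
              while (1) {
                  while (!need_resched())
                      cpu_relax();             /* or an arch sleep instruction */
                  preempt_enable_no_resched(); /* allow the switch... */
                  schedule();                  /* ...via this explicit call only */
                  preempt_disable();
              }
          }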
  5. 07 Nov 2005, 1 commit
  6. 11 Oct 2005, 1 commit
  7. 13 Sep 2005, 5 commits
  8. 10 Sep 2005, 1 commit
  9. 08 Sep 2005, 1 commit
  10. 20 Aug 2005, 1 commit
  11. 13 Aug 2005, 1 commit
  12. 30 Jul 2005, 1 commit
    • [PATCH] Fix sync_tsc hang · 3d483f47
      Committed by Eric W. Biederman
      sync_tsc was using smp_call_function to ask the boot processor to report
      its tsc value.  smp_call_function performs a send_IPI_allbutself, which is
      a broadcast IPI.  There is a window during processor startup, after the
      target cpu has started and before it has initialized its interrupt
      vectors, during which it cannot properly process an interrupt.  Receiving
      an interrupt during that window will triple fault the cpu and do other
      nasty things.

      Why cli does not protect us from that is beyond me.

      The simple fix is to match ia64 and provide an smp_call_function_single,
      which avoids the broadcast and is more efficient.

      This certainly fixes the problem of getting stuck on boot, which was very
      easy to trigger on my SMP Hyperthreaded Xeon, and I think it fixes it for
      the right reasons.
      
      Minor changes by AK
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
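      A hedged usage sketch of the targeted call (written against the modern
      smp_call_function_single() signature; the 2005 prototype differed, and
      the TSC-reading helper here is hypothetical):

          #include <linux/smp.h>
          #include <linux/types.h>
          #include <asm/msr.h>    /* rdtsc() */

          static void report_tsc(void *info)
          {
              u64 *val = info;
              *val = rdtsc();     /* runs on the target CPU only */
          }

          static u64 boot_cpu_tsc(void)
          {
              u64 val = 0;
              /* cpu 0, wait=1: no broadcast IPI, block until it has run */
              smp_call_function_single(0, report_tsc, &val, 1);
              return val;
          }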
  13. 29 Jul 2005, 3 commits
  14. 28 Jul 2005, 2 commits
  15. 26 Jun 2005, 4 commits
  16. 21 May 2005, 1 commit
  17. 17 May 2005, 3 commits
    • [PATCH] x86_64: Don't assume BSP has ID 0 in new smp bootup · 18a2b647
      Committed by Andi Kleen
      This patch removes the assumption that LAPIC entries list the BSP as the
      first entry.  It is a slight improvement on the temporary fix submitted by
      Suresh Siddha.

      - Removes the assumption that LAPIC entries contain the BSP first.

      - Builds x86_acpiid_to_apicid[] and bios_cpu_apicid[] properly, with the
        BSP as the first entry (see the sketch below).

      - Makes maxcpus=1 boot on these systems.  Since the parsing earlier in
        arch/x86_64/kernel/mpparse.c stopped after maxcpus entries, the remaining
        entries were not processed, which caused the kernel not to boot on these
        systems.

      TBD: x86_acpiid_to_apicid[] and bios_cpu_apicid[] seem to be exactly the
           same.  One of them could be removed, but that might need more cleanup
           work.
      Signed-off-by: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
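      A hedged sketch of the slot-assignment idea; the function name is
      illustrative, and only bios_cpu_apicid mirrors the commit message:

          static unsigned char bios_cpu_apicid[NR_CPUS];
          static int nonboot_index;   /* last slot given to a non-boot CPU */

          /* Reserve slot 0 for the BSP, wherever the BIOS listed it. */
          static void record_lapic_entry(unsigned char apicid, int is_bsp)
          {
              int cpu = is_bsp ? 0 : ++nonboot_index;

              bios_cpu_apicid[cpu] = apicid;
          }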
    • [PATCH] x86_64: Collected NMI watchdog fixes. · 75152114
      Committed by Andi Kleen
      Collected NMI watchdog fixes.
      
      - Fix call of check_nmi_watchdog
      
      - Remove earlier move of check_nmi_watchdog to later.  It does not fix the
        race it was supposed to fix fully.
      
      - Remove unused P6 definitions
      
      - Add support for performance counter based watchdog on P4 systems.
      
         This allows running it only once per second, which saves some CPU time.
         Previously it ran at 1000Hz, which was too much.
      
        Code ported from i386
      
        Make this the default on Intel systems.
      
       - Use check_nmi_watchdog with the local APIC based NMI
      
      - Fix race in touch_nmi_watchdog
      
      - Fix bug that caused incorrect performance counters to be programmed in a
        few cases on K8.
      
      - Remove useless check for local APIC
      
       - Use local_t and per_cpu variables for per CPU data (see the sketch below).
      
      - Keep other CPUs busy during check_nmi_watchdog to make sure they really
        tick when in lapic mode.
      
      - Only check CPUs that are actually online.
      
      - Various other fixes.
      
      - Fix fallback path when MSRs are unimplemented
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
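      A hedged sketch of the per-CPU accounting style this commit adopts
      (local_t inside per-CPU data); the struct and field names are
      illustrative, not the patch's own:

          #include <linux/percpu.h>
          #include <asm/local.h>

          struct wd_state {
              local_t alert_counter;      /* NMIs seen with no progress */
              unsigned int last_irq_sum;  /* interrupt count at last tick */
          };

          static DEFINE_PER_CPU(struct wd_state, wd_state);

          /* Called from the NMI handler; touches this CPU's data only. */
          static void wd_tick(unsigned int irq_sum)
          {
              struct wd_state *wd = this_cpu_ptr(&wd_state);

              if (irq_sum == wd->last_irq_sum) {
                  local_inc(&wd->alert_counter);     /* no forward progress */
                  if (local_read(&wd->alert_counter) > 5)
                      panic("NMI watchdog: CPU seems locked up");
              } else {
                  wd->last_irq_sum = irq_sum;
                  local_set(&wd->alert_counter, 0);  /* progress: reset */
              }
          }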
    • [PATCH] x86_64: Update TSC sync algorithm · dda50e71
      Committed by Andi Kleen
      The new TSC sync algorithm recently submitted did not work too well.

      The result was that some MP machines, where the TSC came out of the BIOS
      very unsynchronized and which did not have HPET support, were nearly
      unusable because the time would jump forwards and backwards between CPUs.

      After a lot of research ;-) and some more prototypes, I ended up just
      using the one from IA64, which looks best.  It has some internal
      self-tuning that should adapt to changing interconnect latencies.  It has
      held up in my tests so far.

      I believe it was originally written by David Mosberger; I just ported it
      over to x86-64.  See the inline comment for a description.
      
      This cleans up the code because it uses smp_call_function for syncing instead
      of having custom hooks in SMP bootup.
      
      Please note that the cycle numbers it outputs are too optimistic because they
      do not take into account the latency of WRMSR and RDTSC, which can be hundreds
      of cycles.  It seems to be able to sync a dual Opteron to 200-300 cycles,
      which is probably good enough.
      
      There is a timing window during AP bootup where interrupts can see
      inconsistent time before the TSC is synced.  It is hard to avoid unfortunately
      because we can only do the TSC sync after some setup, and we need to enable
      interrupts before that.  I just ignored it for now.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
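      A heavily reduced sketch of the round-trip idea behind the ia64-style
      sync (the real algorithm iterates and self-tunes; rdtsc_ordered() is
      today's kernel helper, not what the 2005 code used):

          /* Slave-side step: bracket a read of the master's published TSC
           * between two local reads and treat the midpoint as simultaneous. */
          static long long tsc_offset(volatile unsigned long long *master_tsc)
          {
              unsigned long long t0, t1, tm;

              t0 = rdtsc_ordered();   /* local TSC before the exchange */
              tm = *master_tsc;       /* value the master CPU published */
              t1 = rdtsc_ordered();   /* local TSC after the exchange */

              /* offset = master time minus midpoint of the local window */
              return (long long)(tm - (t0 + (t1 - t0) / 2));
          }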
  18. 17 Apr 2005, 4 commits