1. 20 Feb 2009, 3 commits
    • x86, mce: separate correct machine check poller and fatal exception handler · b79109c3
      Authored by Andi Kleen
      Impact: cleanup, performance enhancement
      
      The machine check poller is diverging more and more from the fatal
      exception handler. Instead of adding more special cases, separate
      the code paths completely. The corrected poll path is actually
      quite simple, and this doesn't result in much code duplication.
      
      This makes both handlers much easier to read and results in
      cleaner code flow.  The exception handler now only needs to care
      about uncorrected errors, which also simplifies the handling of multiple
      errors. The corrected poller now always runs in standard interrupt
      context and does not need to do anything special to handle NMI context.
      
      Minor behaviour changes:
      - MCG status is now not cleared on polling.
      - Only the banks which had corrected errors get cleared on polling.
      - The exception handler now only clears banks with errors (sketched below).
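      
      As a rough sketch of the separated poll path (hedged: machine_check_poll()
      and mce_setup() follow the naming used in this series, but the body here
      is illustrative, not the patch itself):
      
        void machine_check_poll(void)
        {
                struct mce m;
                int i;
      
                for (i = 0; i < banks; i++) {
                        mce_setup(&m);
                        rdmsrl(MSR_IA32_MC0_STATUS + i*4, m.status);
                        if (!(m.status & MCI_STATUS_VAL))
                                continue;       /* bank logged nothing */
                        if (m.status & MCI_STATUS_UC)
                                continue;       /* uncorrected: exception
                                                   handler's job */
                        mce_log(&m);
                        /* clear only banks with corrected errors;
                           MCG status is left alone when polling */
                        wrmsrl(MSR_IA32_MC0_STATUS + i*4, 0);
                }
        }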
      
      v2: Forward port to new patch order. Add "uc" argument.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, mce: factor out duplicated struct mce setup into one function · b5f2fa4e
      Authored by Andi Kleen
      Impact: cleanup
      
      This merely factors out duplicated code to set up
      the initial struct mce state into a single function.
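      
      For reference, such a helper is essentially the following (a minimal
      sketch matching the struct mce fields of this era):
      
        void mce_setup(struct mce *m)
        {
                memset(m, 0, sizeof(struct mce));
                m->cpu = smp_processor_id();
                rdtscll(m->tsc);        /* timestamp the record */
        }
      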
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, mce: implement dynamic machine check banks support · 0d7482e3
      Authored by Andi Kleen
      Impact: cleanup; making code future proof; memory saving on small systems
      
      This patch replaces the hardcoded max number of machine check banks with 
      dynamic allocation depending on what the CPU reports. The sysfs
      data structures and the banks array are dynamically allocated.
      
      There is still a hard bank limit (128) because the mcelog protocol uses
      banks >= 128 as pseudo banks to escape other events. But we expect
      that 128 banks is beyond any reasonable CPU for now.
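      
      The allocation side boils down to something like this (a hedged
      sketch; the helper name and the exact error handling are
      illustrative):
      
        static int mce_cap_init(void)
        {
                unsigned b;
                u64 cap;
      
                rdmsrl(MSR_IA32_MCG_CAP, cap);
                b = cap & 0xff;         /* MCG_CAP[7:0]: bank count */
                if (b > MAX_NR_BANKS)   /* 128; see above */
                        return -ENODEV;
                bank = kmalloc(b * sizeof(u64), GFP_KERNEL);
                if (!bank)
                        return -ENOMEM;
                /* default: enable all errors in every bank */
                memset(bank, 0xff, b * sizeof(u64));
                banks = b;
                return 0;
        }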
      
      This supersedes an earlier patch by Venki, but it solves the problem
      more completely by making the limit fully dynamic (up to the 128
      boundary).
      
      This saves some memory on machines with fewer than 6 banks because
      they won't need sysdevs for the unused ones, and it also makes it
      possible to use sysfs to control these banks on possible future
      CPUs with more than 6 banks.
      
      This is an updated patch addressing Venki's comments. I also folded
      in another patch from Thomas which fixed the error allocation path
      (that patch was previously separate).
      
      Cc: Venki Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  2. 18 Feb 2009, 9 commits
    • x86, mce: fix a race condition in mce_read() · ef41df43
      Authored by Huang Ying
      Impact: bugfix
      
      Consider the following situation:
      
      before: mcelog.next == 1, mcelog.entry[0].finished = 1
      
      +--------------------------------------------------------------------------
      R                   W1                  W2                  W3
      
      read mcelog.next (1)
                          mcelog.next++ (2)
                          (working on entry 1,
                          finished == 0)
      
      mcelog.next = 0
                                              mcelog.next++ (1)
                                              (working on entry 0)
                                                                 mcelog.next++ (2)
                                                                 (working on entry 1)
                              <----------------- race ---------------->
                          (done on entry 1,
                          finished = 1)
                                                                 (done on entry 1,
                                                                 finished = 1)
      
      To fix the race condition, a cmpxchg loop is added to mce_read() to
      ensure that no new MCE record can be added between reading
      mcelog.next and resetting it to 0.
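      
      In outline the loop looks like this (simplified from the actual
      mce_read() change; prev/next mirror the local variables there):
      
        do {
                /* entries 0..next-1 have been copied out; wipe the
                   newly consumed slots so they can be reused */
                memset(mcelog.entry + prev, 0,
                       (next - prev) * sizeof(struct mce));
                prev = next;
                /* resetting to 0 succeeds only if no writer bumped
                   mcelog.next meanwhile; otherwise go around again
                   and pick up the new records */
                next = cmpxchg(&mcelog.next, prev, 0);
        } while (next != prev);
      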
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: disable machine checks on offlined CPUs · d6b75584
      Authored by Andi Kleen
      Impact: Lower priority bug fix
      
      Offlined CPUs could still get machine checks, but the machine check handler
      cannot handle them properly, leading to an unconditional crash. Disable
      machine checks on CPUs that are going down.
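      
      The disable side is small; a hedged sketch of the hook that runs on
      the dying CPU:
      
        static void mce_disable_cpu(void *h)
        {
                int i;
      
                if (!mce_available(&current_cpu_data))
                        return;
                /* writing 0 to MCi_CTL stops the bank from
                   reporting any errors */
                for (i = 0; i < banks; i++)
                        wrmsrl(MSR_IA32_MC0_CTL + i*4, 0);
        }
      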
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: don't set up mce sysdev devices with mce=off · 5b4408fd
      Authored by Andi Kleen
      Impact: bug fix; in this case the resume handler shouldn't run,
      which avoids incorrectly re-enabling machine checks on resume
      
      When MCEs are completely disabled on the command line, don't set
      up the sysdev devices for them either.
      
      Includes a comment fix from Thomas Gleixner.
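      
      Conceptually the change is a guard at the top of the device setup;
      a sketch (the mce_disabled flag name is an assumption, standing in
      for however mce=off is recorded):
      
        static __init int mce_init_device(void)
        {
                if (mce_disabled || !mce_available(&boot_cpu_data))
                        return -EIO;    /* mce=off: no sysdevs, hence
                                           no resume handler either */
                return sysdev_class_register(&mce_sysclass);
        }
      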
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: switch machine check polling to per CPU timer · 52d168e2
      Authored by Andi Kleen
      Impact: Higher priority bug fix
      
      The machine check poller runs a single timer and then broadcasts an
      IPI to all CPUs to check them. This leads to unnecessary
      synchronization between CPUs. The CPU running the timer potentially
      has to wait a long time for all the other CPUs to answer. It is
      also unfriendly to real time and inefficient in general.
      
      This was especially a problem on systems with a lot of events,
      where the poller runs at a higher frequency after processing some
      events. More and more CPU time could be wasted this way, to the
      point of significantly slowing down machines.
      
      The machine check polling is actually fully independent per CPU,
      so there's no reason not to simply do it all with per CPU timers.
      This patch implements that.
      
      Also switch the poller to use standard timers instead of work
      queues. It used work queues to be able to execute a user program
      on an event, but mce_notify_user() now handles that case with a
      separate callback. So instead always run the poll code in a
      standard per CPU timer, which means that in the common case of not
      having to execute a trigger there will be less overhead.
      
      This allows the initialization to be cleaned up significantly,
      because standard timers are already up when machine checks get
      initialized. No multiple initialization functions are needed.
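      
      A condensed sketch of the per-CPU timer scheme (close to the patch
      in spirit, details hedged):
      
        static DEFINE_PER_CPU(struct timer_list, mce_timer);
      
        static void mcheck_timer(unsigned long data)
        {
                struct timer_list *t = &__get_cpu_var(mce_timer);
      
                WARN_ON(smp_processor_id() != data); /* v3 check */
                machine_check_poll();   /* this CPU's banks only */
                t->expires = jiffies + check_interval * HZ;
                add_timer(t);           /* re-arm on this CPU */
        }
      
        static void mce_init_timer(void)
        {
                struct timer_list *t = &__get_cpu_var(mce_timer);
      
                setup_timer(t, mcheck_timer, smp_processor_id());
                t->expires = round_jiffies(jiffies + check_interval * HZ);
                add_timer_on(t, smp_processor_id());
        }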
      
      Thanks to Thomas Gleixner for some help.
      
      Cc: thockin@google.com
      v2: Use del_timer_sync() on cpu shutdown and don't try to handle
      migrated timers.
      v3: Add WARN_ON for timer running on unexpected CPU
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: always use separate work queue to run trigger · 9bd98405
      Authored by Andi Kleen
      Impact: Needed for bug fix in next patch
      
      This relaxes the requirement that mce_notify_user has to run in
      process context. Useful for future changes, but it also leads to
      cleaner behaviour now. mce_notify_user can now be called directly
      from interrupt (but not NMI) context.
      
      The work queue uses only a single global work struct, which is safe
      because the struct is always free for reuse before the trigger
      function is executed. This way no events can be lost.
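      
      A sketch of the mechanism; since schedule_work() on a still-pending
      work item is a no-op, one global struct is enough to guarantee the
      trigger runs at least once after new events (the exact
      call_usermodehelper() arguments here are assumptions):
      
        static void mce_do_trigger(struct work_struct *work)
        {
                call_usermodehelper(trigger, trigger_argv,
                                    NULL, UMH_NO_WAIT);
        }
      
        static DECLARE_WORK(mce_trigger_work, mce_do_trigger);
      
        /* callable from interrupt (but not NMI) context */
        void mce_notify_user(void)
        {
                if (test_and_clear_bit(0, &notify_user) && trigger[0])
                        schedule_work(&mce_trigger_work);
        }
      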
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: don't disable machine checks during code patching · 123aa76e
      Authored by Andi Kleen
      Impact: low priority bug fix
      
      This removes part of a patch I added myself some time ago. After
      some consideration the patch was a bad idea. In particular it
      stopped machine check exceptions during code patching.
      
      To quote the comment:
      
        * MCEs only happen when something got corrupted and in this
        * case we must do something about the corruption.
        * Ignoring it is worse than an unlikely patching race.
        * Also machine checks tend to be broadcast and if one CPU
        * goes into machine check the others follow quickly, so we don't
        * expect a machine check to cause undue problems due to code
        * patching.
      
      So undo the machine check related parts of
      8f4e956b. NMIs are still disabled.
      
      This only removes code; the only addition is a new comment.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: disable machine checks on suspend · 973a2dd1
      Authored by Andi Kleen
      Impact: Bug fix
      
      During suspend it is not reliable to process machine check
      exceptions, because CPUs disappear but can still get machine check
      broadcasts. Also, the system is slightly more likely to get machine
      checks then, but the handler is typically not in a position to
      handle them in a meaningful way.
      
      So disable them during suspend and enable them during resume.
      
      Also make sure they are always disabled on hot-unplugged CPUs.
      
      This new code assumes that suspend always hot-unplugs all
      non-boot CPUs.
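      
      A hedged sketch of the suspend side, using the sysdev suspend hook
      of this era (the resume counterpart is the subject of the
      "reinitialize per cpu features on resume" patch below):
      
        static int mce_suspend(struct sys_device *dev, pm_message_t state)
        {
                int i;
      
                /* turn off every bank so a broadcast machine check
                   cannot hit a half-suspended system */
                for (i = 0; i < banks; i++)
                        wrmsrl(MSR_IA32_MC0_CTL + i*4, 0);
                return 0;
        }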
      
      v2: Remove the WARN_ONs Thomas objected to.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: use force_sig_info to kill process in machine check · 380851bc
      Authored by Andi Kleen
      Impact: bug fix (with tolerant == 3)
      
      do_exit cannot be called directly from the exception handler because
      it can sleep and the exception handler runs on the exception stack.
      Use force_sig() instead.
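      
      The fix itself is essentially a one-liner (sketch):
      
        /* tolerant == 3: kill the current process, but by queueing
           an unignorable signal instead of calling the sleeping
           do_exit() on the exception stack */
        force_sig(SIGBUS, current);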
      
      Based on an earlier patch by Ying Huang, who debugged the problem.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86, mce: reinitialize per cpu features on resume · 6ec68bff
      Authored by Andi Kleen
      Impact: Bug fix
      
      This fixes a long-standing bug in the machine check code. On resume
      the boot CPU wouldn't get its vendor specific state, like thermal
      handling, reinitialized. This means the boot CPU would never get
      any thermal events reported again.
      
      Call the respective initialization functions on resume.
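      
      The resume hook then becomes (a sketch matching the description;
      mce_cpu_features() dispatches to the vendor specific setup):
      
        static int mce_resume(struct sys_device *dev)
        {
                mce_init(NULL);                      /* generic state */
                mce_cpu_features(&current_cpu_data); /* vendor bits,
                                                        e.g. thermal */
                return 0;
        }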
      
      v2: Remove ancient init calls because they don't have a resume
          device anyway. Pointed out by Thomas Gleixner.
      v3: Now fix the Subject too to reflect v2 change
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  3. 09 Feb 2009, 1 commit
  4. 01 Feb 2009, 1 commit
  5. 29 Jan 2009, 1 commit
  6. 27 Jan 2009, 1 commit
  7. 26 Jan 2009, 1 commit
    • x86: unmask CPUID levels on Intel CPUs, fix · 99fb4d34
      Authored by Ingo Molnar
      Impact: fix boot hang on pre-model-15 Intel CPUs
      
      rdmsrl_safe() does not work in very early bootup code yet, because
      we don't have the pagefault handler installed yet, so the exception
      section does not get parsed; rdmsrl_safe() will just crash and hang
      the bootup.
      
      So limit the MSR_IA32_MISC_ENABLE MSR read to those CPU types that
      support it.
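      
      The guarded read looks roughly like this (a sketch assuming the
      model-15 cutoff from the subject line and the usual name of the
      LIMIT_CPUID bit):
      
        if (c->x86 == 6 && c->x86_model >= 15) {
                u64 misc_enable;
      
                rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
                if (misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID) {
                        misc_enable &= ~MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
                        wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
                        /* re-read the now unmasked CPUID level */
                        c->cpuid_level = cpuid_eax(0);
                }
        }
      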
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 22 Jan 2009, 1 commit
  9. 21 Jan 2009, 1 commit
  10. 20 Jan 2009, 2 commits
  11. 13 Jan 2009, 2 commits
    • x86, cpufreq: remove leftover copymask_copy() · 4a922a96
      Authored by Ingo Molnar
      Impact: fix potential boot crash on MAXSMP
      
      Remove code left over by:
      
        50c668d6: Revert "cpumask: use work_on_cpu in acpi-cpufreq.c for drv_read and drv_write"
      
      That cmd.cpumask is not allocated anymore. No impact on default !MAXSMP
      kernels.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • Revert "cpumask: use work_on_cpu in acpi-cpufreq.c for drv_read and drv_write" · 50c668d6
      Authored by Ingo Molnar
      This reverts commit 7503bfba.
      
      Dieter Ries reported bootup soft-hangs and bisected it back to
      this commit, and reverting this commit gave him a working system.
      
      The commit introduces work_on_cpu() use into the cpufreq code,
      but that is subtly problematic from a lock hierarchy POV: the
      hotplug-cpu lock is a high-level lock that is taken before
      low-level locks, and in this codepath we are called with the
      policy lock taken.
      
      Dieter did not have lockdep enabled, so we don't have a nice stack
      trace as proof, but using work_on_cpu() in such a low-level place
      certainly looks wrong, so we revert the patch.
      
      work_on_cpu() needs to be reworked to be more generally usable.
      Reported-by: Dieter Ries <clip2@gmx.de>
      Tested-by: Dieter Ries <clip2@gmx.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 09 Jan 2009, 1 commit
  13. 07 Jan 2009, 1 commit
    • x86: fix section mismatch warnings in mcheck/mce_amd_64.c · 51d7a139
      Authored by Leonardo Potenza
      Mark the function local_allocate_threshold_blocks() with __cpuinit,
      in order to remove the following section mismatch messages:
      
      WARNING: arch/x86/kernel/cpu/mcheck/built-in.o(.text+0x1363): Section mismatch in reference from the function local_allocate_threshold_blocks() to the function .cpuinit.text:allocate_threshold_blocks()
      The function local_allocate_threshold_blocks() references
      the function __cpuinit allocate_threshold_blocks().
      This is often because local_allocate_threshold_blocks lacks a __cpuinit
      annotation or the annotation of allocate_threshold_blocks is wrong.
      
      WARNING: arch/x86/kernel/cpu/built-in.o(.text+0x1def): Section mismatch in reference from the function local_allocate_threshold_blocks() to the function .cpuinit.text:allocate_threshold_blocks()
      The function local_allocate_threshold_blocks() references
      the function __cpuinit allocate_threshold_blocks().
      This is often because local_allocate_threshold_blocks lacks a __cpuinit
      annotation or the annotation of allocate_threshold_blocks is wrong.
      
      WARNING: arch/x86/kernel/built-in.o(.text+0xef2b): Section mismatch in reference from the function local_allocate_threshold_blocks() to the function .cpuinit.text:allocate_threshold_blocks()
      The function local_allocate_threshold_blocks() references
      the function __cpuinit allocate_threshold_blocks().
      This is often because local_allocate_threshold_blocks lacks a __cpuinit
      annotation or the annotation of allocate_threshold_blocks is wrong.
      
      All the callsites of this function are __cpuinit already, and all the
      functions it calls are __cpuinit as well.
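      
      The fix is just the annotation; sketched here against the
      work_on_cpu() callback signature the helper wraps (the body is a
      hedged reconstruction, not the verbatim source):
      
        static long __cpuinit local_allocate_threshold_blocks(void *_bank)
        {
                unsigned int *bank = _bank;
      
                /* now a .cpuinit.text -> .cpuinit.text reference,
                   so no section mismatch */
                return allocate_threshold_blocks(smp_processor_id(),
                                *bank, 0,
                                MSR_IA32_MC0_MISC + *bank * 4);
        }
      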
      Signed-off-by: Leonardo Potenza <lpotenza@inwind.it>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  14. 06 Jan 2009, 5 commits
  15. 05 Jan 2009, 1 commit
  16. 04 Jan 2009, 4 commits
  17. 31 Dec 2008, 2 commits
  18. 29 Dec 2008, 1 commit
  19. 26 Dec 2008, 1 commit
  20. 25 Dec 2008, 1 commit
    • tracing/ftrace: don't trace on early stage of a secondary cpu boot, v3 · 0ca59dd9
      Authored by Frederic Weisbecker
      Impact: fix a crash/hard-reboot on certain configs while enabling a cpu at runtime
      
      On some archs the boot of a secondary cpu can have an early fragile
      state. On x86-64, the pda is not initialized in the first stage of
      a cpu boot, but it is needed to get the cpu number and the current
      task pointer. This data is needed during tracing. Since it was
      dereferenced at this stage, we got a crash while tracing a cpu
      being enabled at runtime.
      
      Some other archs like ia64 can have this kind of issue too.
      
      Changes in v2:
      
      We dropped the previous solution of a per-arch function to guess
      the current state of a cpu, because it could slow down tracing.
      
      This patch removes the -pg flag for arch/x86/kernel/cpu/common.c,
      where the low level cpu boot functions live, and from
      start_secondary() and a helper function used at this stage.
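      
      The function-level half of the fix is the notrace annotation, which
      keeps the mcount call out of a function entirely; the common.c half
      is the standard ftrace opt-out in the Makefile
      (CFLAGS_REMOVE_common.o = -pg). A hedged sketch:
      
        /* runs before this CPU's pda/current are usable by the
           tracer, so it must never call into mcount */
        static void notrace start_secondary(void *unused)
        {
                cpu_init();
                preempt_disable();
                smp_callin();
                /* ... remainder of secondary bring-up elided ... */
        }
      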
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>