1. 16 June 2011, 8 commits
  2. 21 April 2011, 1 commit
  3. 20 April 2011, 1 commit
  4. 01 April 2011, 1 commit
  5. 24 March 2011, 1 commit
    • x86: Use syscore_ops instead of sysdev classes and sysdevs · f3c6ea1b
      Committed by Rafael J. Wysocki
      Some subsystems in the x86 tree need to carry out suspend/resume and
      shutdown operations with one CPU on-line and interrupts disabled and
      they define sysdev classes and sysdevs or sysdev drivers for this
      purpose.  This leads to unnecessarily complicated code and excessive
      memory usage, so switch them to using struct syscore_ops objects for
      this purpose instead.
      
      Generally, there are three categories of subsystems that use
      sysdevs for implementing PM operations: (1) subsystems whose
      suspend/resume callbacks ignore their arguments entirely (the
      majority), (2) subsystems whose suspend/resume callbacks use their
      struct sys_device argument, but don't really need to do that,
      because they can be implemented differently in an arguably simpler
      way (io_apic.c), and (3) subsystems whose suspend/resume callbacks
      use their struct sys_device argument, but the value of that argument
      is always the same and could be ignored (microcode_core.c).  In all
      of these cases the subsystems in question may be readily converted to
      using struct syscore_ops objects for power management and shutdown.
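
      As a rough illustration of the conversion pattern (the names below are
      illustrative, not the code of any one subsystem), a sysdev-based
      suspend/resume hook becomes a plain struct syscore_ops registration:

      /* Sketch of the syscore_ops pattern described above. */
      #include <linux/syscore_ops.h>

      static int example_suspend(void)
      {
              /* runs late, with one CPU online and interrupts disabled */
              return 0;
      }

      static void example_resume(void)
      {
              /* undo whatever example_suspend() did */
      }

      static struct syscore_ops example_syscore_ops = {
              .suspend = example_suspend,
              .resume  = example_resume,
      };

      static int __init example_register_syscore(void)
      {
              register_syscore_ops(&example_syscore_ops);
              return 0;
      }
      device_initcall(example_register_syscore);
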
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
  6. 18 March 2011, 1 commit
  7. 30 December 2010, 2 commits
  8. 15 October 2010, 1 commit
    • llseek: automatically add .llseek fop · 6038f373
      Committed by Arnd Bergmann
      All file_operations should get a .llseek operation so we can make
      nonseekable_open the default for future file operations without a
      .llseek pointer.
      
      The three cases that we can automatically detect are no_llseek, seq_lseek
      and default_llseek. For cases where we can automatically prove that
      the file offset is always ignored, we use noop_llseek, which maintains
      the current behavior of not returning an error from a seek.
      
      New drivers should normally not use noop_llseek but instead use no_llseek
      and call nonseekable_open at open time.  Existing drivers can be converted
      to do the same when the maintainer knows for certain that no user code
      relies on calling seek on the device file.
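
      As a minimal sketch of that recommended pattern (illustrative names,
      not code from this patch):

      #include <linux/fs.h>
      #include <linux/module.h>

      /* New-driver pattern: mark the fd nonseekable at open time and
       * wire up no_llseek explicitly. */
      static int example_open(struct inode *inode, struct file *file)
      {
              return nonseekable_open(inode, file);
      }

      static ssize_t example_read(struct file *file, char __user *buf,
                                  size_t count, loff_t *ppos)
      {
              return 0;       /* the file offset is ignored entirely */
      }

      static const struct file_operations example_fops = {
              .owner  = THIS_MODULE,
              .open   = example_open,
              .read   = example_read,
              .llseek = no_llseek,
      };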
      
      The generated code is often incorrectly indented and right now contains
      comments that clarify for each added line why a specific variant was
      chosen. In the version that gets submitted upstream, the comments will
      be gone and I will manually fix the indentation, because there does not
      seem to be a way to do that using coccinelle.
      
      Some amount of new code is currently sitting in linux-next that should get
      the same modifications, which I will do at the end of the merge window.
      
      Many thanks to Julia Lawall for helping me learn to write a semantic
      patch that does all this.
      
      ===== begin semantic patch =====
      // This adds an llseek= method to all file operations,
      // as a preparation for making no_llseek the default.
      //
      // The rules are
      // - use no_llseek explicitly if we do nonseekable_open
      // - use seq_lseek for sequential files
      // - use default_llseek if we know we access f_pos
      // - use noop_llseek if we know we don't access f_pos,
      //   but we still want to allow users to call lseek
      //
      @ open1 exists @
      identifier nested_open;
      @@
      nested_open(...)
      {
      <+...
      nonseekable_open(...)
      ...+>
      }
      
      @ open exists@
      identifier open_f;
      identifier i, f;
      identifier open1.nested_open;
      @@
      int open_f(struct inode *i, struct file *f)
      {
      <+...
      (
      nonseekable_open(...)
      |
      nested_open(...)
      )
      ...+>
      }
      
      @ read disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      <+...
      (
         *off = E
      |
         *off += E
      |
         func(..., off, ...)
      |
         E = *off
      )
      ...+>
      }
      
      @ read_no_fpos disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ write @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      <+...
      (
        *off = E
      |
        *off += E
      |
        func(..., off, ...)
      |
        E = *off
      )
      ...+>
      }
      
      @ write_no_fpos @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ fops0 @
      identifier fops;
      @@
      struct file_operations fops = {
       ...
      };
      
      @ has_llseek depends on fops0 @
      identifier fops0.fops;
      identifier llseek_f;
      @@
      struct file_operations fops = {
      ...
       .llseek = llseek_f,
      ...
      };
      
      @ has_read depends on fops0 @
      identifier fops0.fops;
      identifier read_f;
      @@
      struct file_operations fops = {
      ...
       .read = read_f,
      ...
      };
      
      @ has_write depends on fops0 @
      identifier fops0.fops;
      identifier write_f;
      @@
      struct file_operations fops = {
      ...
       .write = write_f,
      ...
      };
      
      @ has_open depends on fops0 @
      identifier fops0.fops;
      identifier open_f;
      @@
      struct file_operations fops = {
      ...
       .open = open_f,
      ...
      };
      
      // use no_llseek if we call nonseekable_open
      ////////////////////////////////////////////
      @ nonseekable1 depends on !has_llseek && has_open @
      identifier fops0.fops;
      identifier nso ~= "nonseekable_open";
      @@
      struct file_operations fops = {
      ...  .open = nso, ...
      +.llseek = no_llseek, /* nonseekable */
      };
      
      @ nonseekable2 depends on !has_llseek @
      identifier fops0.fops;
      identifier open.open_f;
      @@
      struct file_operations fops = {
      ...  .open = open_f, ...
      +.llseek = no_llseek, /* open uses nonseekable */
      };
      
      // use seq_lseek for sequential files
      /////////////////////////////////////
      @ seq depends on !has_llseek @
      identifier fops0.fops;
      identifier sr ~= "seq_read";
      @@
      struct file_operations fops = {
      ...  .read = sr, ...
      +.llseek = seq_lseek, /* we have seq_read */
      };
      
      // use default_llseek if there is a readdir
      ///////////////////////////////////////////
      @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier readdir_e;
      @@
      // any other fop is used that changes pos
      struct file_operations fops = {
      ... .readdir = readdir_e, ...
      +.llseek = default_llseek, /* readdir is present */
      };
      
      // use default_llseek if at least one of read/write touches f_pos
      /////////////////////////////////////////////////////////////////
      @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read.read_f;
      @@
      // read fops use offset
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = default_llseek, /* read accesses f_pos */
      };
      
      @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ... .write = write_f, ...
      +	.llseek = default_llseek, /* write accesses f_pos */
      };
      
      // Use noop_llseek if neither read nor write accesses f_pos
      ///////////////////////////////////////////////////////////
      
      @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      identifier write_no_fpos.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ...
       .write = write_f,
       .read = read_f,
      ...
      +.llseek = noop_llseek, /* read and write both use no f_pos */
      };
      
      @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write_no_fpos.write_f;
      @@
      struct file_operations fops = {
      ... .write = write_f, ...
      +.llseek = noop_llseek, /* write uses no f_pos */
      };
      
      @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      @@
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = noop_llseek, /* read uses no f_pos */
      };
      
      @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      @@
      struct file_operations fops = {
      ...
      +.llseek = noop_llseek, /* no read or write fn */
      };
      ===== End semantic patch =====
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Julia Lawall <julia@diku.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
  9. 03 August 2010, 1 commit
  10. 15 June 2010, 1 commit
    • mce: convert to rcu_dereference_index_check() · ec8c27e0
      Committed by Paul E. McKenney
      The mce processing applies rcu_dereference_check() to integers used as
      array indices.  This patch therefore moves mce to the new RCU API
      rcu_dereference_index_check() that avoids the sparse processing that
      would otherwise result in compiler errors.
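
      The shape of the change is roughly the following sketch (based on the
      description above, not the literal diff; mcelog.next and mce_read_mutex
      are the names used elsewhere in this log). It replaces the
      rcu_dereference_check()-based helper sketched under commit f56e8a07
      further down:

      /* Fetch an integer array index through the index-aware RCU API,
       * passing the lockdep condition that covers readers and updaters. */
      #define rcu_dereference_check_mce(p)                              \
              rcu_dereference_index_check((p),                          \
                      rcu_read_lock_sched_held() ||                     \
                      lockdep_is_held(&mce_read_mutex))

      static unsigned int mce_next_entry(void)    /* illustrative helper */
      {
              return rcu_dereference_check_mce(mcelog.next);
      }
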
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
  11. 11 June 2010, 1 commit
  12. 20 May 2010, 1 commit
    • ACPI, APEI, Use ERST for persistent storage of MCE · 482908b4
      Committed by Huang Ying
      Traditionally, a fatal MCE causes Linux to print an error log to the
      console and then reboot. Because the MCE registers preserve their
      content across a warm reboot, the hardware error can be logged to disk
      or network after the reboot. But if the system fails to warm reboot,
      the hardware error log may be lost. ERST helps here: by saving the
      hardware error log into flash via ERST before panicking, the log can
      be retrieved from flash after the system boots successfully again.
      
      The fatal MCE processing procedure with ERST involved is as follows:
      
      - Hardware detects an error and raises an MCE
      - The MCE handler reads the MCE registers, checks the error severity
        (fatal), and prepares an error record
      - The MCE error record is written into flash via ERST
      - The kernel panics, which triggers a system reboot
      - After the system reboots, /sbin/mcelog runs; it reads /dev/mcelog to
        check the flash (via ERST) for error records from the previous boot,
        and outputs and clears them if available
      - /sbin/mcelog logs the error records to disk or network
      
      ERST only accepts the CPER record format, but no pre-defined CPER
      section can accommodate all of the information in struct mce, so a
      customized section type is defined to hold struct mce inside a CPER
      record as an error section.
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
  13. 10 May 2010, 1 commit
  14. 29 April 2010, 1 commit
  15. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As this
      conversion needs to touch a large number of source files, the following
      script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following; a before/after sketch of the resulting
      include change appears after this list.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  i.e. if only gfp is used,
        include gfp.h; if slab is used, include slab.h.

      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.

      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
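
      For a file that only uses kmalloc()/kfree(), the resulting change is
      simply the explicit include (an illustrative example, not a specific
      file from the tree):

      /* After the conversion: slab.h is included explicitly instead of
       * relying on it arriving via module.h -> percpu.h -> slab.h. */
      #include <linux/module.h>
      #include <linux/slab.h>         /* kmalloc(), kfree() */

      static void *example_alloc(size_t len)
      {
              return kmalloc(len, GFP_KERNEL);  /* GFP_KERNEL via slab.h -> gfp.h */
      }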
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed a manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  16. 14 March 2010, 1 commit
    • x86/mce: Fix build bug with CONFIG_PROVE_LOCKING=y && CONFIG_X86_MCE_INTEL=y · 2aa2b50d
      Committed by Ingo Molnar
      Commit f56e8a07 "x86/mce: Fix RCU lockdep splats" introduced the
      following build bug:
      
        arch/x86/kernel/cpu/mcheck/mce.c: In function 'mce_log':
        arch/x86/kernel/cpu/mcheck/mce.c:166: error: 'mce_read_mutex' undeclared (first use in this function)
        arch/x86/kernel/cpu/mcheck/mce.c:166: error: (Each undeclared identifier is reported only once
        arch/x86/kernel/cpu/mcheck/mce.c:166: error: for each function it appears in.)
      
      Move the in-the-middle-of-file lock variable up to the variable
      definition section, the top of the .c file.
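
      The fix amounts to defining the mutex with the other file-scope
      variables at the top of mce.c (sketch):

      #include <linux/mutex.h>

      /* Moved up from the middle of the file so that mce_log(), which is
       * defined earlier, can name it in its lockdep annotation. */
      static DEFINE_MUTEX(mce_read_mutex);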
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1267830207-9474-3-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  17. 11 March 2010, 1 commit
    • x86/mce: Fix RCU lockdep splats · f56e8a07
      Committed by Paul E. McKenney
      Create an rcu_dereference_check_mce() that checks for RCU-sched
      read side and mce_read_mutex being held on update side.  Replace
      uses of rcu_dereference() in arch/x86/kernel/cpu/mcheck/mce.c
      with this new macro.
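
      A sketch of the macro described above (the exact lockdep condition in
      the tree may differ slightly):

      /* rcu_dereference() plus a lockdep expression that accepts both the
       * RCU-sched read side and holders of mce_read_mutex (update side). */
      #define rcu_dereference_check_mce(p)                              \
              rcu_dereference_check((p),                                \
                      rcu_read_lock_sched_held() ||                     \
                      lockdep_is_held(&mce_read_mutex))
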
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1267830207-9474-3-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 08 March 2010, 1 commit
  19. 09 December 2009, 1 commit
  20. 08 December 2009, 1 commit
  21. 03 December 2009, 1 commit
  22. 26 November 2009, 1 commit
  23. 12 November 2009, 1 commit
  24. 10 November 2009, 1 commit
    • x86: Under BIOS control, restore AP's APIC_LVTTHMR to the BSP value · a2202aa2
      Committed by Yong Wang
      On platforms where the BIOS handles the thermal monitor interrupt,
      APIC_LVTTHMR on each logical CPU is programmed to generate an SMI
      and the OS must not touch it.
      
      Unfortunately the AP bringup sequence using INIT-SIPI-SIPI clears all
      the LVT entries except the mask bit. Essentially this results in all
      LVT entries, including the thermal monitoring interrupt, being set to
      masked (clearing the BIOS-programmed value for APIC_LVTTHMR).
      
      This leads to the kernel taking over the thermal monitoring
      interrupt on the APs but not on the BSP (leaving the BIOS-programmed
      value only on the BSP).
      
      As a result of this, we have seen system hangs when the thermal
      monitoring interrupt is generated.
      
      Fix this by reading the initial value of the thermal LVT entry on the
      BSP; if the BIOS has taken over control, program the same value on
      all APs and leave the thermal monitoring interrupt on all logical
      CPUs under BIOS control.
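
      A sketch of that approach (simplified, with illustrative function
      names; the delivery-mode test mirrors the description rather than the
      literal patch):

      static int lvtthmr_init;        /* LVTTHMR value captured on the BSP */

      void __init thermal_save_bsp_lvt(void)          /* BSP, early boot */
      {
              lvtthmr_init = apic_read(APIC_LVTTHMR);
      }

      static void thermal_init_ap_lvt(void)           /* each AP */
      {
              /* Restore the BIOS-programmed value wiped by INIT-SIPI-SIPI. */
              apic_write(APIC_LVTTHMR, lvtthmr_init);

              if (lvtthmr_init & APIC_DM_SMI)
                      return;         /* BIOS (SMI) keeps control */

              /* ...otherwise the kernel sets up its own thermal vector... */
      }
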
      Signed-off-by: Yong Wang <yong.y.wang@intel.com>
      Reviewed-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      LKML-Reference: <20091110013824.GA24940@ywang-moblin2.bj.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: stable@kernel.org
  25. 16 October 2009, 3 commits
  26. 13 October 2009, 1 commit
    • perf_event, x86, mce: Use TRACE_EVENT() for MCE logging · 8968f9d3
      Committed by Hidetoshi Seto
      This approach is the first baby step towards solving many of the
      structural problems the x86 MCE logging code has today:
      
       - It has a private ring-buffer implementation that has a number
         of limitations and has been historically fragile and buggy.
      
       - It is using a quirky /dev/mcelog ioctl driven ABI that is MCE
         specific. /dev/mcelog is not part of any larger logging
         framework and hence has remained on the fringes for many years.
      
       - The MCE logging code is still very unclean partly due to its ABI
         limitations. Fields are being reused for multiple purposes, and
         the whole message structure is limited and x86 specific to begin
         with.
      
      All in all, the x86 tree would like to move away from this private
      implementation of an event logging facility to a broader framework.
      
      By using perf events we gain the following advantages:
      
       - Multiple user-space agents can access MCE events. We can have an
         mcelog daemon running but also a system-wide tracer capturing
         important events in flight-recorder mode.
      
       - Sampling support: the kernel and the user-space call-chain of MCE
         events can be stored and analyzed as well. This way actual patterns
         of bad behavior can be matched to precisely what kind of activity
         happened in the kernel (and/or in the app) around that moment in
         time.
      
       - Coupling with other hardware and software events: the PMU can track a
         number of other anomalies - monitoring software might choose to
         monitor those plus the MCE events as well - in one coherent stream of
         events.
      
       - Discovery of MCE sources - tracepoints are enumerated and tools can
         act upon the existence (or non-existence) of various channels of MCE
         information.
      
       - Filtering support: we just subscribe to and act upon the events we
         are interested in. Then even on a per-event-source basis there are
         in-kernel filter expressions available that can restrict the amount
         of data that hits the event channel.
      
       - Arbitrarily deep per-CPU buffering of events - we can buffer 32
         entries or we can buffer as much as we want, as long as we have
         the RAM.
      
       - An NMI-safe ring-buffer implementation - mappable to user-space.
      
       - Built-in support for timestamping of events, PID markers, CPU
         markers, etc.
      
       - A rich ABI accessible over system call interface. Per cpu, per task
         and per workload monitoring of MCE events can be done this way. The
         ABI itself has a nice, meaningful structure.
      
       - Extensible ABI: new fields can be added without breaking tooling.
         New tracepoints can be added as the hardware side evolves. There's
         various parsers that can be used.
      
       - Lots of scheduling/buffering/batching modes of operation for MCE
         events. poll() support. mmap() support. read() support. You name it.
      
       - Rich tooling support: even without any MCE specific extensions added
         the 'perf' tool today offers various views of MCE data: perf report,
         perf stat, perf trace can all be used to view logged MCE events and
         perhaps correlate them to certain user-space usage patterns. But it
         can be used directly as well, for user-space agents and policy action
         in mcelog, etc.
      
      With this we hope to achieve significant code cleanup and feature
      improvements in the MCE code, and we hope to be able to drop the
      /dev/mcelog facility in the end.
      
      This patch is just a plain dumb dump of mce_log() records to
      the tracepoints / perf events framework - a first proof of
      concept step.
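
      The tracepoint itself boils down to a TRACE_EVENT() definition along
      these lines (a trimmed sketch: the real event records the full
      struct mce, and the usual trace-header boilerplate is omitted):

      TRACE_EVENT(mce_record,

              TP_PROTO(struct mce *m),

              TP_ARGS(m),

              TP_STRUCT__entry(
                      __field(u64, status)
                      __field(u64, addr)
                      __field(u8,  bank)
                      __field(u8,  cpu)
              ),

              TP_fast_assign(
                      __entry->status = m->status;
                      __entry->addr   = m->addr;
                      __entry->bank   = m->bank;
                      __entry->cpu    = m->extcpu;
              ),

              TP_printk("CPU: %d, bank: %d, status: %llx, addr: %llx",
                        __entry->cpu, __entry->bank,
                        __entry->status, __entry->addr)
      );
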
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      LKML-Reference: <4AD42A0D.7050104@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  27. 12 October 2009, 1 commit
  28. 02 October 2009, 1 commit
    • x86: EDAC: MCE: Fix MCE decoding callback logic · f436f8bb
      Committed by Ingo Molnar
      Make decoding of MCEs happen only on AMD hardware by registering a
      non-default callback only on CPU families which support it.
      
      While looking at the interaction of decode_mce() with the other MCE
      code i also noticed a few other things and made the following
      cleanups/fixes:
      
       - Fixed the mce_decode() weak alias - a weak alias is really not
         good here, it should be a proper callback. A weak alias will be
         overridden if a piece of code is built into the kernel - not
         good, obviously.
      
       - The patch initializes the callback on AMD family 10h and 11h.
      
       - Added the more correct fallback printk of:
      
      	No support for human readable MCE decoding on this CPU type.
      	Transcribe the message and run it through 'mcelog --ascii' to decode.
      
         On CPUs that don't have a decoder.
      
       - Made the surrounding code more readable.
      
      Note that the callback allows us to have a default fallback -
      without having to check the CPU versions during the printout
      itself. When an EDAC module registers itself, it can install the
      decode-print function.
      
      (there's no unregister needed as this is core code.)
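
      The callback pattern being described is roughly the following sketch
      (identifier names are illustrative rather than the exact ones in the
      tree; amd_decode_mce stands for the decoder the EDAC module provides):

      /* Default fallback used when no EDAC decoder has registered. */
      static void default_decode_mce(struct mce *m)
      {
              pr_emerg("No support for human readable MCE decoding on this CPU type.\n");
              pr_emerg("Transcribe the message and run it through 'mcelog --ascii' to decode.\n");
      }

      /* The core MCE printout calls through this pointer. */
      void (*x86_mce_decode_callback)(struct mce *m) = default_decode_mce;

      extern void amd_decode_mce(struct mce *m);      /* from the EDAC module */

      static int __init mce_decoder_init(void)
      {
              if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
                      x86_mce_decode_callback = amd_decode_mce;
              return 0;
      }
      early_initcall(mce_decoder_init);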
      
      version -v2 by Borislav Petkov:
      
       - add K8 to the set of supported CPUs
      
       - always build in edac_mce_amd since we use an early_initcall now
      
       - fix checkpatch warnings
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      LKML-Reference: <20091001141432.GA11410@aftab>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  29. 30 September 2009, 1 commit
  30. 24 September 2009, 1 commit
    • x86: mce: Use safer ways to access MCE registers · 11868a2d
      Committed by Ingo Molnar
      Use rdmsrl_safe() when accessing MCE registers. While in
      theory we always 'know' which ones are safe to access from
      the capability bits, there are a lot of hardware variations
      and reality might differ from theory, as it did in this case:
      
         http://bugzilla.kernel.org/show_bug.cgi?id=14204
      
      [    0.010016] mce: CPU supports 5 MCE banks
      [    0.011029] general protection fault: 0000 [#1]
      [    0.011998] last sysfs file:
      [    0.011998] Modules linked in:
      [    0.011998]
      [    0.011998] Pid: 0, comm: swapper Not tainted (2.6.31_router #1) HP Vectra
      [    0.011998] EIP: 0060:[<c100d9b9>] EFLAGS: 00010246 CPU: 0
      [    0.011998] EIP is at mce_rdmsrl+0x19/0x60
      [    0.011998] EAX: 00000000 EBX: 00000001 ECX: 00000407 EDX: 08000000
      [    0.011998] ESI: 00000000 EDI: 8c000000 EBP: 00000405 ESP: c17d5eac
      
      So WARN_ONCE() instead of crashing the box.
      
      ( also fix a number of stylistic inconsistencies in the code. )
      
      Note, we might still crash in wrmsrl() if we get that far, but
      we shouldn't if the registers are truly inaccessible.
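
      The resulting read helper looks roughly like this (sketch; the
      upstream version may differ in detail):

      /* Read an MCE MSR with the faulting-safe accessor; warn once and
       * return 0 instead of taking a #GP and crashing the machine. */
      static u64 mce_rdmsrl(u32 msr)
      {
              u64 v;

              if (rdmsrl_safe(msr, &v)) {
                      WARN_ONCE(1, "mce: Unable to read MSR 0x%x!\n", msr);
                      v = 0;
              }

              return v;
      }
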
      Reported-by: GNUtoo <GNUtoo@no-log.org>
      Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      LKML-Reference: <bug-14204-5438@http.bugzilla.kernel.org/>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>