Commit e128b4f4 authored by Borislav Petkov, committed by Ingo Molnar

x86/mce/AMD: Save an indentation level in prepare_threshold_block()

Do the !SMCA work first and then save us an indentation level for the
SMCA code.

No functionality change.
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Yazen Ghannam <Yazen.Ghannam@amd.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Link: http://lkml.kernel.org/r/1462971509-3856-4-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent 32544f06
@@ -343,6 +343,7 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
 			int offset, u32 misc_high)
 {
 	unsigned int cpu = smp_processor_id();
+	u32 smca_low, smca_high, smca_addr;
 	struct threshold_block b;
 	int new;
 
@@ -361,52 +362,49 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
 	b.interrupt_enable = 1;
 
-	if (mce_flags.smca) {
-		u32 smca_low, smca_high;
-		u32 smca_addr = MSR_AMD64_SMCA_MCx_CONFIG(bank);
-
-		if (!rdmsr_safe(smca_addr, &smca_low, &smca_high)) {
-			/*
-			 * OS is required to set the MCAX bit to acknowledge
-			 * that it is now using the new MSR ranges and new
-			 * registers under each bank. It also means that the OS
-			 * will configure deferred errors in the new MCx_CONFIG
-			 * register. If the bit is not set, uncorrectable errors
-			 * will cause a system panic.
-			 *
-			 * MCA_CONFIG[MCAX] is bit 32 (0 in the high portion of
-			 * the MSR.)
-			 */
-			smca_high |= BIT(0);
+	if (!mce_flags.smca) {
+		new = (misc_high & MASK_LVTOFF_HI) >> 20;
+		goto set_offset;
+	}
 
-			/*
-			 * SMCA logs Deferred Error information in
-			 * MCA_DE{STAT,ADDR} registers with the option of
-			 * additionally logging to MCA_{STATUS,ADDR} if
-			 * MCA_CONFIG[LogDeferredInMcaStat] is set.
-			 *
-			 * This bit is usually set by BIOS to retain the old
-			 * behavior for OSes that don't use the new registers.
-			 * Linux supports the new registers so let's disable
-			 * that additional logging here.
-			 *
-			 * MCA_CONFIG[LogDeferredInMcaStat] is bit 34 (bit 2 in
-			 * the high portion of the MSR).
-			 */
-			smca_high &= ~BIT(2);
+	smca_addr = MSR_AMD64_SMCA_MCx_CONFIG(bank);
 
-			wrmsr(smca_addr, smca_low, smca_high);
-		}
+	if (!rdmsr_safe(smca_addr, &smca_low, &smca_high)) {
+		/*
+		 * OS is required to set the MCAX bit to acknowledge that it is
+		 * now using the new MSR ranges and new registers under each
+		 * bank. It also means that the OS will configure deferred
+		 * errors in the new MCx_CONFIG register. If the bit is not set,
+		 * uncorrectable errors will cause a system panic.
+		 *
+		 * MCA_CONFIG[MCAX] is bit 32 (0 in the high portion of the MSR.)
+		 */
+		smca_high |= BIT(0);
 
-		/* Gather LVT offset for thresholding: */
-		if (rdmsr_safe(MSR_CU_DEF_ERR, &smca_low, &smca_high))
-			goto out;
+		/*
+		 * SMCA logs Deferred Error information in MCA_DE{STAT,ADDR}
+		 * registers with the option of additionally logging to
+		 * MCA_{STATUS,ADDR} if MCA_CONFIG[LogDeferredInMcaStat] is set.
+		 *
+		 * This bit is usually set by BIOS to retain the old behavior
+		 * for OSes that don't use the new registers. Linux supports the
+		 * new registers so let's disable that additional logging here.
+		 *
+		 * MCA_CONFIG[LogDeferredInMcaStat] is bit 34 (bit 2 in the high
+		 * portion of the MSR).
+		 */
+		smca_high &= ~BIT(2);
 
-		new = (smca_low & SMCA_THR_LVT_OFF) >> 12;
-	} else {
-		new = (misc_high & MASK_LVTOFF_HI) >> 20;
-	}
+		wrmsr(smca_addr, smca_low, smca_high);
+	}
+
+	/* Gather LVT offset for thresholding: */
+	if (rdmsr_safe(MSR_CU_DEF_ERR, &smca_low, &smca_high))
+		goto out;
 
+	new = (smca_low & SMCA_THR_LVT_OFF) >> 12;
+
+set_offset:
 	offset = setup_APIC_mce_threshold(offset, new);
 	if ((offset == new) && (mce_threshold_vector != amd_threshold_interrupt))
......