Unverified commit 543f2481, authored by openeuler-ci-bot, committed by Gitee

!471 Intel: Support In Field Scan (IFS) multi-blob images

Merge Pull Request from: @allen-shi

IFS is a hardware feature that runs circuit-level tests on a CPU core to detect problems that are not caught by parity or ECC checks.
Intel In Field Scan (IFS) with a single-blob image was supported in [PR316](https://gitee.com/openeuler/kernel/pulls/316).
This PR adds Intel In Field Scan (IFS) support for multi-blob images; only V2 blob images are supported.

This PR includes 34 commits in total.
Commits 1-10, upstreamed before v6.2, are dependency commits:
https://lore.kernel.org/all/20220525161232.14924-1-bp@alien8.de/
https://lore.kernel.org/all/20220710140736.6492-1-hdegoede@redhat.com/
https://lore.kernel.org/all/20220727114948.30123-1-bp@alien8.de/
https://lore.kernel.org/all/20201202153244.709752-1-me@mathieu.digital/
https://lore.kernel.org/all/20220813223825.3164861-2-ashok.raj@intel.com/
https://lore.kernel.org/all/20220825075445.28171-1-bp@alien8.de/
https://lore.kernel.org/all/20220829181030.722891-1-ashok.raj@intel.com/
Commits 11-34, upstreamed in v6.2, add IFS multi-blob image support:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a70210f41566131f88d31583f96e36cb7f5d2ad0

 **Intel-Kernel Issue** 
[#I6L337](https://gitee.com/openeuler/intel-kernel/issues/I6L337)

 **Test** 
Built and ran the kernel successfully on openEuler 22.03 LTS SP1.
Tests pass on the SPR platform.

 **Known Issue** 
N/A

 **Default config change** 
N/A 
 
Link: https://gitee.com/openeuler/kernel/pulls/471

Reviewed-by: Jason Zeng <jason.zeng@intel.com> 
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com> 
What:		/sys/devices/virtual/misc/intel_ifs_<N>/run_test
Date:		Nov 16 2022
KernelVersion:	6.2
Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
Description:	Write <cpu#> to trigger IFS test for one online core.
		Note that the test is per core. The cpu# can be
		for any thread on the core. Running on one thread
		completes the test for the core containing that thread.
		Example: to test the core containing cpu5: echo 5 >
		/sys/devices/virtual/misc/intel_ifs_<N>/run_test

What:		/sys/devices/virtual/misc/intel_ifs_<N>/status
Date:		Nov 16 2022
KernelVersion:	6.2
Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
Description:	The status of the last test. It can be one of "pass", "fail"
		or "untested".

What:		/sys/devices/virtual/misc/intel_ifs_<N>/details
Date:		Nov 16 2022
KernelVersion:	6.2
Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
Description:	Additional information regarding the last test. The details file reports
		the hex value of the SCAN_STATUS MSR. Note that the error_code field
		may contain driver defined software code not defined in the Intel SDM.

What:		/sys/devices/virtual/misc/intel_ifs_<N>/image_version
Date:		Nov 16 2022
KernelVersion:	6.2
Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
Description:	Version (hexadecimal) of loaded IFS binary image. If no scan image
		is loaded reports "none".

What:		/sys/devices/virtual/misc/intel_ifs_<N>/current_batch
Date:		Nov 16 2022
KernelVersion:	6.2
Contact:	"Jithu Joseph" <jithu.joseph@intel.com>
Description:	Write a number less than or equal to 0xff to load an IFS test image.
		The number written is treated as the 2 digit suffix in the following file name:
		/lib/firmware/intel/ifs_<N>/ff-mm-ss-02x.scan
		Reading the file will provide the suffix of the currently loaded IFS test image.
@@ -1352,7 +1352,7 @@ ORed together. The letters are seen in "Tainted" line of Oops reports.

======  =====  ==============================================================
     1  `(P)`  proprietary module was loaded
     2  `(F)`  module was force loaded
     4  `(S)`  kernel running on an out of specification system
     8  `(R)`  module was force unloaded
    16  `(M)`  processor reported a Machine Check Exception (MCE)
    32  `(B)`  bad page referenced or some unexpected page flags
......
@@ -84,7 +84,7 @@ Bit Log Number Reason that got the kernel tainted

===  ===  ======  ========================================================
 0   G/P      1   proprietary module was loaded
 1   _/F      2   module was force loaded
 2   _/S      4   kernel running on an out of specification system
 3   _/R      8   module was force unloaded
 4   _/M     16   processor reported a Machine Check Exception (MCE)
 5   _/B     32   bad page referenced or some unexpected page flags
@@ -116,10 +116,29 @@ More detailed explanation for tainting

 1)  ``F`` if any module was force loaded by ``insmod -f``, ``' '`` if all
     modules were loaded normally.

 2)  ``S`` if the kernel is running on a processor or system that is out of
     specification: hardware has been put into an unsupported configuration,
     therefore proper execution cannot be guaranteed.
     The kernel will be tainted if, for example:

     - on x86: PAE is forced through forcepae on intel CPUs (such as Pentium M)
       which do not report PAE but may have a functional implementation, an SMP
       kernel is running on non officially capable SMP Athlon CPUs, MSRs are
       being poked at from userspace.
     - on arm: kernel running on certain CPUs (such as Keystone 2) without
       having certain kernel features enabled.
     - on arm64: there are mismatched hardware features between CPUs, the
       bootloader has booted CPUs in different modes.
     - certain drivers are being used on non supported architectures (such as
       scsi/snic on something else than x86_64, scsi/ips on non
       x86/x86_64/itanium, have broken firmware settings for the
       irqchip/irq-gic on arm64 ...).
     - x86/x86_64: microcode late loading is dangerous and will result in
       tainting the kernel. It requires that all CPUs rendezvous to make sure
       the update happens when the system is as quiescent as possible. However,
       a higher priority MCE/SMI/NMI can move control flow away from that
       rendezvous and interrupt the update, which can be detrimental to the
       machine.

 3)  ``R`` if a module was force unloaded by ``rmmod -f``, ``' '`` if all
     modules were unloaded normally.
......
@@ -6,6 +6,7 @@ The Linux Microcode Loader

:Authors: - Fenghua Yu <fenghua.yu@intel.com>
          - Borislav Petkov <bp@suse.de>
          - Ashok Raj <ashok.raj@intel.com>

The kernel has a x86 microcode loading facility which is supposed to
provide microcode loading methods in the OS. Potential use cases are

@@ -92,15 +93,8 @@ vendor's site.

Late loading
============

You simply install the microcode packages your distro supplies and
run::

	# echo 1 > /sys/devices/system/cpu/microcode/reload
@@ -110,6 +104,110 @@ The loading mechanism looks for microcode blobs in

/lib/firmware/{intel-ucode,amd-ucode}. The default distro installation
packages already put them there.
Since kernel 5.19, late loading is not enabled by default.
The /dev/cpu/microcode method has been removed in 5.19.
Why is late loading dangerous?
==============================
Synchronizing all CPUs
----------------------
The microcode engine which receives the microcode update is shared
between the two logical threads in a SMT system. Therefore, when
the update is executed on one SMT thread of the core, the sibling
"automatically" gets the update.
Since the microcode can "simulate" MSRs too, while the microcode update
is in progress, those simulated MSRs transiently cease to exist. This
can result in unpredictable results if the SMT sibling thread happens to
be in the middle of an access to such an MSR. The usual observation is
that such MSR accesses cause #GPs to be raised to signal that the former
are not present.
The disappearing MSRs are just one common issue which is being observed.
Any other instruction that's being patched and gets concurrently
executed by the other SMT sibling, can also result in similar,
unpredictable behavior.
To eliminate this case, a stop_machine()-based CPU synchronization was
introduced as a way to guarantee that all logical CPUs will not execute
any code but just wait in a spin loop, polling an atomic variable.
While this took care of device or external interrupts, IPIs including
LVT ones, such as CMCI etc, it cannot address other special interrupts
that can't be shut off. Those are Machine Check (#MC), System Management
(#SMI) and Non-Maskable interrupts (#NMI).
Machine Checks
--------------
Machine Checks (#MC) are non-maskable. There are two kinds of MCEs:
fatal, un-recoverable MCEs and recoverable MCEs. While un-recoverable
errors are fatal, recoverable errors that happen in kernel context
are also treated as fatal by the kernel.
On certain Intel machines, MCEs are also broadcast to all threads in a
system. If one thread is in the middle of executing WRMSR, an MCE will be
taken at the end of the flow. Either way, they will wait for the thread
performing the wrmsr(0x79) to rendezvous in the MCE handler and shutdown
eventually if any of the threads in the system fail to check in to the
MCE rendezvous.
To be paranoid and get predictable behavior, the OS can choose to set
MCG_STATUS.MCIP. Since at most one MCE can be in flight in a system, if
an MCE was signaled, the above condition will promote to a system reset
automatically. The OS can turn off MCIP at the end of the update for that
core.
System Management Interrupt
---------------------------
SMIs are also broadcast to all CPUs in the platform. A microcode update
requests exclusive access to the core before writing to MSR 0x79. So if
one thread is in the WRMSR flow and the second gets an SMI, that thread
will be stopped at the first instruction of the SMI handler.

Since the secondary thread is stopped at the first instruction of the SMI
handler, there is very little chance that it would be in the middle of
executing an instruction being patched. Moreover, the OS has no way to
stop SMIs from happening.
Non-Maskable Interrupts
-----------------------
When thread0 of a core is doing the microcode update, if thread1 is
pulled into NMI, that can cause unpredictable behavior due to the
reasons above.
OS can choose a variety of methods to avoid running into this situation.
Is the microcode suitable for late loading?
-------------------------------------------
Late loading is done when the system is fully operational and running
real workloads. Late loading behavior depends on what the base patch on
the CPU is before upgrading to the new patch.
This is true for Intel CPUs.
Consider, for example, that a CPU has patch level 1 and the update is to
patch level 3.

Between patch1 and patch3, patch2 might have deprecated a software-visible
feature.

This is unacceptable if software is even potentially using that feature.
For instance, if MSR_X is no longer available after an update, accessing
that MSR will cause a #GP fault.
Basically there is no way to declare a new microcode update suitable
for late-loading. This is another one of the problems that caused late
loading to be not enabled by default.
Builtin microcode Builtin microcode
================= =================
......
@@ -1336,17 +1336,16 @@ config MICROCODE_AMD

	  If you select this option, microcode patch loading support for AMD
	  processors will be enabled.

config MICROCODE_LATE_LOADING
	bool "Late microcode loading (DANGEROUS)"
	default n
	depends on MICROCODE
	help
	  Loading microcode late, when the system is up and executing instructions
	  is a tricky business and should be avoided if possible. Just the sequence
	  of synchronizing all cores and SMT threads is one fragile dance which does
	  not guarantee that cores might not softlock after the loading. Therefore,
	  use this at your own risk. Late loading taints the kernel too.

config X86_MSR
	tristate "/dev/cpu/*/msr - Model-specific register support"

......
@@ -85,4 +85,7 @@ static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,

	return p1 & p2;
}

int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);

#endif /* _ASM_X86_CPU_H */
@@ -32,11 +32,7 @@ enum ucode_state {
};

struct microcode_ops {
	enum ucode_state (*request_microcode_fw) (int cpu, struct device *);

	void (*microcode_fini_cpu) (int cpu);

@@ -52,7 +48,6 @@ struct microcode_ops {

struct ucode_cpu_info {
	struct cpu_signature	cpu_sig;
	void			*mc;
};
extern struct ucode_cpu_info ucode_cpu_info[];
......
@@ -14,7 +14,8 @@ struct microcode_header_intel {
	unsigned int	pf;
	unsigned int	datasize;
	unsigned int	totalsize;
	unsigned int	metasize;
	unsigned int	reserved[2];
};

struct microcode_intel {

@@ -41,6 +42,8 @@ struct extended_sigtable {
#define DEFAULT_UCODE_TOTALSIZE (DEFAULT_UCODE_DATASIZE + MC_HEADER_SIZE)
#define EXT_HEADER_SIZE		(sizeof(struct extended_sigtable))
#define EXT_SIGNATURE_SIZE	(sizeof(struct extended_signature))
#define MC_HEADER_TYPE_MICROCODE	1
#define MC_HEADER_TYPE_IFS		2

#define get_totalsize(mc) \
	(((struct microcode_intel *)mc)->hdr.datasize ? \

......
@@ -2145,6 +2145,7 @@ void cpu_init(void)
	load_fixmap_gdt(cpu);
}

#ifdef CONFIG_MICROCODE_LATE_LOADING
/*
 * The microcode loader calls this upon late microcode load to recheck features,
 * only when microcode has been updated. Caller holds microcode_mutex and CPU

@@ -2174,6 +2175,7 @@ void microcode_check(void)
	pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
	pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
}
#endif

/*
 * Invoked from core CPU hotplug code after hotplug operations

......
@@ -207,12 +207,154 @@ int intel_cpu_collect_info(struct ucode_cpu_info *uci)
	csig.rev = intel_get_microcode_revision();

	uci->cpu_sig = csig;

	return 0;
}
EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
/*
* Returns 1 if update has been found, 0 otherwise.
*/
int intel_find_matching_signature(void *mc, unsigned int csig, int cpf)
{
struct microcode_header_intel *mc_hdr = mc;
struct extended_sigtable *ext_hdr;
struct extended_signature *ext_sig;
int i;
if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
return 1;
/* Look for ext. headers: */
if (get_totalsize(mc_hdr) <= get_datasize(mc_hdr) + MC_HEADER_SIZE)
return 0;
ext_hdr = mc + get_datasize(mc_hdr) + MC_HEADER_SIZE;
ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
for (i = 0; i < ext_hdr->count; i++) {
if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
return 1;
ext_sig++;
}
return 0;
}
EXPORT_SYMBOL_GPL(intel_find_matching_signature);
/**
* intel_microcode_sanity_check() - Sanity check microcode file.
* @mc: Pointer to the microcode file contents.
* @print_err: Display failure reason if true, silent if false.
* @hdr_type: Type of file, i.e. normal microcode file or In Field Scan file.
* Validate if the microcode header type matches with the type
* specified here.
*
* Validate certain header fields and verify if computed checksum matches
* with the one specified in the header.
*
* Return: 0 if the file passes all the checks, -EINVAL if any of the checks
* fail.
*/
int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type)
{
unsigned long total_size, data_size, ext_table_size;
struct microcode_header_intel *mc_header = mc;
struct extended_sigtable *ext_header = NULL;
u32 sum, orig_sum, ext_sigcount = 0, i;
struct extended_signature *ext_sig;
total_size = get_totalsize(mc_header);
data_size = get_datasize(mc_header);
if (data_size + MC_HEADER_SIZE > total_size) {
if (print_err)
pr_err("Error: bad microcode data file size.\n");
return -EINVAL;
}
if (mc_header->ldrver != 1 || mc_header->hdrver != hdr_type) {
if (print_err)
pr_err("Error: invalid/unknown microcode update format. Header type %d\n",
mc_header->hdrver);
return -EINVAL;
}
ext_table_size = total_size - (MC_HEADER_SIZE + data_size);
if (ext_table_size) {
u32 ext_table_sum = 0;
u32 *ext_tablep;
if (ext_table_size < EXT_HEADER_SIZE ||
((ext_table_size - EXT_HEADER_SIZE) % EXT_SIGNATURE_SIZE)) {
if (print_err)
pr_err("Error: truncated extended signature table.\n");
return -EINVAL;
}
ext_header = mc + MC_HEADER_SIZE + data_size;
if (ext_table_size != exttable_size(ext_header)) {
if (print_err)
pr_err("Error: extended signature table size mismatch.\n");
return -EFAULT;
}
ext_sigcount = ext_header->count;
/*
* Check extended table checksum: the sum of all dwords that
* comprise a valid table must be 0.
*/
ext_tablep = (u32 *)ext_header;
i = ext_table_size / sizeof(u32);
while (i--)
ext_table_sum += ext_tablep[i];
if (ext_table_sum) {
if (print_err)
pr_warn("Bad extended signature table checksum, aborting.\n");
return -EINVAL;
}
}
/*
* Calculate the checksum of update data and header. The checksum of
* valid update data and header including the extended signature table
* must be 0.
*/
orig_sum = 0;
i = (MC_HEADER_SIZE + data_size) / sizeof(u32);
while (i--)
orig_sum += ((u32 *)mc)[i];
if (orig_sum) {
if (print_err)
pr_err("Bad microcode data checksum, aborting.\n");
return -EINVAL;
}
if (!ext_table_size)
return 0;
/*
* Check extended signature checksum: 0 => valid.
*/
for (i = 0; i < ext_sigcount; i++) {
ext_sig = (void *)ext_header + EXT_HEADER_SIZE +
EXT_SIGNATURE_SIZE * i;
sum = (mc_header->sig + mc_header->pf + mc_header->cksum) -
(ext_sig->sig + ext_sig->pf + ext_sig->cksum);
if (sum) {
if (print_err)
pr_err("Bad extended signature checksum, aborting.\n");
return -EINVAL;
}
}
return 0;
}
EXPORT_SYMBOL_GPL(intel_microcode_sanity_check);
static void early_init_intel(struct cpuinfo_x86 *c)
{
	u64 misc_enable;
......
@@ -885,8 +885,7 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
 *
 * These might be larger than 2K.
 */
static enum ucode_state request_microcode_amd(int cpu, struct device *device)
{
	char fw_name[36] = "amd-ucode/microcode_amd.bin";
	struct cpuinfo_x86 *c = &cpu_data(cpu);

@@ -895,7 +894,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device)
	const struct firmware *fw;

	/* reload ucode container only on the boot cpu */
	if (!bsp)
		return UCODE_OK;

	if (c->x86 >= 0x15)

@@ -919,12 +918,6 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device)
	return ret;
}

static void microcode_fini_cpu_amd(int cpu)
{
	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;

@@ -933,7 +926,6 @@ static void microcode_fini_cpu_amd(int cpu)
}

static struct microcode_ops microcode_amd_ops = {
	.request_microcode_fw	= request_microcode_amd,
	.collect_cpu_info	= collect_cpu_info_amd,
	.apply_microcode	= apply_microcode_amd,
......
@@ -336,155 +336,10 @@ void reload_early_microcode(void)
	}
}
static void collect_cpu_info_local(void *arg)
{
struct cpu_info_ctx *ctx = arg;
ctx->err = microcode_ops->collect_cpu_info(smp_processor_id(),
ctx->cpu_sig);
}
static int collect_cpu_info_on_target(int cpu, struct cpu_signature *cpu_sig)
{
struct cpu_info_ctx ctx = { .cpu_sig = cpu_sig, .err = 0 };
int ret;
ret = smp_call_function_single(cpu, collect_cpu_info_local, &ctx, 1);
if (!ret)
ret = ctx.err;
return ret;
}
static int collect_cpu_info(int cpu)
{
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
int ret;
memset(uci, 0, sizeof(*uci));
ret = collect_cpu_info_on_target(cpu, &uci->cpu_sig);
if (!ret)
uci->valid = 1;
return ret;
}
static void apply_microcode_local(void *arg)
{
enum ucode_state *err = arg;
*err = microcode_ops->apply_microcode(smp_processor_id());
}
static int apply_microcode_on_target(int cpu)
{
enum ucode_state err;
int ret;
ret = smp_call_function_single(cpu, apply_microcode_local, &err, 1);
if (!ret) {
if (err == UCODE_ERROR)
ret = 1;
}
return ret;
}
#ifdef CONFIG_MICROCODE_OLD_INTERFACE
static int do_microcode_update(const void __user *buf, size_t size)
{
int error = 0;
int cpu;
for_each_online_cpu(cpu) {
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
enum ucode_state ustate;
if (!uci->valid)
continue;
ustate = microcode_ops->request_microcode_user(cpu, buf, size);
if (ustate == UCODE_ERROR) {
error = -1;
break;
} else if (ustate == UCODE_NEW) {
apply_microcode_on_target(cpu);
}
}
return error;
}
static int microcode_open(struct inode *inode, struct file *file)
{
return capable(CAP_SYS_RAWIO) ? stream_open(inode, file) : -EPERM;
}
static ssize_t microcode_write(struct file *file, const char __user *buf,
size_t len, loff_t *ppos)
{
ssize_t ret = -EINVAL;
unsigned long nr_pages = totalram_pages();
if ((len >> PAGE_SHIFT) > nr_pages) {
pr_err("too much data (max %ld pages)\n", nr_pages);
return ret;
}
get_online_cpus();
mutex_lock(&microcode_mutex);
if (do_microcode_update(buf, len) == 0)
ret = (ssize_t)len;
if (ret > 0)
perf_check_microcode();
mutex_unlock(&microcode_mutex);
put_online_cpus();
return ret;
}
static const struct file_operations microcode_fops = {
.owner = THIS_MODULE,
.write = microcode_write,
.open = microcode_open,
.llseek = no_llseek,
};
static struct miscdevice microcode_dev = {
.minor = MICROCODE_MINOR,
.name = "microcode",
.nodename = "cpu/microcode",
.fops = &microcode_fops,
};
static int __init microcode_dev_init(void)
{
int error;
error = misc_register(&microcode_dev);
if (error) {
pr_err("can't misc_register on minor=%d\n", MICROCODE_MINOR);
return error;
}
return 0;
}
static void __exit microcode_dev_exit(void)
{
misc_deregister(&microcode_dev);
}
#else
#define microcode_dev_init() 0
#define microcode_dev_exit() do { } while (0)
#endif
/* fake device for request_firmware */
static struct platform_device	*microcode_pdev;

#ifdef CONFIG_MICROCODE_LATE_LOADING
/*
 * Late loading dance. Why the heavy-handed stomp_machine effort?
 *
@@ -566,7 +421,7 @@ static int __reload_late(void *info)
	 * below.
	 */
	if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu)
		err = microcode_ops->apply_microcode(cpu);
	else
		goto wait_for_siblings;

@@ -588,7 +443,7 @@ static int __reload_late(void *info)
	 * revision.
	 */
	if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
		err = microcode_ops->apply_microcode(cpu);

	return ret;
}
@@ -599,7 +454,10 @@ static int __reload_late(void *info)
 */
static int microcode_reload_late(void)
{
	int old = boot_cpu_data.microcode, ret;

	pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
	pr_err("You should switch to early loading, if possible.\n");

	atomic_set(&late_cpus_in,  0);
	atomic_set(&late_cpus_out, 0);

@@ -608,7 +466,8 @@ static int microcode_reload_late(void)
	if (ret == 0)
		microcode_check();

	pr_info("Reload completed, microcode revision: 0x%x -> 0x%x\n",
		old, boot_cpu_data.microcode);

	return ret;
}

@@ -635,7 +494,7 @@ static ssize_t reload_store(struct device *dev,
	if (ret)
		goto put;

	tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev);
	if (tmp_ret != UCODE_NEW)
		goto put;

@@ -649,9 +508,14 @@ static ssize_t reload_store(struct device *dev,
	if (ret == 0)
		ret = size;

	add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);

	return ret;
}

static DEVICE_ATTR_WO(reload);
#endif
static ssize_t version_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{

@@ -668,7 +532,6 @@ static ssize_t pf_show(struct device *dev,
	return sprintf(buf, "0x%x\n", uci->cpu_sig.pf);
}

static DEVICE_ATTR(version, 0444, version_show, NULL);
static DEVICE_ATTR(processor_flags, 0444, pf_show, NULL);

@@ -689,91 +552,17 @@ static void microcode_fini_cpu(int cpu)
	microcode_ops->microcode_fini_cpu(cpu);
}
-static enum ucode_state microcode_resume_cpu(int cpu)
-{
-	if (apply_microcode_on_target(cpu))
-		return UCODE_ERROR;
-
-	pr_debug("CPU%d updated upon resume\n", cpu);
-
-	return UCODE_OK;
-}
-
-static enum ucode_state microcode_init_cpu(int cpu, bool refresh_fw)
-{
-	enum ucode_state ustate;
-	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-
-	if (uci->valid)
-		return UCODE_OK;
-
-	if (collect_cpu_info(cpu))
-		return UCODE_ERROR;
-
-	/* --dimm. Trigger a delayed update? */
-	if (system_state != SYSTEM_RUNNING)
-		return UCODE_NFOUND;
-
-	ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, refresh_fw);
-	if (ustate == UCODE_NEW) {
-		pr_debug("CPU%d updated upon init\n", cpu);
-		apply_microcode_on_target(cpu);
-	}
-
-	return ustate;
-}
-
-static enum ucode_state microcode_update_cpu(int cpu)
-{
-	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-
-	/* Refresh CPU microcode revision after resume. */
-	collect_cpu_info(cpu);
-
-	if (uci->valid)
-		return microcode_resume_cpu(cpu);
-
-	return microcode_init_cpu(cpu, false);
-}
-
-static int mc_device_add(struct device *dev, struct subsys_interface *sif)
-{
-	int err, cpu = dev->id;
-
-	if (!cpu_online(cpu))
-		return 0;
-
-	pr_debug("CPU%d added\n", cpu);
-
-	err = sysfs_create_group(&dev->kobj, &mc_attr_group);
-	if (err)
-		return err;
-
-	if (microcode_init_cpu(cpu, true) == UCODE_ERROR)
-		return -EINVAL;
-
-	return err;
-}
-
-static void mc_device_remove(struct device *dev, struct subsys_interface *sif)
-{
-	int cpu = dev->id;
-
-	if (!cpu_online(cpu))
-		return;
-
-	pr_debug("CPU%d removed\n", cpu);
-	microcode_fini_cpu(cpu);
-	sysfs_remove_group(&dev->kobj, &mc_attr_group);
-}
-
-static struct subsys_interface mc_cpu_interface = {
-	.name			= "microcode",
-	.subsys			= &cpu_subsys,
-	.add_dev		= mc_device_add,
-	.remove_dev		= mc_device_remove,
-};
-
+static enum ucode_state microcode_init_cpu(int cpu)
+{
+	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+
+	memset(uci, 0, sizeof(*uci));
+
+	microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
+
+	return microcode_ops->apply_microcode(cpu);
+}
/**
 * microcode_bsp_resume - Update boot CPU microcode during resume.
 */
@@ -782,21 +571,23 @@ void microcode_bsp_resume(void)
	int cpu = smp_processor_id();
	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;

-	if (uci->valid && uci->mc)
+	if (uci->mc)
		microcode_ops->apply_microcode(cpu);
-	else if (!uci->mc)
+	else
		reload_early_microcode();
}

static struct syscore_ops mc_syscore_ops = {
	.resume	= microcode_bsp_resume,
};

static int mc_cpu_starting(unsigned int cpu)
{
-	microcode_update_cpu(cpu);
-	pr_debug("CPU%d added\n", cpu);
-	return 0;
+	enum ucode_state err = microcode_ops->apply_microcode(cpu);
+
+	pr_debug("%s: CPU%d, err: %d\n", __func__, cpu, err);
+
+	return err == UCODE_ERROR;
}
static int mc_cpu_online(unsigned int cpu)

@@ -813,15 +604,34 @@ static int mc_cpu_down_prep(unsigned int cpu)
	struct device *dev;

	dev = get_cpu_device(cpu);
-	microcode_fini_cpu(cpu);

	/* Suspend is in progress, only remove the interface */
	sysfs_remove_group(&dev->kobj, &mc_attr_group);
-	pr_debug("CPU%d removed\n", cpu);
+	pr_debug("%s: CPU%d\n", __func__, cpu);

	return 0;
}
+static void setup_online_cpu(struct work_struct *work)
+{
+	int cpu = smp_processor_id();
+	enum ucode_state err;
+
+	err = microcode_init_cpu(cpu);
+	if (err == UCODE_ERROR) {
+		pr_err("Error applying microcode on CPU%d\n", cpu);
+		return;
+	}
+
+	mc_cpu_online(cpu);
+}
+
static struct attribute *cpu_root_microcode_attrs[] = {
+#ifdef CONFIG_MICROCODE_LATE_LOADING
	&dev_attr_reload.attr,
+#endif
	NULL
};
@@ -848,34 +658,18 @@ int __init microcode_init(void)
	if (!microcode_ops)
		return -ENODEV;

-	microcode_pdev = platform_device_register_simple("microcode", -1,
-							 NULL, 0);
+	microcode_pdev = platform_device_register_simple("microcode", -1, NULL, 0);
	if (IS_ERR(microcode_pdev))
		return PTR_ERR(microcode_pdev);

-	get_online_cpus();
-	mutex_lock(&microcode_mutex);
-	error = subsys_interface_register(&mc_cpu_interface);
-	if (!error)
-		perf_check_microcode();
-	mutex_unlock(&microcode_mutex);
-	put_online_cpus();
-
-	if (error)
-		goto out_pdev;
-
-	error = sysfs_create_group(&cpu_subsys.dev_root->kobj,
-				   &cpu_root_microcode_group);
+	error = sysfs_create_group(&cpu_subsys.dev_root->kobj, &cpu_root_microcode_group);
	if (error) {
		pr_err("Error creating microcode group!\n");
-		goto out_driver;
+		goto out_pdev;
	}

-	error = microcode_dev_init();
-	if (error)
-		goto out_ucode_group;
+	/* Do per-CPU setup */
+	schedule_on_each_cpu(setup_online_cpu);

	register_syscore_ops(&mc_syscore_ops);
	cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:starting",
@@ -887,19 +681,6 @@ int __init microcode_init(void)
	return 0;

- out_ucode_group:
-	sysfs_remove_group(&cpu_subsys.dev_root->kobj,
-			   &cpu_root_microcode_group);
-
- out_driver:
-	get_online_cpus();
-	mutex_lock(&microcode_mutex);
-	subsys_interface_unregister(&mc_cpu_interface);
-	mutex_unlock(&microcode_mutex);
-	put_online_cpus();
-
 out_pdev:
	platform_device_unregister(microcode_pdev);
	return error;
......
@@ -45,34 +45,6 @@ static struct microcode_intel *intel_ucode_patch;

/* last level cache size per core */
static int llc_size_per_core;
/*
* Returns 1 if update has been found, 0 otherwise.
*/
static int find_matching_signature(void *mc, unsigned int csig, int cpf)
{
struct microcode_header_intel *mc_hdr = mc;
struct extended_sigtable *ext_hdr;
struct extended_signature *ext_sig;
int i;
if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
return 1;
/* Look for ext. headers: */
if (get_totalsize(mc_hdr) <= get_datasize(mc_hdr) + MC_HEADER_SIZE)
return 0;
ext_hdr = mc + get_datasize(mc_hdr) + MC_HEADER_SIZE;
ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
for (i = 0; i < ext_hdr->count; i++) {
if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
return 1;
ext_sig++;
}
return 0;
}
/*
 * Returns 1 if update has been found, 0 otherwise.
 */
@@ -83,7 +55,7 @@ static int has_newer_microcode(void *mc, unsigned int csig, int cpf, int new_rev)
	if (mc_hdr->rev <= new_rev)
		return 0;

-	return find_matching_signature(mc, csig, cpf);
+	return intel_find_matching_signature(mc, csig, cpf);
}

static struct ucode_patch *memdup_patch(void *data, unsigned int size)
@@ -117,7 +89,7 @@ static void save_microcode_patch(struct ucode_cpu_info *uci, void *data, unsigned int size)
		sig = mc_saved_hdr->sig;
		pf = mc_saved_hdr->pf;

-		if (find_matching_signature(data, sig, pf)) {
+		if (intel_find_matching_signature(data, sig, pf)) {
			prev_found = true;

			if (mc_hdr->rev <= mc_saved_hdr->rev)
@@ -149,7 +121,7 @@ static void save_microcode_patch(struct ucode_cpu_info *uci, void *data, unsigned int size)
	if (!p)
		return;

-	if (!find_matching_signature(p->data, uci->cpu_sig.sig, uci->cpu_sig.pf))
+	if (!intel_find_matching_signature(p->data, uci->cpu_sig.sig, uci->cpu_sig.pf))
		return;

	/*
@@ -163,104 +135,6 @@ static void save_microcode_patch(struct ucode_cpu_info *uci, void *data, unsigned int size)
	intel_ucode_patch = p->data;
}
static int microcode_sanity_check(void *mc, int print_err)
{
unsigned long total_size, data_size, ext_table_size;
struct microcode_header_intel *mc_header = mc;
struct extended_sigtable *ext_header = NULL;
u32 sum, orig_sum, ext_sigcount = 0, i;
struct extended_signature *ext_sig;
total_size = get_totalsize(mc_header);
data_size = get_datasize(mc_header);
if (data_size + MC_HEADER_SIZE > total_size) {
if (print_err)
pr_err("Error: bad microcode data file size.\n");
return -EINVAL;
}
if (mc_header->ldrver != 1 || mc_header->hdrver != 1) {
if (print_err)
pr_err("Error: invalid/unknown microcode update format.\n");
return -EINVAL;
}
ext_table_size = total_size - (MC_HEADER_SIZE + data_size);
if (ext_table_size) {
u32 ext_table_sum = 0;
u32 *ext_tablep;
if ((ext_table_size < EXT_HEADER_SIZE)
|| ((ext_table_size - EXT_HEADER_SIZE) % EXT_SIGNATURE_SIZE)) {
if (print_err)
pr_err("Error: truncated extended signature table.\n");
return -EINVAL;
}
ext_header = mc + MC_HEADER_SIZE + data_size;
if (ext_table_size != exttable_size(ext_header)) {
if (print_err)
pr_err("Error: extended signature table size mismatch.\n");
return -EFAULT;
}
ext_sigcount = ext_header->count;
/*
* Check extended table checksum: the sum of all dwords that
* comprise a valid table must be 0.
*/
ext_tablep = (u32 *)ext_header;
i = ext_table_size / sizeof(u32);
while (i--)
ext_table_sum += ext_tablep[i];
if (ext_table_sum) {
if (print_err)
pr_warn("Bad extended signature table checksum, aborting.\n");
return -EINVAL;
}
}
/*
* Calculate the checksum of update data and header. The checksum of
* valid update data and header including the extended signature table
* must be 0.
*/
orig_sum = 0;
i = (MC_HEADER_SIZE + data_size) / sizeof(u32);
while (i--)
orig_sum += ((u32 *)mc)[i];
if (orig_sum) {
if (print_err)
pr_err("Bad microcode data checksum, aborting.\n");
return -EINVAL;
}
if (!ext_table_size)
return 0;
/*
* Check extended signature checksum: 0 => valid.
*/
for (i = 0; i < ext_sigcount; i++) {
ext_sig = (void *)ext_header + EXT_HEADER_SIZE +
EXT_SIGNATURE_SIZE * i;
sum = (mc_header->sig + mc_header->pf + mc_header->cksum) -
(ext_sig->sig + ext_sig->pf + ext_sig->cksum);
if (sum) {
if (print_err)
pr_err("Bad extended signature checksum, aborting.\n");
return -EINVAL;
}
}
return 0;
}
/*
 * Get microcode matching with BSP's model. Only CPUs with the same model as
 * BSP can stay in the platform.
@@ -281,13 +155,13 @@ scan_microcode(void *data, size_t size, struct ucode_cpu_info *uci, bool save)
		mc_size = get_totalsize(mc_header);
		if (!mc_size ||
		    mc_size > size ||
-		    microcode_sanity_check(data, 0) < 0)
+		    intel_microcode_sanity_check(data, false, MC_HEADER_TYPE_MICROCODE) < 0)
			break;

		size -= mc_size;

-		if (!find_matching_signature(data, uci->cpu_sig.sig,
-					     uci->cpu_sig.pf)) {
+		if (!intel_find_matching_signature(data, uci->cpu_sig.sig,
+						   uci->cpu_sig.pf)) {
			data += mc_size;
			continue;
		}
@@ -614,7 +488,6 @@ void load_ucode_intel_ap(void)
	else
		iup = &intel_ucode_patch;

-reget:
	if (!*iup) {
		patch = __load_ucode_intel(&uci);
		if (!patch)
@@ -625,12 +498,7 @@ void load_ucode_intel_ap(void)
	uci.mc = *iup;

-	if (apply_microcode_early(&uci, true)) {
-		/* Mixed-silicon system? Try to refetch the proper patch: */
-		*iup = NULL;
-		goto reget;
-	}
+	apply_microcode_early(&uci, true);
}
static struct microcode_intel *find_patch(struct ucode_cpu_info *uci)
@@ -645,9 +513,9 @@ static struct microcode_intel *find_patch(struct ucode_cpu_info *uci)
		if (phdr->rev <= uci->cpu_sig.rev)
			continue;

-		if (!find_matching_signature(phdr,
-					     uci->cpu_sig.sig,
-					     uci->cpu_sig.pf))
+		if (!intel_find_matching_signature(phdr,
+						   uci->cpu_sig.sig,
+						   uci->cpu_sig.pf))
			continue;

		return iter->data;
@@ -673,7 +541,6 @@ void reload_ucode_intel(void)

static int collect_cpu_info(int cpu_num, struct cpu_signature *csig)
{
-	static struct cpu_signature prev;
	struct cpuinfo_x86 *c = &cpu_data(cpu_num);
	unsigned int val[2];

@@ -689,13 +556,6 @@ static int collect_cpu_info(int cpu_num, struct cpu_signature *csig)
	csig->rev = c->microcode;

-	/* No extra locking on prev, races are harmless. */
-	if (csig->sig != prev.sig || csig->pf != prev.pf || csig->rev != prev.rev) {
-		pr_info("sig=0x%x, pf=0x%x, revision=0x%x\n",
-			csig->sig, csig->pf, csig->rev);
-		prev = *csig;
-	}
-
	return 0;
}
@@ -813,7 +673,7 @@ static enum ucode_state generic_load_microcode(int cpu, struct iov_iter *iter)
		memcpy(mc, &mc_header, sizeof(mc_header));
		data = mc + sizeof(mc_header);
		if (!copy_from_iter_full(data, data_size, iter) ||
-		    microcode_sanity_check(mc, 1) < 0) {
+		    intel_microcode_sanity_check(mc, true, MC_HEADER_TYPE_MICROCODE) < 0) {
			break;
		}

@@ -878,8 +738,7 @@ static bool is_blacklisted(unsigned int cpu)
	return false;
}
-static enum ucode_state request_microcode_fw(int cpu, struct device *device,
-					     bool refresh_fw)
+static enum ucode_state request_microcode_fw(int cpu, struct device *device)
{
	struct cpuinfo_x86 *c = &cpu_data(cpu);
	const struct firmware *firmware;
@@ -909,24 +768,7 @@ static enum ucode_state request_microcode_fw(int cpu, struct device *device)
	return ret;
}

-static enum ucode_state
-request_microcode_user(int cpu, const void __user *buf, size_t size)
-{
-	struct iov_iter iter;
-	struct iovec iov;
-
-	if (is_blacklisted(cpu))
-		return UCODE_NFOUND;
-
-	iov.iov_base = (void __user *)buf;
-	iov.iov_len = size;
-	iov_iter_init(&iter, WRITE, &iov, 1, size);
-
-	return generic_load_microcode(cpu, &iter);
-}
-
static struct microcode_ops microcode_intel_ops = {
-	.request_microcode_user		  = request_microcode_user,
	.request_microcode_fw             = request_microcode_fw,
	.collect_cpu_info                 = collect_cpu_info,
	.apply_microcode                  = apply_microcode_intel,
......
config INTEL_IFS
	tristate "Intel In Field Scan"
	depends on X86 && CPU_SUP_INTEL && 64BIT && SMP
-	select INTEL_IFS_DEVICE
	help
	  Enable support for the In Field Scan capability in select
	  CPUs. The capability allows for running low level tests via
......
@@ -4,6 +4,7 @@
#include <linux/module.h>
#include <linux/kdev_t.h>
#include <linux/semaphore.h>
+#include <linux/slab.h>

#include <asm/cpu_device_id.h>

@@ -22,6 +23,7 @@ MODULE_DEVICE_TABLE(x86cpu, ifs_cpu_ids);
static struct ifs_device ifs_device = {
	.data = {
		.integrity_cap_bit = MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT,
+		.test_num = 0,
	},
	.misc = {
		.name = "intel_ifs_0",
@@ -34,6 +36,7 @@ static int __init ifs_init(void)
{
	const struct x86_cpu_id *m;
	u64 msrval;
+	int ret;

	m = x86_match_cpu(ifs_cpu_ids);
	if (!m)
@@ -50,20 +53,26 @@ static int __init ifs_init(void)

	ifs_device.misc.groups = ifs_get_groups();

-	if ((msrval & BIT(ifs_device.data.integrity_cap_bit)) &&
-	    !misc_register(&ifs_device.misc)) {
-		down(&ifs_sem);
-		ifs_load_firmware(ifs_device.misc.this_device);
-		up(&ifs_sem);
-		return 0;
+	if (!(msrval & BIT(ifs_device.data.integrity_cap_bit)))
+		return -ENODEV;
+
+	ifs_device.data.pkg_auth = kmalloc_array(topology_max_packages(), sizeof(bool), GFP_KERNEL);
+	if (!ifs_device.data.pkg_auth)
+		return -ENOMEM;
+
+	ret = misc_register(&ifs_device.misc);
+	if (ret) {
+		kfree(ifs_device.data.pkg_auth);
+		return ret;
	}

-	return -ENODEV;
+	return 0;
}

static void __exit ifs_exit(void)
{
	misc_deregister(&ifs_device.misc);
+	kfree(ifs_device.data.pkg_auth);
}

module_init(ifs_init);
......
@@ -33,13 +33,23 @@
 * The driver loads the tests into BIOS-reserved memory local to each CPU
 * socket in a two step process using writes to MSRs to first load the
 * SHA hashes for the test. Then the tests themselves. Status MSRs provide
- * feedback on the success/failure of these steps. When a new test file
- * is installed it can be loaded by writing to the driver reload file::
+ * feedback on the success/failure of these steps.
 *
- *	# echo 1 > /sys/devices/virtual/misc/intel_ifs_0/reload
+ * The test files are kept in a fixed location: /lib/firmware/intel/ifs_0/
+ * For example, if there are 3 test files, they would be named in the
+ * following fashion:
+ * ff-mm-ss-01.scan
+ * ff-mm-ss-02.scan
+ * ff-mm-ss-03.scan
+ * (where ff refers to family, mm indicates model and ss indicates stepping)
 *
- * Similar to microcode, the current version of the scan tests is stored
- * in a fixed location: /lib/firmware/intel/ifs.0/family-model-stepping.scan
+ * A different test file can be loaded by writing the numerical portion
+ * (e.g. 1, 2 or 3 in the above scenario) into the current_batch file.
+ * To load ff-mm-ss-02.scan, the following command can be used::
+ *
+ *	# echo 2 > /sys/devices/virtual/misc/intel_ifs_0/current_batch
+ *
+ * The above file can also be read to know the currently loaded image.
 *
 * Running tests
 * -------------
@@ -191,20 +201,26 @@ union ifs_status {
 * struct ifs_data - attributes related to intel IFS driver
 * @integrity_cap_bit: MSR_INTEGRITY_CAPS bit enumerating this test
 * @loaded_version: stores the currently loaded ifs image version.
+ * @pkg_auth: array of bool storing per package auth status
 * @loaded: If a valid test binary has been loaded into the memory
 * @loading_error: Error occurred on another CPU while loading image
 * @valid_chunks: number of chunks which could be validated.
 * @status: it holds simple status pass/fail/untested
 * @scan_details: opaque scan status code from h/w
+ * @cur_batch: number indicating the currently loaded test file
+ * @test_num: number indicating the test type
 */
struct ifs_data {
	int	integrity_cap_bit;
+	bool	*pkg_auth;
	int	loaded_version;
	bool	loaded;
	bool	loading_error;
	int	valid_chunks;
	int	status;
	u64	scan_details;
+	u32	cur_batch;
+	int	test_num;
};
struct ifs_work {
@@ -225,10 +241,8 @@ static inline struct ifs_data *ifs_get_data(struct device *dev)
	return &d->data;
}

-void ifs_load_firmware(struct device *dev);
+int ifs_load_firmware(struct device *dev);
int do_core_test(int cpu, struct device *dev);
const struct attribute_group **ifs_get_groups(void);

-extern struct semaphore ifs_sem;
-
#endif
@@ -3,27 +3,30 @@

#include <linux/firmware.h>
#include <asm/cpu.h>
+#include <linux/slab.h>
#include <asm/microcode_intel.h>

#include "ifs.h"

-struct ifs_header {
-	u32 header_ver;
-	u32 blob_revision;
-	u32 date;
-	u32 processor_sig;
-	u32 check_sum;
-	u32 loader_rev;
-	u32 processor_flags;
-	u32 metadata_size;
-	u32 total_size;
-	u32 fusa_info;
-	u64 reserved;
+#define IFS_CHUNK_ALIGNMENT	256
+union meta_data {
+	struct {
+		u32 meta_type;		// metadata type
+		u32 meta_size;		// size of this entire struct including hdrs.
+		u32 test_type;		// IFS test type
+		u32 fusa_info;		// Fusa info
+		u32 total_images;	// Total number of images
+		u32 current_image;	// Current Image #
+		u32 total_chunks;	// Total number of chunks in this image
+		u32 starting_chunk;	// Starting chunk number in this image
+		u32 size_per_chunk;	// size of each chunk
+		u32 chunks_per_stride;	// number of chunks in a stride
+	};
+	u8 padding[IFS_CHUNK_ALIGNMENT];
};

-#define IFS_HEADER_SIZE	(sizeof(struct ifs_header))
-static struct ifs_header *ifs_header_ptr;	/* pointer to the ifs image header */
+#define IFS_HEADER_SIZE	(sizeof(struct microcode_header_intel))
+#define META_TYPE_IFS	1
+static struct microcode_header_intel *ifs_header_ptr;	/* pointer to the ifs image header */
static u64 ifs_hash_ptr;			/* Address of ifs metadata (hash) */
static u64 ifs_test_image_ptr;			/* 256B aligned address of test pattern */

static DECLARE_COMPLETION(ifs_done);
@@ -44,6 +47,38 @@ static const char * const scan_authentication_status[] = {
	[2] = "Chunk authentication error. The hash of chunk did not match expected value"
};
#define MC_HEADER_META_TYPE_END		(0)

struct metadata_header {
	unsigned int		type;
	unsigned int		blk_size;
};

static struct metadata_header *find_meta_data(void *ucode, unsigned int meta_type)
{
	struct metadata_header *meta_header;
	unsigned long data_size, total_meta;
	unsigned long meta_size = 0;

	data_size = get_datasize(ucode);
	total_meta = ((struct microcode_intel *)ucode)->hdr.metasize;
	if (!total_meta)
		return NULL;

	meta_header = (ucode + MC_HEADER_SIZE + data_size) - total_meta;

	while (meta_header->type != MC_HEADER_META_TYPE_END &&
	       meta_header->blk_size &&
	       meta_size < total_meta) {
		meta_size += meta_header->blk_size;
		if (meta_header->type == meta_type)
			return meta_header;

		meta_header = (void *)meta_header + meta_header->blk_size;
	}
	return NULL;
}
/*
 * To copy scan hashes and authenticate test chunks, the initiating cpu must
 * point EDX:EAX to the test image in linear address.
@@ -111,6 +146,41 @@ static void copy_hashes_authenticate_chunks(struct work_struct *work)
	complete(&ifs_done);
}
static int validate_ifs_metadata(struct device *dev)
{
	struct ifs_data *ifsd = ifs_get_data(dev);
	union meta_data *ifs_meta;
	char test_file[64];
	int ret = -EINVAL;

	snprintf(test_file, sizeof(test_file), "%02x-%02x-%02x-%02x.scan",
		 boot_cpu_data.x86, boot_cpu_data.x86_model,
		 boot_cpu_data.x86_stepping, ifsd->cur_batch);

	ifs_meta = (union meta_data *)find_meta_data(ifs_header_ptr, META_TYPE_IFS);
	if (!ifs_meta) {
		dev_err(dev, "IFS Metadata missing in file %s\n", test_file);
		return ret;
	}

	ifs_test_image_ptr = (u64)ifs_meta + sizeof(union meta_data);

	/* Scan chunk start must be 256 byte aligned */
	if (!IS_ALIGNED(ifs_test_image_ptr, IFS_CHUNK_ALIGNMENT)) {
		dev_err(dev, "Scan pattern is not aligned on %d bytes in %s\n",
			IFS_CHUNK_ALIGNMENT, test_file);
		return ret;
	}

	if (ifs_meta->current_image != ifsd->cur_batch) {
		dev_warn(dev, "Mismatch between filename %s and batch metadata 0x%02x\n",
			 test_file, ifs_meta->current_image);
		return ret;
	}

	return 0;
}
/*
 * IFS requires scan chunks authenticated per each socket in the platform.
 * Once the test chunk is authenticated, it is automatically copied to secured memory
@@ -118,132 +188,83 @@ static void copy_hashes_authenticate_chunks(struct work_struct *work)
 */
static int scan_chunks_sanity_check(struct device *dev)
{
-	int metadata_size, curr_pkg, cpu, ret = -ENOMEM;
	struct ifs_data *ifsd = ifs_get_data(dev);
-	bool *package_authenticated;
	struct ifs_work local_work;
-	char *test_ptr;
+	int curr_pkg, cpu, ret;

-	package_authenticated = kcalloc(topology_max_packages(), sizeof(bool), GFP_KERNEL);
-	if (!package_authenticated)
+	memset(ifsd->pkg_auth, 0, (topology_max_packages() * sizeof(bool)));
+	ret = validate_ifs_metadata(dev);
+	if (ret)
		return ret;

-	metadata_size = ifs_header_ptr->metadata_size;
-
-	/* Spec says that if the Meta Data Size = 0 then it should be treated as 2000 */
-	if (metadata_size == 0)
-		metadata_size = 2000;
-
-	/* Scan chunk start must be 256 byte aligned */
-	if ((metadata_size + IFS_HEADER_SIZE) % 256) {
-		dev_err(dev, "Scan pattern offset within the binary is not 256 byte aligned\n");
-		return -EINVAL;
-	}
-
-	test_ptr = (char *)ifs_header_ptr + IFS_HEADER_SIZE + metadata_size;
	ifsd->loading_error = false;
-
-	ifs_test_image_ptr = (u64)test_ptr;
-	ifsd->loaded_version = ifs_header_ptr->blob_revision;
+	ifsd->loaded_version = ifs_header_ptr->rev;

	/* copy the scan hash and authenticate per package */
	cpus_read_lock();
	for_each_online_cpu(cpu) {
		curr_pkg = topology_physical_package_id(cpu);
-		if (package_authenticated[curr_pkg])
+		if (ifsd->pkg_auth[curr_pkg])
			continue;
		reinit_completion(&ifs_done);
		local_work.dev = dev;
		INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks);
		schedule_work_on(cpu, &local_work.w);
		wait_for_completion(&ifs_done);
-		if (ifsd->loading_error)
+		if (ifsd->loading_error) {
+			ret = -EIO;
			goto out;
-		package_authenticated[curr_pkg] = 1;
+		}
+		ifsd->pkg_auth[curr_pkg] = 1;
	}
	ret = 0;
out:
	cpus_read_unlock();
-	kfree(package_authenticated);
	return ret;
}
-static int ifs_sanity_check(struct device *dev,
-			    const struct microcode_header_intel *mc_header)
+static int image_sanity_check(struct device *dev, const struct microcode_header_intel *data)
{
-	unsigned long total_size, data_size;
-	u32 sum, *mc;
-	int i;
-
-	total_size = get_totalsize(mc_header);
-	data_size = get_datasize(mc_header);
+	struct ucode_cpu_info uci;

-	if ((data_size + MC_HEADER_SIZE > total_size) || (total_size % sizeof(u32))) {
-		dev_err(dev, "bad ifs data file size.\n");
+	/* Provide a specific error message when loading an older/unsupported image */
+	if (data->hdrver != MC_HEADER_TYPE_IFS) {
+		dev_err(dev, "Header version %d not supported\n", data->hdrver);
		return -EINVAL;
	}

-	if (mc_header->ldrver != 1 || mc_header->hdrver != 1) {
-		dev_err(dev, "invalid/unknown ifs update format.\n");
+	if (intel_microcode_sanity_check((void *)data, true, MC_HEADER_TYPE_IFS)) {
+		dev_err(dev, "sanity check failed\n");
		return -EINVAL;
	}

-	mc = (u32 *)mc_header;
-	sum = 0;
-	for (i = 0; i < total_size / sizeof(u32); i++)
-		sum += mc[i];
+	intel_cpu_collect_info(&uci);

-	if (sum) {
-		dev_err(dev, "bad ifs data checksum, aborting.\n");
+	if (!intel_find_matching_signature((void *)data,
+					   uci.cpu_sig.sig,
+					   uci.cpu_sig.pf)) {
+		dev_err(dev, "cpu signature, processor flags not matching\n");
		return -EINVAL;
	}

	return 0;
}
static bool find_ifs_matching_signature(struct device *dev, struct ucode_cpu_info *uci,
const struct microcode_header_intel *shdr)
{
unsigned int mc_size;
mc_size = get_totalsize(shdr);
if (!mc_size || ifs_sanity_check(dev, shdr) < 0) {
dev_err(dev, "ifs sanity check failure\n");
return false;
}
if (!intel_cpu_signatures_match(uci->cpu_sig.sig, uci->cpu_sig.pf, shdr->sig, shdr->pf)) {
dev_err(dev, "ifs signature, pf not matching\n");
return false;
}
return true;
}
static bool ifs_image_sanity_check(struct device *dev, const struct microcode_header_intel *data)
{
struct ucode_cpu_info uci;
intel_cpu_collect_info(&uci);
return find_ifs_matching_signature(dev, &uci, data);
}
/*
 * Load ifs image. Before loading ifs module, the ifs image must be located
- * in /lib/firmware/intel/ifs and named as {family/model/stepping}.{testname}.
+ * in /lib/firmware/intel/ifs_x/ and named as family-model-stepping-02x.{testname}.
 */
-void ifs_load_firmware(struct device *dev)
+int ifs_load_firmware(struct device *dev)
{
	struct ifs_data *ifsd = ifs_get_data(dev);
	const struct firmware *fw;
-	char scan_path[32];
-	int ret;
+	char scan_path[64];
+	int ret = -EINVAL;

-	snprintf(scan_path, sizeof(scan_path), "intel/ifs/%02x-%02x-%02x.scan",
-		 boot_cpu_data.x86, boot_cpu_data.x86_model, boot_cpu_data.x86_stepping);
+	snprintf(scan_path, sizeof(scan_path), "intel/ifs_%d/%02x-%02x-%02x-%02x.scan",
+		 ifsd->test_num, boot_cpu_data.x86, boot_cpu_data.x86_model,
+		 boot_cpu_data.x86_stepping, ifsd->cur_batch);

	ret = request_firmware_direct(&fw, scan_path, dev);
	if (ret) {
@@ -251,17 +272,21 @@ void ifs_load_firmware(struct device *dev)
		goto done;
	}

-	if (!ifs_image_sanity_check(dev, (struct microcode_header_intel *)fw->data)) {
-		dev_err(dev, "ifs header sanity check failed\n");
+	ret = image_sanity_check(dev, (struct microcode_header_intel *)fw->data);
+	if (ret)
		goto release;
-	}

-	ifs_header_ptr = (struct ifs_header *)fw->data;
+	ifs_header_ptr = (struct microcode_header_intel *)fw->data;
	ifs_hash_ptr = (u64)(ifs_header_ptr + 1);

	ret = scan_chunks_sanity_check(dev);
+	if (ret)
+		dev_err(dev, "Load failure for batch: %02x\n", ifsd->cur_batch);

release:
	release_firmware(fw);
done:
	ifsd->loaded = (ret == 0);
+
+	return ret;
}
@@ -78,14 +78,16 @@ static void message_not_tested(struct device *dev, int cpu, union ifs_status sta
 static void message_fail(struct device *dev, int cpu, union ifs_status status)
 {
+	struct ifs_data *ifsd = ifs_get_data(dev);
+
 	/*
 	 * control_error is set when the microcode runs into a problem
 	 * loading the image from the reserved BIOS memory, or it has
 	 * been corrupted. Reloading the image may fix this issue.
 	 */
 	if (status.control_error) {
-		dev_err(dev, "CPU(s) %*pbl: could not execute from loaded scan image\n",
-			cpumask_pr_args(cpu_smt_mask(cpu)));
+		dev_err(dev, "CPU(s) %*pbl: could not execute from loaded scan image. Batch: %02x version: 0x%x\n",
+			cpumask_pr_args(cpu_smt_mask(cpu)), ifsd->cur_batch, ifsd->loaded_version);
 	}
 
 	/*
@@ -96,8 +98,8 @@ static void message_fail(struct device *dev, int cpu, union ifs_status status)
 	 * the core being tested.
 	 */
 	if (status.signature_error) {
-		dev_err(dev, "CPU(s) %*pbl: test signature incorrect.\n",
-			cpumask_pr_args(cpu_smt_mask(cpu)));
+		dev_err(dev, "CPU(s) %*pbl: test signature incorrect. Batch: %02x version: 0x%x\n",
+			cpumask_pr_args(cpu_smt_mask(cpu)), ifsd->cur_batch, ifsd->loaded_version);
 	}
 }
@@ -13,7 +13,7 @@
  * Protects against simultaneous tests on multiple cores, or
  * reloading can file while a test is in progress
  */
-DEFINE_SEMAPHORE(ifs_sem);
+static DEFINE_SEMAPHORE(ifs_sem);
 
 /*
  * The sysfs interface to check additional details of last test
@@ -87,33 +87,42 @@ static ssize_t run_test_store(struct device *dev,
 static DEVICE_ATTR_WO(run_test);
 
-/*
- * Reload the IFS image. When user wants to install new IFS image
- */
-static ssize_t reload_store(struct device *dev,
-			    struct device_attribute *attr,
-			    const char *buf, size_t count)
+static ssize_t current_batch_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
 {
 	struct ifs_data *ifsd = ifs_get_data(dev);
-	bool res;
+	unsigned int cur_batch;
+	int rc;
 
-	if (kstrtobool(buf, &res))
+	rc = kstrtouint(buf, 0, &cur_batch);
+	if (rc < 0 || cur_batch > 0xff)
 		return -EINVAL;
-	if (!res)
-		return count;
 
 	if (down_interruptible(&ifs_sem))
 		return -EINTR;
 
-	ifs_load_firmware(dev);
+	ifsd->cur_batch = cur_batch;
+
+	rc = ifs_load_firmware(dev);
 
 	up(&ifs_sem);
 
-	return ifsd->loaded ? count : -ENODEV;
+	return (rc == 0) ? count : rc;
+}
+
+static ssize_t current_batch_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct ifs_data *ifsd = ifs_get_data(dev);
+
+	if (!ifsd->loaded)
+		return sysfs_emit(buf, "none\n");
+	else
+		return sysfs_emit(buf, "0x%02x\n", ifsd->cur_batch);
 }
 
-static DEVICE_ATTR_WO(reload);
+static DEVICE_ATTR_RW(current_batch);
 
 /*
  * Display currently loaded IFS image version.
@@ -136,7 +145,7 @@ static struct attribute *plat_ifs_attrs[] = {
 	&dev_attr_details.attr,
 	&dev_attr_status.attr,
 	&dev_attr_run_test.attr,
-	&dev_attr_reload.attr,
+	&dev_attr_current_batch.attr,
 	&dev_attr_image_version.attr,
 	NULL
 };
@@ -44,7 +44,7 @@
 #define AGPGART_MINOR		175
 #define TOSH_MINOR_DEV		181
 #define HWRNG_MINOR		183
-#define MICROCODE_MINOR		184
+/*#define MICROCODE_MINOR	184	unused */
 #define KEYPAD_MINOR		185
 #define IRNET_MINOR		187
 #define D7S_MINOR		193
@@ -72,7 +72,7 @@ if [ `expr $T % 2` -eq 0 ]; then
 	addout " "
 else
 	addout "S"
-	echo " * SMP kernel oops on an officially SMP incapable processor (#2)"
+	echo " * kernel running on an out of specification system (#2)"
 fi
 T=`expr $T / 2`