Unverified commit 6dc4499a, authored by openeuler-ci-bot and committed by Gitee

!580 Intel: Recover two microcode interfaces when supporting In Field Scan (IFS) multi-blob images

Merge Pull Request from: @allen-shi 
 
IFS is a hardware feature that runs circuit-level tests on a CPU core to detect problems that are not caught by parity or ECC checks.

Intel In Field Scan (IFS) with multi-blob images was supported in [PR471](https://gitee.com/openeuler/kernel/pulls/471), but that PR also introduced two microcode interface changes:
1. Removed the /dev/cpu/microcode interface (used by iucode-tool and microcode_ctl).
2. Disabled microcode late loading by default (MICROCODE_LATE_LOADING), which removed the /sys/devices/system/cpu/microcode/reload interface.

This PR includes 14 commits in total and restores support for the two microcode interfaces by reverting the related commits from [PR471](https://gitee.com/openeuler/kernel/pulls/471).
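For reference, triggering a late reload through the restored sysfs interface only requires a privileged write of "1" to the reload file. The small user-space sketch below illustrates this; it assumes root privileges and that microcode blobs are already installed under /lib/firmware, and it is not part of the kernel changes in this PR.

```c
/*
 * Illustrative user-space sketch (not part of this PR): trigger a late
 * microcode reload through the restored sysfs interface.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/sys/devices/system/cpu/microcode/reload", O_WRONLY);

        if (fd < 0) {
                perror("open reload");
                return 1;
        }
        /* Writing "1" asks the kernel to reload microcode on all CPUs. */
        if (write(fd, "1", 1) != 1) {
                perror("write reload");
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}
```

The equivalent shell one-liner, `echo 1 > /sys/devices/system/cpu/microcode/reload`, appears in the documentation hunk further down.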

 **Intel-Kernel Issue** 
[#I6L337](https://gitee.com/openeuler/intel-kernel/issues/I6L337)

 **Test** 
Built and ran the kernel successfully on openEuler 22.03 LTS SP1.
Tests pass on the SPR (Sapphire Rapids) platform.

 **Known Issue** 
N/A

 **Default config change** 
N/A 
 
Link: https://gitee.com/openeuler/kernel/pulls/580

Reviewed-by: Jason Zeng <jason.zeng@intel.com> 
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com> 
......@@ -1352,7 +1352,7 @@ ORed together. The letters are seen in "Tainted" line of Oops reports.
====== ===== ==============================================================
1 `(P)` proprietary module was loaded
2 `(F)` module was force loaded
4 `(S)` kernel running on an out of specification system
4 `(S)` SMP kernel oops on an officially SMP incapable processor
8 `(R)` module was force unloaded
16 `(M)` processor reported a Machine Check Exception (MCE)
32 `(B)` bad page referenced or some unexpected page flags
......
......@@ -84,7 +84,7 @@ Bit Log Number Reason that got the kernel tainted
=== === ====== ========================================================
0 G/P 1 proprietary module was loaded
1 _/F 2 module was force loaded
2 _/S 4 kernel running on an out of specification system
2 _/S 4 SMP kernel oops on an officially SMP incapable processor
3 _/R 8 module was force unloaded
4 _/M 16 processor reported a Machine Check Exception (MCE)
5 _/B 32 bad page referenced or some unexpected page flags
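Because the tainted value is just these bits ORed together and is exported via /proc/sys/kernel/tainted, checking for a particular reason from user space is straightforward. The sketch below is illustrative only and not part of this change; it tests the "S" bit (value 4):

```c
/* Illustrative sketch: read the kernel taint value and test bit 2 ("S"). */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/kernel/tainted", "r");
        unsigned long long taint = 0;

        if (!f) {
                perror("fopen");
                return 1;
        }
        if (fscanf(f, "%llu", &taint) != 1) {
                fclose(f);
                return 1;
        }
        fclose(f);

        /* Bit 0 = P (1), bit 1 = F (2), bit 2 = S (4), bit 3 = R (8), ... */
        printf("taint=%llu, S bit %s\n", taint,
               (taint & 4) ? "set" : "clear");
        return 0;
}
```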
......@@ -116,29 +116,10 @@ More detailed explanation for tainting
1) ``F`` if any module was force loaded by ``insmod -f``, ``' '`` if all
modules were loaded normally.
2) ``S`` if the kernel is running on a processor or system that is out of
specification: hardware has been put into an unsupported configuration,
therefore proper execution cannot be guaranteed.
Kernel will be tainted if, for example:
- on x86: PAE is forced through forcepae on intel CPUs (such as Pentium M)
which do not report PAE but may have a functional implementation, an SMP
kernel is running on non officially capable SMP Athlon CPUs, MSRs are
being poked at from userspace.
- on arm: kernel running on certain CPUs (such as Keystone 2) without
having certain kernel features enabled.
- on arm64: there are mismatched hardware features between CPUs, the
bootloader has booted CPUs in different modes.
- certain drivers are being used on non supported architectures (such as
scsi/snic on something else than x86_64, scsi/ips on non
x86/x86_64/itanium, have broken firmware settings for the
irqchip/irq-gic on arm64 ...).
- x86/x86_64: Microcode late loading is dangerous and will result in
tainting the kernel. It requires that all CPUs rendezvous to make sure
the update happens when the system is as quiescent as possible. However,
a higher priority MCE/SMI/NMI can move control flow away from that
rendezvous and interrupt the update, which can be detrimental to the
machine.
2) ``S`` if the oops occurred on an SMP kernel running on hardware that
hasn't been certified as safe to run multiprocessor.
Currently this occurs only on various Athlons that are not
SMP capable.
3) ``R`` if a module was force unloaded by ``rmmod -f``, ``' '`` if all
modules were unloaded normally.
......
......@@ -6,7 +6,6 @@ The Linux Microcode Loader
:Authors: - Fenghua Yu <fenghua.yu@intel.com>
- Borislav Petkov <bp@suse.de>
- Ashok Raj <ashok.raj@intel.com>
The kernel has a x86 microcode loading facility which is supposed to
provide microcode loading methods in the OS. Potential use cases are
......@@ -93,8 +92,15 @@ vendor's site.
Late loading
============
You simply install the microcode packages your distro supplies and
run::
There are two legacy user space interfaces to load microcode, either through
/dev/cpu/microcode or through /sys/devices/system/cpu/microcode/reload file
in sysfs.
The /dev/cpu/microcode method is deprecated because it needs a special
userspace tool for that.
The easier method is simply installing the microcode packages your distro
supplies and running::
# echo 1 > /sys/devices/system/cpu/microcode/reload
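The /dev/cpu/microcode path, by contrast, expects a privileged tool to write the raw microcode image straight into the character device. A minimal user-space sketch of that flow follows; the default image path and error handling are illustrative and not taken from iucode_tool or microcode_ctl.

```c
/*
 * Illustrative sketch of the legacy /dev/cpu/microcode flow: read a
 * microcode image and write it to the character device in one go, after
 * which the kernel parses and applies it.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *image = argc > 1 ? argv[1] : "/tmp/microcode.bin"; /* example path */
        FILE *in = fopen(image, "rb");
        long size;
        char *buf;
        int dev;

        if (!in) { perror("open image"); return 1; }
        fseek(in, 0, SEEK_END);
        size = ftell(in);
        rewind(in);
        buf = malloc(size);
        if (!buf || fread(buf, 1, size, in) != (size_t)size) { perror("read image"); return 1; }
        fclose(in);

        dev = open("/dev/cpu/microcode", O_WRONLY);
        if (dev < 0) { perror("open /dev/cpu/microcode"); return 1; }
        /* The whole image is handed over in a single write(). */
        if (write(dev, buf, size) != size) { perror("write microcode"); return 1; }
        close(dev);
        free(buf);
        return 0;
}
```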
......@@ -104,110 +110,6 @@ The loading mechanism looks for microcode blobs in
/lib/firmware/{intel-ucode,amd-ucode}. The default distro installation
packages already put them there.
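For Intel CPUs, the blob in that directory is selected by the running CPU's family/model/stepping, encoded as two-digit hex fields. The snippet below only illustrates that naming convention; the family/model/stepping values are examples, not taken from this PR.

```c
/*
 * Sketch of how the Intel loader derives the firmware file name from
 * CPUID family/model/stepping; the numbers are example values only.
 */
#include <stdio.h>

int main(void)
{
        unsigned int family = 0x6, model = 0x8f, stepping = 0x8; /* example values */
        char name[64];

        snprintf(name, sizeof(name), "intel-ucode/%02x-%02x-%02x",
                 family, model, stepping);
        printf("/lib/firmware/%s\n", name); /* e.g. /lib/firmware/intel-ucode/06-8f-08 */
        return 0;
}
```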
Since kernel 5.19, late loading is not enabled by default.
The /dev/cpu/microcode method has been removed in 5.19.
Why is late loading dangerous?
==============================
Synchronizing all CPUs
----------------------
The microcode engine which receives the microcode update is shared
between the two logical threads in a SMT system. Therefore, when
the update is executed on one SMT thread of the core, the sibling
"automatically" gets the update.
Since the microcode can "simulate" MSRs too, while the microcode update
is in progress, those simulated MSRs transiently cease to exist. This
can result in unpredictable results if the SMT sibling thread happens to
be in the middle of an access to such an MSR. The usual observation is
that such MSR accesses cause #GPs to be raised to signal that former are
not present.
The disappearing MSRs are just one common issue which is being observed.
Any other instruction that's being patched and gets concurrently
executed by the other SMT sibling, can also result in similar,
unpredictable behavior.
To eliminate this case, a stop_machine()-based CPU synchronization was
introduced as a way to guarantee that all logical CPUs will not execute
any code but just wait in a spin loop, polling an atomic variable.
While this took care of device or external interrupts, IPIs including
LVT ones, such as CMCI etc, it cannot address other special interrupts
that can't be shut off. Those are Machine Check (#MC), System Management
(#SMI) and Non-Maskable interrupts (#NMI).
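The rendezvous itself boils down to every CPU checking in on a shared atomic counter before and after the update. The user-space sketch below only illustrates that pattern with C11 atomics and threads; it is not the kernel's __reload_late().

```c
/*
 * Conceptual illustration of the rendezvous idea: every "CPU" announces
 * itself, spins until all others have arrived, one of them performs the
 * "update", then everyone waits again before leaving.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_int cpus_in, cpus_out;

static void *reload_late(void *arg)
{
        long cpu = (long)arg;

        atomic_fetch_add(&cpus_in, 1);
        while (atomic_load(&cpus_in) < NR_CPUS)
                ;                       /* spin until every CPU has arrived */

        if (cpu == 0)
                printf("CPU%ld: applying microcode update\n", cpu);

        atomic_fetch_add(&cpus_out, 1);
        while (atomic_load(&cpus_out) < NR_CPUS)
                ;                       /* spin until every CPU is done */
        return NULL;
}

int main(void)
{
        pthread_t t[NR_CPUS];

        for (long i = 0; i < NR_CPUS; i++)
                pthread_create(&t[i], NULL, reload_late, (void *)i);
        for (int i = 0; i < NR_CPUS; i++)
                pthread_join(t[i], NULL);
        return 0;
}
```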
Machine Checks
--------------
Machine Checks (#MC) are non-maskable. There are two kinds of MCEs.
Fatal un-recoverable MCEs and recoverable MCEs. While un-recoverable
errors are fatal, recoverable errors can also happen in kernel context
are also treated as fatal by the kernel.
On certain Intel machines, MCEs are also broadcast to all threads in a
system. If one thread is in the middle of executing WRMSR, a MCE will be
taken at the end of the flow. Either way, they will wait for the thread
performing the wrmsr(0x79) to rendezvous in the MCE handler and shutdown
eventually if any of the threads in the system fail to check in to the
MCE rendezvous.
To be paranoid and get predictable behavior, the OS can choose to set
MCG_STATUS.MCIP. Since MCEs can be at most one in a system, if an
MCE was signaled, the above condition will promote to a system reset
automatically. OS can turn off MCIP at the end of the update for that
core.
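In MSR terms, that paranoid option amounts to setting the MCIP bit in IA32_MCG_STATUS around the wrmsr(0x79) that hands the blob to the microcode engine. The kernel-style fragment below is only a sketch of the idea using the MSR helpers and constants from the kernel headers; it is not code from this series, and error handling is elided.

```c
/*
 * Kernel-style sketch only: set MCG_STATUS.MCIP so that a broadcast MCE
 * arriving during the update escalates to a reset instead of racing with
 * the wrmsr(0x79) flow, then clear it again afterwards.
 */
#include <linux/types.h>
#include <asm/msr.h>
#include <asm/mce.h>

static void paranoid_apply_microcode(unsigned long mc_bits_addr)
{
        u64 mcg;

        rdmsrl(MSR_IA32_MCG_STATUS, mcg);
        wrmsrl(MSR_IA32_MCG_STATUS, mcg | MCG_STATUS_MCIP);

        /* Trigger the actual update: write the blob address to MSR 0x79. */
        wrmsrl(MSR_IA32_UCODE_WRITE, mc_bits_addr);

        wrmsrl(MSR_IA32_MCG_STATUS, mcg & ~MCG_STATUS_MCIP);
}
```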
System Management Interrupt
---------------------------
SMIs are also broadcast to all CPUs in the platform. Microcode update
requests exclusive access to the core before writing to MSR 0x79. So if
it does happen such that, one thread is in WRMSR flow, and the 2nd got
an SMI, that thread will be stopped in the first instruction in the SMI
handler.
Since the secondary thread is stopped in the first instruction in SMI,
there is very little chance that it would be in the middle of executing
an instruction being patched. Plus OS has no way to stop SMIs from
happening.
Non-Maskable Interrupts
-----------------------
When thread0 of a core is doing the microcode update, if thread1 is
pulled into NMI, that can cause unpredictable behavior due to the
reasons above.
OS can choose a variety of methods to avoid running into this situation.
Is the microcode suitable for late loading?
-------------------------------------------
Late loading is done when the system is fully operational and running
real workloads. Late loading behavior depends on what the base patch on
the CPU is before upgrading to the new patch.
This is true for Intel CPUs.
Consider, for example, a CPU has patch level 1 and the update is to
patch level 3.
Between patch1 and patch3, patch2 might have deprecated a software-visible
feature.
This is unacceptable if software is even potentially using that feature.
For instance, say MSR_X is no longer available after an update,
accessing that MSR will cause a #GP fault.
Basically there is no way to declare a new microcode update suitable
for late-loading. This is another one of the problems that caused late
loading to be not enabled by default.
Builtin microcode
=================
......
......@@ -1336,16 +1336,17 @@ config MICROCODE_AMD
If you select this option, microcode patch loading support for AMD
processors will be enabled.
config MICROCODE_LATE_LOADING
bool "Late microcode loading (DANGEROUS)"
config MICROCODE_OLD_INTERFACE
bool "Ancient loading interface (DEPRECATED)"
default n
depends on MICROCODE
help
Loading microcode late, when the system is up and executing instructions
is a tricky business and should be avoided if possible. Just the sequence
of synchronizing all cores and SMT threads is one fragile dance which does
not guarantee that cores might not softlock after the loading. Therefore,
use this at your own risk. Late loading taints the kernel too.
DO NOT USE THIS! This is the ancient /dev/cpu/microcode interface
which was used by userspace tools like iucode_tool and microcode.ctl.
It is inadequate because it runs too late to be able to properly
load microcode on a machine and it needs special tools. Instead, you
should've switched to the early loading method with the initrd or
builtin microcode by now: Documentation/x86/microcode.rst
config X86_MSR
tristate "/dev/cpu/*/msr - Model-specific register support"
......
......@@ -33,7 +33,11 @@ enum ucode_state {
};
struct microcode_ops {
enum ucode_state (*request_microcode_fw) (int cpu, struct device *);
enum ucode_state (*request_microcode_user) (int cpu,
const void __user *buf, size_t size);
enum ucode_state (*request_microcode_fw) (int cpu, struct device *,
bool refresh_fw);
void (*microcode_fini_cpu) (int cpu);
......@@ -49,6 +53,7 @@ struct microcode_ops {
struct ucode_cpu_info {
struct cpu_signature cpu_sig;
int valid;
void *mc;
};
extern struct ucode_cpu_info ucode_cpu_info[];
......
......@@ -2145,7 +2145,6 @@ void cpu_init(void)
load_fixmap_gdt(cpu);
}
#ifdef CONFIG_MICROCODE_LATE_LOADING
/*
* The microcode loader calls this upon late microcode load to recheck features,
* only when microcode has been updated. Caller holds microcode_mutex and CPU
......@@ -2175,7 +2174,6 @@ void microcode_check(void)
pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
}
#endif
/*
* Invoked from core CPU hotplug code after hotplug operations
......
......@@ -207,6 +207,7 @@ int intel_cpu_collect_info(struct ucode_cpu_info *uci)
csig.rev = intel_get_microcode_revision();
uci->cpu_sig = csig;
uci->valid = 1;
return 0;
}
......
......@@ -886,7 +886,8 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
*
* These might be larger than 2K.
*/
static enum ucode_state request_microcode_amd(int cpu, struct device *device)
static enum ucode_state request_microcode_amd(int cpu, struct device *device,
bool refresh_fw)
{
char fw_name[36] = "amd-ucode/microcode_amd.bin";
struct cpuinfo_x86 *c = &cpu_data(cpu);
......@@ -895,7 +896,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device)
const struct firmware *fw;
/* reload ucode container only on the boot cpu */
if (!bsp)
if (!refresh_fw || !bsp)
return UCODE_OK;
if (c->x86 >= 0x15)
......@@ -919,6 +920,12 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device)
return ret;
}
static enum ucode_state
request_microcode_user(int cpu, const void __user *buf, size_t size)
{
return UCODE_ERROR;
}
static void microcode_fini_cpu_amd(int cpu)
{
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
......@@ -927,6 +934,7 @@ static void microcode_fini_cpu_amd(int cpu)
}
static struct microcode_ops microcode_amd_ops = {
.request_microcode_user = request_microcode_user,
.request_microcode_fw = request_microcode_amd,
.collect_cpu_info = collect_cpu_info_amd,
.apply_microcode = apply_microcode_amd,
......
......@@ -336,10 +336,155 @@ void reload_early_microcode(void)
}
}
static void collect_cpu_info_local(void *arg)
{
struct cpu_info_ctx *ctx = arg;
ctx->err = microcode_ops->collect_cpu_info(smp_processor_id(),
ctx->cpu_sig);
}
static int collect_cpu_info_on_target(int cpu, struct cpu_signature *cpu_sig)
{
struct cpu_info_ctx ctx = { .cpu_sig = cpu_sig, .err = 0 };
int ret;
ret = smp_call_function_single(cpu, collect_cpu_info_local, &ctx, 1);
if (!ret)
ret = ctx.err;
return ret;
}
static int collect_cpu_info(int cpu)
{
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
int ret;
memset(uci, 0, sizeof(*uci));
ret = collect_cpu_info_on_target(cpu, &uci->cpu_sig);
if (!ret)
uci->valid = 1;
return ret;
}
static void apply_microcode_local(void *arg)
{
enum ucode_state *err = arg;
*err = microcode_ops->apply_microcode(smp_processor_id());
}
static int apply_microcode_on_target(int cpu)
{
enum ucode_state err;
int ret;
ret = smp_call_function_single(cpu, apply_microcode_local, &err, 1);
if (!ret) {
if (err == UCODE_ERROR)
ret = 1;
}
return ret;
}
#ifdef CONFIG_MICROCODE_OLD_INTERFACE
static int do_microcode_update(const void __user *buf, size_t size)
{
int error = 0;
int cpu;
for_each_online_cpu(cpu) {
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
enum ucode_state ustate;
if (!uci->valid)
continue;
ustate = microcode_ops->request_microcode_user(cpu, buf, size);
if (ustate == UCODE_ERROR) {
error = -1;
break;
} else if (ustate == UCODE_NEW) {
apply_microcode_on_target(cpu);
}
}
return error;
}
static int microcode_open(struct inode *inode, struct file *file)
{
return capable(CAP_SYS_RAWIO) ? stream_open(inode, file) : -EPERM;
}
static ssize_t microcode_write(struct file *file, const char __user *buf,
size_t len, loff_t *ppos)
{
ssize_t ret = -EINVAL;
unsigned long nr_pages = totalram_pages();
if ((len >> PAGE_SHIFT) > nr_pages) {
pr_err("too much data (max %ld pages)\n", nr_pages);
return ret;
}
get_online_cpus();
mutex_lock(&microcode_mutex);
if (do_microcode_update(buf, len) == 0)
ret = (ssize_t)len;
if (ret > 0)
perf_check_microcode();
mutex_unlock(&microcode_mutex);
put_online_cpus();
return ret;
}
static const struct file_operations microcode_fops = {
.owner = THIS_MODULE,
.write = microcode_write,
.open = microcode_open,
.llseek = no_llseek,
};
static struct miscdevice microcode_dev = {
.minor = MICROCODE_MINOR,
.name = "microcode",
.nodename = "cpu/microcode",
.fops = &microcode_fops,
};
static int __init microcode_dev_init(void)
{
int error;
error = misc_register(&microcode_dev);
if (error) {
pr_err("can't misc_register on minor=%d\n", MICROCODE_MINOR);
return error;
}
return 0;
}
static void __exit microcode_dev_exit(void)
{
misc_deregister(&microcode_dev);
}
#else
#define microcode_dev_init() 0
#define microcode_dev_exit() do { } while (0)
#endif
/* fake device for request_firmware */
static struct platform_device *microcode_pdev;
#ifdef CONFIG_MICROCODE_LATE_LOADING
/*
* Late loading dance. Why the heavy-handed stomp_machine effort?
*
......@@ -421,7 +566,7 @@ static int __reload_late(void *info)
* below.
*/
if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu)
err = microcode_ops->apply_microcode(cpu);
apply_microcode_local(&err);
else
goto wait_for_siblings;
......@@ -443,7 +588,7 @@ static int __reload_late(void *info)
* revision.
*/
if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
err = microcode_ops->apply_microcode(cpu);
apply_microcode_local(&err);
return ret;
}
......@@ -454,10 +599,7 @@ static int __reload_late(void *info)
*/
static int microcode_reload_late(void)
{
int old = boot_cpu_data.microcode, ret;
pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
pr_err("You should switch to early loading, if possible.\n");
int ret;
atomic_set(&late_cpus_in, 0);
atomic_set(&late_cpus_out, 0);
......@@ -466,8 +608,7 @@ static int microcode_reload_late(void)
if (ret == 0)
microcode_check();
pr_info("Reload completed, microcode revision: 0x%x -> 0x%x\n",
old, boot_cpu_data.microcode);
pr_info("Reload completed, microcode revision: 0x%x\n", boot_cpu_data.microcode);
return ret;
}
......@@ -494,7 +635,7 @@ static ssize_t reload_store(struct device *dev,
if (ret)
goto put;
tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev);
tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
if (tmp_ret != UCODE_NEW)
goto put;
......@@ -508,14 +649,9 @@ static ssize_t reload_store(struct device *dev,
if (ret == 0)
ret = size;
add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
return ret;
}
static DEVICE_ATTR_WO(reload);
#endif
static ssize_t version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
......@@ -532,6 +668,7 @@ static ssize_t pf_show(struct device *dev,
return sprintf(buf, "0x%x\n", uci->cpu_sig.pf);
}
static DEVICE_ATTR_WO(reload);
static DEVICE_ATTR(version, 0444, version_show, NULL);
static DEVICE_ATTR(processor_flags, 0444, pf_show, NULL);
......@@ -552,17 +689,91 @@ static void microcode_fini_cpu(int cpu)
microcode_ops->microcode_fini_cpu(cpu);
}
static enum ucode_state microcode_init_cpu(int cpu)
static enum ucode_state microcode_resume_cpu(int cpu)
{
if (apply_microcode_on_target(cpu))
return UCODE_ERROR;
pr_debug("CPU%d updated upon resume\n", cpu);
return UCODE_OK;
}
static enum ucode_state microcode_init_cpu(int cpu, bool refresh_fw)
{
enum ucode_state ustate;
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
memset(uci, 0, sizeof(*uci));
if (uci->valid)
return UCODE_OK;
if (collect_cpu_info(cpu))
return UCODE_ERROR;
/* --dimm. Trigger a delayed update? */
if (system_state != SYSTEM_RUNNING)
return UCODE_NFOUND;
ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, refresh_fw);
if (ustate == UCODE_NEW) {
pr_debug("CPU%d updated upon init\n", cpu);
apply_microcode_on_target(cpu);
}
return ustate;
}
static enum ucode_state microcode_update_cpu(int cpu)
{
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
/* Refresh CPU microcode revision after resume. */
collect_cpu_info(cpu);
if (uci->valid)
return microcode_resume_cpu(cpu);
return microcode_init_cpu(cpu, false);
}
microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
static int mc_device_add(struct device *dev, struct subsys_interface *sif)
{
int err, cpu = dev->id;
if (!cpu_online(cpu))
return 0;
pr_debug("CPU%d added\n", cpu);
return microcode_ops->apply_microcode(cpu);
err = sysfs_create_group(&dev->kobj, &mc_attr_group);
if (err)
return err;
if (microcode_init_cpu(cpu, true) == UCODE_ERROR)
return -EINVAL;
return err;
}
static void mc_device_remove(struct device *dev, struct subsys_interface *sif)
{
int cpu = dev->id;
if (!cpu_online(cpu))
return;
pr_debug("CPU%d removed\n", cpu);
microcode_fini_cpu(cpu);
sysfs_remove_group(&dev->kobj, &mc_attr_group);
}
static struct subsys_interface mc_cpu_interface = {
.name = "microcode",
.subsys = &cpu_subsys,
.add_dev = mc_device_add,
.remove_dev = mc_device_remove,
};
/**
* microcode_bsp_resume - Update boot CPU microcode during resume.
*/
......@@ -571,23 +782,21 @@ void microcode_bsp_resume(void)
int cpu = smp_processor_id();
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
if (uci->mc)
if (uci->valid && uci->mc)
microcode_ops->apply_microcode(cpu);
else
else if (!uci->mc)
reload_early_microcode();
}
static struct syscore_ops mc_syscore_ops = {
.resume = microcode_bsp_resume,
.resume = microcode_bsp_resume,
};
static int mc_cpu_starting(unsigned int cpu)
{
enum ucode_state err = microcode_ops->apply_microcode(cpu);
pr_debug("%s: CPU%d, err: %d\n", __func__, cpu, err);
return err == UCODE_ERROR;
microcode_update_cpu(cpu);
pr_debug("CPU%d added\n", cpu);
return 0;
}
static int mc_cpu_online(unsigned int cpu)
......@@ -604,34 +813,15 @@ static int mc_cpu_down_prep(unsigned int cpu)
struct device *dev;
dev = get_cpu_device(cpu);
microcode_fini_cpu(cpu);
/* Suspend is in progress, only remove the interface */
sysfs_remove_group(&dev->kobj, &mc_attr_group);
pr_debug("%s: CPU%d\n", __func__, cpu);
pr_debug("CPU%d removed\n", cpu);
return 0;
}
static void setup_online_cpu(struct work_struct *work)
{
int cpu = smp_processor_id();
enum ucode_state err;
err = microcode_init_cpu(cpu);
if (err == UCODE_ERROR) {
pr_err("Error applying microcode on CPU%d\n", cpu);
return;
}
mc_cpu_online(cpu);
}
static struct attribute *cpu_root_microcode_attrs[] = {
#ifdef CONFIG_MICROCODE_LATE_LOADING
&dev_attr_reload.attr,
#endif
NULL
};
......@@ -658,18 +848,34 @@ int __init microcode_init(void)
if (!microcode_ops)
return -ENODEV;
microcode_pdev = platform_device_register_simple("microcode", -1, NULL, 0);
microcode_pdev = platform_device_register_simple("microcode", -1,
NULL, 0);
if (IS_ERR(microcode_pdev))
return PTR_ERR(microcode_pdev);
error = sysfs_create_group(&cpu_subsys.dev_root->kobj, &cpu_root_microcode_group);
get_online_cpus();
mutex_lock(&microcode_mutex);
error = subsys_interface_register(&mc_cpu_interface);
if (!error)
perf_check_microcode();
mutex_unlock(&microcode_mutex);
put_online_cpus();
if (error)
goto out_pdev;
error = sysfs_create_group(&cpu_subsys.dev_root->kobj,
&cpu_root_microcode_group);
if (error) {
pr_err("Error creating microcode group!\n");
goto out_pdev;
goto out_driver;
}
/* Do per-CPU setup */
schedule_on_each_cpu(setup_online_cpu);
error = microcode_dev_init();
if (error)
goto out_ucode_group;
register_syscore_ops(&mc_syscore_ops);
cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:starting",
......@@ -681,6 +887,19 @@ int __init microcode_init(void)
return 0;
out_ucode_group:
sysfs_remove_group(&cpu_subsys.dev_root->kobj,
&cpu_root_microcode_group);
out_driver:
get_online_cpus();
mutex_lock(&microcode_mutex);
subsys_interface_unregister(&mc_cpu_interface);
mutex_unlock(&microcode_mutex);
put_online_cpus();
out_pdev:
platform_device_unregister(microcode_pdev);
return error;
......
......@@ -738,7 +738,8 @@ static bool is_blacklisted(unsigned int cpu)
return false;
}
static enum ucode_state request_microcode_fw(int cpu, struct device *device)
static enum ucode_state request_microcode_fw(int cpu, struct device *device,
bool refresh_fw)
{
struct cpuinfo_x86 *c = &cpu_data(cpu);
const struct firmware *firmware;
......@@ -768,7 +769,24 @@ static enum ucode_state request_microcode_fw(int cpu, struct device *device)
return ret;
}
static enum ucode_state
request_microcode_user(int cpu, const void __user *buf, size_t size)
{
struct iov_iter iter;
struct iovec iov;
if (is_blacklisted(cpu))
return UCODE_NFOUND;
iov.iov_base = (void __user *)buf;
iov.iov_len = size;
iov_iter_init(&iter, WRITE, &iov, 1, size);
return generic_load_microcode(cpu, &iter);
}
static struct microcode_ops microcode_intel_ops = {
.request_microcode_user = request_microcode_user,
.request_microcode_fw = request_microcode_fw,
.collect_cpu_info = collect_cpu_info,
.apply_microcode = apply_microcode_intel,
......
......@@ -44,7 +44,7 @@
#define AGPGART_MINOR 175
#define TOSH_MINOR_DEV 181
#define HWRNG_MINOR 183
/*#define MICROCODE_MINOR 184 unused */
#define MICROCODE_MINOR 184
#define KEYPAD_MINOR 185
#define IRNET_MINOR 187
#define D7S_MINOR 193
......
......@@ -72,7 +72,7 @@ if [ `expr $T % 2` -eq 0 ]; then
addout " "
else
addout "S"
echo " * kernel running on an out of specification system (#2)"
echo " * SMP kernel oops on an officially SMP incapable processor (#2)"
fi
T=`expr $T / 2`
......