- 24 April 2021, 1 commit
-
-
Committed by Wanpeng Li

kvm_memslots() will be called by kvm_write_guest_offset_cached(), so we should take the srcu lock. Pull the srcu lock operation out of kvm_steal_time_set_preempted() again, to fix the Xen part as well.

Fixes: 30b5c851 ("KVM: x86/xen: Add support for vCPU runstate information")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1619166200-9215-1-git-send-email-wanpengli@tencent.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
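A minimal sketch of the resulting shape at the caller; the exact call site and the Xen/steal-time dispatch are assumptions based on the commit message, not a verbatim copy of the patch:

    int idx;

    /* kvm_write_guest_offset_cached() ends up calling kvm_memslots(),
     * which requires the srcu read lock to be held by the caller. */
    idx = srcu_read_lock(&vcpu->kvm->srcu);
    if (kvm_xen_msr_enabled(vcpu->kvm))
        kvm_xen_runstate_set_preempted(vcpu);
    else
        kvm_steal_time_set_preempted(vcpu);
    srcu_read_unlock(&vcpu->kvm->srcu, idx);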
-
- 22 April 2021, 1 commit
-
-
Committed by Jim Mattson

The only stepping of Broadwell Xeon parts is stepping 1. Fix the relevant isolation_ucodes[] entry, which previously enumerated stepping 2.

Although the original commit was characterized as an optimization, it is also a workaround for a correctness issue. If a PMI arrives between kvm's call to perf_guest_get_msrs() and the subsequent VM-entry, a stale value for the IA32_PEBS_ENABLE MSR may be restored at the next VM-exit. This is because, unbeknownst to kvm, PMI throttling may clear bits in the IA32_PEBS_ENABLE MSR. CPUs with "PEBS isolation" don't suffer from this issue, because perf_guest_get_msrs() doesn't report the IA32_PEBS_ENABLE value.

Fixes: 9b545c04 ("perf/x86/kvm: Avoid unnecessary work in guest filtering")
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Peter Shier <pshier@google.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Link: https://lkml.kernel.org/r/20210422001834.1748319-1-jmattson@google.com
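The corrected entry plausibly looks as follows; this is a sketch assuming the INTEL_CPU_DESC(model, stepping, microcode revision) format used by isolation_ucodes[], and the revision value shown is illustrative:

    static const struct x86_cpu_desc isolation_ucodes[] = {
        /* ... */
        INTEL_CPU_DESC(INTEL_FAM6_BROADWELL_X, 1, 0x0b000014), /* was stepping 2 */
        /* ... */
    };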
-
- 21 April 2021, 1 commit
-
-
Committed by Kan Liang

There may be a kernel panic on Haswell and Broadwell servers if snbep_pci2phy_map_init() returns an error.

The uncore_extra_pci_dev[HSWEP_PCI_PCU_3] is used in the cpu_init() to detect the existence of the SBOX, which is an MSR type of PMON unit. The uncore_extra_pci_dev is allocated in uncore_pci_init(). If snbep_pci2phy_map_init() returns an error, perf doesn't initialize the PCI type of PMON units, so the uncore_extra_pci_dev will not be allocated. But perf may continue initializing the MSR type of PMON units. A NULL dereference kernel panic will be triggered.

The sockets in a Haswell server or a Broadwell server are identical, so the existence of the SBOX only needs to be detected once. Current perf probes all available PCU devices and stores them into the uncore_extra_pci_dev, which is unnecessary. Use pci_get_device() to replace the uncore_extra_pci_dev, and detect the existence of the SBOX only on the first available PCU device, once.

Factor out hswep_has_limit_sbox(), since the Haswell server and the Broadwell server use the same way to detect the existence of the SBOX. Add some macros to replace the magic numbers.

Fixes: 5306c31c ("perf/x86/uncore/hsw-ep: Handle systems with only two SBOXes")
Reported-by: Steve Wahl <steve.wahl@hpe.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Steve Wahl <steve.wahl@hpe.com>
Link: https://lkml.kernel.org/r/1618521764-100923-1-git-send-email-kan.liang@linux.intel.com
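A hedged sketch of what the factored-out helper could look like; the config-register offset macro and the hswep_get_chop() decoding helper are names assumed from the commit message, not verified against the patch:

    static bool hswep_has_limit_sbox(unsigned int device)
    {
        struct pci_dev *dev = pci_get_device(PCI_VENDOR_ID_INTEL, device, NULL);
        u32 capid4;

        if (!dev)
            return false;

        /* Read the capability register that encodes the die configuration. */
        pci_read_config_dword(dev, HSWEP_PCU_CAPID4_OFFSET, &capid4);
        pci_dev_put(dev);

        return !hswep_get_chop(capid4);
    }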
-
- 20 April 2021, 1 commit
-
-
Committed by Mike Galbraith

Commit in Fixes: added support for kexec-ing a kernel on panic using a new system call. As part of that, it prepares a memory map for the new kernel. However, while doing so, it wrongly accesses memory it has not allocated: it accesses the first element of the cmem->ranges[] array in memmap_exclude_ranges() but it has not allocated the memory for it in crash_setup_memmap_entries(). As KASAN reports:

    BUG: KASAN: vmalloc-out-of-bounds in crash_setup_memmap_entries+0x17e/0x3a0
    Write of size 8 at addr ffffc90000426008 by task kexec/1187

    (gdb) list *crash_setup_memmap_entries+0x17e
    0xffffffff8107cafe is in crash_setup_memmap_entries (arch/x86/kernel/crash.c:322).
    317                             unsigned long long mend)
    318     {
    319             unsigned long start, end;
    320
    321             cmem->ranges[0].start = mstart;
    322             cmem->ranges[0].end = mend;
    323             cmem->nr_ranges = 1;
    324
    325             /* Exclude elf header region */
    326             start = image->arch.elf_load_addr;
    (gdb)

Make sure a single element of the ranges array is allocated along with the struct.

[ bp: Write a proper commit message. ]

Fixes: dd5f7260 ("kexec: support for kexec on panic using new system call")
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Young <dyoung@redhat.com>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/725fa3dc1da2737f0f6188a1a9701bead257ea9d.camel@gmx.de
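The fix plausibly amounts to sizing the allocation for one ranges[] element; a sketch assuming crash_setup_memmap_entries() allocates cmem with vzalloc(), where struct_size() is the idiomatic way to size a struct with a flexible array member:

    struct crash_mem *cmem;

    /* Allocate the struct together with one ranges[] element, so the
     * cmem->ranges[0] stores above stay within the allocation. */
    cmem = vzalloc(struct_size(cmem, ranges, 1));
    if (!cmem)
        return -ENOMEM;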
-
- 19 April 2021, 1 commit
-
-
Committed by Maciej W. Rozycki

Fix a regression caused by making the 486SX separately selectable in Kconfig: the HIGHMEM64G setting was not updated and has therefore become exposed as a user-selectable option for the M486SX configuration setting, unlike with the original M486 and all the other settings that choose non-PAE-enabled processors:

    High Memory Support
    > 1. off (NOHIGHMEM)
      2. 4GB (HIGHMEM4G)
      3. 64GB (HIGHMEM64G)
    choice[1-3?]:

With the fix in place the setting is now correctly removed:

    High Memory Support
    > 1. off (NOHIGHMEM)
      2. 4GB (HIGHMEM4G)
    choice[1-2?]:

[ bp: Massage commit message. ]

Fixes: 87d6021b ("x86/math-emu: Limit MATH_EMULATION to 486SX compatibles")
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: stable@vger.kernel.org # v5.5+
Link: https://lkml.kernel.org/r/alpine.DEB.2.21.2104141221340.44318@angie.orcam.me.uk
-
- 17 April 2021, 1 commit
-
-
Committed by Walter Wu

CONFIG_KASAN_STACK and CONFIG_KASAN_STACK_ENABLE both enable KASAN stack instrumentation, but only one config should be needed, so remove CONFIG_KASAN_STACK_ENABLE and make CONFIG_KASAN_STACK workable; see [1]. When KASAN stack instrumentation is enabled, gcc gets no prompt and a default value of y, while clang gets a prompt and a default value of n.

This patch fixes the following compilation warning:

    include/linux/kasan.h:333:30: warning: 'CONFIG_KASAN_STACK' is not defined, evaluates to 0 [-Wundef]

[akpm@linux-foundation.org: fix merge snafu]

Link: https://bugzilla.kernel.org/show_bug.cgi?id=210221 [1]
Link: https://lkml.kernel.org/r/20210226012531.29231-1-walter-zh.wu@mediatek.com
Fixes: d9b571c8 ("kasan: fix KASAN_STACK dependency for HW_TAGS")
Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
Suggested-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 April 2021, 3 commits
-
-
Committed by Jan Kiszka

Those are already provided by linux/io.h as stubs. The conflict remains invisible until someone pulls linux/io.h into memtype.c.

This fixes a build error when this file is used outside of the kernel tree.

[ bp: Massage commit message. ]

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/a9351615-7a0d-9d47-af65-d9e2fffe8192@siemens.com
-
Committed by Mike Rapoport

Commit a799c2bd ("x86/setup: Consolidate early memory reservations") moved reservation of the memory inaccessible by Sandy Bridge integrated graphics very early, and, as a result, on systems with such devices the first 1M was reserved by trim_snb_memory(), which prevented the allocation of the real-mode trampoline and made the boot hang very early.

Since the purpose of trim_snb_memory() is to prevent problematic pages from ever reaching the graphics device, it is safe to reserve these pages after memblock allocations are possible.

Move trim_snb_memory() later in boot so that it is called after reserve_real_mode(), and make the comments describing trim_snb_memory()'s operation more elaborate.

[ bp: Massage a bit. ]

Fixes: a799c2bd ("x86/setup: Consolidate early memory reservations")
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Hugh Dickins <hughd@google.com>
Link: https://lkml.kernel.org/r/f67d3e03-af90-f790-baf4-8d412fe055af@infradead.org
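A minimal sketch of the intended ordering inside setup_arch(); whether the two calls are direct neighbors or go through wrappers is an assumption:

    /* Allocate the real-mode trampoline from low memory first ... */
    reserve_real_mode();

    /* ... then let the Sandy Bridge graphics quirk reserve what remains
     * of the first 1M; memblock allocations are possible by this point. */
    trim_snb_memory();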
-
Committed by Reiji Watanabe

__vmx_handle_exit() uses vcpu->run->internal.ndata as an index for an array access. Since vcpu->run is (or can be) mapped into a user address space with write permission, the 'ndata' could be updated by the user process at any time (the user process can set it to a value outside the bounds of the array). So it is not safe for __vmx_handle_exit() to use the 'ndata' that way.

Fixes: 1aa561b1 ("kvm: x86: Add "last CPU" to some KVM_EXIT information")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Message-Id: <20210413154739.490299-1-reijiw@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
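The usual hardening pattern here is to build the index in a local variable and publish it to the shared page only once, at the end; the sketch below shows the pattern, with the specific data fields being illustrative rather than taken from the patch:

    u32 ndata = 0;  /* local copy: never re-read internal.ndata from the
                     * shared vcpu->run page, userspace can rewrite it */

    vcpu->run->internal.data[ndata++] = exit_reason;
    vcpu->run->internal.data[ndata++] = vcpu->arch.last_vmentry_cpu;
    vcpu->run->internal.ndata = ndata;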
-
- 13 April 2021, 2 commits
-
-
Committed by Rafael J. Wysocki

Commit 1a1c130a ("ACPI: tables: x86: Reserve memory occupied by ACPI tables") attempted to address an issue with reserving the memory occupied by ACPI tables, but it broke the initrd-based table override mechanism relied on by multiple users.

To restore the initrd-based ACPI table override functionality, move the acpi_boot_table_init() invocation in setup_arch() on x86 after the acpi_table_upgrade() one.

Fixes: 1a1c130a ("ACPI: tables: x86: Reserve memory occupied by ACPI tables")
Reported-by: Hans de Goede <hdegoede@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
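In sketch form, the corrected ordering in setup_arch() is simply the following (a hedged rendering of the two calls named above, with any code in between elided):

    acpi_table_upgrade();    /* apply initrd-based table overrides first */
    /* ... */
    acpi_boot_table_init();  /* then parse and reserve the resulting tables */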
-
Committed by Wei Yongjun

Fix the following sparse warning:

    arch/x86/kernel/cpu/sgx/virt.c:95:35: warning: symbol 'sgx_vepc_vm_ops' was not declared. Should it be static?

This symbol is not used outside of virt.c, so mark it static.

[ bp: Massage commit message. ]

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210412160023.193850-1-weiyongjun1@huawei.com
-
- 12 April 2021, 1 commit
-
-
Committed by Jan Kiszka

Fix:

    ../arch/x86/include/asm/proto.h:14:30: warning: ‘struct task_struct’ declared \
        inside parameter list will not be visible outside of this definition or declaration
     long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2);
                                  ^~~~~~~~~~~
    .../arch/x86/include/asm/proto.h:40:34: warning: ‘struct task_struct’ declared \
        inside parameter list will not be visible outside of this definition or declaration
     long do_arch_prctl_common(struct task_struct *task, int option,
                                      ^~~~~~~~~~~

if linux/sched.h hasn't been included previously. This fixes a build error when this header is used outside of the kernel tree.

[ bp: Massage commit message. ]

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/b76b4be3-cf66-f6b2-9a6c-3e7ef54f9845@web.de
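A sketch of the one-line fix, assuming the standard idiom of adding a forward declaration at the top of the header:

    /* arch/x86/include/asm/proto.h */
    struct task_struct;     /* forward declaration, so linux/sched.h
                             * need not be included first */

    long do_arch_prctl_64(struct task_struct *task, int option, unsigned long arg2);
    long do_arch_prctl_common(struct task_struct *task, int option, unsigned long arg2);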
-
- 10 April 2021, 2 commits
-
-
Committed by Aditya Srivastava

The opening comment mark '/**' is used for highlighting the beginning of kernel-doc comments. There are certain files in arch/x86/platform/intel-quark which follow this syntax, but whose content does not comply with kernel-doc. Such lines were probably not meant for kernel-doc parsing, but are parsed due to the presence of kernel-doc-like comment syntax (i.e., '/**'), which causes unexpected warnings from kernel-doc.

E.g., the presence of a kernel-doc-like comment in the header lines of arch/x86/platform/intel-quark/imr.c causes these warnings:

    warning: Function parameter or member 'fmt' not described in 'pr_fmt'
    warning: expecting prototype for c(). Prototype was for pr_fmt() instead

Similarly for arch/x86/platform/intel-quark/imr_selftest.c.

Provide a simple fix by replacing these occurrences with the general comment format, i.e. '/*', to prevent kernel-doc from parsing them.

Signed-off-by: Aditya Srivastava <yashsri421@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/r/20210330213022.28769-1-yashsri421@gmail.com
-
Committed by Marco Elver

On systems with KPTI enabled, we can currently observe the following warning:

    BUG: using smp_processor_id() in preemptible
    caller is invalidate_user_asid+0x13/0x50
    CPU: 6 PID: 1075 Comm: dmesg Not tainted 5.12.0-rc4-gda4a2b1a5479-kfence_1+ #1
    Hardware name: Hewlett-Packard HP Pro 3500 Series/2ABF, BIOS 8.11 10/24/2012
    Call Trace:
     dump_stack+0x7f/0xad
     check_preemption_disabled+0xc8/0xd0
     invalidate_user_asid+0x13/0x50
     flush_tlb_one_kernel+0x5/0x20
     kfence_protect+0x56/0x80
     ...

While it normally makes sense to require preemption to be off, so that the expected CPU's TLB is flushed and not another one's, in our case it really is best-effort (see the comments in kfence_protect_page()).

Avoid the warning by disabling preemption around flush_tlb_one_kernel().

Link: https://lore.kernel.org/lkml/YGIDBAboELGgMgXy@elver.google.com/
Link: https://lkml.kernel.org/r/20210330065737.652669-1-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Reported-by: Tomi Sarvela <tomi.p.sarvela@intel.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Jann Horn <jannh@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
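A minimal sketch of the fix inside kfence_protect_page() on x86, assuming the flush target address is already computed:

    /* Flushing is best-effort here (see the kfence_protect_page()
     * comments), so just keep the smp_processor_id() checks happy. */
    preempt_disable();
    flush_tlb_one_kernel(addr);
    preempt_enable();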
-
- 09 April 2021, 1 commit
-
-
Committed by Thomas Tai

Commit 334872a0 ("x86/traps: Attempt to fixup exceptions in vDSO before signaling") added return statements which bypass calling cond_local_irq_disable(). According to ca4c6a98 ("x86/traps: Make interrupt enable/disable symmetric in C code"), cond_local_irq_disable() is needed because the asm return code no longer disables interrupts. Follow the existing code as an example and use "goto exit" instead of a "return" statement.

[ bp: Massage commit message. ]

Fixes: 334872a0 ("x86/traps: Attempt to fixup exceptions in vDSO before signaling")
Signed-off-by: Thomas Tai <thomas.tai@oracle.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Link: https://lkml.kernel.org/r/1617902914-83245-1-git-send-email-thomas.tai@oracle.com
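Schematically, the pattern in the affected exception handlers becomes the following; the handler body shown is a hedged sketch, not the verbatim diff:

    if (fixup_vdso_exception(regs, trapnr, error_code, 0))
        goto exit;  /* not "return": the exit path must re-disable IRQs */
    /* ... */
    exit:
        cond_local_irq_disable(regs);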
-
- 08 April 2021, 7 commits
-
-
Committed by Jarkko Sakkinen

The commit in Fixes: changed the SGX EPC page sanitization to end up in sgx_free_epc_page(), which puts clean and sanitized pages on the free list. This was done because it is best to keep the logic that assigns available-for-use EPC pages to the correct NUMA lists in a single location.

sgx_nr_free_pages is also incremented by sgx_free_epc_page(), but the pages which are being added there per EPC section do not belong to the free list yet, because they haven't been sanitized yet - they land on the dirty list first, and the sanitization happens later when ksgxd starts massaging them.

So remove that addition there and have sgx_free_epc_page() do it solely.

[ bp: Sanitize commit message too. ]

Fixes: 51ab30eb ("x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list")
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210408092924.7032-1-jarkko@kernel.org
-
Committed by Piotr Krysiuk

The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes.

But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited.

And so the optimizer can expand blocks of machine code in some cases. This can confuse the optimizer logic, which assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements.

To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting.

Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Piotr Krysiuk

The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes.

But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited.

And so the optimizer can expand blocks of machine code in some cases. This can confuse the optimizer logic, which assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements.

To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting.

Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
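Both of these commits add the same kind of assertion in do_jit() during the image-population pass; the sketch below follows the shape of the x86-64 JIT loop and should be read as an approximation, not the exact diff:

    if (image) {
        /*
         * Assert that a fixed point was really reached: the emitted
         * instruction must not write past the allocation, and it must
         * end exactly where the final optimization pass (addrs[i])
         * said it would, otherwise the displacements are stale.
         */
        if (unlikely(proglen + ilen > oldproglen ||
                     proglen + ilen != addrs[i])) {
            pr_err("bpf_jit: fatal error\n");
            return -EFAULT;
        }
        memcpy(image + proglen, temp, ilen);
    }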
-
Committed by Paolo Bonzini

Right now, if a call to kvm_tdp_mmu_zap_sp() returns false, the caller will skip the TLB flush, which is wrong. There are two ways to fix it:

- since kvm_tdp_mmu_zap_sp() will not yield and therefore will not flush the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp() to use "flush |= ..."

- or we can chain the flush argument through kvm_tdp_mmu_zap_sp() down to __kvm_tdp_mmu_zap_gfn_range. Note that kvm_tdp_mmu_zap_sp() will neither yield nor flush, so flush would never go from true to false.

This patch does the former to simplify application to stable kernels, and to make it further clearer that kvm_tdp_mmu_zap_sp() will not flush.

Cc: seanjc@google.com
Fixes: 048f4980 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping")
Cc: <stable@vger.kernel.org> # 5.10.x: 048f4980: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping
Cc: <stable@vger.kernel.org> # 5.10.x: 33a31641: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages
Cc: <stable@vger.kernel.org>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
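A one-line sketch of the chosen fix at the call site, plausibly in the NX recovery loop (the surrounding condition is an assumption):

    if (is_tdp_mmu_page(sp))
        flush |= kvm_tdp_mmu_zap_sp(kvm, sp);  /* was: flush = kvm_tdp_mmu_zap_sp(...) */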
-
Committed by Zhao Xuehui

The functions msr_read() and msr_write() are not used outside of msr.c, so make them static.

[ bp: Massage commit message. ]

Signed-off-by: Zhao Xuehui <zhaoxuehui1@huawei.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210408095218.152264-1-zhaoxuehui1@huawei.com
-
Committed by Yang Li

Running

    $ make CC=clang clang-analyzer

(needs clang-tidy installed on the system too) on an x86_64 defconfig triggers:

    arch/x86/kernel/cpu/cacheinfo.c:880:24: warning: Value stored to 'this_cpu_ci' \
        during its initialization is never read [clang-analyzer-deadcode.DeadStores]
            struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
                                  ^
    arch/x86/kernel/cpu/cacheinfo.c:880:24: note: Value stored to 'this_cpu_ci' \
        during its initialization is never read

So simply remove this unneeded dead-store initialization. As compilers will detect this unneeded assignment and optimize it away anyway, the resulting object code is identical before and after this change.

No functional change. No change to object code.

[ bp: Massage commit message. ]

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lkml.kernel.org/r/1617177624-24670-1-git-send-email-yang.lee@linux.alibaba.com
-
Committed by Vitaly Kuznetsov

Commit 8cdddd18 ("ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()") tried to fix CPU0 hotplug breakage by copying the wakeup_cpu0() + start_cpu0() logic from hlt_play_dead()/mwait_play_dead() into acpi_idle_play_dead(). The problem is that these functions are not exported to modules, so the build fails when CONFIG_ACPI_PROCESSOR=m.

The issue could have been fixed by exporting both wakeup_cpu0() and start_cpu0() (the latter from assembly), but it seems better to put the whole pattern into a new function and export that instead.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 8cdddd18 ("ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()")
Cc: <stable@vger.kernel.org> # 5.10+
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
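The exported helper plausibly reads as below; the exact name and placement are assumptions consistent with the pattern quoted in the earlier commit message:

    /* Wake up CPU0 if an NMI asked for it; callable from modules. */
    void cond_wakeup_cpu0(void)
    {
        if (smp_processor_id() == 0 && wakeup_cpu0())
            start_cpu0();
    }
    EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);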
-
- 07 April 2021, 2 commits
-
-
Committed by Sean Christopherson

Extract sgx_set_attribute() out of sgx_ioc_enclave_provision() and export it as a symbol for KVM to use.

The provisioning key is sensitive. The SGX driver only allows creating an enclave which can access the provisioning key when the enclave creator has permission to open /dev/sgx_provision. This should apply to a VM as well: the provisioning key is platform-specific, so an unrestricted VM could also potentially compromise it.

Move the provisioning device creation out of sgx_drv_init() to sgx_init() as a preparation for adding SGX virtualization support, so that even if the SGX driver is not enabled due to flexible launch control not being available, SGX virtualization can still be enabled and used to restrict a VM's capability of accessing the provisioning key.

[ bp: Massage commit message. ]

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/0f4d044d621561f26d5f4ef73e8dc6cd18cc7e79.1616136308.git.kai.huang@intel.com
-
Committed by Sean Christopherson

The host kernel must intercept ECREATE to impose policies on guests, and intercept EINIT to be able to write the guest's virtual SGX_LEPUBKEYHASH MSR values to hardware before running the guest's EINIT, so it can run correctly according to hardware behavior.

Provide wrappers around __ecreate() and __einit() to hide the ugliness of overloading the ENCLS return value to encode multiple error formats in a single int. KVM will trap-and-execute ECREATE and EINIT as part of SGX virtualization, and reflect the ENCLS execution result to the guest by setting up the guest's GPRs, or, on an exception, injecting the correct fault based on the return value of __ecreate() and __einit().

Use host userspace addresses (provided by KVM based on the guest physical address of the ENCLS parameters) to execute ENCLS/EINIT when possible. Accesses to both EPC and memory originating from ENCLS are subject to segmentation and paging mechanisms. It's also possible to generate kernel mappings for the ENCLS parameters by resolving the PFN, but using __uaccess_xx() is simpler.

[ bp: Return early if the __user memory accesses fail, use cpu_feature_enabled(). ]

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lkml.kernel.org/r/20e09daf559aa5e9e680a0b4b5fba940f1bad86e.1616136308.git.kai.huang@intel.com
-
- 06 April 2021, 8 commits
-
-
Committed by Kai Huang

Add a helper to update the SGX_LEPUBKEYHASHn MSRs. SGX virtualization also needs to update those MSRs based on the guest's "virtual" SGX_LEPUBKEYHASHn before executing EINIT from the guest.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lkml.kernel.org/r/dfb7cd39d4dd62ea27703b64afdd8bccb579f623.1616136308.git.kai.huang@intel.com
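A plausible shape for such a helper, writing the four hash MSRs in sequence; this is a hedged sketch where the MSR base name matches the architectural MSRs but the function name is an assumption:

    void sgx_update_lepubkeyhash(u64 *lepubkeyhash)
    {
        int i;

        WARN_ON_ONCE(preemptible());

        for (i = 0; i < 4; i++)
            wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, lepubkeyhash[i]);
    }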
-
Committed by Sean Christopherson

Add a helper to extract the fault indicator from an encoded ENCLS return value. SGX virtualization will also need to detect ENCLS faults.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/c1f955898110de2f669da536fc6cf62e003dff88.1616136308.git.kai.huang@intel.com
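Given the existing ENCLS_FAULT_FLAG encoding, the helper is most likely a trivial predicate along these lines (hedged sketch):

    static inline bool encls_faulted(int ret)
    {
        /* ENCLS_FAULT_FLAG is set in the return value on an exception. */
        return ret & ENCLS_FAULT_FLAG;
    }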
-
Committed by Sean Christopherson

Define the ENCLS leafs that are available with SGX2, also referred to as Enclave Dynamic Memory Management (EDMM). The leafs will be used by KVM to conditionally expose SGX2 capabilities to guests.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/5f0970c251ebcc6d5add132f0d750cc753b7060f.1616136308.git.kai.huang@intel.com
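The SGX2/EDMM leaf numbers come from the SDM's ENCLS function table; a sketch of the additions to the leaf enum (the values are architectural, the enum name is an assumption):

    enum sgx_encls_function {
        /* ... SGX1 leafs ... */
        EAUG    = 0x0D,  /* add a page to an initialized enclave */
        EMODPR  = 0x0E,  /* restrict page permissions */
        EMODT   = 0x0F,  /* change page type */
    };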
-
Committed by Sean Christopherson

Move the ENCLS leaf definitions to sgx.h so that they can be used by KVM.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/2e6cd7c5c1ced620cfcd292c3c6c382827fde6b2.1616136308.git.kai.huang@intel.com
-
Committed by Sean Christopherson

Expose the SGX architectural structures, as KVM will use many of the architectural constants and structs to virtualize SGX. Name the new header file asm/sgx.h, rather than asm/sgx_arch.h, to have a single header providing SGX facilities to share with other kernel components. Also update MAINTAINERS to include asm/sgx.h.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/6bf47acd91ab4d709e66ad1692c7803e4c9063a0.1616136308.git.kai.huang@intel.com
-
Committed by Kai Huang

Modify sgx_init() to always try to initialize the virtual EPC driver, even if the SGX driver is disabled. The SGX driver might be disabled if SGX Launch Control is in locked mode, or not supported in the hardware at all. This allows (non-Linux) guests that support non-LC configurations to use SGX.

[ bp: De-silli-fy the test. ]

Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/d35d17a02bbf8feef83a536cec8b43746d4ea557.1616136308.git.kai.huang@intel.com
-
Committed by Sean Christopherson

The kernel currently disables all SGX support if the hardware does not support launch control. Make it more permissive, to allow SGX virtualization on systems without Launch Control support. This will allow KVM to expose SGX to guests that have less strict requirements on the availability of flexible launch control.

Improve the error message to distinguish between three cases. There are two cases where SGX support is completely disabled:

1) SGX has been disabled completely by the BIOS.

2) SGX LC is locked by the BIOS. Bare-metal support is disabled because of LC unavailability. SGX virtualization is unavailable (because of Kconfig).

And one where it is partially available:

3) SGX LC is locked by the BIOS. Bare-metal support is disabled because of LC unavailability. SGX virtualization is supported.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lkml.kernel.org/r/b3329777076509b3b601550da288c8f3c406a865.1616136308.git.kai.huang@intel.com
-
Committed by Sean Christopherson

Add a misc device /dev/sgx_vepc to allow userspace to allocate "raw" Enclave Page Cache (EPC) without an associated enclave. The intended and only known use case for raw EPC allocation is to expose EPC to a KVM guest, hence the 'vepc' moniker, the virt.{c,h} files and the X86_SGX_KVM Kconfig.

The SGX driver uses the misc device /dev/sgx_enclave to support userspace in creating an enclave. Each file descriptor returned from opening /dev/sgx_enclave represents an enclave. Unlike the SGX driver, KVM doesn't control how the guest uses the EPC, therefore EPC allocated to a KVM guest is not associated with an enclave, and /dev/sgx_enclave is not suitable for allocating EPC for a KVM guest.

Having separate device nodes for the SGX driver and KVM virtual EPC also allows separate permission control for running host SGX enclaves and KVM SGX guests.

To use /dev/sgx_vepc to allocate a virtual EPC instance of a particular size, the hypervisor opens /dev/sgx_vepc and uses mmap() with the intended size to get an address range of virtual EPC. Then it may use the address range to create one KVM memory slot as virtual EPC for a guest.

Implement the "raw" EPC allocation in the x86 core-SGX subsystem via /dev/sgx_vepc rather than in KVM. Doing so has two major advantages:

- It does not require changes to KVM's uAPI, e.g. EPC gets handled as just another memory backend for guests.

- EPC management is wholly contained in the SGX subsystem, e.g. SGX does not have to export any symbols, changes to reclaim flows don't need to be routed through KVM, SGX's dirty laundry doesn't have to get aired out for the world to see, and so on and so forth.

The virtual EPC pages allocated to guests are currently not reclaimable. Reclaiming an EPC page used by an enclave requires a special reclaim mechanism separate from normal page reclaim, and that mechanism is not supported for virtual EPC pages. Due to the complications of handling reclaim conflicts between guest and host, reclaiming virtual EPC pages is significantly more complex than basic support for SGX virtualization.

[ bp:
  - Massage commit message and comments
  - use cpu_feature_enabled()
  - vertically align struct members init
  - massage Virtual EPC clarification text
  - move Kconfig prompt to Virtualization ]

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lkml.kernel.org/r/0c38ced8c8e5a69872db4d6a1c0dabd01e07cad7.1616136308.git.kai.huang@intel.com
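The userspace flow described above amounts to just a few calls; a hedged sketch from the hypervisor side, with error handling elided and the size being illustrative:

    #include <fcntl.h>
    #include <sys/mman.h>

    int fd = open("/dev/sgx_vepc", O_RDWR);
    size_t epc_size = 64 * 1024 * 1024;  /* illustrative: 64M of virtual EPC */
    void *epc = mmap(NULL, epc_size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_SHARED, fd, 0);
    /* [epc, epc + epc_size) can now back a KVM memory slot holding the
     * guest's EPC section. */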
-
- 02 April 2021, 1 commit
-
-
Committed by Arnd Bergmann

gcc-11 points out a mismatch between the declaration and the definition of poly1305_core_setkey():

    lib/crypto/poly1305-donna32.c:13:67: error: argument 2 of type ‘const u8[16]’ {aka ‘const unsigned char[16]’} with mismatched bound [-Werror=array-parameter=]
       13 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
          |                                                          ~~~~~~~~~^~~~~~~~~~~
    In file included from lib/crypto/poly1305-donna32.c:11:
    include/crypto/internal/poly1305.h:21:68: note: previously declared as ‘const u8 *’ {aka ‘const unsigned char *’}
       21 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 *raw_key);

This is harmless in principle, as the calling conventions are the same, but the more specific prototype allows better type checking in the caller.

Change the declaration to match the actual function definition. The poly1305_simd_init() is a bit suspicious here, as it previously had a 32-byte argument type, but it looks like it needs to take the 16-byte POLY1305_BLOCK_SIZE array instead.

Fixes: 1c08a104 ("crypto: poly1305 - add new 32 and 64-bit generic versions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
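The corrected declaration then carries the array bound explicitly, plausibly spelled with the existing constant (a sketch; whether the bound is written as 16 or as POLY1305_BLOCK_SIZE is an assumption):

    /* include/crypto/internal/poly1305.h */
    void poly1305_core_setkey(struct poly1305_core_key *key,
                              const u8 raw_key[POLY1305_BLOCK_SIZE]);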
-
- 01 April 2021, 6 commits
-
-
Committed by Vitaly Kuznetsov

Commit 496121c0 ("ACPI: processor: idle: Allow probing on platforms with one ACPI C-state") broke CPU0 hotplug on certain systems, e.g. I'm observing the following on AWS Nitro (e.g. r5b.xlarge, but other instance types are affected as well):

    # echo 0 > /sys/devices/system/cpu/cpu0/online
    # echo 1 > /sys/devices/system/cpu/cpu0/online
    <10 seconds delay>
    -bash: echo: write error: Input/output error

In fact, the above-mentioned commit only revealed the problem and did not introduce it. On x86, an NMI is used to wake up a CPU, and the hlt_play_dead()/mwait_play_dead() loops are prepared to handle it:

    /*
     * If NMI wants to wake up CPU0, start CPU0.
     */
    if (wakeup_cpu0())
        start_cpu0();

cpuidle_play_dead() -> acpi_idle_play_dead() (which is now being called on systems where it wasn't called before the above-mentioned commit) serves the same purpose, but it doesn't have a path for CPU0. What happens now on wakeup is:

- an NMI is sent to CPU0
- wakeup_cpu0_nmi() works as expected
- we get back to the while (1) loop in acpi_idle_play_dead()
- safe_halt() puts CPU0 to sleep again

The straightforward/minimal fix is to add the special handling for CPU0 on x86, and that is what this patch does.

Fixes: 496121c0 ("ACPI: processor: idle: Allow probing on platforms with one ACPI C-state")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: 5.10+ <stable@vger.kernel.org> # 5.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
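A hedged sketch of how acpi_idle_play_dead() picks up the same CPU0 path as the other play_dead loops (the surrounding loop body is abbreviated):

    while (1) {
        /* ... enter the C-state, e.g. via safe_halt() ... */

        /* If NMI wants to wake up CPU0, start CPU0. */
        if (wakeup_cpu0())
            start_cpu0();
    }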
-
Committed by Vitaly Kuznetsov

When the guest time is reset with KVM_SET_CLOCK(0), it is possible for 'hv_clock->system_time' to become a small negative number. This happens because in KVM_SET_CLOCK handling we set 'kvm->arch.kvmclock_offset' based on get_kvmclock_ns(kvm), but when KVM_REQ_CLOCK_UPDATE is handled, kvm_guest_time_update() does (in the masterclock-in-use case):

    hv_clock.system_time = ka->master_kernel_ns + v->kvm->arch.kvmclock_offset;

And 'master_kernel_ns' represents the last time the masterclock got updated, which can precede the KVM_SET_CLOCK() call. Normally this is not a problem, as the difference is very small, e.g. I'm observing hv_clock.system_time = -70 ns. The issue comes from the fact that 'hv_clock.system_time' is stored as unsigned, so 'system_time / 100' in compute_tsc_page_parameters() becomes a very big number.

Use 'master_kernel_ns' instead of get_kvmclock_ns() when the masterclock is in use, and get_kvmclock_base_ns() when it's not, to prevent 'system_time' from going negative.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210331124130.337992-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

pvclock_gtod_sync_lock can be taken with interrupts disabled if the preempt notifier calls get_kvmclock_ns() to update the Xen runstate information:

    spin_lock include/linux/spinlock.h:354 [inline]
    get_kvmclock_ns+0x25/0x390 arch/x86/kvm/x86.c:2587
    kvm_xen_update_runstate+0x3d/0x2c0 arch/x86/kvm/xen.c:69
    kvm_xen_update_runstate_guest+0x74/0x320 arch/x86/kvm/xen.c:100
    kvm_xen_runstate_set_preempted arch/x86/kvm/xen.h:96 [inline]
    kvm_arch_vcpu_put+0x2d8/0x5a0 arch/x86/kvm/x86.c:4062

So change the users of the spinlock to spin_lock_irqsave() and spin_unlock_irqrestore().

Reported-by: syzbot+b282b65c2c68492df769@syzkaller.appspotmail.com
Fixes: 30b5c851 ("KVM: x86/xen: Add support for vCPU runstate information")
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
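The conversion at each user follows the standard irqsave pattern; a sketch where the code between lock and unlock stands in for the existing critical sections:

    unsigned long flags;

    spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
    /* ... read/update the pvclock data ... */
    spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);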
-
Committed by Paolo Bonzini

There is no need to include the changes to vcpu->requests in the pvclock_gtod_sync_lock critical section. The changes to the shared data structures (in pvclock_update_vm_gtod_copy()) already occur under the lock.

Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Fixing nested_vmcb_check_save() to avoid all TOC/TOU races is a bit harder in released kernels, so do the bare minimum by avoiding that EFER.SVME is cleared. This is problematic because svm_set_efer() frees the data structures for nested virtualization if EFER.SVME is cleared.

Also check that EFER.SVME remains set after a nested vmexit; clearing it could happen if the bit is zero in the save area that is passed to KVM_SET_NESTED_STATE (the save area of the nested state corresponds to the nested hypervisor's state and is restored on the next nested vmexit).

Cc: stable@vger.kernel.org
Fixes: 2fcf4876 ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini

Avoid races between check and use of the nested VMCB controls. This, for example, ensures that the VMRUN intercept is always reflected to the nested hypervisor, instead of being processed by the host. Without this patch, it is possible to end up with svm->nested.hsave pointing to the MSR permission bitmap for nested guests.

This bug is CVE-2021-29657.

Reported-by: Felix Wilhelm <fwilhelm@google.com>
Cc: stable@vger.kernel.org
Fixes: 2fcf4876 ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 31 March 2021, 1 commit
-
-
Committed by Sean Christopherson

Prevent the TDP MMU from yielding when zapping a gfn range during NX page recovery. If a flush is pending from a previous invocation of the zapping helper, either in the TDP MMU or the legacy MMU, but the TDP MMU has not accumulated a flush for the current invocation, then yielding will release mmu_lock with stale TLB entries.

That being said, this isn't technically a bug fix in the current code, as the TDP MMU will never yield in this case. tdp_mmu_iter_cond_resched() will yield if and only if it has made forward progress, as defined by the current gfn vs. the last yielded (or starting) gfn. Because zapping a single shadow page is guaranteed to (a) find that page and (b) step sideways at the level of the shadow page, the TDP iter will break its loop before getting a chance to yield.

But that is all very, very subtle, and will break at the slightest sneeze, e.g. zapping while holding mmu_lock for read would break, as the TDP MMU wouldn't be guaranteed to see the present shadow page, and thus could step sideways at a lower level.

Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210325200119.1359384-4-seanjc@google.com>
[Add lockdep assertion. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-