- 17 Feb 2021, 3 commits
-
-
Committed by Michal Hocko
Preemption mode selection is currently hardcoded via Kconfig choices. Introduce a dedicated option to tune the preemption flavour at boot time. This will only be available on architectures that support static calls efficiently, so that the feature does not bring additional overhead that might be prohibitive or undesirable. CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE, CONFIG_GENERIC_ENTRY, and __preempt_schedule_function() / __preempt_schedule_notrace_function()).

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
[peterz: relax requirement to HAVE_STATIC_CALL]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org
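A minimal sketch of the underlying idea, assuming the kernel's static-call API (DEFINE_STATIC_CALL, static_call_update, __setup); the exact wiring in the real series differs:

    /* Sketch only: pick the preemption flavour from a "preempt=" boot option
     * by repointing a static call instead of deciding at compile time. */
    DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);

    static int __init setup_preempt_mode(char *str)
    {
            if (!strcmp(str, "none"))       /* no forced preemption */
                    static_call_update(preempt_schedule, NULL);
            else if (!strcmp(str, "full"))  /* fully preemptible kernel */
                    static_call_update(preempt_schedule, __preempt_schedule_func);
            return 1;
    }
    __setup("preempt=", setup_preempt_mode);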
-
Committed by Peter Zijlstra
Provide a stub function that returns 0 and wire up the static call site patching to replace the CALL with a single 5-byte instruction that clears %RAX, the return value register. The function can be cast to any function pointer type that has a single %RAX return (including pointers). Also provide a version that returns an int, for convenience. We are clearing the entire %RAX register in any case, whether the return value is 32 or 64 bits, since %RAX is always a scratch register anyway.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.org
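A self-contained user-space illustration of why a single "return 0" stub can stand in for many pointer types; it leans on the SysV x86-64 ABI (return value in %RAX) exactly as the commit does, and the symbol name is borrowed from the text:

    #include <stdio.h>

    long __static_call_return0(void) { return 0; }   /* always clears %RAX */

    typedef int (*int_fn)(int, int);
    typedef void *(*ptr_fn)(void);

    int main(void)
    {
            int_fn f = (int_fn)__static_call_return0;   /* int-returning view */
            ptr_fn g = (ptr_fn)__static_call_return0;   /* pointer-returning view */
            printf("%d %p\n", f(1, 2), g());            /* prints: 0 (nil) */
            return 0;
    }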
-
Committed by Dietmar Eggemann
The only remaining use of MAX_USER_PRIO (and USER_PRIO) is the SCALE_PRIO() definition in the PowerPC Cell architecture's Synergistic Processor Unit (SPU) scheduler. TASK_USER_PRIO isn't used anymore.

Commit fe443ef2 ("[POWERPC] spusched: Dynamic timeslicing for SCHED_OTHER") copied SCALE_PRIO() from the task scheduler in v2.6.23. Commit a4ec24b4 ("sched: tidy up SCHED_RR") removed it from the task scheduler in v2.6.24. Commit 3ee237dd ("sched/prio: Add 3 macros of MAX_NICE, MIN_NICE and NICE_WIDTH in prio.h") introduced NICE_WIDTH much later.

With:

    MAX_USER_PRIO = USER_PRIO(MAX_PRIO)
                  = MAX_PRIO - MAX_RT_PRIO

    MAX_PRIO      = MAX_RT_PRIO + NICE_WIDTH

    MAX_USER_PRIO = MAX_RT_PRIO + NICE_WIDTH - MAX_RT_PRIO
                  = NICE_WIDTH

MAX_USER_PRIO can be replaced by NICE_WIDTH, making it possible to remove all the {*_}USER_PRIO defines.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210128131040.296856-3-dietmar.eggemann@arm.com
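The identity is easy to verify in isolation; this sketch plugs in the values the kernel used at the time (MAX_RT_PRIO = 100, NICE_WIDTH = 40; treat the numbers as assumptions):

    #include <assert.h>

    #define MAX_RT_PRIO   100                        /* assumed kernel value */
    #define NICE_WIDTH    40                         /* assumed kernel value */
    #define MAX_PRIO      (MAX_RT_PRIO + NICE_WIDTH)
    #define USER_PRIO(p)  ((p) - MAX_RT_PRIO)
    #define MAX_USER_PRIO (USER_PRIO(MAX_PRIO))

    int main(void)
    {
            assert(MAX_USER_PRIO == NICE_WIDTH);     /* 40 == 40: the defines can go */
            return 0;
    }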
-
- 15 Feb 2021, 1 commit
-
-
Committed by Masahiro Yamada
These are NOT exported to userspace. The headers listed in arch/sparc/include/uapi/asm/Kbuild are the ones that are exported.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
- 14 Feb 2021, 2 commits
-
-
Committed by Randy Dunlap
Fix a build error for the undefined symbol 'TI_PRE_COUNT' by adding it to asm-offsets.c:

    h8300-linux-ld: arch/h8300/kernel/entry.o: in function `resume_kernel':
    (.text+0x29a): undefined reference to `TI_PRE_COUNT'

Link: https://lkml.kernel.org/r/20210212021650.22740-1-rdunlap@infradead.org
Fixes: df2078b8 ("h8300: Low level entry")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
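The usual shape of such a fix, sketched with the kbuild OFFSET() helper; the thread_info field name is an assumption based on the symbol:

    /* arch/h8300/kernel/asm-offsets.c, sketch: export the offset to entry.S */
    #include <linux/kbuild.h>

    /* TI_PRE_COUNT = offsetof(struct thread_info, preempt_count) */
    OFFSET(TI_PRE_COUNT, thread_info, preempt_count);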
-
Committed by Mike Rapoport
Recent changes that obsoleted DISCONTIGMEM on m68k switched the MMU variant to use the generic definitions of __pfn_to_phys() and __phys_to_pfn(), but missed the !MMU variant, which caused a build failure:

    drivers/media/common/videobuf2/videobuf2-dma-contig.c: In function 'vb2_dc_get_userptr':
    drivers/media/common/videobuf2/videobuf2-dma-contig.c:509:5: error: implicit declaration of function '__pfn_to_phys' [-Werror=implicit-function-declaration]
      509 |     __pfn_to_phys(nums[0]), size, buf->dma_dir, 0);
          |     ^~~~~~~~~~~~~
    cc1: some warnings being treated as errors

Enable __pfn_to_phys() and __phys_to_pfn() on !MMU builds as well.

Link: https://lkml.kernel.org/r/20210211232202.GS299309@linux.ibm.com
Fixes: 4bfc848e ("m68k/mm: enable use of generic memory_model.h for !DISCONTIGMEM")
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
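For context, the generic conversions being enabled are one-liners built on the existing PFN helpers (shown as in asm-generic/memory_model.h; verify against the tree you are on):

    /* pfn <-> physical address, assuming a flat memory model: */
    #define __pfn_to_phys(pfn)      PFN_PHYS(pfn)     /* pfn << PAGE_SHIFT */
    #define __phys_to_pfn(paddr)    PHYS_PFN(paddr)   /* paddr >> PAGE_SHIFT */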
-
- 13 Feb 2021, 1 commit
-
-
Committed by Catalin Marinas
The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns -EIO.

A newly created (PROT_MTE) mapping points to the zero page, which had its tags zeroed during cpu_enable_mte(). If there were no prior writes to this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero page does not have the PG_mte_tagged flag set.

Set PG_mte_tagged on the zero page when its tags are cleared during boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on !PROT_MTE mappings pointing to the zero page, change the __access_remote_tags() check to (vm_flags & VM_MTE) instead of PG_mte_tagged.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 34bfeea4 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Cc: <stable@vger.kernel.org> # 5.10.x
Cc: Will Deacon <will@kernel.org>
Reported-by: Luis Machado <luis.machado@linaro.org>
Tested-by: Luis Machado <luis.machado@linaro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210210180316.23654-1-catalin.marinas@arm.com
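A hedged sketch of the boot-time half of the fix; the helper names follow the arm64 MTE code, and placement inside cpu_enable_mte() is per the text above:

    /* Clear the zero page's tags once, then record that it is tagged, so
     * PTRACE_PEEKMTETAGS on a clean PROT_MTE mapping reads zero tags: */
    mte_clear_page_tags(lm_alias(empty_zero_page));
    set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags);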
-
- 11 Feb 2021, 3 commits
-
-
Committed by Julien Grall
After commit 3499ba81 ("xen: Fix event channel callback via INTX/GSI"), xenbus_probe() will be called too early on Arm. This will result in a guest hang during boot. If the hang weren't there, we would have ended up calling xenbus_probe() twice (the second time is in xenbus_probe_initcall()).

We don't need to initialize xenbus_probe() early for Arm guests. Therefore, the call in xen_guest_init() is now removed. After this change, there is no external caller of xenbus_probe() left, so the function is made static. Interestingly, there were two prototypes for it.

Cc: stable@vger.kernel.org
Fixes: 3499ba81 ("xen: Fix event channel callback via INTX/GSI")
Reported-by: Ian Jackson <iwj@xenproject.org>
Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/20210210170654.5377-1-julien@xen.org
Signed-off-by: Juergen Gross <jgross@suse.com>
-
Committed by Palmer Dabbelt
VSC8541 PHYs need a special reset sequence, which the driver doesn't currently support. As a result, enabling the reset via GPIO essentially guarantees that the device won't work correctly. We've been relying on bootloaders to reset the device for years; with this revert we'll go back to doing so until we can sort out how to get the reset sequence into the kernel.

This reverts commit a0fa9d72.

Fixes: a0fa9d72 ("dts: phy: add GPIO number and active state used for phy reset")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Thomas Gleixner
Invoking x86_init.irqs.create_pci_msi_domain() before x86_init.pci.arch_init() breaks Xen PV. The XEN_PV-specific pci.arch_init() function overrides the default create_pci_msi_domain(), which is obviously too late. As a consequence, the Xen PV PCI/MSI allocation goes through the native path, which runs out of vectors and causes malfunction. Invoke it after x86_init.pci.arch_init().

Fixes: 6b15ffa0 ("x86/irq: Initialize PCI/MSI domain at PCI init time")
Reported-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/87pn18djte.fsf@nanos.tec.linutronix.de
-
- 09 Feb 2021, 2 commits
-
-
Committed by Borislav Petkov
Commit 20bf2b37 ("x86/build: Disable CET instrumentation in the kernel") disabled the CET instrumentation that Ubuntu's gcc 9 and 10 add by default, but did so only for 64-bit builds; a 32-bit target would still fail to build. So disable CET for all x86 builds.

Fixes: 20bf2b37 ("x86/build: Disable CET instrumentation in the kernel")
Reported-by: AC <achirvasub@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: AC <achirvasub@gmail.com>
Link: https://lkml.kernel.org/r/YCCIgMHkzh/xT4ex@arch-chirva.localdomain
-
Committed by Jarkko Sakkinen
This has been shown in tests:

    [ +0.000008] WARNING: CPU: 3 PID: 7620 at kernel/rcu/srcutree.c:374 cleanup_srcu_struct+0xed/0x100

This is essentially a use-after-free, although SRCU notices it as an SRCU cleanup in an invalid context.

== Background ==

SGX has a data structure (struct sgx_encl_mm) which keeps per-mm SGX metadata. This is separate from struct sgx_encl because, in theory, an enclave can be mapped from more than one mm. sgx_encl_mm includes a pointer back to the sgx_encl. This means that sgx_encl must have a longer lifetime than all of the sgx_encl_mm's that point to it. That's usually the case: sgx_encl_mm is freed only after the mmu_notifier is unregistered in sgx_release().

However, there's a race. If the process is exiting, sgx_mmu_notifier_release() can be called in parallel with sgx_release() instead of being called *by* it. The mmu_notifier path keeps encl_mm alive past when sgx_encl can be freed. This inverts the lifetime rules and means that sgx_mmu_notifier_release() can access a freed sgx_encl.

== Fix ==

Increase encl->refcount when encl_mm->encl is established. Release this reference when encl_mm is freed. This ensures that encl outlives encl_mm.

[ bp: Massage commit message. ]

Fixes: 1728ab54 ("x86/sgx: Add a page reclaimer")
Reported-by: Haitao Huang <haitao.huang@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20210207221401.29933-1-jarkko@kernel.org
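A hedged sketch of the lifetime rule the fix establishes; the SGX driver keeps the enclave's refcount in a kref, and sgx_encl_release is the assumed release callback:

    /* Binding an encl_mm to an enclave pins the enclave: */
    encl_mm->encl = encl;
    kref_get(&encl->refcount);

    /* Tearing the encl_mm down, whether from sgx_release() or the
     * mmu_notifier path, drops the pin, so encl always outlives encl_mm: */
    kref_put(&encl_mm->encl->refcount, sgx_encl_release);
    kfree(encl_mm);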
-
- 08 Feb 2021, 1 commit
-
-
Committed by Rafael J. Wysocki
If the maximum performance level taken for computing the arch_max_freq_ratio value used in the x86 scale-invariance code is higher than the one corresponding to the cpuinfo.max_freq value coming from the acpi_cpufreq driver, the scale-invariant utilization falls below 100% even if the CPU runs at cpuinfo.max_freq or slightly faster. This causes the schedutil governor to select a frequency below cpuinfo.max_freq, which corresponds to a frequency table entry below the maximum performance level necessary to get into the "boost" range of CPU frequencies, effectively preventing "boost" frequencies from being used in some workloads.

While this issue is related to scale-invariance, it may be amplified by commit db865272 ("cpufreq: Avoid configuring old governors as default with intel_pstate") from the 5.10 development cycle, which made it extremely easy to default to schedutil even if the preferred driver is acpi_cpufreq, as long as intel_pstate is built too, because the mere presence of the latter effectively removes the ondemand governor from the defaults. Distro kernels are likely to include both intel_pstate and acpi_cpufreq on x86, so their users who cannot use intel_pstate, or who choose to use acpi_cpufreq, may easily be affected by this issue.

If CPPC is available, it can be used to address this issue by extending the frequency tables created by acpi_cpufreq to cover the entire available frequency range (including "boost" frequencies) for each CPU. But if CPPC is not there, acpi_cpufreq has no idea what the maximum "boost" frequency is and the frequency tables created by it cannot be extended in a meaningful way, so in that case make it ask the arch scale-invariance code to use the "nominal" performance level for CPU utilization scaling in order to avoid the issue at hand.

Fixes: db865272 ("cpufreq: Avoid configuring old governors as default with intel_pstate")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Giovanni Gherdovich <ggherdovich@suse.cz>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-
- 06 Feb 2021, 5 commits
-
-
Committed by Borislav Petkov
With CONFIG_X86_5LEVEL, CONFIG_UBSAN and CONFIG_UBSAN_UNSIGNED_OVERFLOW enabled, clang fails the build with:

    x86_64-linux-ld: arch/x86/platform/efi/efi_64.o: in function `efi_sync_low_kernel_mappings':
    efi_64.c:(.text+0x22c): undefined reference to `__compiletime_assert_354'

which happens due to -fsanitize=unsigned-integer-overflow being enabled:

    -fsanitize=unsigned-integer-overflow: Unsigned integer overflow, where the result of an unsigned integer computation cannot be represented in its type. Unlike signed integer overflow, this is not undefined behavior, but it is often unintentional. This sanitizer does not check for lossy implicit conversions performed before such a computation (see -fsanitize=implicit-conversion).

That fires when the (intentional) EFI_VA_START/END defines overflow an unsigned long, leading to the assertion expressions not getting optimized away (on GCC they do).

However, those checks are superfluous: the runtime services mapping code already makes sure the ranges don't overshoot EFI_VA_END, as the EFI mapping range is hardcoded. On each runtime services call, the kernel switches to the EFI-specific PGD, and even if mappings managed to escape that last PGD, it would not remain unnoticed for long. So rip them out.

See https://github.com/ClangBuiltLinux/linux/issues/256 for more info.

Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: http://lkml.kernel.org/r/20210107223424.4135538-1-arnd@kernel.org
-
Committed by Aneesh Kumar K.V
This fixes the bad fault reported by KUAP when io_wqe_worker accesses userspace:

    Bug: Read fault blocked by KUAP!
    WARNING: CPU: 1 PID: 101841 at arch/powerpc/mm/fault.c:229 __do_page_fault+0x6b4/0xcd0
    NIP [c00000000009e7e4] __do_page_fault+0x6b4/0xcd0
    LR [c00000000009e7e0] __do_page_fault+0x6b0/0xcd0
    ..........
    Call Trace:
    [c000000016367330] [c00000000009e7e0] __do_page_fault+0x6b0/0xcd0 (unreliable)
    [c0000000163673e0] [c00000000009ee3c] do_page_fault+0x3c/0x120
    [c000000016367430] [c00000000000c848] handle_page_fault+0x10/0x2c
    --- interrupt: 300 at iov_iter_fault_in_readable+0x148/0x6f0
    ..........
    NIP [c0000000008e8228] iov_iter_fault_in_readable+0x148/0x6f0
    LR [c0000000008e834c] iov_iter_fault_in_readable+0x26c/0x6f0
    --- interrupt: 300
    [c0000000163677e0] [c0000000007154a0] iomap_write_actor+0xc0/0x280
    [c000000016367880] [c00000000070fc94] iomap_apply+0x1c4/0x780
    [c000000016367990] [c000000000710330] iomap_file_buffered_write+0xa0/0x120
    [c0000000163679e0] [c00800000040791c] xfs_file_buffered_aio_write+0x314/0x5e0 [xfs]
    [c000000016367a90] [c0000000006d74bc] io_write+0x10c/0x460
    [c000000016367bb0] [c0000000006d80e4] io_issue_sqe+0x8d4/0x1200
    [c000000016367c70] [c0000000006d8ad0] io_wq_submit_work+0xc0/0x250
    [c000000016367cb0] [c0000000006e2578] io_worker_handle_work+0x498/0x800
    [c000000016367d40] [c0000000006e2cdc] io_wqe_worker+0x3fc/0x4f0
    [c000000016367da0] [c0000000001cb0a4] kthread+0x1c4/0x1d0
    [c000000016367e10] [c00000000000dbf0] ret_from_kernel_thread+0x5c/0x6c

The kernel considers the thread AMR value for a kernel thread to be AMR_KUAP_BLOCKED, hence access to userspace is denied. This is of course not correct, and we should allow userspace access after kthread_use_mm(). To be precise, kthread_use_mm() should inherit the AMR value of the operating address space. But the AMR value is thread-specific, and we inherit the address space, not thread access restrictions. Because of this, ignore the AMR value when accessing userspace via a kernel thread.

current_thread_amr/iamr() are updated because we use them in the stack below:

    ....
    [ 530.710838] CPU: 13 PID: 5587 Comm: io_wqe_worker-0 Tainted: G D 5.11.0-rc6+ #3
    ....
    NIP [c0000000000aa0c8] pkey_access_permitted+0x28/0x90
    LR [c0000000004b9278] gup_pte_range+0x188/0x420
    --- interrupt: 700
    [c00000001c4ef3f0] [0000000000000000] 0x0 (unreliable)
    [c00000001c4ef490] [c0000000004bd39c] gup_pgd_range+0x3ac/0xa20
    [c00000001c4ef5a0] [c0000000004bdd44] internal_get_user_pages_fast+0x334/0x410
    [c00000001c4ef620] [c000000000852028] iov_iter_get_pages+0xf8/0x5c0
    [c00000001c4ef6a0] [c0000000007da44c] bio_iov_iter_get_pages+0xec/0x700
    [c00000001c4ef770] [c0000000006a325c] iomap_dio_bio_actor+0x2ac/0x4f0
    [c00000001c4ef810] [c00000000069cd94] iomap_apply+0x2b4/0x740
    [c00000001c4ef920] [c0000000006a38b8] __iomap_dio_rw+0x238/0x5c0
    [c00000001c4ef9d0] [c0000000006a3c60] iomap_dio_rw+0x20/0x80
    [c00000001c4ef9f0] [c008000001927a30] xfs_file_dio_aio_write+0x1f8/0x650 [xfs]
    [c00000001c4efa60] [c0080000019284dc] xfs_file_write_iter+0xc4/0x130 [xfs]
    [c00000001c4efa90] [c000000000669984] io_write+0x104/0x4b0
    [c00000001c4efbb0] [c00000000066cea4] io_issue_sqe+0x3d4/0xf50
    [c00000001c4efc60] [c000000000670200] io_wq_submit_work+0xb0/0x2f0
    [c00000001c4efcb0] [c000000000674268] io_worker_handle_work+0x248/0x4a0
    [c00000001c4efd30] [c0000000006746e8] io_wqe_worker+0x228/0x2a0
    [c00000001c4efda0] [c00000000019d994] kthread+0x1b4/0x1c0

Fixes: 48a8ab4e ("powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.")
Reported-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210206025634.521979-1-aneesh.kumar@linux.ibm.com
-
Committed by Gabriel Krisman Bertazi
Commit 29915524 ("entry: Drop usage of TIF flags in the generic syscall code") introduced a bug on architectures using the generic syscall entry code, in which processes stopped by PTRACE_SYSCALL do not trap on syscall return after receiving TIF_SINGLESTEP. The reason is that the meaning of the TIF_SINGLESTEP flag is overloaded to cause the trap after a system call is executed, but since the above commit, the syscall handler only checks for the SYSCALL_WORK flags on the exit work.

Split the meaning of TIF_SINGLESTEP such that it only means single-step mode, and create a new type of SYSCALL_WORK to request a trap immediately after a syscall in single-step mode. In the current implementation, the SYSCALL_WORK flag shadows the TIF_SINGLESTEP flag for simplicity. Update x86 to flip this bit when a tracer enables single-stepping.

Fixes: 29915524 ("entry: Drop usage of TIF flags in the generic syscall code")
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Kyle Huey <me@kylehuey.com>
Link: https://lore.kernel.org/r/87h7mtc9pr.fsf_-_@collabora.com
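A hedged sketch of the "shadowing" described above; set_task_syscall_work() is the generic-entry helper, and the exact work-flag name used by the series is an assumption:

    /* x86 ptrace, sketch: mirror TIF_SINGLESTEP into syscall exit work */
    static void enable_single_step(struct task_struct *child)
    {
            set_tsk_thread_flag(child, TIF_SINGLESTEP);
            /* request the trap on syscall exit as well: */
            set_task_syscall_work(child, SYSCALL_EXIT_TRAP);
    }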
-
Committed by Lai Jiangshan
local_db_save() is called at the start of exc_debug_kernel(), reads DR7 and disables breakpoints to prevent recursion. When running in a guest (X86_FEATURE_HYPERVISOR), local_db_save() reads the per-CPU variable cpu_dr7 to check whether a breakpoint is active before it accesses DR7. A data breakpoint on cpu_dr7 therefore results in infinite #DB recursion. Disallow data breakpoints on cpu_dr7 to prevent that.

Fixes: 84b6a349 ("x86/entry: Optimize local_db_save() for virt")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210204152708.21308-2-jiangshanlai@gmail.com
-
Committed by Lai Jiangshan
When FSGSBASE is enabled, paranoid_entry() fetches the per-CPU GSBASE value via __per_cpu_offset or pcpu_unit_offsets. When a data breakpoint is set on __per_cpu_offset[cpu] (a read-write operation), that CPU gets stuck in an infinite #DB loop. RCU will try to send an NMI to the CPU, but that does not work either, since NMI also relies on paranoid_entry(), which means the condition is undebuggable. Disallow data breakpoints on these per-CPU variables.

Fixes: eaad9812 ("x86/entry/64: Introduce the FIND_PERCPU_BASE macro")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210204152708.21308-1-jiangshanlai@gmail.com
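Both of these fixes hinge on the same kind of guard in the x86 hw-breakpoint code; a hedged sketch follows, with the helper name taken from this fix series and the loop body illustrative:

    /* Reject a breakpoint range [addr, end] overlapping [base, base+size): */
    static inline bool within_area(unsigned long addr, unsigned long end,
                                   unsigned long base, unsigned long size)
    {
            return end >= base && addr < (base + size);
    }

    /* Applied per CPU to cpu_dr7 (and similarly to the per-CPU offsets): */
    for_each_possible_cpu(cpu) {
            if (within_area(addr, end,
                            (unsigned long)&per_cpu(cpu_dr7, cpu),
                            sizeof(cpu_dr7)))
                    return true;    /* breakpoint disallowed */
    }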
-
- 05 Feb 2021, 5 commits
-
-
Committed by Russell King
Giancarlo Ferrari reports the following oops while trying to use kexec:

    Unable to handle kernel paging request at virtual address 80112f38
    pgd = fd7ef03e
    [80112f38] *pgd=0001141e(bad)
    Internal error: Oops: 80d [#1] PREEMPT SMP ARM
    ...

This is caused by machine_kexec() trying to set the kernel text to read/write, so it can poke values into the relocation code before copying it, and an interrupt occurring in the middle which changes the page tables. The subsequent writes then hit read-only sections, triggering a data abort and resulting in the above oops.

Fix this by copying the relocation code first and then writing the variables into the destination, thereby avoiding the need to make the kernel text read/write.

Reported-by: Giancarlo Ferrari <giancarlo.ferrari89@gmail.com>
Tested-by: Giancarlo Ferrari <giancarlo.ferrari89@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
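A hedged sketch of the copy-then-patch order; the stub symbols come from the arm kexec code, while the parameter layout at the end of the stub (kexec_offset and the slot order) is an illustrative assumption:

    /* Copy the relocation stub into the control page first ... */
    memcpy(reboot_code_buffer, relocate_new_kernel, relocate_new_kernel_size);

    /* ... then poke the parameters into the copy, not the kernel text,
     * so the text mapping can stay read-only throughout: */
    params = (unsigned long *)(reboot_code_buffer + kexec_offset); /* assumed layout */
    params[0] = image->start;
    params[1] = kexec_boot_atags;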
-
Committed by Russell King
Ensure that the signal page contains our poison instruction, to increase the protection against ROP attacks, and that it contains well-defined contents.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
Committed by Dave Hansen
Jan Kiszka reported that the x2apic_wrmsr_fence() function uses a plain MFENCE while the Intel SDM (10.12.3 "MSR Access in x2APIC Mode") calls for MFENCE; LFENCE.

Short summary: we have special MSRs that have weaker ordering than all the rest. Add fencing consistent with current SDM recommendations. This is not known to cause any issues in practice, only in theory.

Longer story below: the reason the kernel uses a different semantic is that the SDM changed (roughly in late 2017). The SDM changed because folks at Intel were auditing all of the recommended fences in the SDM and realized that the x2apic fences were insufficient.

Why was the plain MFENCE judged insufficient? WRMSR itself is normally a serializing instruction. No fences are needed because the instruction itself serializes everything. But there are explicit exceptions to this serializing behavior written into the WRMSR instruction documentation for two classes of MSRs: IA32_TSC_DEADLINE and the x2APIC MSRs.

Back to x2apic: WRMSR is *not* serializing in this specific case. But why is MFENCE insufficient? MFENCE makes writes visible, but only affects load/store instructions. WRMSR is unfortunately not a load/store instruction and is unaffected by MFENCE. This means that a non-serializing WRMSR could be reordered by the CPU to execute before the writes made visible by the MFENCE have even occurred in the first place. This means that an x2apic IPI could theoretically be triggered before there is any (visible) data to process.

Does this affect anything in practice? I honestly don't know. It seems quite possible that by the time an interrupt gets to consume the (not yet) MFENCE'd data, it has become visible, mostly by accident. To be safe, add the SDM-recommended fences for all x2apic WRMSRs.

This also leaves open the question of the _other_ weakly-ordered WRMSR: MSR_IA32_TSC_DEADLINE. While it has the same ordering architecture as the x2APIC MSRs, it seems substantially less likely to be a problem in practice. While writes to the in-memory Local Vector Table (LVT) might theoretically be reordered with respect to a weakly-ordered WRMSR like TSC_DEADLINE, the SDM has this to say:

    In x2APIC mode, the WRMSR instruction is used to write to the LVT entry. The processor ensures the ordering of this write and any subsequent WRMSR to the deadline; no fencing is required.

But that might still leave xAPIC exposed. The safest thing to do for now is to add the extra, recommended LFENCE.

[ bp: Massage commit message, fix typos, drop accidentally added newline to tools/arch/x86/include/asm/barrier.h. ]

Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200305174708.F77040DD@viggo.jf.intel.com
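The recommended sequence is small enough to show whole; a sketch of the helper this change introduces (the name matches the fix, the exact header placement is an assumption):

    /* MFENCE makes prior stores globally visible; the trailing LFENCE keeps
     * the non-serializing WRMSR from starting before the MFENCE completes. */
    static inline void weak_wrmsr_fence(void)
    {
            asm volatile("mfence; lfence" : : : "memory");
    }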
-
Committed by Mike Rapoport
This reverts commit bde9cfa3. Changing the first memory page type from E820_TYPE_RESERVED to E820_TYPE_RAM makes it a part of the "System RAM" resource rather than a reserved resource, and this in turn causes devmem_is_allowed() to treat it as an area that can be accessed, but it is filled with zeroes instead of the actual data as previously. The change in /dev/mem output causes lilo to fail, as was reported at the Slackware users forum, and other legacy applications will probably experience similar problems.

Link: https://www.linuxquestions.org/questions/slackware-14/slackware-current-lilo-vesa-warnings-after-recent-updates-4175689617/#post6214439
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Sean Christopherson
Set cr3_lm_rsvd_bits, which is effectively an invalid GPA mask, at vCPU reset. The reserved bits check needs to be done even if userspace never configures the guest's CPUID model.

Cc: stable@vger.kernel.org
Fixes: 0107973a ("KVM: x86: Introduce cr3_lm_rsvd_bits in kvm_vcpu_arch")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210204000117.3303214-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
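A sketch of the reset-time initialization; rsvd_bits() and cpuid_maxphyaddr() are existing KVM helpers, and the exact spot inside kvm_vcpu_reset() is assumed from the text:

    /* Mask off guest-physical address bits above the vCPU's MAXPHYADDR: */
    vcpu->arch.cr3_lm_rsvd_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);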
-
- 04 Feb 2021, 1 commit
-
-
Committed by Ben Gardon
There is a bug in the TDP MMU function that zaps SPTEs which could be replaced with a larger mapping: it prevents the function from doing anything. Fix this by correctly zapping the last-level SPTEs.

Cc: stable@vger.kernel.org
Fixes: 14881998 ("kvm: x86/mmu: Support disabling dirty logging for the tdp MMU")
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-11-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 03 Feb 2021, 11 commits
-
-
Committed by Alexandre Belloni
This reverts commit c17e9377. The lpc32xx clock driver is not able to actually change the PLL rate, as this would require reparenting ARM_CLK, DDRAM_CLK and PERIPH_CLK to SYSCLK, then stopping the PLL, updating the register, restarting the PLL and waiting for it to lock, and finally reparenting ARM_CLK, DDRAM_CLK and PERIPH_CLK back to HCLK PLL. Currently, the HCLK driver simply updates the registers, but this has no real effect and all the clock rate calculations end up being wrong. This is especially annoying for the peripherals (e.g. UARTs, I2C, SPI).

Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Tested-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Link: https://lore.kernel.org/r/20210203090320.GA3760268@piout.net
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Hermann Lauer
The BPi Pro needs TX and RX delays for Gbit to work reliably and to avoid high packet loss rates. The realtek PHY driver overrides the pull-up settings for the delays, so fix this for the BananaPro. Fix the phy-mode description to correctly reflect this, so that the implementation doesn't reconfigure the delays incorrectly. This happened with commit bbc4d71d ("net: phy: realtek: fix rtl8211e rx/tx delay config").

Fixes: 10662a33 ("ARM: dts: sun7i: Add dts file for Bananapro board")
Signed-off-by: Hermann Lauer <Hermann.Lauer@uni-heidelberg.de>
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Link: https://lore.kernel.org/r/20210128111842.GA11919@lemon.iwr.uni-heidelberg.de
-
Committed by Paolo Bonzini
If not in long mode, the low bits of CR3 are reserved but not enforced to be zero, so remove those checks. If in long mode, however, the MBZ bits extend down to the highest physical address bit of the guest, excluding the encryption bit. Make the checks consistent with the above, and match them between nested_vmcb_checks and KVM_SET_SREGS.

Cc: stable@vger.kernel.org
Fixes: 761e4169 ("KVM: nSVM: Check that MBZ bits in CR3 and CR4 are not set on vmrun of nested guests")
Fixes: a780a3ea ("KVM: X86: Fix reserved bits check for MOV to CR3")
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Sean Christopherson
Don't let KVM load when running as an SEV guest, regardless of what CPUID says. Memory is encrypted with a key that is not accessible to the host (L0), thus it's impossible for L0 to emulate SVM, e.g. it'll see garbage when reading the VMCB.

Technically, KVM could decrypt all memory that needs to be accessible to L0 and use shadow paging so that L0 does not need to shadow NPT, but exposing such information to L0 largely defeats the purpose of running as an SEV guest. This can always be revisited if someone comes up with a use case for running VMs inside SEV guests.

Note, VMLOAD, VMRUN, etc... will also #GP on GPAs with the C-bit set, i.e. KVM is doomed even if the SEV guest is debuggable and the hypervisor is willing to decrypt the VMCB. This may or may not be fixed on CPUs that have the SVME_ADDR_CHK fix.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210202212017.2486595-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
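A hedged sketch of the load-time bail-out; sev_active() was the SEV-guest predicate at the time, while the message text and its exact location in svm.c are assumptions:

    /* Refuse to load KVM when we are ourselves an SEV guest: */
    if (sev_active()) {
            pr_info("KVM is unsupported when running as an SEV guest\n");
            return false;
    }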
-
Committed by Atish Patra
The MAXPHYSMEM_1GB option was added for RV32 because RV32 only supports 1GB of maximum physical memory. This led to a few compilation errors reported by the kernel test robot, which created the following configuration combinations that are not useful but can be configured:

1. MAXPHYSMEM_1GB & RV64
2. MAXPHYSMEM_2GB & RV32

Fix this by restricting MAXPHYSMEM_1GB to RV32 and MAXPHYSMEM_2GB to RV64 only.

Fixes: e5577937 ("RISC-V: Fix maximum allowed phsyical memory for RV32")
Cc: stable@vger.kernel.org
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Sebastien Van Cauwenberghe
Allows the sections to be aligned on smaller boundaries and therefore results in a smaller kernel image size.

Signed-off-by: Sebastien Van Cauwenberghe <svancau@gmail.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Atish Patra
The .init section permission should only be updated to non-execute if STRICT_KERNEL_RWX is enabled. Otherwise, this will lead to a kernel hang.

Fixes: 19a00869 ("RISC-V: Protect all kernel sections including init early")
Cc: stable@vger.kernel.org
Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
-
Committed by Alexandre Ghiti
The virt_addr_valid macro checks that a virtual address is valid, i.e. that the address belongs to the linear mapping and that the corresponding physical page exists. Add the missing check that ensures the virtual address belongs to the linear mapping; otherwise __virt_to_phys, when compiled with CONFIG_DEBUG_VIRTUAL enabled, raises a WARN that is interpreted as a kernel bug by syzbot.

Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
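The strengthened check then has this shape (a sketch in riscv's page.h style; the exact macro body upstream may differ):

    /* Valid only if the address is in the linear map AND the pfn exists: */
    #define virt_addr_valid(vaddr) ({                                       \
            unsigned long _addr = (unsigned long)(vaddr);                   \
            (_addr >= PAGE_OFFSET) && pfn_valid(virt_to_pfn(_addr));        \
    })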
-
Committed by Catalin Marinas
Because of tagged addresses, the __is_lm_address() and __lm_to_phys() macros grew into some harder-to-understand bitwise operations using PAGE_OFFSET. Since these macros only accept untagged addresses, use a simple subtraction operation instead.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210201190634.22942-3-catalin.marinas@arm.com
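The simplified forms look roughly like this (a sketch; PAGE_END and PHYS_OFFSET are the existing arm64 symbols, and untagged input is assumed):

    /* true iff addr lies inside the linear map */
    #define __is_lm_address(addr) \
            (((u64)(addr) - PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))

    #define __lm_to_phys(addr)      (((addr) - PAGE_OFFSET) + PHYS_OFFSET)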
-
Committed by Catalin Marinas
Commit 519ea6f1 ("arm64: Fix kernel address detection of __is_lm_address()") fixed the incorrect validation of addresses below PAGE_OFFSET. However, it no longer allowed tagged addresses to be passed to virt_addr_valid(). Fix this by explicitly resetting the pointer tag prior to invoking __is_lm_address(). This is consistent with the __lm_to_phys() macro.

Fixes: 519ea6f1 ("arm64: Fix kernel address detection of __is_lm_address()")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Cc: <stable@vger.kernel.org> # 5.4.x
Cc: Will Deacon <will@kernel.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210201190634.22942-2-catalin.marinas@arm.com
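A sketch of the resulting macro; __tag_reset() is arm64's existing untagging helper, and the body follows the description above:

    #define virt_addr_valid(addr)  ({                                       \
            __typeof__(addr) __addr = __tag_reset(addr);  /* strip the tag */ \
            __is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr));      \
    })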
-
Committed by Sean Christopherson
Set the emulator context to PROT64 if SYSENTER transitions from 32-bit userspace (compat mode) to a 64-bit kernel; otherwise the RIP update at the end of x86_emulate_insn() will incorrectly truncate the new RIP.

Note, this bug is mostly limited to running an Intel virtual CPU model on an AMD physical CPU, as other combinations of virtual and physical CPUs do not trigger full emulation. On Intel CPUs, SYSENTER in compatibility mode is legal and unconditionally transitions to 64-bit mode. On AMD CPUs, SYSENTER is illegal in compatibility mode and #UDs. If the vCPU is AMD, KVM injects a #UD on SYSENTER in compat mode. If the pCPU is Intel, SYSENTER will execute natively and not trigger #UD->VM-Exit (ignoring guest TLB shenanigans).

Fixes: fede8076 ("KVM: x86: handle wrap around 32-bit address space")
Cc: stable@vger.kernel.org
Signed-off-by: Jonny Barker <jonny@jonnybarker.com>
[sean: wrote changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210202165546.2390296-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
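A hedged sketch of the one-line fix in the emulator's SYSENTER handler (the EFER read is elided; the constants are KVM's emulator modes):

    /* em_sysenter(), sketch: after SYSENTER, the CPU is in 64-bit mode
     * whenever EFER.LMA is set, even if the caller was in compat mode: */
    ctxt->mode = (efer & EFER_LMA) ? X86EMUL_MODE_PROT64
                                   : X86EMUL_MODE_PROT32;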
-
- 02 Feb 2021, 5 commits
-
-
Committed by Raoni Fassina Firmino
Commit 0138ba57 ("powerpc/64/signal: Balance return predictor stack in signal trampoline") changed the __kernel_sigtramp_rt64() VDSO and trampoline code, and introduced a regression in the way glibc's backtrace()[1] detects the signal-handler stack frame. Apart from the practical implications, __kernel_sigtramp_rt64() was a VDSO function with the semantics that it is a function you can call from userspace to end signal handling. Now these semantics are no longer valid.

I believe the aforementioned change affects all releases since 5.9.

This patch tries to fix both the semantic and practical aspects of __kernel_sigtramp_rt64(), returning it to the previous code, whilst keeping the intended behaviour of 0138ba57 by adding a new symbol to serve as the jump target from the kernel to the trampoline. Now the trampoline has two parts: a new entry point and the old return point.

[1] https://lists.ozlabs.org/pipermail/linuxppc-dev/2021-January/223194.html

Fixes: 0138ba57 ("powerpc/64/signal: Balance return predictor stack in signal trampoline")
Cc: stable@vger.kernel.org # v5.9+
Signed-off-by: Raoni Fassina Firmino <raoni@linux.ibm.com>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Minor tweaks to change log formatting, add stable tag]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210201200505.iz46ubcizipnkcxe@work-tp
-
Committed by Fenghua Yu
Add the Alder Lake mobile processor to the CPU list that enumerates and enables the split-lock detection feature.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20210201190007.4031869-1-fenghua.yu@intel.com
-
Committed by Vitaly Kuznetsov
Commit 7a873e45 ("KVM: selftests: Verify supported CR4 bits can be set before KVM_SET_CPUID2") reveals that KVM allows X86_CR4_PCIDE to be set even when PCID support is missing:

    ==== Test Assertion Failure ====
    x86_64/set_sregs_test.c:41: rc
    pid=6956 tid=6956 - Invalid argument
    1  0x000000000040177d: test_cr4_feature_bit at set_sregs_test.c:41
    2  0x00000000004014fc: main at set_sregs_test.c:119
    3  0x00007f2d9346d041: ?? ??:0
    4  0x000000000040164d: _start at ??:?
    KVM allowed unsupported CR4 bit (0x20000)

Add an X86_FEATURE_PCID feature check to __cr4_reserved_bits() to make kvm_is_valid_cr4() fail.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210201142843.108190-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
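The added clause has the same shape as the other entries in the __cr4_reserved_bits() macro (a sketch; __cpu_has and __reserved_bits are that macro's own internal names):

    /* CR4.PCIDE is reserved unless the vCPU advertises PCID: */
    if (!__cpu_has(__c, X86_FEATURE_PCID))
            __reserved_bits |= X86_CR4_PCIDE;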
-
Committed by Zheng Zhan Liang
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Zheng Zhan Liang <zhengzhanliang@huorong.cn>
Message-Id: <20210201055310.267029-1-zhengzhanliang@huorong.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Userspace that does not know about KVM_GET_MSR_FEATURE_INDEX_LIST will generally use the default value for MSR_IA32_ARCH_CAPABILITIES. When this happens and the host has tsx=on, it is possible to end up with virtual machines that have HLE and RTM disabled, but TSX_CTRL available.

If the fleet is then switched to tsx=off, kvm_get_arch_capabilities() will clear the ARCH_CAP_TSX_CTRL_MSR bit and it will not be possible to use the tsx=off hosts as migration destinations, even though the guests do not have TSX enabled.

To allow this migration, allow guests to write to their TSX_CTRL MSR, while keeping the host MSR unchanged for the entire life of the guests. This ensures that TSX remains disabled and also saves MSR reads and writes, and it's okay to do because with tsx=off we know that guests will not have the HLE and RTM features in their CPUID. (If userspace sets bogus CPUID data, we do not expect HLE and RTM to work in guests anyway.)

Cc: stable@vger.kernel.org
Fixes: cbbaa272 ("KVM: x86: fix presentation of TSX feature in ARCH_CAPABILITIES")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-