- 03 Nov 2020 (4 commits)
-
-
By Heiko Carstens
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
By Heiko Carstens
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
By Heiko Carstens
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
By Gerald Schaefer
pmd/pud_deref() assume that they will never operate on large pmd/pud entries, and therefore only use the non-large _xxx_ENTRY_ORIGIN mask. With commit 9ec8fa8d ("s390/vmemmap: extend modify_pagetable() to handle vmemmap"), that assumption is no longer true, at least for pmd_deref().

In theory, we could end up with wrong addresses because some of the non-address bits of a large entry would not be masked out. In practice, this does not (yet) show any impact, because vmemmap_free() is currently never used for s390.

Fix pmd/pud_deref() to check for the entry type and use the _xxx_ENTRY_ORIGIN_LARGE mask for large entries. While at it, also move pmd/pud_pfn() around, in order to avoid code duplication, because they do the same thing.

Fixes: 9ec8fa8d ("s390/vmemmap: extend modify_pagetable() to handle vmemmap")
Cc: <stable@vger.kernel.org> # 5.9
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
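A minimal sketch of the fixed helper, assuming the s390 pmd_large() helper and the _SEGMENT_ENTRY_ORIGIN{,_LARGE} masks; the pud variant is analogous:

  static inline unsigned long pmd_deref(pmd_t pmd)
  {
      unsigned long origin_mask;

      /* Large entries keep fewer address bits, so masking with the
       * non-large mask would leave attribute bits in the address. */
      origin_mask = _SEGMENT_ENTRY_ORIGIN;
      if (pmd_large(pmd))
          origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE;
      return pmd_val(pmd) & origin_mask;
  }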
-
- 31 Oct 2020 (5 commits)
-
-
By Paolo Bonzini
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Vitaly Kuznetsov
It was noticed that evmcs_sanitize_exec_ctrls() is not being executed nowadays despite the code checking the 'enable_evmcs' static key looking correct. Turns out, static key magic doesn't work in the '__init' section (and it is unclear when things changed), but setup_vmcs_config() is called only once per CPU so we don't really need it to. Switch to checking 'enlightened_vmcs' instead; it is supposed to be in sync with 'enable_evmcs'.

Opportunistically make evmcs_sanitize_exec_ctrls() '__init' and drop an unneeded extra newline from it.

Reported-by: Yang Weijiang <weijiang.yang@intel.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20201014143346.2430936-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Takashi Iwai
The newly introduced kvm_msr_ignored_check() tries to print error or debug messages via the vcpu_*() macros, but those may cause an Oops when a NULL vcpu is passed for the KVM_GET_MSRS ioctl. Fix it by replacing the print calls with the kvm_*() macros.

(Note that this will leave the vcpu argument completely unused in the function, but I didn't touch it to keep the fix as small as possible. A clean-up may be applied later.)

Fixes: 12bc2132 ("KVM: X86: Do the same ignore_msrs check for feature msrs")
BugLink: https://bugzilla.suse.com/show_bug.cgi?id=1178280
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Message-Id: <20201030151414.20165-1-tiwai@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
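A sketch of the reworked helper; the kvm_pr_unimpl()/kvm_debug_ratelimited() macros print without dereferencing a vcpu, which is what makes the NULL vcpu from KVM_GET_MSRS safe (the exact macro choice is an assumption based on the description above):

  static int kvm_msr_ignored_check(struct kvm_vcpu *vcpu, u32 msr,
                                   u64 data, bool write)
  {
      const char *op = write ? "wrmsr" : "rdmsr";

      if (ignore_msrs) {
          if (report_ignored_msrs)
              /* kvm_pr_unimpl() never touches vcpu, so a NULL
               * vcpu is harmless; vcpu_unimpl() oopsed here. */
              kvm_pr_unimpl("ignored %s: 0x%x data 0x%llx\n",
                            op, msr, data);
          return 0;    /* mask the error */
      }

      kvm_debug_ratelimited("unhandled %s: 0x%x data 0x%llx\n",
                            op, msr, data);
      return -ENOENT;
  }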
-
By Paolo Bonzini
Even though the compiler is able to replace static const variables with their value, it will warn about them being unused when Linux is built with W=1. Use good old macros instead; this is not C++.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
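An illustration of the pattern with a hypothetical constant name; under W=1, -Wunused-const-variable fires on the first form, while the macro triggers nothing:

  /* Before: warns if no code in this translation unit uses it */
  static const u32 vmx_ctrl_mask = 0xffff0000;

  /* After: preprocessor text, invisible to unused-variable warnings */
  #define VMX_CTRL_MASK 0xffff0000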
-
By Qais Yousef
On a system without uniform support for AArch32 at EL0, it is possible for the guest to force run AArch32 at EL0 and potentially cause an illegal exception if running on a core without AArch32. Add an extra check so that if we catch the guest doing that, we prevent it from running again by resetting vcpu->arch.target and returning ARM_EXCEPTION_IL.

We try to catch this misbehaviour as early as possible and not rely on an illegal exception occurring to signal the problem. Attempting to run a 32-bit app in the guest will produce an error from QEMU if the guest exits while running in AArch32 EL0.

Tested on Juno by instrumenting the host to fake asymmetric AArch32 support and instrumenting KVM to make the asymmetry visible to the guest.

[will: Incorporated feedback from Marc]

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201021104611.2744565-2-qais.yousef@arm.com
Link: https://lore.kernel.org/r/20201027215118.27003-2-will@kernel.org
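A sketch of the check, using the existing vcpu_mode_is_32bit() and system_supports_32bit_el0() helpers; its exact placement in the vcpu run loop is illustrative:

  /* On return from the guest, catch a vcpu that forced AArch32 at EL0
   * on a system where not every core implements AArch32. */
  if (vcpu_mode_is_32bit(vcpu) && !system_supports_32bit_el0()) {
      vcpu->arch.target = -1;    /* prevent this vcpu from running again */
      ret = ARM_EXCEPTION_IL;
  }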
-
- 30 Oct 2020 (13 commits)
-
-
By Mark Rutland
We finalize caps before initializing kvm hyp code, and any use of cpus_have_const_cap() in kvm hyp code generates redundant and potentially unsound code to read the cpu_hwcaps array.

A number of helper functions used in both hyp context and regular kernel context use cpus_have_const_cap(), as some regular kernel code runs before the capabilities are finalized. It's tedious and error-prone to write separate copies of these for hyp and non-hyp code.

So that we can avoid the redundant code, let's automatically upgrade cpus_have_const_cap() to cpus_have_final_cap() when used in hyp context. With this change, there's never a reason to access the cpu_hwcaps array from hyp code, and we don't need to create an NVHE alias for this.

This should have no effect on non-hyp code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: David Brazdil <dbrazdil@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201026134931.28246-4-mark.rutland@arm.com
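A sketch of the upgraded helper; is_hyp_code() is the compile-time predicate factored out earlier in this series:

  static __always_inline bool cpus_have_const_cap(int num)
  {
      if (is_hyp_code())
          /* hyp code only runs once caps are finalized, so the
           * final-cap check is always valid here and the
           * cpu_hwcaps array is never touched. */
          return cpus_have_final_cap(num);
      else if (system_capabilities_finalized())
          return __cpus_have_const_cap(num);
      else
          return cpus_have_cap(num);
  }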
-
By Mark Rutland
In a subsequent patch we'll modify cpus_have_const_cap() to call cpus_have_final_cap(), and hence we need to define cpus_have_final_cap() first.

To make the subsequent changes easier to follow, this patch reorders the two without making any other changes. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: David Brazdil <dbrazdil@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201026134931.28246-3-mark.rutland@arm.com
-
By Mark Rutland
Currently has_vhe() detects whether it is being compiled for VHE/NVHE hyp code based on preprocessor definitions, and uses this knowledge to avoid redundant runtime checks.

There are other cases where we'd like to use this knowledge, so let's factor the preprocessor checks out into separate helpers. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: David Brazdil <dbrazdil@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201026134931.28246-2-mark.rutland@arm.com
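The factored-out helpers, sketched from the preprocessor checks that has_vhe() already relied on:

  static __always_inline bool is_vhe_hyp_code(void)
  {
      /* Only defined while compiling the VHE hyp object */
      return __is_defined(__KVM_VHE_HYPERVISOR__);
  }

  static __always_inline bool is_nvhe_hyp_code(void)
  {
      /* Only defined while compiling the nVHE hyp object */
      return __is_defined(__KVM_NVHE_HYPERVISOR__);
  }

  static __always_inline bool is_hyp_code(void)
  {
      return is_vhe_hyp_code() || is_nvhe_hyp_code();
  }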
-
By Fangrui Song
Commit 39d114dd ("arm64: add KASAN support") added .weak directives to arch/arm64/lib/mem*.S instead of changing the existing SYM_FUNC_START_PI macros. This can lead to the assembly snippet `.weak memcpy ... .globl memcpy`, which will produce a STB_WEAK memcpy with GNU as, but a STB_GLOBAL memcpy with LLVM's integrated assembler before LLVM 12. LLVM 12 (since https://reviews.llvm.org/D90108) will error on such an overridden symbol binding.

Use the appropriate SYM_FUNC_START_WEAK_PI instead.

Fixes: 39d114dd ("arm64: add KASAN support")
Reported-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Fangrui Song <maskray@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20201029181951.1866093-1-maskray@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
By Qian Cai
The call to rcu_cpu_starting() in secondary_start_kernel() is not early enough in the CPU-hotplug onlining process, which results in lockdep splats as follows:

  WARNING: suspicious RCU usage
  -----------------------------
  kernel/locking/lockdep.c:3497 RCU-list traversed in non-reader section!!

  other info that might help us debug this:

  RCU used illegally from offline CPU!
  rcu_scheduler_active = 1, debug_locks = 1
  no locks held by swapper/1/0.

  Call trace:
   dump_backtrace+0x0/0x3c8
   show_stack+0x14/0x60
   dump_stack+0x14c/0x1c4
   lockdep_rcu_suspicious+0x134/0x14c
   __lock_acquire+0x1c30/0x2600
   lock_acquire+0x274/0xc48
   _raw_spin_lock+0xc8/0x140
   vprintk_emit+0x90/0x3d0
   vprintk_default+0x34/0x40
   vprintk_func+0x378/0x590
   printk+0xa8/0xd4
   __cpuinfo_store_cpu+0x71c/0x868
   cpuinfo_store_cpu+0x2c/0xc8
   secondary_start_kernel+0x244/0x318

This is avoided by moving the call to rcu_cpu_starting() up near the beginning of the secondary_start_kernel() function.

Signed-off-by: Qian Cai <cai@redhat.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/lkml/160223032121.7002.1269740091547117869.tip-bot2@tip-bot2/
Link: https://lore.kernel.org/r/20201028182614.13655-1-cai@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
-
By Santosh Shukla
VFIO allows a device driver to resolve a fault by mapping a MMIO range. This can subsequently result in user_mem_abort() trying to compute a huge mapping based on the MMIO pfn, which is a sure recipe for things to go wrong.

Instead, force a PTE mapping when the pfn faulted in has a device mapping.

Fixes: 6d674e28 ("KVM: arm/arm64: Properly handle faulting of device mappings")
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Santosh Shukla <sashukla@nvidia.com>
[maz: rewritten commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/1603711447-11998-2-git-send-email-sashukla@nvidia.com
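The shape of the fix in user_mem_abort(), sketched; the local variable names are assumptions:

  if (kvm_is_device_pfn(pfn)) {
      device = true;
      /* The MMIO region may sit inside a VMA that would otherwise
       * qualify for a block mapping; never let it grow past a PTE. */
      force_pte = true;
  }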
-
By Gavin Shan
Although huge pages can be created out of multiple contiguous PMDs or PTEs, the corresponding sizes are not supported at Stage-2 yet. Instead of failing the mapping, fall back to the nearest supported mapping size (CONT_PMD to PMD and CONT_PTE to PTE respectively).

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Gavin Shan <gshan@redhat.com>
[maz: rewritten commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201025230626.18501-1-gshan@redhat.com
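A sketch of the downgrade, with vma_shift holding the mapping size hinted by the VMA (names follow the arm64 fault path but are assumptions here):

  switch (vma_shift) {
  case CONT_PMD_SHIFT:
      /* Contiguous PMDs unsupported at stage-2: use a single PMD */
      vma_shift = PMD_SHIFT;
      break;
  case CONT_PTE_SHIFT:
      /* Contiguous PTEs unsupported at stage-2: use a single PTE */
      vma_shift = PAGE_SHIFT;
      break;
  }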
-
By Will Deacon
stage2_pte_cacheable() tries to figure out whether the mapping installed in its 'pte' parameter is cacheable or not. Unfortunately, it fails miserably because it extracts the memory attributes from the entry using FIELD_GET(), which returns the attributes shifted down to bit 0, but then compares this with the unshifted value generated by the PAGE_S2_MEMATTR() macro.

A direct consequence of this bug is that cache maintenance is silently skipped, which in turn causes 32-bit guests to crash early on when their set/way maintenance is trapped but not emulated correctly.

Fix the broken masks by avoiding the use of FIELD_GET() altogether.

Fixes: 6d9d2115 ("KVM: arm64: Add support for stage-2 map()/unmap() in generic page-table")
Reported-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20201029144716.30476-1-will@kernel.org
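The fixed predicate, sketched: mask in place instead of using FIELD_GET(), so both sides of the comparison keep the same (unshifted) bit positions:

  static bool stage2_pte_cacheable(kvm_pte_t pte)
  {
      /* PAGE_S2_MEMATTR() produces an unshifted value, so compare
       * against an unshifted masked field, not a FIELD_GET() result. */
      u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;

      return memattr == PAGE_S2_MEMATTR(NORMAL);
  }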
-
By Marc Zyngier
The DBGD{CCINT,SCRext} and DBGVCR register entries in the cp14 array are missing their target register, resulting in all accesses being targeted at the guard sysreg (indexed by __INVALID_SYSREG__).

Point the emulation code at the actual register entries.

Fixes: bdfb4b38 ("arm64: KVM: add trap handlers for AArch32 debug registers")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20201029172409.2768336-1-maz@kernel.org
-
By Will Deacon
For consistency with the rest of the stage-2 page-table page allocations (performed using a kvm_mmu_memory_cache), ensure that __GFP_ACCOUNT is included in the GFP flags for the PGD pages.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20201026144423.24683-1-will@kernel.org
-
By Marc Zyngier
Setting PSTATE.PAN when entering EL2 on nVHE doesn't make much sense, as this bit only means something for translation regimes that include EL0. This obviously isn't the case in the nVHE case, so let's drop this setting.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Link: https://lore.kernel.org/r/20201026095116.72051-4-maz@kernel.org
-
By Marc Zyngier
The new calling convention says that pointers coming from the SMCCC interface are turned into their HYP version in the host HVC handler. However, there is still a stray kern_hyp_va() in the TLB invalidation code, which could result in a corrupted pointer.

Drop the spurious conversion.

Fixes: a071261d ("KVM: arm64: nVHE: Fix pointers during SMCCC convertion")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201026095116.72051-3-maz@kernel.org
-
By Marc Zyngier
The hyp-init code starts by stashing a register in TPIDR_EL2 in order to free a register. This happens no matter whether the HVC call is legal or not.

Although nothing wrong seems to come out of it, it feels odd to alter the EL2 state for something that eventually returns an error. Instead, use the fact that we know exactly which bits of the __kvm_hyp_init call are non-zero to perform the check with a series of EOR/ROR instructions, combined with a build-time check that the value is the one we expect.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201026095116.72051-2-maz@kernel.org
-
- 29 Oct 2020 (3 commits)
-
-
By Rob Herring
On Cortex-A77 r0p0 and r1p0, a sequence of a non-cacheable or device load and a store exclusive or PAR_EL1 read can cause a deadlock. The workaround requires a DMB SY before and after a PAR_EL1 register read. In addition, it's possible an interrupt (doing a device read) or a KVM guest exit could be taken between the DMB and the PAR read, so we also need a DMB before returning from interrupt and before returning to a guest.

A deadlock is still possible with the workaround, as KVM guests must also have the workaround. IOW, a malicious guest can deadlock an affected system.

This workaround also depends on a firmware counterpart to enable the h/w to insert DMB SY after load and store exclusive instructions. See the errata document SDEN-1152370 v10 [1] for more information.

[1] https://static.docs.arm.com/101992/0010/Arm_Cortex_A77_MP074_Software_Developer_Errata_Notice_v10.pdf

Signed-off-by: Rob Herring <robh@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: kvmarm@lists.cs.columbia.edu
Link: https://lore.kernel.org/r/20201028182839.166037-2-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
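The core of the workaround, sketched as a wrapped PAR_EL1 read; in the kernel the DMBs are patched in via alternatives keyed on the erratum capability, and the helper name here is hypothetical:

  static inline u64 read_par_el1_a77_workaround(void)
  {
      u64 par;

      asm volatile("dmb sy" ::: "memory");  /* order earlier NC/device load */
      par = read_sysreg(par_el1);
      asm volatile("dmb sy" ::: "memory");  /* fence before later accesses */
      return par;
  }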
-
By Rob Herring
Add the MIDR part number info for the Arm Cortex-A77.

Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201028182839.166037-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
By David Woodhouse
No functional change; just reserve the feature bit for now so that VMMs can start to implement it. This will allow the host to indicate that MSI emulation supports 15-bit destination IDs, allowing up to 32768 CPUs without interrupt remapping.

cf. https://patchwork.kernel.org/patch/11816693/ for qemu

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <4cd59bed05f4b7410d3d1ffd1e997ab53683874d.camel@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
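The reservation itself is one line in the KVM CPUID feature definitions; a sketch, with the exact bit number treated as an assumption:

  /* arch/x86/include/uapi/asm/kvm_para.h (sketch) */
  #define KVM_FEATURE_MSI_EXT_DEST_ID 15  /* MSI supports 15-bit dest IDs */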
-
- 28 Oct 2020 (10 commits)
-
-
By Pascal Paillet
Add a description for the Vin power supply and for the peripherals that are supplied by Vin.

Signed-off-by: Pascal Paillet <p.paillet@st.com>
Signed-off-by: Alexandre Torgue <alexandre.torgue@st.com>
-
By Pascal Paillet
Add a description for the Vin power supply and for the peripherals that are supplied by Vin.

Signed-off-by: Pascal Paillet <p.paillet@st.com>
Signed-off-by: Alexandre Torgue <alexandre.torgue@st.com>
-
By Ard Biesheuvel
Commit 76085aff ("efi/libstub/arm64: align PE/COFF sections to segment alignment") increased the PE/COFF section alignment to match the minimum segment alignment of the kernel image, which ensures that the kernel does not need to be moved around in memory by the EFI stub if it was built as relocatable. However, the first PE/COFF section starts at _stext, which is only 4 KB aligned, and so the section layout is inconsistent.

Existing EFI loaders seem to care little about this, but it is better to clean this up. So let's pad the header to 64 KB to match the PE/COFF section alignment.

Fixes: 76085aff ("efi/libstub/arm64: align PE/COFF sections to segment alignment")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20201027073209.2897-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
By Ard Biesheuvel
Now that we started making the linker warn about orphan sections (input sections that are not explicitly consumed by an output section), some configurations produce the following warning:

  aarch64-linux-gnu-ld: warning: orphan section `.igot.plt' from `arch/arm64/kernel/head.o' being placed in section `.igot.plt'

It could be any file that triggers this - head.o is simply the first input file in the link - and the resulting .igot.plt section never actually appears in vmlinux, as it turns out to be empty. So let's add .igot.plt to our collection of input sections to disregard unless they are empty.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20201028133332.5571-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
By Arnd Bergmann
The icache_policy_str[] definition causes a warning when extra warning flags are enabled:

  arch/arm64/kernel/cpuinfo.c:38:26: warning: initialized field overwritten [-Woverride-init]
     38 |  [ICACHE_POLICY_VIPT]  = "VIPT",
        |                          ^~~~~~
  arch/arm64/kernel/cpuinfo.c:38:26: note: (near initialization for 'icache_policy_str[2]')
  arch/arm64/kernel/cpuinfo.c:39:26: warning: initialized field overwritten [-Woverride-init]
     39 |  [ICACHE_POLICY_PIPT]  = "PIPT",
        |                          ^~~~~~
  arch/arm64/kernel/cpuinfo.c:39:26: note: (near initialization for 'icache_policy_str[3]')
  arch/arm64/kernel/cpuinfo.c:40:27: warning: initialized field overwritten [-Woverride-init]
     40 |  [ICACHE_POLICY_VPIPT] = "VPIPT",
        |                          ^~~~~~~
  arch/arm64/kernel/cpuinfo.c:40:27: note: (near initialization for 'icache_policy_str[0]')

There is no real need for the default initializer here, as printing a NULL string is harmless. Rewrite the logic to have an explicit reserved value for the only one that uses the default value. This partially reverts the commit that removed ICACHE_POLICY_AIVIVT.

Fixes: 155433cb ("arm64: cache: Remove support for ASID-tagged VIVT I-caches")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20201026193807.3816388-1-arnd@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
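The reworked table, sketched: every index gets exactly one initializer, with an explicit reserved slot where ICACHE_POLICY_AIVIVT used to live, so nothing is overridden:

  static const char *icache_policy_str[] = {
      [ICACHE_POLICY_VPIPT]    = "VPIPT",
      [ICACHE_POLICY_RESERVED] = "RESERVED/UNKNOWN",
      [ICACHE_POLICY_VIPT]     = "VIPT",
      [ICACHE_POLICY_PIPT]     = "PIPT",
  };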
-
By Stephen Boyd
According to the SMCCC spec[1] (7.5.2 Discovery), the ARM_SMCCC_ARCH_WORKAROUND_1 function id only returns 0, 1, and SMCCC_RET_NOT_SUPPORTED.

  0 is "workaround required and safe to call this function"
  1 is "workaround not required but safe to call this function"
  SMCCC_RET_NOT_SUPPORTED is "might be vulnerable or might not be, who knows, I give up!"

SMCCC_RET_NOT_SUPPORTED might as well mean "workaround required, except calling this function may not work because it isn't implemented in some cases". Wonderful. We map this SMC call to

  0 is SPECTRE_MITIGATED
  1 is SPECTRE_UNAFFECTED
  SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE

For KVM hypercalls (hvc), we've implemented this function id to return SMCCC_RET_NOT_SUPPORTED, 0, and SMCCC_RET_NOT_REQUIRED. One of those isn't supposed to be there. Per the code, we call arm64_get_spectre_v2_state() to figure out what to return for this feature discovery call.

  0 is SPECTRE_MITIGATED
  SMCCC_RET_NOT_REQUIRED is SPECTRE_UNAFFECTED
  SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE

Let's clean this up so that KVM tells the guest this mapping:

  0 is SPECTRE_MITIGATED
  1 is SPECTRE_UNAFFECTED
  SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE

Note: SMCCC_RET_NOT_AFFECTED is 1 but isn't part of the SMCCC spec.

Fixes: c118bbb5 ("arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests")
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://developer.arm.com/documentation/den0028/latest [1]
Link: https://lore.kernel.org/r/20201023154751.1973872-1-swboyd@chromium.org
Signed-off-by: Will Deacon <will@kernel.org>
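The cleaned-up KVM handler, sketched; SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED stands for the non-standard value 1 noted above (the macro name is an assumption):

  s32 val = SMCCC_RET_NOT_SUPPORTED;            /* SPECTRE_VULNERABLE */

  switch (arm64_get_spectre_v2_state()) {
  case SPECTRE_MITIGATED:
      val = SMCCC_RET_SUCCESS;                  /* 0: mitigated, safe to call */
      break;
  case SPECTRE_UNAFFECTED:
      val = SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED;  /* 1: not required */
      break;
  case SPECTRE_VULNERABLE:
      break;
  }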
-
By Peter Zijlstra
Commit d53d9bc0 ("x86/debug: Change thread.debugreg6 to thread.virtual_dr6") changed the semantics of the variable from a random collection of bits to exactly only those bits that ptrace() needs.

Unfortunately this lost DR_STEP for PTRACE_{BLOCK,SINGLE}STEP. Furthermore, it turns out that userspace expects DR_STEP to be unconditionally available, even for manual TF usage outside of PTRACE_{BLOCK,SINGLE}STEP.

Fixes: d53d9bc0 ("x86/debug: Change thread.debugreg6 to thread.virtual_dr6")
Reported-by: Kyle Huey <me@kylehuey.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Kyle Huey <me@kylehuey.com>
Link: https://lore.kernel.org/r/20201027183330.GM2628@hirez.programming.kicks-ass.net
-
By Peter Zijlstra
->virtual_dr6 is the value used by ptrace_{get,set}_debugreg(6). A kernel #DB clearing it could mean spurious malfunction of ptrace() expectations.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Kyle Huey <me@kylehuey.com>
Link: https://lore.kernel.org/r/20201027093608.028952500@infradead.org
-
By Peter Zijlstra
The SDM states that #DB clears DEBUGCTLMSR_BTF. This means that when the bit is set for userspace (TIF_BLOCKSTEP) and a kernel #DB happens first, the BTF bit meant for userspace execution is lost.

Have the kernel #DB handler restore the BTF bit when it was requested for userspace.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Kyle Huey <me@kylehuey.com>
Link: https://lore.kernel.org/r/20201027093607.956147736@infradead.org
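A sketch of the restore on the kernel #DB path: if userspace asked for block stepping, put the BTF bit the hardware just cleared back into DEBUGCTLMSR:

  /* #DB cleared DEBUGCTLMSR_BTF; re-arm it if userspace wanted it */
  if (test_thread_flag(TIF_BLOCKSTEP)) {
      unsigned long debugctl = get_debugctlmsr();

      debugctl |= DEBUGCTLMSR_BTF;
      update_debugctlmsr(debugctl);
  }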
-
By Nathan Chancellor
After turning on warnings for orphan section placement, enabling CONFIG_UNWINDER_FRAME_POINTER instead of CONFIG_UNWINDER_ARM causes thousands of warnings when clang + ld.lld are used:

  $ scripts/config --file arch/arm/configs/multi_v7_defconfig \
        -d CONFIG_UNWINDER_ARM \
        -e CONFIG_UNWINDER_FRAME_POINTER
  $ make -skj"$(nproc)" ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- LLVM=1 defconfig zImage
  ld.lld: warning: init/built-in.a(main.o):(.ARM.extab) is being placed in '.ARM.extab'
  ld.lld: warning: init/built-in.a(main.o):(.ARM.extab.init.text) is being placed in '.ARM.extab.init.text'
  ld.lld: warning: init/built-in.a(main.o):(.ARM.extab.ref.text) is being placed in '.ARM.extab.ref.text'
  ld.lld: warning: init/built-in.a(do_mounts.o):(.ARM.extab.init.text) is being placed in '.ARM.extab.init.text'
  ld.lld: warning: init/built-in.a(do_mounts.o):(.ARM.extab) is being placed in '.ARM.extab'
  ld.lld: warning: init/built-in.a(do_mounts_rd.o):(.ARM.extab.init.text) is being placed in '.ARM.extab.init.text'
  ld.lld: warning: init/built-in.a(do_mounts_rd.o):(.ARM.extab) is being placed in '.ARM.extab'
  ld.lld: warning: init/built-in.a(do_mounts_initrd.o):(.ARM.extab.init.text) is being placed in '.ARM.extab.init.text'
  ld.lld: warning: init/built-in.a(initramfs.o):(.ARM.extab.init.text) is being placed in '.ARM.extab.init.text'
  ld.lld: warning: init/built-in.a(initramfs.o):(.ARM.extab) is being placed in '.ARM.extab'
  ld.lld: warning: init/built-in.a(calibrate.o):(.ARM.extab.init.text) is being placed in '.ARM.extab.init.text'
  ld.lld: warning: init/built-in.a(calibrate.o):(.ARM.extab) is being placed in '.ARM.extab'

These sections are handled by the ARM_UNWIND_SECTIONS define, which is only added to the list of sections when CONFIG_ARM_UNWIND is set. CONFIG_ARM_UNWIND is a hidden symbol that is only selected when CONFIG_UNWINDER_ARM is set, so CONFIG_UNWINDER_FRAME_POINTER never handles these sections. According to the help text of CONFIG_UNWINDER_ARM, these sections should be discarded so that the kernel image size is not affected.

Fixes: 5a17850e ("arm/build: Warn on orphan section placement")
Link: https://github.com/ClangBuiltLinux/linux/issues/1152
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
[kees: Made the discard slightly more specific]
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200928224854.3224862-1-natechancellor@gmail.com
-
- 27 Oct 2020 (3 commits)
-
-
By Fabio Estevam
Since commit 12d16b39 ("gpio: mxc: Support module build"), the CONFIG_GPIO_MXC option needs to be explicitly selected. Select it to avoid boot issues on imx25/imx27 due to the lack of the GPIO driver.

Signed-off-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
-
By Fabio Estevam
Since commit 12d16b39 ("gpio: mxc: Support module build"), the CONFIG_GPIO_MXC option needs to be explicitly selected. Select it to avoid boot issues on imx25/imx27 due to the lack of the GPIO driver.

Signed-off-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
-
By Linus Torvalds
A couple of um files ended up not including the header file that defines the __section() macro, and the simplest fix is to just revert the change for those files.

Fixes: 33def849 ("treewide: Convert macro and uses of __section(foo) to __section("foo")")
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 26 Oct 2020 (2 commits)
-
-
By Vasily Gorbik
Currently the s390 build is broken:

  SECTCMP .boot.data
  error: section .boot.data differs between vmlinux and arch/s390/boot/compressed/vmlinux
  make[2]: *** [arch/s390/boot/section_cmp.boot.data] Error 1
  SECTCMP .boot.preserved.data
  error: section .boot.preserved.data differs between vmlinux and arch/s390/boot/compressed/vmlinux
  make[2]: *** [arch/s390/boot/section_cmp.boot.preserved.data] Error 1
  make[1]: *** [bzImage] Error 2

Commit 33def849 ("treewide: Convert macro and uses of __section(foo) to __section("foo")") converted all __section(foo) to __section("foo"). This is wrong for the __bootdata / __bootdata_preserved macros, which want variable names to be a part of the intermediate section names .boot.data.<var name> and .boot.preserved.data.<var name>. Those sections are later sorted by alignment + name and merged together into the final .boot.data / .boot.preserved.data sections. Those sections must be identical in the decompressor and the decompressed kernel (that is checked during the build).

Fixes: 33def849 ("treewide: Convert macro and uses of __section(foo) to __section("foo")")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
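The fix keeps the quoting introduced by the treewide conversion but restores the name paste, so each variable still lands in its own sub-section; a sketch of the s390 macros:

  /* .boot.data.<var> / .boot.preserved.data.<var>: sorted by name,
   * merged, then compared between decompressor and kernel at build time */
  #define __bootdata(var)           __section(".boot.data." #var) var
  #define __bootdata_preserved(var) __section(".boot.preserved.data." #var) var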
-
By Nathan Chancellor
As it stands now, the vdso32 Makefile hardcodes the linker to ld.bfd using -fuse-ld=bfd with $(CC). This was taken from the arm vDSO Makefile, as the comment notes, done in commit d2b30cd4 ("ARM: 8384/1: VDSO: force use of BFD linker").

Commit fe00e50b ("ARM: 8858/1: vdso: use $(LD) instead of $(CC) to link VDSO") changed that Makefile to use $(LD) directly instead of through $(CC), which matches how the rest of the kernel operates. Since then, LD=ld.lld means that the arm vDSO will be linked with ld.lld, which has shown no problems so far. Allow ld.lld to link this vDSO as we do the regular arm vDSO.

To do this, we need to do a few things:

* Add an LD_COMPAT variable, which defaults to $(CROSS_COMPILE_COMPAT)ld with gcc, and to $(LD) if LLVM is 1 (which will be ld.lld) or $(CROSS_COMPILE_COMPAT)ld if not, matching the logic of the main Makefile. It is overrideable for further customization and avoiding breakage.

* Eliminate cc32-ldoption, which matches commit 055efab3 ("kbuild: drop support for cc-ldoption").

With those, we can use $(LD_COMPAT) in cmd_ldvdso and change the flags from compiler linker flags to linker flags directly. We eliminate -mfloat-abi=soft because it is not handled by the linker.

Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1033
Link: https://lore.kernel.org/r/20201020011406.1818918-1-natechancellor@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
-