- 31 May 2018: 8 commits
-
-
Committed by Russell King

In order to prevent aliasing attacks on the branch predictor, invalidate the BTB or instruction cache on CPUs that are known to be affected when taking an abort on an address that is outside of the user task limit:

Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate icache.

If the IBE bit is not set, then there is little point in enabling the workaround.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
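For orientation, the two invalidation primitives the abort path selects between look roughly like this in CP15 terms; a sketch based on the standard ARM encodings, not the patch's actual assembly, which lives in the per-CPU abort handlers:

```c
/* BPIALL: invalidate all branch predictors (c7, c5, 6), used on the
 * BTB-flush cores listed above. */
static inline void spectre_flush_btb(void)
{
	asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" (0) : "memory");
}

/* ICIALLU: invalidate the entire icache (c7, c5, 0), used on
 * Cortex A15 and Brahma B15, where it also drops predictor state. */
static inline void spectre_flush_icache(void)
{
	asm volatile("mcr p15, 0, %0, c7, c5, 0" : : "r" (0) : "memory");
}
```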
-
Committed by Russell King

When branch predictor hardening is enabled, firmware must have set the IBE bit in the auxiliary control register. If this bit has not been set, the Spectre workarounds will not be functional. Add validation that this bit is set, and print a warning at alert level if that is not the case.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
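A sketch of what such a check can look like. The IBE bit position differs between Cortex cores, so the real code passes a per-CPU mask; ACTLR_IBE below is a placeholder, and the message text is illustrative:

```c
#include <linux/printk.h>
#include <linux/smp.h>

#define ACTLR_IBE	(1 << 0)	/* placeholder: core-specific bit */

static void check_spectre_ibe(void)
{
	unsigned int actlr;

	/* Auxiliary Control Register: CP15 c1, c0, 1 */
	asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));

	if (!(actlr & ACTLR_IBE))
		pr_alert("CPU%u: firmware did not set IBE; Spectre workaround is inactive\n",
			 smp_processor_id());
}
```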
-
Committed by Russell King

Harden the branch predictor against Spectre v2 attacks on context switches for ARMv7 and later CPUs. We do this by:

Cortex A9, A12, A17, A73, A75: invalidating the BTB.
Cortex A15, Brahma B15: invalidating the instruction cache.

Cortex A57 and Cortex A72 are not addressed in this patch. Cortex R7 and Cortex R8 are also not addressed, as we do not enforce memory protection on these cores.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
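The shape of the mitigation is a per-CPU function pointer, chosen at boot from the CPU part and invoked when switching to a different mm. A sketch with approximate names:

```c
#include <linux/percpu.h>
#include <linux/smp.h>

typedef void (*harden_branch_predictor_fn_t)(void);
DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);

/* Called from the context-switch path when the mm changes. */
static inline void harden_branch_predictor(void)
{
	harden_branch_predictor_fn_t fn;

	fn = per_cpu(harden_branch_predictor_fn, smp_processor_id());
	if (fn)
		fn();	/* BPIALL or ICIALLU, depending on the core */
}
```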
-
Committed by Russell King

Add a Kconfig symbol for CPUs which are vulnerable to the Spectre attacks.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Russell King

Add support for per-processor bug checking: each processor function descriptor gains a function pointer for this check, which must not be an __init function. If non-NULL, it will be called whenever a CPU enters the kernel, via whichever path (boot CPU, secondary CPU startup, CPU resuming, etc.). This allows processor-specific bug checks to validate that workaround bits are properly enabled by firmware on all entry paths to the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
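Roughly, the descriptor grows a hook and a common helper invokes it. A sketch, not the exact struct layout:

```c
struct processor {
	/* ... existing per-CPU function pointers ... */
	void (*check_bugs)(void);	/* must not be __init: also runs on
					 * hotplug and resume entry paths */
};

extern struct processor processor;

/* Called on every path a CPU takes into the kernel. */
void check_other_bugs(void)
{
	if (processor.check_bugs)
		processor.check_bugs();
}
```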
-
Committed by Russell King

Check for CPU bugs when secondary processors are being brought online, and also when CPUs are resuming from a low-power mode. This gives an opportunity to check that processor-specific bug workarounds are correctly enabled for all paths through which a CPU re-enters the kernel.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Russell King

Prepare the processor bug infrastructure so that it can be expanded to check for per-processor bugs.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Russell King

Add CPU part numbers for Cortex A53, A57, A72, A73, A75 and the Broadcom Brahma B15 CPU.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
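The added constants, as I recall them from cputype.h (the top byte encodes the implementer, so verify against the tree before relying on the values):

```c
#define ARM_CPU_PART_CORTEX_A53		0x4100d030
#define ARM_CPU_PART_CORTEX_A57		0x4100d070
#define ARM_CPU_PART_CORTEX_A72		0x4100d080
#define ARM_CPU_PART_CORTEX_A73		0x4100d090
#define ARM_CPU_PART_CORTEX_A75		0x4100d0a0
#define ARM_CPU_PART_BRAHMA_B15		0x420000f0	/* Broadcom */
```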
-
- 29 March 2018: 1 commit
-
-
Committed by mike.travis@hpe.com

A critical error was found while testing the fixed UV4 HUB: an MMR address was incorrect. This causes the virtual address space for accessing the MMIOH1 region to be allocated with an incorrect size.

Fixes: 673aa20c ("x86/platform/UV: Update uv_mmrs.h to prepare for UV4A fixes")
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Dimitri Sivanich <dimitri.sivanich@hpe.com>
Cc: Russ Anderson <russ.anderson@hpe.com>
Cc: Andrew Banman <andrew.banman@hpe.com>
Link: https://lkml.kernel.org/r/20180328174011.041801248@stormcage.americas.sgi.com
-
- 28 March 2018: 2 commits
-
-
Committed by Wanpeng Li

PV TLB FLUSH can only be turned on when steal time is enabled. The condition got reversed during conflict resolution.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Fixes: 4f2f61fc ("KVM: X86: Avoid traversing all the cpus for pv tlb flush when steal time is disabled")
[Rebased on top of kvm/master and reworded the commit message. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
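A sketch of the corrected gate as it would sit in kvm_guest_init(); the flush path reads the vCPU-preempted bit out of the steal-time record, hence the dependency. Feature-flag and hook names are from memory:

```c
static void __init kvm_setup_pv_tlb_flush(void)
{
	/* PV TLB flush is only usable when steal time is on, because
	 * kvm_flush_tlb_others() inspects steal-time state to skip
	 * preempted vCPUs. */
	if (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) &&
	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME))
		pv_mmu_ops.flush_tlb_others = kvm_flush_tlb_others;
}
```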
-
Committed by Andrew Banman

BAU uses the old alloc_intr_gate() method to set up its interrupt. This fails silently because the BAU vector is in the range of APIC vectors that are registered to the spurious interrupt handler. As a consequence, BAU broadcasts are not handled and the broadcast source CPU hangs. Update BAU to use the new IDT structure.

Fixes: dc20b2d5 ("x86/idt: Move interrupt gate initialization to IDT code")
Signed-off-by: Andrew Banman <abanman@hpe.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mike Travis <mike.travis@hpe.com>
Cc: Dimitri Sivanich <sivanich@hpe.com>
Cc: Russ Anderson <rja@hpe.com>
Cc: stable@vger.kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lkml.kernel.org/r/1522188546-196177-1-git-send-email-abanman@hpe.com
-
- 27 March 2018: 2 commits
-
-
Committed by Alexey Dobriyan

The following pattern fails to compile, while the same pattern with alternative_call() does:

	if (...)
		alternative_call_2(...);
	else
		alternative_call_2(...);

as it expands into

	if (...)
	{
	};		<===
	else
	{
	};

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20180114120504.GA11368@avx2
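The failure mode is ordinary C macro hygiene, reproducible in miniature; the classic cure is to make the macro expand to a single statement:

```c
void frob(int x);

/* Expands to "{ ... };" -- the semicolon after the block is an extra
 * empty statement, so a following "else" has no matching "if". */
#define FROB_BAD(x)	{ frob(x); }

/* Expands to a single statement; "FROB_GOOD(a);" parses cleanly
 * inside an if/else. */
#define FROB_GOOD(x)	do { frob(x); } while (0)

void example(int cond, int a, int b)
{
	if (cond)
		FROB_GOOD(a);
	else
		FROB_GOOD(b);
}
```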
-
Committed by Stephane Eranian

This patch fixes a bug in how pebs->real_ip is handled in the PEBS handler. real_ip only exists on Haswell and later processors. It is actually the eventing IP, i.e., where the event occurred, as opposed to pebs->ip, which is the PEBS interrupt IP and is always off by one.

The problem is that the real_ip, just like the IP, needs to be fixed up, because PEBS does not record all the machine state registers, in particular the code segment (cs). This is why we have the set_linear_ip() function. The problem was that set_linear_ip() was only used on pebs->ip and not on pebs->real_ip.

We have profiles which ran into invalid callstacks because of this. Here is an example:

	..... 0: ffffffffffffff80 recent entry, marker kernel v
	..... 1: 000000000040044d <= user address in kernel space!
	..... 2: fffffffffffffe00 marker enter user v
	..... 3: 000000000040044d
	..... 4: 00000000004004b6 oldest entry

Debugging output in get_perf_callchain():

	[  857.769909] CALLCHAIN: CPU8 ip=40044d regs->cs=10 user_mode(regs)=0

The problem is that the kernel entry in 1: points to a user-level address. How can that be? The reason is that with PEBS sampling, the instruction that caused the event to occur and the instruction where the CPU was when the interrupt was posted may be far apart. And at some point during that time window, the privilege level may change. This happens, for instance, when the PEBS sample is taken close to a kernel entry point. Here the PEBS eventing IP (real_ip) captured a user-level instruction, but by the time the PMU interrupt fired, the processor had already entered kernel space. This is why the debug output shows a user address with user_mode() false.

The problem comes from PEBS not recording the code segment (cs) register. The register is used in x86_64 to determine whether the CPU was executing in kernel or user space. This is okay because the kernel has a software workaround called set_linear_ip(). But the issue in setup_pebs_sample_data() is that set_linear_ip() is never called on the real_ip value when it is available (Haswell and later) and precise_ip > 1.

This patch fixes the problem and eliminates the callchain discrepancy. It restructures the code around set_linear_ip() to minimize the number of times the IP has to be set.

Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1521788507-10231-1-git-send-email-eranian@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
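A hedged sketch of the repaired flow: whichever IP is chosen, it passes through set_linear_ip() so the missing cs is compensated for. The helper name and struct layout approximate the Intel PEBS code, not the exact patch:

```c
/* Assumes a Haswell-style PEBS record carrying both ip and real_ip. */
static void pebs_set_sample_ip(struct pt_regs *regs,
			       struct pebs_record_hsw *pebs, int precise_ip)
{
	if (precise_ip > 1) {
		/* real_ip is the eventing IP, but it still lacks a
		 * recorded cs, so it needs the same fixup as pebs->ip. */
		set_linear_ip(regs, pebs->real_ip);
		regs->flags |= PERF_EFLAGS_EXACT;
	} else {
		/* interrupt IP: off by one instruction */
		set_linear_ip(regs, pebs->ip);
	}
}
```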
-
- 26 March 2018: 1 commit
-
-
Committed by Nicholas Piggin

The SLB bad address handler's trap number fixup does not preserve the low bit that indicates nonvolatile GPRs have not been saved. This leads save_nvgprs to skip saving them, and subsequent functions and the return from interrupt will think they are saved. This causes kernel branch-to-garbage debugging to not have correct registers, and can also cause userspace to have its registers clobbered after a segfault.

Fixes: f0f558b1 ("powerpc/mm: Preserve CFAR value on SLB miss caused by access to bogus address")
Cc: stable@vger.kernel.org # v4.9+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 25 March 2018: 1 commit
-
-
Committed by Sven Wegener

The kernel build system already takes care of generating the dependency files. Having the additional -MD in KBUILD_CFLAGS leads to stray .<pid>.d files in the build directory when we call the cc-option macro.

Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthias Kaehlcke <mka@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/alpine.LNX.2.21.1803242219380.30139@titan.int.lan.stealer.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 24 March 2018: 7 commits
-
-
Committed by Nicolas Pitre

Send nm's complaints about a broken pipe (when sed exits early) to /dev/null. All errors should be printed to stderr. Don't trap on normal exit, so the trap can return an error code.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
Committed by Jinbum Park

Define vdso_start and vdso_end as arrays to avoid a compile-time analysis error when building with CONFIG_FORTIFY_SOURCE. And since vdso_start and vdso_end are used only in vdso.c, move their extern declarations from vdso.h to vdso.c.

If the kernel is built with CONFIG_FORTIFY_SOURCE, a compile-time error occurs at this code:

	if (memcmp(&vdso_start, "\177ELF", 4))

The size of "&vdso_start" is recognized as 1 byte, but n is 4, so a compile-time error is reported.

Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jinbum Park <jinb.park7@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
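The fix in miniature: array declarations give FORTIFY_SOURCE an unknown (rather than 1-byte) size, and the ELF magic check then compiles cleanly. A sketch:

```c
#include <linux/string.h>
#include <linux/printk.h>
#include <linux/errno.h>

extern char vdso_start[], vdso_end[];	/* linker-provided bounds */

static int __init vdso_check(void)
{
	/* Compare the first 4 bytes against the ELF magic. */
	if (memcmp(vdso_start, "\177ELF", 4)) {
		pr_err("vDSO is not a valid ELF object!\n");
		return -ENOEXEC;
	}
	return 0;
}
```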
-
Committed by Arnd Bergmann

Without CONFIG_MMU, this results in a build failure:

	./arch/arm/include/asm/memory.h:92:23: error: initializer element is not constant
	 #define VECTORS_BASE vectors_base
	arch/arm/mm/dump.c:32:4: note: in expansion of macro 'VECTORS_BASE'
	  { VECTORS_BASE, "Vectors" },
	arch/arm/mm/dump.c:71:11: error: 'L_PTE_USER' undeclared here (not in a function); did you mean 'VTIME_USER'?
	  .mask = L_PTE_USER,
	          ^~~~~~~~~~

Obviously the feature only makes sense with an MMU, so let's add the dependency here.

Fixes: a8e53c15 ("ARM: 8737/1: mm: dump: add checking for writable and executable")
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
-
Committed by Fabio Estevam

Commit 384b38b6 ("ARM: 7873/1: vfp: clear vfp_current_hw_state for dying cpu") fixed the CPU-dying notifier by clearing vfp_current_hw_state[]. However, commit e5b61baf ("arm: Convert VFP hotplug notifiers to state machine") incorrectly used the original vfp_force_reload() function in the CPU-dying notifier. Fix it by going back to clearing vfp_current_hw_state[].

Fixes: e5b61baf ("arm: Convert VFP hotplug notifiers to state machine")
Cc: linux-stable <stable@vger.kernel.org>
Reported-by: Kohji Okuno <okuno.kohji@jp.panasonic.com>
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
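The corrected CPU_DYING callback is essentially one line; this mirrors the fix as I understand it:

```c
static int vfp_dying_cpu(unsigned int cpu)
{
	/* Forget the dying CPU's VFP ownership instead of calling
	 * vfp_force_reload(), which is not valid in this context. */
	vfp_current_hw_state[cpu] = NULL;
	return 0;
}
```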
-
Committed by Andy Lutomirski

There's nothing IST-worthy about #BP/int3. We don't allow kprobes in the small handful of places in the kernel that run at CPL0 with an invalid stack, and 32-bit kernels have used normal interrupt gates for #BP forever. Furthermore, we don't allow kprobes in places that have usergs while in kernel mode, so "paranoid" is also unnecessary.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
-
Committed by Waiman Long

The efi_pgd is allocated as PGD_ALLOCATION_ORDER pages and therefore must also be freed as PGD_ALLOCATION_ORDER pages with free_pages().

Fixes: d9e9a641 ("x86/mm/pti: Allocate a separate user PGD")
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1521746333-19593-1-git-send-email-longman@redhat.com
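In sketch form, the invariant being restored: allocation and free must agree on the order. Helper names are illustrative:

```c
#include <linux/gfp.h>

static pgd_t *efi_pgd;

static int efi_alloc_pgd(void)
{
	/* Allocated as 2^PGD_ALLOCATION_ORDER contiguous pages... */
	efi_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
					    PGD_ALLOCATION_ORDER);
	return efi_pgd ? 0 : -ENOMEM;
}

static void efi_free_pgd(void)
{
	/* ...so free with the same order; a plain free_page() would
	 * leak all but the first page. */
	free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER);
}
```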
-
Committed by Sean Christopherson

Segment registers must be synchronized prior to any code that may trigger a call to emulation_required()/guest_state_valid(), e.g. vmx_set_cr0(). Because preparing vmcs02 writes segmentation fields directly, i.e. doesn't use vmx_set_segment(), emulation_required will not be re-evaluated when synchronizing the segment registers, which can result in L0 incorrectly starting emulation of L2.

Fixes: 8665c3f9 ("KVM: nVMX: initialize descriptor cache fields in prepare_vmcs02_full")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
[Move all of prepare_vmcs02_full earlier, not just segment registers. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 23 March 2018: 10 commits
-
-
Committed by Aneesh Kumar K.V

On POWER9, under some circumstances, a broadcast TLB invalidation might complete before all previous stores have drained, potentially allowing stale stores to become visible after the invalidation. Work around this by doubling up those TLB invalidations, which was verified by HW to be sufficient to close the risk window. This will be documented in a yet-to-be-published errata.

Fixes: 1a472c9d ("powerpc/mm/radix: Add tlbflush routines")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Enable the feature in the DT CPU features code for all Power9, rename the feature to CPU_FTR_P9_TLBIE_BUG per benh.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V

No functional change. Just code movement to ease later changes.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V

These functions are not used anywhere in the code. Remove them.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

On POWER9 the Nest MMU may fail to invalidate some translations when doing a tlbie "by PID" or "by LPID" that is targeted at the TLB only and not the page walk cache. Work around this by forcing such invalidations to escalate to RIC=2 (full invalidation of TLB *and* PWC) when a coprocessor is in use for the context.

Fixes: 03b8abed ("cxl: Enable global TLBIs for cxl contexts")
Cc: stable@vger.kernel.org # v4.15+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
[balbirs: fixed spelling and coding style to quiesce checkpatch.pl]
Tested-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

Currently, when using coprocessors (which use the Nest MMU), we simply increment the active_cpu count to force all TLB invalidations to become broadcast. Unfortunately, due to an erratum in POWER9, we will need to know more specifically when coprocessors are in use. This maintains a separate copros counter in the MMU context for that purpose.

NB. The commit mentioned in the Fixes tag below is not at fault for the bug we're fixing in this commit and the next, but this fix applies on top of the infrastructure it introduced.

Fixes: 03b8abed ("cxl: Enable global TLBIs for cxl contexts")
Cc: stable@vger.kernel.org # v4.15+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
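The counter's shape, sketched with approximate helper names: coprocessor attach/detach bumps a dedicated count in the mm context, so the TLB-flush code can distinguish "broadcast because of copros" from "many active CPUs":

```c
#include <linux/atomic.h>

static inline void mm_context_add_copro(struct mm_struct *mm)
{
	atomic_inc(&mm->context.copros);
}

static inline void mm_context_remove_copro(struct mm_struct *mm)
{
	/* The real helper also flushes before dropping the count, so a
	 * racing invalidation cannot be downgraded mid-flight. */
	atomic_dec(&mm->context.copros);
}
```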
-
Committed by Paul Mackerras

Since commit 6964e6a4 ("KVM: PPC: Book3S HV: Do SLB load/unload with guest LPCR value loaded", 2018-01-11), we have been seeing occasional machine check interrupts on POWER8 systems when running KVM guests, due to SLB multihit errors. This turns out to be caused by the guest exit code reloading the host SLB entries from the SLB shadow buffer when the SLB was not previously cleared in the guest entry path. This can happen because the path which skips from the guest entry code to the guest exit code without entering the guest now does the skip before the SLB is cleared and loaded with guest values, but the host values are loaded after the point in the guest exit path that we skip to. To fix this, we move the code that reloads the host SLB values up so that it occurs just before the point in the guest exit code (the label guest_bypass:) where we skip to from the guest entry path.

Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Fixes: 6964e6a4 ("KVM: PPC: Book3S HV: Do SLB load/unload with guest LPCR value loaded")
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
-
Committed by Toshi Kani

Implement pud_free_pmd_page() and pmd_free_pte_page() on x86, which clear a given pud/pmd entry and free up the lower-level page table(s). The address range associated with the pud/pmd entry must have been purged by INVLPG.

Link: http://lkml.kernel.org/r/20180314180155.19492-3-toshi.kani@hpe.com
Fixes: e61ce6ad ("mm: change ioremap to set up huge I/O mappings")
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Reported-by: Lei Li <lious.lilei@hisilicon.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
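The pmd-level helper is short; this follows the commit's x86 implementation as I recall it (the pud variant additionally walks and frees each pmd-level table):

```c
#include <linux/mm.h>
#include <asm/pgtable.h>

int pmd_free_pte_page(pmd_t *pmd)
{
	pte_t *pte;

	if (pmd_none(*pmd))
		return 1;

	/* The caller guarantees the range was already purged by
	 * INVLPG, so the pte page can be detached and freed. */
	pte = (pte_t *)pmd_page_vaddr(*pmd);
	pmd_clear(pmd);
	free_page((unsigned long)pte);

	return 1;
}
```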
-
Committed by Toshi Kani

On architectures with CONFIG_HAVE_ARCH_HUGE_VMAP set, ioremap() may create pud/pmd mappings. A kernel panic was observed on arm64 systems with Cortex-A75 in the following steps, as described by Hanjun Guo:

1. ioremap a 4K size; a valid page table is built.
2. iounmap it; pte0 is set to 0.
3. ioremap the same address with 2M size; pgd/pmd is unchanged, then a new value is set for the pmd.
4. pte0 is leaked.
5. The CPU may take an exception because the old pmd is still in the TLB, which leads to a kernel panic.

This panic is not reproducible on x86: INVLPG, called from iounmap, purges all levels of entries associated with the purged address. x86 still has the memory leak, though.

The patch changes the ioremap path to free unmapped page table(s), since doing so in the unmap path has the following issues:

- The iounmap() path is shared with vunmap(). Since vmap() only supports pte mappings, making vunmap() free a pte page is an overhead for regular vmap users, as they do not need a pte page freed up.
- Checking whether all entries in a pte page are cleared in the unmap path is racy, and serializing this check is expensive.
- The unmap path calls free_vmap_area_noflush() to do lazy TLB purges. Clearing a pud/pmd entry before the lazy TLB purges needs an extra TLB purge.

Add two interfaces, pud_free_pmd_page() and pmd_free_pte_page(), which clear a given pud/pmd entry and free up a page for the lower-level entries. This patch implements their stub functions on x86 and arm64, which work as a workaround.

[akpm@linux-foundation.org: fix typo in pmd_free_pte_page() stub]
Link: http://lkml.kernel.org/r/20180314180155.19492-2-toshi.kani@hpe.com
Fixes: e61ce6ad ("mm: change ioremap to set up huge I/O mappings")
Reported-by: Lei Li <lious.lilei@hisilicon.com>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Wang Xuefeng <wxf.wang@hisilicon.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Chintan Pandya <cpandya@codeaurora.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Arnd Bergmann

A bugfix I did earlier caused a build regression on h8300, which defines the __BIG_ENDIAN macro in a slightly different way than the generic code:

	arch/h8300/include/asm/byteorder.h:5:0: warning: "__BIG_ENDIAN" redefined

We don't need to define it here, as the same macro is already provided by linux/byteorder/big_endian.h, and that version does not conflict.

While this is a v4.16 regression, my earlier patch also got backported to the 4.14 and 4.15 stable kernels, so we need the fixup there as well.

Link: http://lkml.kernel.org/r/20180313120752.2645129-1-arnd@arndb.de
Fixes: 101110f6 ("Kbuild: always define endianess in kconfig.h")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Nicholas Piggin

force_external_irq_replay() can be called in the do_IRQ path with interrupts hard-enabled and soft-disabled, if may_hard_irq_enable() set MSR[EE]=1. It updates local_paca->irq_happened with a load, modify, store sequence. If a maskable interrupt hits during this sequence, it will go to the masked handler to be marked pending in irq_happened. This update will be lost when the interrupt returns and the store instruction executes. This can result in unpredictable latencies, timeouts, lockups, etc.

Fix this by ensuring hard interrupts are disabled before modifying irq_happened.

This could cause any maskable asynchronous interrupt to get lost, but it was noticed on a P9 SMP system doing RDMA NVMe target over 100GbE, i.e. a very high external interrupt rate and a high IPI rate. The hang was bisected down to enabling doorbell interrupts for IPIs. These provided an interrupt type that could run at high rates in the do_IRQ path, stressing the race.

Fixes: 1d607bb3 ("powerpc/irq: Add mechanism to force a replay of interrupts")
Cc: stable@vger.kernel.org # v4.8+
Reported-by: Carol L. Soto <clsoto@us.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
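The essence of the fix, sketched: hard-disable before the read-modify-write so no maskable interrupt can interleave with it. The real function also arranges the actual replay; flag names approximate the paca definitions:

```c
void force_external_irq_replay(void)
{
	/* Close the MSR[EE] window first... */
	__hard_irq_disable();
	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;

	/* ...so this load-modify-store of irq_happened cannot lose a
	 * concurrent "pending" bit set by a masked handler. */
	local_paca->irq_happened |= PACA_IRQ_EE;
}
```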
-
- 22 March 2018: 5 commits
-
-
Committed by NeilBrown

Since commit 3af5a67c ("MIPS: Fix early CM probing") the MT7621 has not been able to boot. That commit caused mips_cm_probe() to be called before mt7621.c::prom_soc_init(). prom_soc_init() has a comment explaining that mips_cm_probe() "wipes out the bootloader config" and means that configuration registers are no longer available; it has some code to re-enable this config. Before this re-enable code runs, the sysc register cannot be read, so when SYSC_REG_CHIP_NAME0 is read, a garbage value is returned and panic() is called.

If we move the config-repair code to the top of prom_soc_init(), the registers can be read and boot can proceed. Very occasionally, the first register read after the reconfiguration returns garbage, so add a call to __sync().

Fixes: 3af5a67c ("MIPS: Fix early CM probing")
Signed-off-by: NeilBrown <neil@brown.name>
Reviewed-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.5+
Patchwork: https://patchwork.linux-mips.org/patch/18859/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by NeilBrown

ralink_halt() does nothing that machine_halt() doesn't already do, so it adds no value. It actually causes incorrect behaviour due to the "unreachable()" at the end. This tells the compiler that the end of the function will never be reached, which isn't true. The compiler responds by not adding a 'return' instruction, so control simply moves on to whatever bytes come afterwards in memory. In my testing, that was the ralink_restart() function. This means that an attempt to 'halt' the machine would actually cause a reboot. So remove ralink_halt() so that a 'halt' really does halt.

Fixes: c06e836a ("MIPS: ralink: adds reset code")
Signed-off-by: NeilBrown <neil@brown.name>
Cc: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/18851/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Mathias Kresin

Enable syscon so that it can be used for the RCU MFD on Amazon SE as well. The Amazon SE has a reset controller system similar to Danube and XWAY and mostly uses their drivers. As these drivers now need syscon, also activate the syscon subsystem for Amazon SE.

Fixes: 2b6639d4 ("MIPS: lantiq: Enable MFD_SYSCON to be able to use it for the RCU MFD")
Signed-off-by: Mathias Kresin <dev@kresin.me>
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: John Crispin <john@phrozen.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.14+
Patchwork: https://patchwork.linux-mips.org/patch/18817/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Mathias Kresin

On Danube and AR9 the USB core is connected through an AHB bus to the main system crossbar, hence we need to enable the gating clock of the AHB bus as well to make the USB controller work.

Fixes: dea54fba ("phy: Add an USB PHY driver for the Lantiq SoCs using the RCU module")
Signed-off-by: Mathias Kresin <dev@kresin.me>
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: John Crispin <john@phrozen.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.14+
Patchwork: https://patchwork.linux-mips.org/patch/18814/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Mathias Kresin

On Danube the USB0 controller registers are at 1e101000 and the USB0 PHY register is at 1f203018, similar to all other Lantiq SoCs. Activate the USB controller gating clock through the USB controller driver and not the PHY. This fixes a problem introduced in a previous commit.

Fixes: dea54fba ("phy: Add an USB PHY driver for the Lantiq SoCs using the RCU module")
Signed-off-by: Mathias Kresin <dev@kresin.me>
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: John Crispin <john@phrozen.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.14+
Patchwork: https://patchwork.linux-mips.org/patch/18816/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 21 March 2018: 2 commits
-
-
Committed by Tony Lindgren

We are still using custom SRAM code for some SoCs and are not marking the PM code mapped to SRAM as read-only and executable after we're done. With CONFIG_DEBUG_WX=y, we will get a "Found insecure W+X mapping at address" warning.

Let's fix this issue the same way commit 728bbe75 ("misc: sram: Introduce support code for protect-exec sram type") does for drivers/misc/sram-exec.c.

On omap3 we need to restore SRAM when returning from off mode after idle, so init-time configuration is not enough. And as we no longer have users for omap_sram_push_address(), we can make it static while at it.

Note that eventually we should be using sram-exec.c for all SoCs.

Cc: stable@vger.kernel.org # v4.12+
Cc: Dave Gerlach <d-gerlach@ti.com>
Reported-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Tony Lindgren <tony@atomide.com>
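A minimal sketch of closing the W+X window after pushing PM code to SRAM, mirroring what sram-exec.c does; the helper name and page math are illustrative:

```c
#include <linux/mm.h>
#include <asm/set_memory.h>

static void sram_protect_exec(void *start, size_t size)
{
	unsigned long base = (unsigned long)start & PAGE_MASK;
	int pages = PAGE_ALIGN(size + ((unsigned long)start - base))
			>> PAGE_SHIFT;

	set_memory_ro(base, pages);	/* drop the writable attribute... */
	set_memory_x(base, pages);	/* ...while keeping it executable */
}
```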
-
Committed by Linus Torvalds

The undocumented 'icebp' instruction (aka 'int1') works pretty much like 'int3' in the absence of in-circuit probing equipment (except, obviously, that it raises #DB instead of raising #BP), and is used by some validation test suites as such.

But Andy Lutomirski noticed that his test suite acted differently in kvm than on bare hardware. The reason is that kvm used an inexact test for the icebp instruction: it just assumed that an all-zero VM exit qualification value meant that the VM exit was due to icebp.

That is not unlike the guess that do_debug() makes for the actual exception-handling case, but it's purely a heuristic, not an absolute rule. do_debug() does it because it wants to ascribe _some_ reason to the #DB that happened, and an empty %dr6 value means that 'icebp' is the most likely cause and we have no better information.

But kvm can just do it right, because unlike the do_debug() case, kvm actually sees the real reason for the #DB in the VM-exit interruption information field. So instead of relying on an inexact heuristic, just use the actual VM exit information that says "it was 'icebp'".

Right now the 'icebp' instruction isn't technically documented by Intel, but that will hopefully change. The special "privileged software exception" information _is_ actually mentioned in the Intel SDM, even though the cause of it isn't enumerated.

Reported-by: Andy Lutomirski <luto@kernel.org>
Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
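The KVM-side test reduces to matching the "privileged software exception" type in the interruption-info field; constant names as I recall them from vmx.c:

```c
static inline bool is_icebp(u32 intr_info)
{
	/* icebp arrives as a valid "privileged software exception". */
	return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VALID_MASK))
		== (INTR_TYPE_PRIV_SW_EXCEPTION | INTR_INFO_VALID_MASK);
}
```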
-
- 20 March 2018: 1 commit
-
-
Committed by Boris Ostrovsky

Writing to it directly does not work for Xen PV guests.

Fixes: 49275fef ("x86/vsyscall/64: Explicitly set _PAGE_USER in the pagetable hierarchy")
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180319143154.3742-1-boris.ostrovsky@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
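The gist, sketched with approximate names: the top-level page-table entry must go through the paravirt accessor rather than a raw store, which Xen PV traps:

```c
static void set_vsyscall_user_bit(p4d_t *p4d)
{
	/* A raw "p4d->p4d |= _PAGE_USER" bypasses the paravirt layer
	 * and faults under Xen PV; set_p4d() routes the update through
	 * the hypervisor when needed. */
	set_p4d(p4d, __p4d(p4d_val(*p4d) | _PAGE_USER));
}
```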
-