- 19 April 2017, 1 commit
-
-
Committed by Vishal Verma

The NFIT MCE handler callback (for handling media errors on NVDIMMs) takes a mutex to add the location of a memory error to a list. But since the notifier call chain for machine checks (x86_mce_decoder_chain) is atomic, we get a lockdep splat like:

  BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620
  in_atomic(): 1, irqs_disabled(): 0, pid: 4, name: kworker/0:0
  [..]
  Call Trace:
   dump_stack
   ___might_sleep
   __might_sleep
   mutex_lock_nested
   ? __lock_acquire
   nfit_handle_mce
   notifier_call_chain
   atomic_notifier_call_chain
   ? atomic_notifier_call_chain
   mce_gen_pool_process

Convert the notifier to a blocking one which gets to run only in process context.

Boris: remove the notifier call in atomic context in print_mce(). For now, let's print the MCE on the atomic path so that we can make sure they go out and get logged at least.

Fixes: 6839a6d9 ("nfit: do an ARS scrub on hitting a latent media error")
Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: x86-ml <x86@kernel.org>
Cc: <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/20170411224457.24777-1-vishal.l.verma@intel.com
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
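For readers less familiar with the notifier API, here is a minimal sketch of the kind of conversion described above, assuming a simplified decoder chain and handler (the real code lives in the MCE and NFIT drivers and differs in detail):

  #include <linux/notifier.h>
  #include <linux/mutex.h>

  /* Blocking chains run their callbacks in process context only, so they may sleep. */
  static BLOCKING_NOTIFIER_HEAD(mce_decoder_chain);    /* was ATOMIC_NOTIFIER_HEAD() */

  static DEFINE_MUTEX(err_list_lock);

  static int nfit_style_mce_handler(struct notifier_block *nb, unsigned long val, void *data)
  {
      mutex_lock(&err_list_lock);     /* sleeping lock: illegal in an atomic chain */
      /* ... record the media error location on a list ... */
      mutex_unlock(&err_list_lock);
      return NOTIFY_OK;
  }

  static void process_queued_mce(void *mce)
  {
      /* was atomic_notifier_call_chain(); now allowed to run handlers that sleep */
      blocking_notifier_call_chain(&mce_decoder_chain, 0, mce);
  }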
-
- 15 April 2017, 1 commit
-
-
Committed by Mikulas Patocka

The patch 554bfece ("parisc: Fix access fault handling in pa_memcpy()") reimplements the pa_memcpy function. Unfortunately, it makes the kernel unbootable. The crash happens in the function ide_complete_cmd where memcpy is called with the same source and destination address.

This patch fixes a few bugs in pa_memcpy:
* When jumping to .Lcopy_loop_16 for the first time, don't skip the instruction "ldi 31,t0" (this bug made the kernel unbootable)
* Use the COND macro when comparing length, so that the comparison is 64-bit (a theoretical issue, in case the length is greater than 0xffffffff)
* Don't use the COND macro after the "extru" instruction (the PA-RISC specification says that the upper 32 bits of the extru result are undefined, although they are set to zero in practice)
* Fix exception addresses in .Lcopy16_fault and .Lcopy8_fault
* Rename .Lcopy_loop_4 to .Lcopy_loop_8 (so that it is consistent with .Lcopy8_fault)

Cc: <stable@vger.kernel.org> # v4.9+
Fixes: 554bfece ("parisc: Fix access fault handling in pa_memcpy()")
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
-
- 14 April 2017, 2 commits
-
-
Committed by Peter Zijlstra

When the perf_branch_entry::{in_tx,abort,cycles} fields were added, intel_pmu_lbr_read_32() wasn't updated to initialize them.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Cc: <stable@vger.kernel.org>
Fixes: 135c5612 ("perf/x86/intel: Support Haswell/v4 LBR format")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
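A rough sketch of what initializing those fields implies, using the perf_branch_entry layout from the perf UAPI header; the surrounding decode loop is simplified and the helper name is illustrative:

  #include <linux/perf_event.h>

  /* Simplified: fill one decoded 32-bit-format LBR record into a perf_branch_entry. */
  static void fill_lbr_entry_32(struct perf_branch_entry *e, u64 from, u64 to)
  {
      e->from      = from;
      e->to        = to;
      e->mispred   = 0;
      e->predicted = 0;
      /* These fields were added after the 32-bit code and must not stay uninitialized. */
      e->in_tx     = 0;
      e->abort     = 0;
      e->cycles    = 0;
  }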
-
Committed by Jan Beulich

The ia64 build generates many warnings like this:

  WARNING: EXPORT symbol "empty_zero_page" [vmlinux] version generation failed, symbol will not be versioned.

Besides adding the necessary header, this also requires fiddling with some explicit .S -> .o rules.

Cc: IA64-ML <linux-ia64@vger.kernel.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 April 2017, 3 commits
-
-
Committed by Omar Sandoval

Reserving a runtime region results in splitting the EFI memory descriptors for the runtime region. This results in runtime region descriptors with bogus memory mappings, leading to interesting crashes like the following during a kexec:

  general protection fault: 0000 [#1] SMP
  Modules linked in:
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.11.0-rc1 #53
  Hardware name: Wiwynn Leopard-Orv2/Leopard-DDR BW, BIOS LBM05 09/30/2016
  RIP: 0010:virt_efi_set_variable()
  ...
  Call Trace:
   efi_delete_dummy_variable()
   efi_enter_virtual_mode()
   start_kernel()
   ? set_init_arg()
   x86_64_start_reservations()
   x86_64_start_kernel()
   start_cpu()
  ...
  Kernel panic - not syncing: Fatal exception

Runtime regions will not be freed and do not need to be reserved, so skip the memmap modification in this case.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: <stable@vger.kernel.org> # v4.9+
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Dave Young <dyoung@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Jones <pjones@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Fixes: 8e80632f ("efi/esrt: Use efi_mem_reserve() and avoid a kmalloc()")
Link: http://lkml.kernel.org/r/20170412152719.9779-2-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Dan Williams

Before we rework the "pmem api" to stop abusing __copy_user_nocache() for memcpy_to_pmem() we need to fix cases where we may strand dirty data in the cpu cache. The problem occurs when copy_from_iter_pmem() is used for arbitrary data transfers from userspace. There is no guarantee that these transfers, performed by dax_iomap_actor(), will have aligned destinations or aligned transfer lengths. Backstop the usage of __copy_user_nocache() with explicit cache management in these unaligned cases.

Yes, copy_from_iter_pmem() is now too big for an inline, but addressing that is saved for a later patch that moves the entirety of the "pmem api" into the pmem driver directly.

Fixes: 5de490da ("pmem: add copy_from_iter_pmem() and clear_pmem()")
Cc: <stable@vger.kernel.org>
Cc: <x86@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
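As a loose illustration of the "backstop" idea only: the non-temporal copy is wrapped so that any cachelines a misaligned head or tail may have dirtied get written back afterwards. Both helper names below (nt_copy_from_user(), wb_cachelines()) are hypothetical stand-ins, and the real alignment rules of __copy_user_nocache() are subtler than shown:

  #include <linux/kernel.h>

  /*
   * Illustration only: nt_copy_from_user() stands in for the non-temporal copy
   * primitive and wb_cachelines() for a cacheline write-back helper (e.g. one
   * built on clwb/clflushopt).  Neither name is a real kernel API.
   */
  static size_t pmem_copy_from_user(void *dst, const void __user *src, size_t bytes)
  {
      size_t rem = nt_copy_from_user(dst, src, bytes);  /* may cache unaligned head/tail */

      /* Backstop: if either end was unaligned, write back the lines it may have dirtied. */
      if (!IS_ALIGNED((unsigned long)dst, 8) || !IS_ALIGNED(bytes, 8))
          wb_cachelines(dst, bytes);

      return rem;
  }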
-
Committed by Kees Cook

Under CONFIG_STRICT_DEVMEM, reading System RAM through /dev/mem is disallowed. However, on x86, the first 1MB was always allowed for BIOS and similar things, regardless of it actually being System RAM. It was possible for heap to end up getting allocated in low 1MB RAM, and then read by things like x86info or dd, which would trip hardened usercopy:

  usercopy: kernel memory exposure attempt detected from ffff880000090000 (dma-kmalloc-256) (4096 bytes)

This changes the x86 exception for the low 1MB by reading back zeros for System RAM areas instead of blindly allowing them. More work is needed to extend this to mmap, but currently mmap doesn't go through usercopy, so hardened usercopy won't Oops the kernel.

Reported-by: Tommi Rantala <tommi.t.rantala@nokia.com>
Tested-by: Tommi Rantala <tommi.t.rantala@nokia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 11 April 2017, 4 commits
-
-
Committed by Jiri Olsa

The schemata lock is released before freeing the resource's temporary tmp_cbms allocation. That's racy versus another write which allocates and uses new temporary storage, resulting in memory leaks, freeing in-use memory, a double free or any combination of those.

Move the unlock after the release code.

Fixes: 60ec2440 ("x86/intel_rdt: Add schemata file")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Shaohua Li <shli@fb.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20170411071446.15241-1-jolsa@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
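A generic sketch of the ordering problem, with a simplified write handler and a hypothetical apply_schemata() helper; the point is only that the temporary buffer must be freed before the lock that serializes writers is dropped:

  #include <linux/mutex.h>
  #include <linux/slab.h>

  static DEFINE_MUTEX(schemata_mutex);

  static int schemata_write(const char *buf)
  {
      char *tmp;
      int ret;

      mutex_lock(&schemata_mutex);

      tmp = kstrdup(buf, GFP_KERNEL);   /* temporary working copy */
      if (!tmp) {
          ret = -ENOMEM;
          goto out;
      }

      ret = apply_schemata(tmp);        /* hypothetical parse/apply helper */

      kfree(tmp);                       /* free while the lock is still held ... */
  out:
      mutex_unlock(&schemata_mutex);    /* ... otherwise a concurrent writer races */
      return ret;
  }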
-
Committed by Markus Trippelsdorf

Since commit:

  4bcc595c ("printk: reinstate KERN_CONT for printing")

... the debug output of signal_fault(), do_trap() and do_general_protection() looks garbled, e.g.:

  traps: conftest[9335] trap invalid opcode ip:400428 sp:7ffeaba1b0d8 error:0
   in conftest[400000+1000]

(note the unintended line break).

Fix the bug by adding KERN_CONTs.

Signed-off-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
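For context, a minimal illustration of the KERN_CONT rule the fix relies on: after the printk rework, every continuation fragment must carry KERN_CONT (or use pr_cont()), otherwise each fragment starts a new line. This is a simplified example, not the actual traps.c code:

  #include <linux/kernel.h>

  static void report_trap(const char *comm, int pid, unsigned long ip)
  {
      /* Without KERN_CONT the second printk starts a new line of its own. */
      printk(KERN_INFO "traps: %s[%d] trap invalid opcode ip:%lx", comm, pid, ip);
      printk(KERN_CONT " in %s[%lx+%x]\n", comm, 0x400000UL, 0x1000);
  }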
-
Committed by Thomas Gleixner

The vsyscall32 sysctl can race against a concurrent fork when it switches from disabled to enabled:

  arch_setup_additional_pages()
    if (vdso32_enabled)
      --> No mapping
                                  sysctl.vsysscall32()
                                    --> vdso32_enabled = true
  create_elf_tables()
    ARCH_DLINFO_IA32
      if (vdso32_enabled) {
        --> Add VDSO entry with NULL pointer

Make ARCH_DLINFO_IA32 check whether the VDSO mapping has been set up for the newly forked process or not.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20170410151723.602367196@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
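A hedged sketch of the check described above, assuming the 32-bit VDSO base is recorded in the process's mm when the mapping is created (the aux-vector macro shown is illustrative and condensed compared with the real arch/x86 ELF header):

  /*
   * Illustrative ARCH_DLINFO-style snippet: publish AT_SYSINFO_EHDR only if this
   * process actually received a VDSO mapping, instead of re-reading the global
   * vdso32_enabled sysctl, which may have flipped since the mapping decision.
   */
  #define VDSO_CURRENT_BASE   ((unsigned long)current->mm->context.vdso)

  #define ARCH_DLINFO_IA32                                        \
  do {                                                            \
      if (VDSO_CURRENT_BASE) {                                    \
          NEW_AUX_ENT(AT_SYSINFO, VDSO_ENTRY);                    \
          NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_CURRENT_BASE);        \
      }                                                           \
  } while (0)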
-
Committed by Mathias Krause

vdso_enabled can be set to arbitrary integer values via the kernel command line 'vdso32=' parameter or via 'sysctl abi.vsyscall32'. load_vdso32() only maps the VDSO if vdso_enabled == 1, but ARCH_DLINFO_IA32 merely checks for vdso_enabled != 0. As a consequence the AT_SYSINFO_EHDR auxiliary vector for the VDSO_ENTRY is emitted with a NULL pointer, which causes a segfault when the application tries to use the VDSO.

Restrict the valid arguments on the command line and the sysctl to 0 and 1.

Fixes: b0b49f26 ("x86, vdso: Remove compat vdso support")
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Cc: Roland McGrath <roland@redhat.com>
Link: http://lkml.kernel.org/r/1491424561-7187-1-git-send-email-minipli@googlemail.com
Link: http://lkml.kernel.org/r/20170410151723.518412863@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
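A sketch of the usual way such a sysctl is clamped to a boolean range, using proc_dointvec_minmax() with explicit bounds; the table entry shown is illustrative rather than the exact vdso32-setup.c hunk:

  #include <linux/sysctl.h>

  extern unsigned int vdso32_enabled;   /* assumed to be the existing knob */

  static int zero;
  static int one = 1;

  static struct ctl_table abi_table[] = {
      {
          .procname     = "vsyscall32",
          .data         = &vdso32_enabled,
          .maxlen       = sizeof(int),
          .mode         = 0644,
          /* proc_dointvec_minmax() rejects writes outside [extra1, extra2] */
          .proc_handler = proc_dointvec_minmax,
          .extra1       = &zero,
          .extra2       = &one,
      },
      {}
  };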
-
- 07 April 2017, 6 commits
-
-
Committed by Will Deacon

The use of the contiguous bit by our hugetlb implementation violates the break-before-make requirements of the architecture and can lead to silent data corruption or TLB conflict aborts. Once again, disable these hugetlb sizes whilst it gets worked out.

This reverts commit ab2e1b89.

Conflicts:
	arch/arm64/mm/hugetlbpage.c

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Michael Ellerman

In crc32c_vpmsum() we call enable_kernel_altivec() without first disabling preemption, which is not allowed:

  WARNING: CPU: 9 PID: 2949 at ../arch/powerpc/kernel/process.c:277 enable_kernel_altivec+0x100/0x120
  Modules linked in: dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c vmx_crypto ...
  CPU: 9 PID: 2949 Comm: docker Not tainted 4.11.0-rc5-compiler_gcc-6.3.1-00033-g308ac756 #381
  ...
  NIP [c00000000001e320] enable_kernel_altivec+0x100/0x120
  LR [d000000003df0910] crc32c_vpmsum+0x108/0x150 [crc32c_vpmsum]
  Call Trace:
   0xc138fd09 (unreliable)
   crc32c_vpmsum+0x108/0x150 [crc32c_vpmsum]
   crc32c_vpmsum_update+0x3c/0x60 [crc32c_vpmsum]
   crypto_shash_update+0x88/0x1c0
   crc32c+0x64/0x90 [libcrc32c]
   dm_bm_checksum+0x48/0x80 [dm_persistent_data]
   sb_check+0x84/0x120 [dm_thin_pool]
   dm_bm_validate_buffer.isra.0+0xc0/0x1b0 [dm_persistent_data]
   dm_bm_read_lock+0x80/0xf0 [dm_persistent_data]
   __create_persistent_data_objects+0x16c/0x810 [dm_thin_pool]
   dm_pool_metadata_open+0xb0/0x1a0 [dm_thin_pool]
   pool_ctr+0x4cc/0xb60 [dm_thin_pool]
   dm_table_add_target+0x16c/0x3c0
   table_load+0x184/0x400
   ctl_ioctl+0x2f0/0x560
   dm_ctl_ioctl+0x38/0x50
   do_vfs_ioctl+0xd8/0x920
   SyS_ioctl+0x68/0xc0
   system_call+0x38/0xfc

It used to be sufficient just to call pagefault_disable(), because that also disabled preemption. But the two were decoupled in commit 8222dbe2 ("sched/preempt, mm/fault: Decouple preemption from the page fault logic") in mid 2015.

So add the missing preempt_disable/enable(). We should also call disable_kernel_fp(); although it does nothing by default, there is a debug switch to make it active and all enables should be paired with disables.

Fixes: 6dd7a82c ("crypto: powerpc - Add POWER8 optimised crc32c")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
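A minimal sketch of the pattern described above, assuming a vector-accelerated CRC update routine; apart from the preempt/pagefault/altivec helpers, the names are illustrative:

  #include <linux/types.h>
  #include <linux/preempt.h>
  #include <linux/uaccess.h>      /* pagefault_disable()/pagefault_enable() */
  #include <asm/switch_to.h>      /* enable_kernel_altivec() and friends */

  static u32 crc32c_vector_update(u32 crc, const u8 *p, size_t len)
  {
      preempt_disable();          /* enable_kernel_altivec() must not be preempted */
      pagefault_disable();
      enable_kernel_altivec();

      crc = crc32c_vpmsum_core(crc, p, len);   /* illustrative name for the VMX inner loop */

      disable_kernel_altivec();   /* pair every enable with a disable */
      pagefault_enable();
      preempt_enable();

      return crc;
  }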
-
Committed by Mathias Krause

It has been unused for ages; it used to be required for ksyms.c back in the v1.1 times.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Guenter Roeck

sparc32:allmodconfig fails to build with the following error:

  ERROR: "vac_cache_size" [drivers/infiniband/sw/rxe/rdma_rxe.ko] undefined!

Fixes: cb886455 ("infiniband: Fix alignment of mmap cookies ...")
Cc: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Cc: Doug Ledford <dledford@redhat.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Nitin Gupta

The memory corruption was happening due to incorrect TLB/TSB flushing of hugepages.

Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Tom Hromatka

This commit moves sparc64's prototype of pmd_write() outside of the CONFIG_TRANSPARENT_HUGEPAGE ifdef.

In 2013, commit a7b9403f ("sparc64: Encode huge PMDs using PTE encoding.") exposed a path where pmd_write() could be called without CONFIG_TRANSPARENT_HUGEPAGE defined. This can result in the panic below.

The diff is awkward to read, but the changes are straightforward. pmd_write() was moved outside of #ifdef CONFIG_TRANSPARENT_HUGEPAGE. Also, __HAVE_ARCH_PMD_WRITE was defined.

  kernel BUG at include/asm-generic/pgtable.h:576!
                \|/ ____ \|/
                "@'/ .. \`@"
                /_| \__/ |_\
                   \__U_/
  oracle_8114_cdb(8114): Kernel bad sw trap 5 [#1]
  CPU: 120 PID: 8114 Comm: oracle_8114_cdb Not tainted 4.1.12-61.7.1.el6uek.rc1.sparc64 #1
  task: fff8400700a24d60 ti: fff8400700bc4000 task.ti: fff8400700bc4000
  TSTATE: 0000004411e01607 TPC: 00000000004609f8 TNPC: 00000000004609fc Y: 00000005    Not tainted
  TPC: <gup_huge_pmd+0x198/0x1e0>
  g0: 000000000001c000 g1: 0000000000ef3954 g2: 0000000000000000 g3: 0000000000000001
  g4: fff8400700a24d60 g5: fff8001fa5c10000 g6: fff8400700bc4000 g7: 0000000000000720
  o0: 0000000000bc5058 o1: 0000000000000240 o2: 0000000000006000 o3: 0000000000001c00
  o4: 0000000000000000 o5: 0000048000080000 sp: fff8400700bc6ab1 ret_pc: 00000000004609f0
  RPC: <gup_huge_pmd+0x190/0x1e0>
  l0: fff8400700bc74fc l1: 0000000000020000 l2: 0000000000002000 l3: 0000000000000000
  l4: fff8001f93250950 l5: 000000000113f800 l6: 0000000000000004 l7: 0000000000000000
  i0: fff8400700ca46a0 i1: bd0000085e800453 i2: 000000026a0c4000 i3: 000000026a0c6000
  i4: 0000000000000001 i5: fff800070c958de8 i6: fff8400700bc6b61 i7: 0000000000460dd0
  I7: <gup_pud_range+0x170/0x1a0>
  Call Trace:
   [0000000000460dd0] gup_pud_range+0x170/0x1a0
   [0000000000460e84] get_user_pages_fast+0x84/0x120
   [00000000006f5a18] iov_iter_get_pages+0x98/0x240
   [00000000005fa744] do_direct_IO+0xf64/0x1e00
   [00000000005fbbc0] __blockdev_direct_IO+0x360/0x15a0
   [00000000101f74fc] ext4_ind_direct_IO+0xdc/0x400 [ext4]
   [00000000101af690] ext4_ext_direct_IO+0x1d0/0x2c0 [ext4]
   [00000000101af86c] ext4_direct_IO+0xec/0x220 [ext4]
   [0000000000553bd4] generic_file_read_iter+0x114/0x140
   [00000000005bdc2c] __vfs_read+0xac/0x100
   [00000000005bf254] vfs_read+0x54/0x100
   [00000000005bf368] SyS_pread64+0x68/0x80

Signed-off-by: Tom Hromatka <tom.hromatka@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
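A simplified sketch of the structural change being described, not the literal sparc64 header: pmd_write() sits outside the THP ifdef and __HAVE_ARCH_PMD_WRITE is defined, while THP-only helpers stay inside:

  #define __HAVE_ARCH_PMD_WRITE     /* tell generic code the arch provides pmd_write() */

  static inline unsigned long pmd_write(pmd_t pmd)
  {
      pte_t pte = __pte(pmd_val(pmd));

      return pte_write(pte);        /* huge PMDs reuse the PTE encoding */
  }

  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
  /* THP-only helpers such as pmd_dirty() remain inside the ifdef. */
  static inline unsigned long pmd_dirty(pmd_t pmd)
  {
      pte_t pte = __pte(pmd_val(pmd));

      return pte_dirty(pte);
  }
  #endif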
-
- 06 April 2017, 2 commits
-
-
Committed by Icenowy Zheng

The USB PHY in the A64 has a "pmu0" region, which controls the EHCI/OHCI controller pair that can be connected to PHY0. Add the MMIO region for the PHY node.

Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
-
Committed by Dan Carpenter

kzalloc() won't actually fail because sizeof(*resize) is small, but static checkers complain.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
-
- 05 April 2017, 10 commits
-
-
Committed by James Hogan

The rapf copy loops in the Meta usercopy code are missing some extable entries for HTP cores with unaligned access checking enabled, where faults occur on the instruction immediately after the faulting access.

Add the fixup labels and extable entries for these cases so that corner case user copy failures don't cause kernel crashes.

Fixes: 373cd784 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
-
Committed by James Hogan

The fixup code to rewind the source pointer in __asm_copy_from_user_{32,64}bit_rapf_loop() always rewound the source by a single unit (4 or 8 bytes). However, this is insufficient if the fault didn't occur on the first load in the loop, as the source pointer will have been incremented but nothing will have been stored until all 4 register [pairs] are loaded.

Read the LSM_STEP field of TXSTATUS (which is already loaded into a register), a bit like the copy_to_user versions, to determine how many iterations of MGET[DL] have taken place, all of which need rewinding.

Fixes: 373cd784 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
-
Committed by James Hogan

The fixup code for the copy_to_user rapf loops reads TXStatus.LSM_STEP to decide how far to rewind the source pointer. There is a special case for the last execution of an MGETL/MGETD, since it leaves LSM_STEP=0 even though the number of MGETLs/MGETDs attempted was 4. This uses ADDZ, which is conditional upon the Z condition flag, but the AND instruction which masked the TXStatus.LSM_STEP field didn't set the condition flags based on the result.

Fix that now by using ANDS, which does set the flags, and also marking the condition codes as clobbered by the inline assembly.

Fixes: 373cd784 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
-
Committed by James Hogan

Currently we try to zero the destination for a failed read from userland in fixup code in the usercopy.c macros. The rest of the destination buffer is then zeroed from __copy_user_zeroing(), which is used for both copy_from_user() and __copy_from_user().

Unfortunately we fail to zero in the fixup code as D1Ar1 is set to 0 before the fixup code entry labels, and __copy_from_user() shouldn't even be zeroing the rest of the buffer.

Move the zeroing out into copy_from_user() and rename __copy_user_zeroing() to raw_copy_from_user(), since it no longer does any zeroing. This also conveniently matches the name needed for RAW_COPY_USER support in a later patch.

Fixes: 373cd784 ("metag: Memory handling")
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
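The split described above follows a common kernel pattern: the raw copy reports how many bytes were not copied, and only the checking wrapper zeroes that tail. A generic sketch, not the literal metag code:

  #include <linux/string.h>
  #include <linux/uaccess.h>

  /*
   * raw_copy_from_user() returns the number of bytes that could NOT be copied
   * and performs no zeroing of the destination.
   */
  static inline unsigned long
  copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
  {
      unsigned long res = raw_copy_from_user(to, from, n);

      if (unlikely(res))
          memset(to + n - res, 0, res);   /* zero only the part that was not copied */

      return res;
  }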
-
Committed by James Hogan

When copying to userland on Meta, if any faults are encountered, immediately abort the copy instead of continuing on and repeatedly faulting, and worse, potentially copying further bytes successfully to subsequent valid pages.

Fixes: 373cd784 ("metag: Memory handling")
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
-
Committed by James Hogan

Fix the error checking of the alignment adjustment code in raw_copy_from_user(), which mistakenly considers it safe to skip the error check when aligning the source buffer on a 2 or 4 byte boundary.

If the destination buffer was unaligned it may have started to copy using byte or word accesses, which could well be at the start of a new (valid) source page. This would result in it appearing to have copied 1 or 2 bytes at the end of the first (invalid) page rather than none at all.

Fixes: 373cd784 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
-
Committed by James Hogan

Metag's lib/usercopy.c has a bunch of copy_from_user macros for larger copies between 5 and 16 bytes which are completely unused. Before fixing the zeroing, let's drop these macros so there is less to fix.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-metag@vger.kernel.org
Cc: stable@vger.kernel.org
-
Committed by Frederic Barrat

Commit 4c6d9acc ("powerpc/mm: Add hooks for cxl") converted local TLB invalidates to global if the cxl driver is active. This is necessary because the CAPP snoops invalidations to forward them to the PSL on the cxl adapter. However one path was forgotten: native_flush_hash_range() still does local TLB invalidates, as found out the hard way recently.

This patch fixes it by following the same logic as previously: if the cxl driver is active, the local TLB invalidates are 'upgraded' to global.

Fixes: 4c6d9acc ("powerpc/mm: Add hooks for cxl")
Cc: stable@vger.kernel.org # v3.18+
Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
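A hedged sketch of the "upgrade to global" logic, assuming the cxl driver's cxl_ctx_in_use() helper is used as on the other hash-MMU flush paths; the surrounding flush routine is heavily simplified:

  #include <misc/cxl-base.h>      /* cxl_ctx_in_use() */

  static void flush_hash_range_sketch(unsigned long number, int local)
  {
      /*
       * The CAPP only snoops global tlbie's; a local tlbiel would leave stale
       * translations cached in the PSL on the cxl adapter.  Upgrade the
       * invalidation to global whenever a cxl context is in use.
       */
      if (cxl_ctx_in_use())
          local = 0;

      /* ... issue tlbiel when local != 0, tlbie otherwise ... */
  }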
-
Committed by Oliver O'Halloran

When the kernel is compiled to use the 64-bit ABIv2, the _GLOBAL() macro does not include a global entry point. A function's global entry point is used when the function is called from a different TOC context, and in the kernel this typically means a call from a module into the vmlinux (or vice-versa).

There are a few exported asm functions declared with _GLOBAL(), and calling them from a module will likely crash the kernel since any TOC-relative load will yield garbage.

flush_icache_range() and flush_dcache_range() are both exported to modules, and use the TOC, so must use _GLOBAL_TOC().

Fixes: 721aeaa9 ("powerpc: Build little endian ppc64 kernel with ABIv2")
Cc: stable@vger.kernel.org # v3.16+
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Joerg Roedel

Put the right values from the original siginfo into the userspace compat-siginfo. This fixes the 32-bit MPX "tabletest" testcase on 64-bit kernels.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.8+
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: a4455082 ('x86/signals: Add missing signal_compat code for x86 features')
Link: http://lkml.kernel.org/r/1491322501-5054-1-git-send-email-joro@8bytes.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
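A rough sketch of the kind of field-by-field translation this implies for the MPX bound-violation case; the helper below is simplified and only loosely mirrors the real x86 signal_compat code:

  #include <linux/compat.h>
  #include <linux/signal.h>

  /* Simplified: translate the MPX-relevant part of a native siginfo. */
  static void copy_segv_to_compat(compat_siginfo_t *to, const siginfo_t *from)
  {
      to->si_signo = from->si_signo;
      to->si_errno = from->si_errno;
      to->si_code  = from->si_code;
      to->si_addr  = ptr_to_compat(from->si_addr);

      if (from->si_signo == SIGSEGV && from->si_code == SEGV_BNDERR) {
          /* Bounds that MPX reports for a #BR-triggered SIGSEGV. */
          to->si_lower = ptr_to_compat(from->si_lower);
          to->si_upper = ptr_to_compat(from->si_upper);
      }
  }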
-
- 04 April 2017, 7 commits
-
-
Committed by Dave Gerlach

Starting from commit 5de85b9d ("PM / runtime: Re-init runtime PM states at probe error and driver unbind") the pm_runtime core now changes the device's runtime_status back to RPM_SUSPENDED after a probe defer.

Certain OMAP devices make use of the "ti,no-idle-on-init" flag, which causes omap_device_enable to be called during the BUS_NOTIFY_ADD_DEVICE event during probe, along with pm_runtime_set_active. This call to pm_runtime_set_active typically will prevent a call to pm_runtime_get in a driver probe function from re-enabling the omap_device. However, in the case of a probe defer that happens before the driver probe function is able to run, such as a missing pinctrl states defer, pm_runtime_reinit will set the device as RPM_SUSPENDED, and then once driver probe is actually able to run, pm_runtime_get will see the device as suspended and call through to the omap_device layer, attempting to enable the already-enabled omap_device and causing errors like this:

  omap-gpmc 50000000.gpmc: omap_device: omap_device_enable() called from invalid state 1
  omap-gpmc 50000000.gpmc: use pm_runtime_put_sync_suspend() in driver?

We can avoid this error by making sure the pm_runtime status of a device matches the omap_device state before a probe attempt. By extending the omap_device bus notifier to act on the BUS_NOTIFY_BIND_DRIVER event we can check if a device is enabled in omap_device but with a pm_runtime status of RPM_SUSPENDED, and once again mark the device as RPM_ACTIVE to avoid a second incorrect call to omap_device_enable.

Fixes: 5de85b9d ("PM / runtime: Re-init runtime PM states at probe error and driver unbind")
Tested-by: Franklin S Cooper Jr. <fcooper@ti.com>
Signed-off-by: Dave Gerlach <d-gerlach@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by Ladi Prosek

L2 was running with uninitialized PML fields, which led to incomplete dirty bitmap logging. This manifested as all kinds of subtle erratic behavior of the nested guest.

Fixes: 843e4330 ("KVM: VMX: Add PML support in VMX")
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Committed by Ladi Prosek

The PML feature is not exposed to guests, so we should not be forwarding the vmexit either. This commit fixes BSOD 0x20001 (HYPERVISOR_ERROR) when running Hyper-V enabled Windows Server 2016 in L1 on hardware that supports PML.

Fixes: 843e4330 ("KVM: VMX: Add PML support in VMX")
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
-
Committed by Paul Mackerras

In the past, there was only one load-with-reservation instruction, lwarx, and if a program attempted a lwarx on a misaligned address, it would take an alignment interrupt and the kernel handler would emulate it as though it was lwzx. That was not really correct, but benign, since it loads the right amount of data, and the lwarx should be paired with a stwcx. to the same address, which would also cause an alignment interrupt and result in a SIGBUS being delivered to the process.

We now have 5 different sizes of load-with-reservation instruction. Of those, lharx and ldarx cause an immediate SIGBUS by luck, since their entries in aligninfo[] overlap instructions which were not fixed up, but lqarx overlaps with lhz and will be emulated as such. lbarx can never generate an alignment interrupt since it only operates on 1 byte.

To straighten this out and fix the lqarx case, this adds code to detect the l[hwdq]arx instructions and return without fixing them up, resulting in a SIGBUS being delivered to the process.

Cc: stable@vger.kernel.org
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christoffer Dall

We currently have some code to clear the list registers on GICv3, but we never call this code, because the caller got nuked when removing the old vgic. We also used to have a similar GICv2 part, but that got lost in the process too.

Let's reintroduce the logic for GICv2 and call the logic when we initialize the use of hypervisors on the CPU, for example when first loading KVM or when exiting a low power state.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Suzuki K Poulose

In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling unmap_stage2_range() on the entire memory range for the guest. This could cause problems with other callers (e.g., munmap on a memslot) trying to unmap a range. And since we have to unmap the entire guest memory range holding a spinlock, make sure we yield the lock if necessary, after we unmap each PUD range.

Fixes: commit d5d8184d ("KVM: ARM: Memory virtualization setup")
Cc: stable@vger.kernel.org # v3.10+
Cc: Paolo Bonzini <pbonzin@redhat.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[ Avoid vCPU starvation and lockup detector warnings ]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
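A simplified sketch of the "unmap under the lock, but yield between PUD ranges" pattern; the two stage-2 walkers called here are stand-ins for the real helpers:

  #include <linux/kvm_host.h>

  static void unmap_stage2_range_sketch(struct kvm *kvm, phys_addr_t addr, u64 size)
  {
      phys_addr_t end = addr + size;
      phys_addr_t next;

      assert_spin_locked(&kvm->mmu_lock);

      do {
          next = stage2_pud_addr_end(addr, end);   /* stand-in for the PUD walker */
          unmap_stage2_puds(kvm, addr, next);      /* stand-in: tear down this PUD range */

          /*
           * If someone else needs mmu_lock (or we need to reschedule), drop
           * and re-take it between PUD ranges instead of hogging it.
           */
          if (next != end)
              cond_resched_lock(&kvm->mmu_lock);
      } while (addr = next, addr != end);
  }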
-
Committed by Victor Kamensky

After commit 52d7523d (arm64: mm: allow the kernel to handle alignment faults on user accesses), user-land accesses that produce unaligned exceptions, as in the case of aarch32 ldm/stm/ldrd/strd instructions operating on unaligned memory, are reported to user-land as SIGSEGV. This is wrong; they should be reported as SIGBUS, as they were before commit 52d7523d.

Change the do_bad_area() function to take the signal and code parameters out of the esr value using the fault_info table, so that in the do_alignment_fault case user-land will receive SIGBUS. Access to the fault_info table is wrapped in an esr_to_fault_info() function.

Cc: <stable@vger.kernel.org>
Fixes: 52d7523d (arm64: mm: allow the kernel to handle alignment faults on user accesses)
Signed-off-by: Victor Kamensky <kamensky@cisco.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
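A sketch of the lookup-helper idea: the low bits of the ESR index a fault_info table that already records the right signal and si_code for each fault type. The structures below only loosely mirror arch/arm64/mm/fault.c:

  struct fault_info {
      int         (*fn)(unsigned long addr, unsigned int esr, struct pt_regs *regs);
      int         sig;    /* e.g. SIGBUS for alignment faults */
      int         code;   /* e.g. BUS_ADRALN */
      const char  *name;
  };

  static const struct fault_info fault_info[64];    /* one entry per fault status code */

  static inline const struct fault_info *esr_to_fault_info(unsigned int esr)
  {
      return fault_info + (esr & 63);               /* low 6 bits: fault status code */
  }

  static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *regs)
  {
      const struct fault_info *inf = esr_to_fault_info(esr);

      /* deliver inf->sig with inf->code, e.g. SIGBUS/BUS_ADRALN for an alignment fault */
  }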
-
- 03 April 2017, 3 commits
-
-
Committed by Heiko Carstens

Change-recording override (CO) was never implemented in any machine. According to the architecture it is unpredictable if a translation-specification exception will be recognized if the bit is set and EDAT1 does not apply. Therefore the easiest solution is to simply ignore the bit.

This also fixes commit cd1836f5 ("KVM: s390: instruction-execution-protection support"). A guest may enable instruction-execution-protection (IEP) but not EDAT1. In such a case the guest_translate() function (arch/s390/kvm/gaccess.c) will report a specification exception on pages that have the IEP bit set, while it should not.

It might make sense to add full IEP support to guest_translate() and the GACC_IFETCH case. However, as far as I can tell the GACC_IFETCH case is currently only used after an instruction was executed in order to fetch the failing instruction. So there is no additional problem *currently*.

Fixes: cd1836f5 ("KVM: s390: instruction-execution-protection support")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Tobias Klauser

Make sure to reserve the boot memory for the flattened device tree. Otherwise it might get overwritten, e.g. when initial_boot_params is copied, leading to a corrupted FDT and a boot hang/crash:

  bootconsole [early0] enabled
  Early console on uart16650 initialized at 0xf8001600
  OF: fdt: Error -11 processing FDT
  Kernel panic - not syncing: setup_cpuinfo: No CPU found in devicetree!
  ---[ end Kernel panic - not syncing: setup_cpuinfo: No CPU found in devicetree!

Guenter Roeck says:

> I think I found the problem. In unflatten_and_copy_device_tree(), with added
> debug information:
>
>   OF: fdt: initial_boot_params=c861e400, dt=c861f000 size=28874 (0x70ca)
>
> ... and then initial_boot_params is copied to dt, which results in corrupted
> fdt since the memory overlaps. Looks like the initial_boot_params memory
> is not reserved and (re-)allocated by early_init_dt_alloc_memory_arch().

Cc: stable@vger.kernel.org
Reported-by: Guenter Roeck <linux@roeck-us.net>
Reference: http://lkml.kernel.org/r/20170226210338.GA19476@roeck-us.net
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Ley Foon Tan <ley.foon.tan@intel.com>
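A minimal sketch of reserving the flattened device tree before anything can reallocate or copy over it, assuming the blob was handed over in initial_boot_params and using the usual memblock/libfdt helpers (this is not the literal nios2 patch):

  #include <linux/init.h>
  #include <linux/memblock.h>
  #include <linux/libfdt.h>
  #include <linux/of_fdt.h>

  void __init early_init_devtree(void *params)
  {
      early_init_dt_scan(params);

      /*
       * Keep the boot memory holding the FDT away from the early allocator so
       * that unflatten_and_copy_device_tree() cannot be handed a destination
       * that overlaps the blob it copies from.
       */
      if (initial_boot_params)
          memblock_reserve(__pa(initial_boot_params),
                           fdt_totalsize(initial_boot_params));
  }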
-
- 01 April 2017, 1 commit
-
-
Committed by Mike Galbraith

Fixes this:

  kexec: Undefined symbol: __asan_load8_noabort
  kexec-bzImage64: Loading purgatory failed

Link: http://lkml.kernel.org/r/1489672155.4458.7.camel@gmx.de
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-