- 28 June 2016 (2 commits)
-
By Ard Biesheuvel

As Kees Cook notes in the ARM counterpart of this patch [0]: The _etext position is defined to be the end of the kernel text code, and should not include any part of the data segments. This interferes with things that might check memory ranges and expect executable code up to _etext.

In particular, Kees is referring to the HARDENED_USERCOPY patch set [1], which rejects attempts to call copy_to_user() on kernel ranges containing executable code, but does allow access to the .rodata segment. Regardless of whether one may or may not agree with the distinction, it makes sense for _etext to have the same meaning across architectures.

So let's put _etext where it belongs, between .text and .rodata, and fix up existing references to use __init_begin instead, which unlike _end_rodata includes the exception and notes sections as well.

The _etext references in kaslr.c are left untouched, since its references to [_stext, _etext) are meant to capture potential jump instruction targets, and so disregarding .rodata is actually an improvement here.

[0] http://article.gmane.org/gmane.linux.kernel/2245084
[1] http://thread.gmane.org/gmane.linux.kernel.hardened.devel/2502

Reported-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Mark Rutland

We currently open-code extracting the NUMA node of a memblock region, which requires an ifdef to cater for !CONFIG_NUMA builds where the memblock_region::nid field does not exist. The generic memblock_get_region_node helper is intended to cater for this. For CONFIG_HAVE_MEMBLOCK_NODE_MAP builds this returns reg->nid, and for !CONFIG_HAVE_MEMBLOCK_NODE_MAP builds this is a static inline that returns 0. Note that for arm64, CONFIG_HAVE_MEMBLOCK_NODE_MAP is selected iff CONFIG_NUMA is.

This patch makes use of memblock_get_region_node to simplify the arm64 code. At the same time, we can move the nid variable definition into the loop, as this is the only place it is used.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
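For illustration, the simplified loop might look like the sketch below; the function here is hypothetical, but the helpers are the generic memblock/sparsemem ones:

  #include <linux/memblock.h>
  #include <linux/mmzone.h>

  /* Hypothetical example: the generic helper hides the CONFIG_NUMA
   * ifdef, returning reg->nid on NUMA builds and 0 otherwise. */
  static void __init mark_regions_present(void)
  {
  	struct memblock_region *reg;

  	for_each_memblock(memory, reg) {
  		/* nid scoped to the loop, its only point of use */
  		int nid = memblock_get_region_node(reg);

  		memory_present(nid, memblock_region_memory_base_pfn(reg),
  			       memblock_region_memory_end_pfn(reg));
  	}
  }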
-
- 27 June 2016 (5 commits)
-
By Geoff Levand

To aid in debugging kexec problems or when adding new functionality to kexec, add a new routine kexec_image_info() and several inline pr_debug statements.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
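A minimal sketch of such a helper, assuming the standard struct kimage fields (the exact fields printed and the output format are illustrative):

  #include <linux/kexec.h>
  #include <linux/printk.h>

  /* Illustrative only: dump the interesting fields of a kexec image. */
  static void kexec_image_info(const struct kimage *kimage)
  {
  	unsigned long i;

  	pr_debug("kexec image info:\n");
  	pr_debug("  start:       %lx\n", kimage->start);
  	pr_debug("  nr_segments: %lu\n", kimage->nr_segments);
  	for (i = 0; i < kimage->nr_segments; i++)
  		pr_debug("  segment[%lu]: %016lx - %016lx, %lu pages\n", i,
  			 kimage->segment[i].mem,
  			 kimage->segment[i].mem + kimage->segment[i].memsz,
  			 kimage->segment[i].memsz >> PAGE_SHIFT);
  }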
-
By Geoff Levand

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Geoff Levand

Add three new files, kexec.h, machine_kexec.c and relocate_kernel.S, to the arm64 architecture that add support for the kexec re-boot mechanism (CONFIG_KEXEC) on arm64 platforms.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: removed dead code following James Morse's comments]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Geoff Levand

Commit 68234df4 ("arm64: kill flush_cache_all()") removed the global arm64 routines cpu_reset() and cpu_soft_restart() needed by the arm64 kexec and kdump support. Add back a simplified version of cpu_soft_restart() with some changes needed for kexec in the new files cpu_reset.S and cpu_reset.h.

When a CPU is reset it needs to be put into the exception level it had when it entered the kernel. Update cpu_soft_restart() to accept an argument which signals whether the reset address should be entered at EL1 or EL2, and add a new hypercall HVC_SOFT_RESTART which is used for the EL2 switch.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By James Morse

kernel/smp.c has a fancy counter that keeps track of the number of CPUs it marked as not-present and left in cpu_park_loop(). If there are any CPUs spinning in here, features like kexec or hibernate may release them by overwriting this memory.

This problem also occurs on machines using spin-tables to release secondary cores. After commit 44dbcc93 ("arm64: Fix behavior of maxcpus=N") we bring all known cpus into the secondary holding pen, meaning this memory can't be re-used by kexec or hibernate.

Add a function cpus_are_stuck_in_kernel() to determine if either of these cases have occurred.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: cherry-picked from mainline for kexec dependency]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
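A sketch of the shape this check could take, assuming a counter maintained next to cpu_park_loop() and a predicate for a working CPU hotplug "die" path (both names are illustrative):

  #include <linux/cpumask.h>

  /* Assumed helpers: a counter bumped when a CPU parks itself, and a
   * predicate for whether this system can actually offline CPUs
   * (spin-table systems cannot). Names are illustrative. */
  extern unsigned int cpus_stuck_in_kernel;
  extern bool have_cpu_die(void);

  bool cpus_are_stuck_in_kernel(void)
  {
  	/* Secondary cores held in the spin-table pen can never be
  	 * released safely, so treat them as stuck too. */
  	bool smp_spin_tables = (num_possible_cpus() > 1 && !have_cpu_die());

  	return !!cpus_stuck_in_kernel || smp_spin_tables;
  }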
-
- 22 June 2016 (4 commits)
-
By Catalin Marinas

This reverts commit 6b14c517. With the original patch and UBSAN+KASAN enabled, Linux fails to link with:

  lib/built-in.o: In function `get_signed_val':
  lib/ubsan.c:93: undefined reference to `__ashlti3'
  lib/ubsan.c:93: undefined reference to `__ashrti3'

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Mark Rutland

Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing instruction aborts from data aborts, but this bit is architecturally RES0 for instruction aborts, and could be allocated for an arbitrary purpose in future. Additionally, we hard-code the value in entry.S without the mnemonic, making the code difficult to understand.

Instead, remove ESR_LNX_EXEC, and distinguish aborts based on the esr, which we already pass to the sole use of ESR_LNX_EXEC. A new helper, is_el0_instruction_abort(), is added to make the logic clear. Any instruction aborts taken from EL1 will already have been handled by bad_mode, so we need not handle that case in the helper.

For consistency, the existing permission_fault helper is renamed to is_permission_fault, and the return type is changed to bool. There should be no functional changes as the return value was a boolean expression, and the result is only used in another boolean expression.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
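Such a helper is plausibly a one-liner over the EC field, along these lines (a sketch; the EC macro and encodings come from asm/esr.h):

  #include <asm/esr.h>

  /* EL0 instruction aborts carry EC 0b100000 (IABT from a lower EL);
   * EL1 instruction aborts (EC 0b100001) are already fatal via bad_mode. */
  static inline bool is_el0_instruction_abort(unsigned int esr)
  {
  	return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
  }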
-
By Mark Rutland

Now that we have a helper to extract the EC from an ESR_ELx value, make use of this in the arm64 KVM code for simplicity and consistency. There should be no functional changes as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Mark Rutland

Several places open-code extraction of the EC field from an ESR_ELx value, in subtly different ways. This is unfortunate duplication and variation, and the precise logic used to extract the field is a distraction.

This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from an ESR_ELx value in a consistent fashion. Existing open-coded extractions in core arm64 code are moved over to the new helper. KVM code is left as-is for the moment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Huang Shijie <shijie.huang@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
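The EC field lives in bits [31:26] of ESR_ELx, so the macro amounts to a mask and shift, e.g.:

  #define ESR_ELx_EC_SHIFT	(26)
  #define ESR_ELx_EC_MASK	(0x3FUL << ESR_ELx_EC_SHIFT)

  /* Extract the 6-bit exception class from a syndrome value. */
  #define ESR_ELx_EC(esr)	(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)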
-
- 21 June 2016 (7 commits)
-
By Jisheng Zhang

We only initialize swiotlb when swiotlb_force is true or not all system memory is DMA-able. This trivial optimization saves us 64MB when swiotlb is not necessary.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
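The guard could plausibly look like this (a sketch; the wrapper name is illustrative, and the real check would live in arm64's mem_init()):

  #include <linux/swiotlb.h>
  #include <linux/bootmem.h>

  extern phys_addr_t arm64_dma_phys_limit;

  void __init arm64_swiotlb_init(void)	/* illustrative wrapper name */
  {
  	/* Skip the 64MB bounce buffer unless forced on the command line
  	 * or some memory lies above the DMA-able limit. */
  	if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
  		swiotlb_init(1);
  }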
-
By Alexander Potapenko

Add ARCH_HAS_KCOV to the ARM64 config. To avoid potential crashes, disable instrumentation of the files in arch/arm64/kvm/hyp/*.

Signed-off-by: Alexander Potapenko <glider@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Kefeng Wang

GCC supports __SIZEOF_INT128__ and __int128 on arm64, so enable ARCH_SUPPORTS_INT128 to make mul_u64_u32_shr() a bit more efficient in the scheduler.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
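With ARCH_SUPPORTS_INT128 set, the generic mul_u64_u32_shr() can use a single widening multiply rather than splitting the operands; a sketch of that fast path, mirroring include/linux/math64.h:

  #include <linux/types.h>

  /* 64x32 -> 96-bit multiply, then shift: one widening multiply when
   * the compiler supports __int128, versus two multiplies plus fix-up
   * additions in the portable fallback. */
  static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
  {
  	return (u64)(((unsigned __int128)a * mul) >> shift);
  }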
-
By Mark Rutland

Currently dump_mem attempts to dump memory in 64-bit chunks when reporting a failure in 64-bit code, or 32-bit chunks when reporting a failure in 32-bit code. We added code to handle these two cases separately in commit e147ae6d ("arm64: modify the dump mem for 64 bit addresses").

However, in all cases where dump_mem is called, the failing context is a kernel rather than user context. Additionally dump_mem is assumed to only be used for kernel contexts, as internally it switches to KERNEL_DS, and its callers pass kernel stack bounds.

This patch removes the redundant 32-bit chunk logic and associated compat parameter, largely reverting the aforementioned commit. For the call in __die(), the check of in_interrupt() is removed also, as __die() is only called in response to faults from the kernel's exception level, and thus the !user_mode(regs) check is sufficient. Were this not the case, the use of task_stack_page(tsk) to generate the stack bounds would be erroneous.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Yang Shi

The upstream commit 1771c6e1 ("x86/kasan: instrument user memory access API") added KASAN instrumentation to the x86 user memory access API, so add such instrumentation to arm64 too.

Define __copy_to/from_user in C in order to add kasan_check_read/write calls, and rename the assembly implementations to __arch_copy_to/from_user.

Tested by the test_kasan module.

Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
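The C wrappers would look roughly like this (a sketch of the described pattern; the __arch_* symbols are the renamed assembly routines):

  #include <linux/kasan-checks.h>

  extern unsigned long __arch_copy_from_user(void *to,
  		const void __user *from, unsigned long n);
  extern unsigned long __arch_copy_to_user(void __user *to,
  		const void *from, unsigned long n);

  static inline unsigned long __copy_from_user(void *to,
  		const void __user *from, unsigned long n)
  {
  	kasan_check_write(to, n);	/* about to write the kernel buffer */
  	return __arch_copy_from_user(to, from, n);
  }

  static inline unsigned long __copy_to_user(void __user *to,
  		const void *from, unsigned long n)
  {
  	kasan_check_read(from, n);	/* about to read the kernel buffer */
  	return __arch_copy_to_user(to, from, n);
  }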
-
By Mark Rutland

For debugging purposes, it would be nice if we could export page tables other than the swapper_pg_dir to userspace. To enable this, this patch refactors the arm64 page table dumping code such that multiple tables may be registered with the framework, and exported under debugfs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Robin Murphy

AArch64 is capable of 128-bit memory accesses without alignment restrictions, which makes it both possible and highly practical to slurp up a typical 20-byte IP header in just 2 loads. Implement our own version of ip_fast_csum() to take advantage of that, resulting in considerably fewer instructions and memory accesses than the generic version. We can also get more optimal code generation for csum_fold() by defining it a slightly different way round from the generic version, so throw that into the mix too.

Suggested-by: Luke Starrett <luke.starrett@broadcom.com>
Acked-by: Luke Starrett <luke.starrett@broadcom.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
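For the csum_fold() part, "a slightly different way round" plausibly means folding with a rotate before the final inversion, which the compiler can turn into an add-with-rotate; a sketch matching that description:

  #include <linux/types.h>

  /* Fold a 32-bit partial checksum down to 16 bits: adding the value
   * to itself rotated by 16 accumulates both halves into the top half,
   * which is then inverted and returned. */
  static inline __sum16 csum_fold(__wsum csum)
  {
  	u32 sum = (__force u32)csum;

  	sum += (sum >> 16) | (sum << 16);
  	return ~(__force __sum16)(sum >> 16);
  }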
-
- 19 June 2016 (1 commit)
-
By Lee Jones

This patch fixes a non-booting issue in mainline.

When booting with a compressed kernel, we need to be careful how we populate memory close to DDR start. AUTO_ZRELADDR is enabled by default in multi-arch enabled configurations, which places some restrictions on where the kernel is placed and where it will be uncompressed to on boot. AUTO_ZRELADDR takes the decompressor code's start address and masks out the bottom 28 bits to obtain an address to uncompress the kernel to (thus a load address of 0x42000000 means that the kernel will be uncompressed to 0x40000000, i.e. DDR START on this platform).

Even changing the load address to after the co-processor's shared memory won't render a booting platform, since the AUTO_ZRELADDR algorithm still ensures the kernel is uncompressed into memory shared with the first co-processor (0x40000000).

Another option would be to move loading to 0x4A000000, since this will mean the decompressor will decompress the kernel to 0x48000000. However, this would mean a large chunk (0x44000000 => 0x48000000 (64MB)) of memory would essentially be wasted for no good reason.

Until we can work with ST to find a suitable memory location to relocate co-processor shared memory, let's disable the shared memory nodes. This will ensure a working platform in the mean time.

NB: The more observant of you will notice that we're leaving the DMU shared memory node enabled; this is because a) it is the only one in active use at the time of this writing and b) it is not affected by the current default behaviour which is causing issues.

Fixes: fe135c63 ("ARM: dts: STiH407: Move over to using the 'reserved-memory' API for obtaining DMA memory")
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Reviewed-by: Peter Griffin <peter.griffin@linaro.org>
Signed-off-by: Maxime Coquelin <maxime.coquelin@st.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 18 June 2016 (1 commit)
-
By William Breathitt Gray

Several modern devices, such as PC/104 cards, are expected to run on modern systems via an ISA bus interface. Since ISA is a legacy interface for most modern architectures, ISA support should remain disabled in general. Support for ISA-style drivers should be enabled on a per driver basis.

To allow ISA-style drivers on modern systems, this patch introduces the ISA_BUS_API and ISA_BUS Kconfig options. The ISA bus driver will now build conditionally on the ISA_BUS_API Kconfig option, which defaults to the legacy ISA Kconfig option. The ISA_BUS Kconfig option allows the ISA_BUS_API Kconfig option to be selected on architectures which do not enable ISA (e.g. X86_64).

The ISA_BUS Kconfig option is currently only implemented for X86 architectures. Other architectures may have their own ISA_BUS Kconfig options added as required.

Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: William Breathitt Gray <vilhelm.gray@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 17 June 2016 (4 commits)
-
By Dave Gerlach

Based on the latest timing specifications for the TPS65218 from the data sheet, http://www.ti.com/lit/ds/symlink/tps65218.pdf, document SLDS206 from November 2014, we must change the i2c bus speed to better fit within the minimum high SCL time required for proper i2c transfer.

When running at 400kHz, measurements show that SCL spends 0.8125 uS/1.666 uS high/low, which violates the requirement for a minimum high period of SCL provided in datasheet Table 7.6, which is 1 uS. Switching to 100kHz gives us 5 uS/5 uS high/low, which both fall above the minimum given values for 100kHz: 4.0 uS/4.7 uS high/low.

Without this patch, occasionally a voltage set operation from the kernel will appear to have worked but the actual voltage reflected on the PMIC will not have updated, causing problems especially with cpufreq, which may update to a higher OPP without actually raising the voltage on DCDC2, leading to a hang.

Signed-off-by: Dave Gerlach <d-gerlach@ti.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Franklin S Cooper Jr <fcooper@ti.com>
Signed-off-by: Aparna Balasubramanian <aparnab@ti.com>
Signed-off-by: Keerthy <j-keerthy@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
By Tero Kristo

A few platforms are currently missing clocksource_probe() completely in their time_init functionality. On OMAP3430 for example, this is causing cpuidle to be pretty much dead, as the counter32k is not going to be registered and instead a gptimer is used as a clocksource. This will tick in periodic mode, preventing any deeper idle states.

While here, also drop one unnecessary check for populated DT before the existing clocksource_probe() call.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
By Janusz Krzysztofik

After the OMAP1 IRQ definitions were changed by commit 685e2d08 ("ARM: OMAP1: Change interrupt numbering for sparse IRQ"), introduced in v4.2, the ams-delta FIQ handler which depends on them no longer works as expected. Fix it.

Created and tested on Amstrad Delta against Linux-4.7-rc3.

Signed-off-by: Janusz Krzysztofik <jmkrzyszt@gmail.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
By Daniel Thompson

Current versions of gdb do not interoperate cleanly with kgdb on arm64 systems because gdb and kgdb do not use the same register description. This patch modifies kgdb to work with recent releases of gdb (>= 7.8.1).

Compatibility with gdb (after the patch is applied) is as follows:

  gdb-7.6 and earlier    Ok
  gdb-7.7 series         Works if user provides custom target description
  gdb-7.8(.0)            Works if user provides custom target description
  gdb-7.8.1 and later    Ok

When commit 44679a4f ("arm64: KGDB: Add step debugging support") was introduced it was paired with a gdb patch that made an incompatible change to the gdbserver protocol. This patch was eventually merged into the gdb sources:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=a4d9ba85ec5597a6a556afe26b712e878374b9dd

The change to the protocol was mostly made to simplify big-endian support inside the kernel gdb stub. Unfortunately the gdb project released gdb-7.7.x and gdb-7.8.0 before the protocol incompatibility was identified and reversed:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=bdc144174bcb11e808b4e73089b850cf9620a7ee

This leaves us in a position where kgdb still uses the no-longer-used protocol; gdb-7.8.1, which restored the original behaviour, was released on 2014-10-29.

I don't believe it is possible to detect/correct the protocol incompatibility, which means the kernel must take a view about which version of the gdb remote protocol is "correct". This patch takes the view that the original/current version of the protocol is correct and that the version found in gdb-7.7.x and gdb-7.8.0 is anomalous.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 16 June 2016 (3 commits)
-
By Yang Zhang

VT-d posted interrupts rely on the CPU side's posted interrupt support. We need to check whether the VCPU's APICv is active before enabling VT-d posted interrupts.

Fixes: d62caabb
Cc: stable@vger.kernel.org
Signed-off-by: Yang Zhang <yang.zhang.wz@gmail.com>
Signed-off-by: Shengge Ding <shengge.dsg@alibaba-inc.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
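The precondition check presumably gates the posted-interrupt IRTE update path; a hedged sketch (the function name and the exact set of preconditions grouped here are assumptions):

  #include <linux/kvm_host.h>
  #include <asm/irq_remapping.h>

  /* Hedged sketch: the prerequisites for programming a VT-d posted
   * interrupt, now including the CPU-side APICv-active check. */
  static bool pi_prerequisites_met(struct kvm *kvm, struct kvm_vcpu *vcpu)
  {
  	return kvm_arch_has_assigned_device(kvm) &&
  	       irq_remapping_cap(IRQ_POSTING_CAP) &&
  	       kvm_vcpu_apicv_active(vcpu);
  }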
-
By Suravee Suthikulpanit

Add logic to disable AVIC when CONFIG_X86_LOCAL_APIC is not defined.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Suravee Suthikulpanit

The commit 8221c137 ("svm: Manage vcpu load/unload when enable AVIC") introduces a build error due to an implicit function declaration when CONFIG_X86_32 is defined and CONFIG_X86_LOCAL_APIC is not (as reported by the Kbuild test robot, i386-randconfig-x0-06121009).

So, this patch introduces a kvm_cpu_get_apicid() wrapper around __default_cpu_present_to_apicid() with additional handling if CONFIG_X86_LOCAL_APIC is not defined.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Fixes: 8221c137 ("svm: Manage vcpu load/unload when enable AVIC")
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
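The wrapper described would look roughly like this (a sketch; the BAD_APICID fallback is an assumption on my part):

  #include <asm/apic.h>

  /* Sketch of the wrapper: resolve a CPU number to its APIC ID, with a
   * safe fallback when no local APIC support is compiled in. */
  static inline int kvm_cpu_get_apicid(int mps_cpu)
  {
  #ifdef CONFIG_X86_LOCAL_APIC
  	return __default_cpu_present_to_apicid(mps_cpu);
  #else
  	WARN_ON_ONCE(1);	/* AVIC should be disabled in this config */
  	return BAD_APICID;
  #endif
  }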
-
- 15 June 2016 (4 commits)
-
By Will Deacon

Rather than wait until we observe the lock being free (which might never happen), we can also return from spin_unlock_wait if we observe that the lock is now held by somebody else, which implies that it was unlocked but we just missed seeing it in that state.

Furthermore, in such a scenario there is no longer a need to write back the value that we loaded, since we know that there has been a lock hand-off, which is sufficient to publish any stores prior to the unlock_wait because the ARM architecture ensures that a Store-Release instruction is multi-copy atomic when observed by a Load-Acquire instruction.

The litmus test is something like:

  AArch64
  {
  0:X1=x; 0:X3=y;
  1:X1=y;
  2:X1=y; 2:X3=x;
  }
   P0          | P1           | P2           ;
   MOV W0,#1   | MOV W0,#1    | LDAR W0,[X1] ;
   STR W0,[X1] | STLR W0,[X1] | LDR W2,[X3]  ;
   DMB SY      |              |              ;
   LDR W2,[X3] |              |              ;
  exists
  (0:X2=0 /\ 2:X0=1 /\ 2:X2=0)

where P0 is doing spin_unlock_wait, P1 is doing spin_unlock and P2 is doing spin_lock.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

Commit d86b8da0 ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under the premise that the LSE atomics (in particular, the LDADDA instruction) are indivisible.

Unfortunately, these instructions are only indivisible when used with the -AL (full ordering) suffix and, consequently, the same issue can theoretically be observed with LSE atomics, where a later (in program order) load can be speculated before the write portion of the atomic operation.

This patch fixes the issue by performing a CAS of the lock once we've established that it's unlocked, in much the same way as the LL/SC code.

Fixes: d86b8da0 ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

spin_is_locked has grown two very different use-cases:

(1) [The sane case] API functions may require a certain lock to be held by the caller and can therefore use spin_is_locked as part of an assert statement in order to verify that the lock is indeed held. For example, usage of assert_spin_locked.

(2) [The insane case] There are two locks, where a CPU takes one of the locks and then checks whether or not the other one is held before accessing some shared state. For example, the "optimized locking" in ipc/sem.c.

In the latter case, the sequence looks like:

  spin_lock(&sem->lock);
  if (!spin_is_locked(&sma->sem_perm.lock))
  	/* Access shared state */

and requires that the spin_is_locked check is ordered after taking the sem->lock. Unfortunately, since our spinlocks are implemented using a LDAXR/STXR sequence, the read of &sma->sem_perm.lock can be speculated before the STXR and consequently return a stale value.

Whilst this hasn't been seen to cause issues in practice, PowerPC fixed the same issue in 51d7d520 ("powerpc: Add smp_mb() to arch_spin_is_locked()") and, although we did something similar for spin_unlock_wait in d86b8da0 ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers"), that doesn't actually take care of ordering against local acquisition of a different lock.

This patch adds an smp_mb() to the start of our arch_spin_is_locked and arch_spin_unlock_wait routines to ensure that the lock value is always loaded after any other locks have been taken by the current CPU.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
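For arch_spin_is_locked the result is a barrier in front of the existing load; a sketch of the shape (in the context of arm64's asm/spinlock.h, assuming the usual arch_spin_value_unlocked helper):

  static inline int arch_spin_is_locked(arch_spinlock_t *lock)
  {
  	/* Ensure prior lock acquisitions by this CPU are ordered before
  	 * this read, so the load cannot be speculated above an earlier
  	 * STXR in a different lock's acquisition. */
  	smp_mb();
  	return !arch_spin_value_unlocked(READ_ONCE(*lock));
  }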
-
By Paul E. McKenney

Further testing with false negatives suppressed by commit 293e2421 ("rcu: Remove superfluous versions of rcu_read_lock_sched_held()") identified another unprotected use of RCU from the idle loop. Because RCU actively ignores idle-loop code (for energy-efficiency reasons, among other things), using RCU from the idle loop can result in too-short grace periods, in turn resulting in arbitrary misbehavior.

The resulting lockdep-RCU splat is as follows:

------------------------------------------------------------------------
===============================
[ INFO: suspicious RCU usage. ]
4.6.0-rc5-next-20160426+ #1112 Not tainted
-------------------------------
include/trace/events/ipi.h:35 suspicious rcu_dereference_check() usage!

other info that might help us debug this:

RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 0
RCU used illegally from extended quiescent state!
no locks held by swapper/0/0.

stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.6.0-rc5-next-20160426+ #1112
Hardware name: Generic OMAP4 (Flattened Device Tree)
[<c0110308>] (unwind_backtrace) from [<c010c3a8>] (show_stack+0x10/0x14)
[<c010c3a8>] (show_stack) from [<c047fec8>] (dump_stack+0xb0/0xe4)
[<c047fec8>] (dump_stack) from [<c010dcfc>] (smp_cross_call+0xbc/0x188)
[<c010dcfc>] (smp_cross_call) from [<c01c9e28>] (generic_exec_single+0x9c/0x15c)
[<c01c9e28>] (generic_exec_single) from [<c01ca0a0>] (smp_call_function_single_async+0x38/0x9c)
[<c01ca0a0>] (smp_call_function_single_async) from [<c0603728>] (cpuidle_coupled_poke_others+0x8c/0xa8)
[<c0603728>] (cpuidle_coupled_poke_others) from [<c0603c10>] (cpuidle_enter_state_coupled+0x26c/0x390)
[<c0603c10>] (cpuidle_enter_state_coupled) from [<c0183c74>] (cpu_startup_entry+0x198/0x3a0)
[<c0183c74>] (cpu_startup_entry) from [<c0b00c0c>] (start_kernel+0x354/0x3c8)
[<c0b00c0c>] (start_kernel) from [<8000807c>] (0x8000807c)
------------------------------------------------------------------------

Reported-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: <linux-omap@vger.kernel.org>
Cc: <linux-arm-kernel@lists.infradead.org>
-
- 14 June 2016 (6 commits)
-
By Mark Rutland

Unlike the debug_fault_info table, we never intentionally alter the fault_info table at runtime, and all derived pointers are treated as const currently. Make the table const so that it can be placed in .rodata and protected from unintentional writes, as we do for the syscall tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
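In outline the change just adds const to the table definition, whose shape in arch/arm64/mm/fault.c is roughly (a sketch; entries abbreviated):

  static const struct fault_info {
  	int	(*fn)(unsigned long addr, unsigned int esr,
  		      struct pt_regs *regs);
  	int	sig;
  	int	code;
  	const char *name;
  } fault_info[] = {
  	{ do_bad, SIGBUS, 0, "ttbr address size fault" },
  	/* ... remaining fault handlers elided ... */
  };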
-
By Mark Rutland

If the kernel is set to show unhandled signals, and a user task does not handle a SIGILL as a result of an instruction abort, we will attempt to log the offending instruction with dump_instr before killing the task.

We use dump_instr to log the encoding of the offending userspace instruction. However, dump_instr is also used to dump instructions from kernel space, and internally always switches to KERNEL_DS before dumping the instruction with get_user. When both PAN and UAO are in use, reading a user instruction via get_user while in KERNEL_DS will result in a permission fault, which leads to an Oops.

As we have regs corresponding to the context of the original instruction abort, we can inspect this and only flip to KERNEL_DS if the original abort was taken from the kernel, avoiding this issue. At the same time, remove the redundant (and incorrect) comments regarding the order dump_mem and dump_instr are called in.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org> #4.6+
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Fixes: 57f4959b ("arm64: kernel: Add support for User Access Override")
Signed-off-by: Will Deacon <will.deacon@arm.com>
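The fix plausibly becomes a wrapper that only switches address space for kernel-mode regs (a sketch; __dump_instr as the inner worker is an assumed name):

  static void dump_instr(const char *lvl, struct pt_regs *regs)
  {
  	if (!user_mode(regs)) {
  		/* Kernel-mode fault: the fetch needs KERNEL_DS. */
  		mm_segment_t fs = get_fs();

  		set_fs(KERNEL_DS);
  		__dump_instr(lvl, regs);
  		set_fs(fs);
  	} else {
  		/* User-mode fault: read via the user mapping directly,
  		 * so PAN/UAO permission checks are satisfied. */
  		__dump_instr(lvl, regs);
  	}
  }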
-
By James Hogan

When emulating TLB miss / invalid exceptions during CACHE instruction emulation, be sure to set up the correct PC and host_cp0_badvaddr state for the kvm_mips_emulate_tlb*_ld() function to pick up for guest EPC and BadVAddr.

PC needs to be rewound, otherwise the guest EPC will end up pointing at the next instruction after the faulting CACHE instruction. host_cp0_badvaddr must be set because guest CACHE instructions trap with a Coprocessor Unusable exception, which doesn't update the host BadVAddr as a TLB exception would.

This doesn't tend to get hit when dynamic translation of emulated instructions is enabled, since only the first execution of each CACHE instruction actually goes through this code path, with subsequent executions hitting the SYNCI instruction that it gets replaced with.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By James Hogan

When a CACHE instruction is emulated by kvm_mips_emulate_cache(), the PC is first updated to point to the next instruction, and afterwards it falls through the "dont_update_pc" label, which rewinds the PC back to its original address.

This works when dynamic translation of emulated instructions is enabled, since the CACHE instruction is replaced with a SYNCI which works without trapping. However, when dynamic translation is disabled the guest hangs on CACHE instructions, as they always trap and are never stepped over.

Roughly swap the meanings of the "done" and "dont_update_pc" labels to match kvm_mips_emulate_CP0(), so that "done" will roll back the PC on failure, and "dont_update_pc" won't change PC at all (for the sake of exceptions that have already modified the PC).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By James Hogan

When faulting guest addresses are matched against guest segments with the KVM_GUEST_KSEGX() macro, change the mask to 0xe0000000 so as to include bit 31.

This is mainly for safety's sake, as it prevents a rogue BadVAddr in the host kseg2/kseg3 segments (e.g. 0xC*******) after a TLB exception from matching the guest kseg0 segment (e.g. 0x4*******), triggering an internal KVM error instead of allowing the corresponding guest kseg0 page to be mapped into the host vmalloc space.

Such a rogue BadVAddr was observed to happen with the host MIPS kernel running under QEMU with KVM built as a module, due to a not entirely transparent optimisation in the QEMU TLB handling. This has already been worked around properly in a previous commit.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
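The widened mask keeps the top three address bits in the comparison; a sketch (the old mask value here is an assumption):

  /* Old (assumed 0x60000000): compared only bits 30:29, so a host
   * 0xCxxxxxxx BadVAddr could alias guest kseg0. With bit 31 in the
   * mask, KVM_GUEST_KSEGX(0xc0001234) == 0xc0000000, which no longer
   * matches the guest kseg0 base. */
  #define KVM_GUEST_KSEGX(gva)	((gva) & 0xe0000000)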
-
By James Hogan

Copy __kvm_mips_vcpu_run() into unmapped memory, so that we can never get a TLB refill exception in it when KVM is built as a module.

This was observed to happen with the host MIPS kernel running under QEMU, due to a not entirely transparent optimisation in the QEMU TLB handling where TLB entries replaced with TLBWR are copied to a separate part of the TLB array. Code in those pages continues to be executable, but those mappings persist only until the next ASID switch, even if they are marked global.

An ASID switch happens in __kvm_mips_vcpu_run() at exception level after switching to the guest exception base. Subsequent TLB mapped kernel instructions just prior to switching to the guest trigger a TLB refill exception, which enters the guest exception handlers without updating EPC. This appears as a guest triggered TLB refill on a host kernel mapped (host KSeg2) address, which is not handled correctly as user (guest) mode accesses to kernel (host) segments always generate address error exceptions.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.10.x-
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 13 June 2016 (3 commits)
-
By Nishanth Menon

As per the latest revision F of the public TRM for DRA7/AM57xx SoCs, SPRUHZ6F [1] (April 2016), with the exception of the MPU power domain, all other power domains do not have memories capable of retention, since they all operate in either "ON" or "OFF" mode. For these power states, the retention state for memories is basically ignored by the PRCM and does not need to be programmed.

[1] http://www.ti.com/lit/pdf/spruhz6

Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
By Nishanth Menon

As per the latest revision F of the public TRM for DRA7/AM57xx SoCs, SPRUHZ6F [1] (April 2016), with the exception of the MPU power domain (and CPUx sub power domains), all other power domains can either operate in "ON" mode or, in some cases, "OFF" mode. For these power states, the logic retention state is basically ignored by the PRCM and does not need to be programmed.

[1] http://www.ti.com/lit/pdf/spruhz6

Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
By Nishanth Menon

As per the latest revision F of the public TRM for DRA7/AM57xx SoCs, SPRUHZ6F [1] (April 2016), the L4Per and L3init power domains now operate in always-"ON" mode due to asymmetric aging limitations. Update the same.

[1] http://www.ti.com/lit/pdf/spruhz6

Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
-