- 15 June 2017 (8 commits)
-
-
Committed by David Daney

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Marc Zyngier

A number of Group-0 registers can be handled by the same accessors as their Group-1 counterparts, so let's add the required system register encodings and catch them in the dispatching function.

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Marc Zyngier

Add a handler for reading/writing the guest's view of the ICC_IGRPEN0_EL1 register, which is located in the ICH_VMCR_EL2.VENG0 field.

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Marc Zyngier

Add a handler for reading/writing the guest's view of the ICC_BPR0_EL1 register, which is located in the ICH_VMCR_EL2.BPR0 field.

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
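For illustration, a handler of this kind essentially round-trips the cached ICH_VMCR_EL2 state through the vgic's VMCR accessors. The sketch below is not the patch itself; the function signature and the vgic_vmcr field name are assumptions:

/* Sketch: read or write the virtual BPR0 via the cached VMCR copy. */
static void access_gic_bpr0_sketch(struct kvm_vcpu *vcpu, u64 *val, bool is_write)
{
    struct vgic_vmcr vmcr;

    vgic_get_vmcr(vcpu, &vmcr);         /* fetch the cached ICH_VMCR_EL2 fields */
    if (is_write) {
        vmcr.bpr = *val & 0x7;          /* BPR0 is a 3-bit field */
        vgic_set_vmcr(vcpu, &vmcr);     /* write back to ICH_VMCR_EL2 */
    } else {
        *val = vmcr.bpr;
    }
}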
-
Committed by Marc Zyngier

Add a handler for reading the guest's view of the ICV_HPPIR1_EL1 register. This is a simple parsing of the available LRs, extracting the highest available interrupt.

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Marc Zyngier

Add a handler for reading/writing the guest's view of the ICV_AP1Rn_EL1 registers. We just map them to the corresponding ICH_AP1Rn_EL2 registers.

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Marc Zyngier

In order to start handling guest accesses to the GICv3 system registers, let's add a hook that will get called when we trap a system register access. This is gated by a new static key (vgic_v3_cpuif_trap).

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
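The gating pattern looks roughly like the sketch below, using the generic static-key API; the handler name and its wiring are invented for illustration, only the key name comes from the commit:

#include <linux/jump_label.h>
#include <linux/kvm_host.h>

DEFINE_STATIC_KEY_FALSE(vgic_v3_cpuif_trap);

static bool handle_gicv3_sysreg_trap_sketch(struct kvm_vcpu *vcpu)
{
    /* Patched to a NOP when the key is off, so the common path stays cheap. */
    if (!static_branch_unlikely(&vgic_v3_cpuif_trap))
        return false;

    /* ... decode the trapped ICC_* access from ESR_EL2 and emulate it ... */
    return true;
}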
-
Committed by Marc Zyngier

It is often useful to compare an ESR syndrome reporting the trapping of a system register with a value matching that system register. Since encoding both the sysreg and the ESR version seems a bit overkill, let's add a set of macros that convert an ESR value into the corresponding sysreg encoding. We handle both AArch32 and AArch64, taking advantage of identical encodings between system registers and CP15 accessors.

Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
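As a rough illustration of what such a conversion involves, the sketch below pulls the Op0/Op1/CRn/CRm/Op2 fields out of an EC=0x18 (AArch64 MSR/MRS trap) syndrome and repacks them in sys_reg() order. Field positions follow the ARMv8 ARM; this is a standalone sketch, not the macros the patch adds:

#include <stdint.h>

#define SYS_REG_ENC(op0, op1, crn, crm, op2) \
    (((op0) << 19) | ((op1) << 16) | ((crn) << 12) | ((crm) << 8) | ((op2) << 5))

static inline uint32_t esr_iss_to_sysreg(uint32_t esr)
{
    uint32_t op0 = (esr >> 20) & 0x3;
    uint32_t op2 = (esr >> 17) & 0x7;
    uint32_t op1 = (esr >> 14) & 0x7;
    uint32_t crn = (esr >> 10) & 0xf;
    uint32_t crm = (esr >> 1)  & 0xf;

    return SYS_REG_ENC(op0, op1, crn, crm, op2);
}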
-
- 08 June 2017 (2 commits)
-
-
Committed by Christoffer Dall

First, we define an ABI using the vcpu devices that lets userspace set the interrupt numbers for the various timers on both the 32-bit and 64-bit KVM/ARM implementations.

Second, we add the definitions for the groups and attributes introduced by the above ABI. (We add the PMU define on the 32-bit side as well for symmetry, and it may get used some day.)

Third, we set up the arch-specific vcpu device operation handlers to call into the timer code for anything related to the KVM_ARM_VCPU_TIMER_CTRL group.

Fourth, we implement support for getting and setting the timer interrupt numbers using the above defined ABI in the arch timer code.

Fifth, we introduce error checking upon enabling the arch timer (which happens when a VCPU is first run) to check that all VCPUs are configured to use the same PPI for the timer (as mandated by the architecture) and that the virtual and physical timers are not configured to use the same IRQ number.

Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
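From userspace, the ABI looks roughly like the minimal sketch below, assuming the KVM_ARM_VCPU_TIMER_CTRL group and KVM_ARM_VCPU_TIMER_IRQ_VTIMER attribute names from the uapi headers and an int-sized attribute payload; error handling is elided:

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int set_vtimer_ppi(int vcpu_fd, int ppi)
{
    struct kvm_device_attr attr = {
        .group = KVM_ARM_VCPU_TIMER_CTRL,
        .attr  = KVM_ARM_VCPU_TIMER_IRQ_VTIMER,
        .addr  = (__u64)(unsigned long)&ppi,
    };

    /* The same structure with KVM_GET_DEVICE_ATTR reads the current setting. */
    return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}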
-
Committed by Christoffer Dall

We currently initialize the arch timer IRQ numbers from the reset code, presumably because we once intended to model multiple CPU or SoC types from within the kernel and have hard-coded reset values in the reset code. As we are moving towards userspace being in charge of more fine-grained CPU emulation and stitching together the pieces needed to emulate a particular type of CPU, we should no longer have a tight coupling between resetting a VCPU and setting IRQ numbers. Therefore, move the logic to define and use the default IRQ numbers into the timer code, and set the IRQ number immediately when creating the VCPU.

Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 07 June 2017 (2 commits)
-
-
Committed by Marc Zyngier

We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses at EL2, but we're not really prepared to deal with them. So far this has gone unnoticed, until GCC 7 started emitting such accesses (in particular 64-bit writes on a 32-bit boundary). Since the rest of the kernel is pretty happy about that, let's follow its example and set SCTLR_EL2.A to zero. Modern CPUs don't really care.

Cc: stable@vger.kernel.org
Reported-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Marc Zyngier

__do_hyp_init has the rather bad habit of ignoring RES1 bits and writing them back as zero. On a v8.0-v8.2 CPU this doesn't do anything bad, but it may end up being pretty nasty on future revisions of the architecture. Let's preserve those bits so that we don't have to fix this later on.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 04 June 2017 (4 commits)
-
-
Committed by Andrew Jones

Don't use request-less VCPU kicks when injecting IRQs: a VCPU kick meant to trigger the interrupt injection could be sent while the VCPU is outside guest mode (which means no IPI is sent) and after it has called kvm_vgic_flush_hwstate(), meaning it won't see the updated GIC state until its next exit, some time later, for some other reason. The receiving VCPU only needs to check this request in VCPU RUN to handle it. By checking it, if it's pending, a memory barrier will be issued that ensures all state is visible. See "Ensuring Requests Are Seen" in Documentation/virtual/kvm/vcpu-requests.rst.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
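The resulting pattern, sketched in kernel context (the request name is the one this series uses for arm/arm64; the wrapper function is invented for illustration):

#include <linux/kvm_host.h>

static void vgic_request_and_kick_sketch(struct kvm_vcpu *vcpu)
{
    /* Record why the VCPU must re-examine its state before entering the guest... */
    kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
    /* ...then kick it; an IPI is only sent if it is currently in guest mode. */
    kvm_vcpu_kick(vcpu);
}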
-
Committed by Andrew Jones

A request called EXIT is too generic. All requests are meant to cause exits, but different requests have different flags. Let's not make it difficult to decide whether the EXIT request is correct for some case; just always provide unique requests for each case. This patch renames EXIT to SLEEP, because that's what the request is asking the VCPU to do.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Andrew Jones

arm/arm64 already has one VCPU request, used when setting pause, but it doesn't properly check requests in VCPU RUN. Check it, and also make sure we set vcpu->mode at the appropriate time (before the check) and with the appropriate barriers. See Documentation/virtual/kvm/vcpu-requests.rst. Also make sure we don't leave any VCPU requests set in the request bitmap that we don't intend to handle later. If we don't clear them, then kvm_request_pending() may return true when it shouldn't. Using VCPU requests properly fixes a small race where pause could get set just as a VCPU was entering guest mode.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by Andrew Jones

Marc Zyngier suggested that we define the arch-specific VCPU request base, rather than requiring each arch to remember to start from 8. That suggestion, along with Radim Krcmar's recent VCPU request flag addition, snowballed into defining something of an API for defining arch VCPU requests. No functional change. (Looks like x86 is running out of arch VCPU request bits. Maybe someday we'll need to extend to 64.)

Signed-off-by: Andrew Jones <drjones@redhat.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
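The resulting arm/arm64 definitions look roughly as follows; the macro and flag names are quoted from memory of the merged code and should be double-checked against kvm_host.h:

#define KVM_REQ_SLEEP \
    KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_IRQ_PENDING    KVM_ARCH_REQ(1)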
-
- 24 May 2017 (1 commit)
-
-
Committed by Christoffer Dall

We have been a little loose with our intermediate VMCR representation, where we had a 'ctlr' field but failed to differentiate between the GICv2 GICC_CTLR and ICC_CTLR_EL1 layouts, and therefore ended up mapping the wrong bits into the individual fields of ICH_VMCR_EL2 when emulating a GICv2 on a GICv3 system. Fix this by using explicit fields for the VMCR bits instead.

Cc: Eric Auger <eric.auger@redhat.com>
Reported-by: wanghaibin <wanghaibin.wang@huawei.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 23 May 2017 (1 commit)
-
-
Committed by Christoffer Dall

We don't need to stop a specific VCPU when changing the active state, because private IRQs can only be modified by a running VCPU for the VCPU itself, and that VCPU is therefore already stopped. However, it is also possible for two VCPUs to be modifying the active state of SPIs at the same time, which can cause a thread to get stuck in the loop that checks the other VCPU threads for a potentially very long time, or to modify the active state of a running VCPU. Fix this by serializing all accesses that set and clear the active state of interrupts using the KVM mutex.

Reported-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 16 May 2017 (1 commit)
-
-
Committed by James Morse

When KVM panics, it hurriedly restores the host context and parachutes into the host's panic() code. At some point panic() touches the physical timer/counter. Unless we are an arm64 system with VHE, this traps back to EL2. If we're lucky, we panic again. Add a __timer_save_state() call to KVM's hyp_panic() path; this saves the guest registers and disables the traps for the host.

Fixes: 53fd5b64 ("arm64: KVM: Add panic handling")
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 15 May 2017 (1 commit)
-
-
Committed by Marc Zyngier

We like living dangerously. Nothing explicitly forbids the stack-protector from being used in the EL2 code, and distributions routinely compile their kernels with it. We're just lucky that no code actually triggers the instrumentation. Let's not try our luck for much longer, and disable the stack-protector for code living at EL2.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 11 May 2017 (1 commit)
-
-
Committed by Florian Fainelli

When CONFIG_ARM64_MODULE_PLTS is enabled, the first allocation using the module space fails because the module is too big, and the allocation is then attempted from vmalloc space. Silence the first allocation failure in that case by setting __GFP_NOWARN.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
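The shape of the change, as a sketch against the 4.12-era __vmalloc_node_range() prototype rather than the literal arch/arm64/kernel/module.c code:

void *module_alloc_sketch(unsigned long size)
{
    gfp_t gfp_mask = GFP_KERNEL;
    void *p;

    /* The range-limited first attempt may legitimately fail when PLTs are
     * available, so don't warn about it. */
    if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
        gfp_mask |= __GFP_NOWARN;

    p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
                             module_alloc_base + MODULES_VSIZE, gfp_mask,
                             PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
                             __builtin_return_address(0));

    /* Fall back to the full vmalloc range, and do warn if that fails too. */
    if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
        p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
                                 VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC,
                                 0, NUMA_NO_NODE, __builtin_return_address(0));
    return p;
}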
-
- 10 May 2017 (10 commits)
-
-
Committed by Nicolas Dichtel

Regularly, when a new header is created in include/uapi/, the developer forgets to add it to the corresponding Kbuild file. This error is usually detected after the release is out. In fact, all headers under uapi directories should be exported, so it's pointless to maintain an exhaustive list.

After this patch, the following files, which were not exported, are now exported (with make headers_install_all):

asm-arc/kvm_para.h asm-arc/ucontext.h asm-blackfin/shmparam.h asm-blackfin/ucontext.h asm-c6x/shmparam.h asm-c6x/ucontext.h asm-cris/kvm_para.h asm-h8300/shmparam.h asm-h8300/ucontext.h asm-hexagon/shmparam.h asm-m32r/kvm_para.h asm-m68k/kvm_para.h asm-m68k/shmparam.h asm-metag/kvm_para.h asm-metag/shmparam.h asm-metag/ucontext.h asm-mips/hwcap.h asm-mips/reg.h asm-mips/ucontext.h asm-nios2/kvm_para.h asm-nios2/ucontext.h asm-openrisc/shmparam.h asm-parisc/kvm_para.h asm-powerpc/perf_regs.h asm-sh/kvm_para.h asm-sh/ucontext.h asm-tile/shmparam.h asm-unicore32/shmparam.h asm-unicore32/ucontext.h asm-x86/hwcap2.h asm-xtensa/kvm_para.h drm/armada_drm.h drm/etnaviv_drm.h drm/vgem_drm.h linux/aspeed-lpc-ctrl.h linux/auto_dev-ioctl.h linux/bcache.h linux/btrfs_tree.h linux/can/vxcan.h linux/cifs/cifs_mount.h linux/coresight-stm.h linux/cryptouser.h linux/fsmap.h linux/genwqe/genwqe_card.h linux/hash_info.h linux/kcm.h linux/kcov.h linux/kfd_ioctl.h linux/lightnvm.h linux/module.h linux/nbd-netlink.h linux/nilfs2_api.h linux/nilfs2_ondisk.h linux/nsfs.h linux/pr.h linux/qrtr.h linux/rpmsg.h linux/sched/types.h linux/sed-opal.h linux/smc.h linux/smc_diag.h linux/stm.h linux/switchtec_ioctl.h linux/vfio_ccw.h linux/wil6210_uapi.h rdma/bnxt_re-abi.h

Note that I have removed from this list the files which are generated in every exported directory (like .install or .install.cmd).

Thanks to Julien Floret <julien.floret@6wind.com> for the tip to get all subdirs with a pure makefile command.

For the record, note that exported files for asm directories are a mix of files listed by:
- include/uapi/asm-generic/Kbuild.asm;
- arch/<arch>/include/uapi/asm/Kbuild;
- arch/<arch>/include/asm/Kbuild.

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Committed by Mark Rutland

Clang tries to warn when there's a mismatch between an operand's size and the size of the register it is held in, as this may indicate a bug. Specifically, clang warns when the operand's type is less than 64 bits wide and the register is used unqualified (i.e. %N rather than %xN or %wN).

Unfortunately clang can generate these warnings for unreachable code. For example, for code like:

do {
    typeof(*(ptr)) __v = (v);
    switch (sizeof(*(ptr))) {
    case 1:
        // assume __v is 1 byte wide
        asm ("{op}b %w0" : : "r" (v));
        break;
    case 8:
        // assume __v is 8 bytes wide
        asm ("{op} %0" : : "r" (v));
        break;
    }
} while (0)

... if op() were passed a char value and pointer to char, clang may produce a warning for the unreachable case where sizeof(*(ptr)) is 8.

For the same reasons, clang produces warnings when __put_user_err() is used for types that are less than 64 bits wide.

We could avoid this with a cast to a fixed-width type in each of the cases. However, GCC will then warn that pointer types are being cast to mismatched integer sizes (in unreachable paths).

Another option would be to use the same union trickery as we do for __smp_store_release() and __smp_load_acquire(), but this is fairly invasive. Instead, this patch suppresses the clang warning by using an x modifier in the assembly for the 8-byte case of __put_user_err(). No additional work is necessary as the value has been cast to typeof(*(ptr)), so the compiler will have performed any necessary extension for the reachable case.

For consistency, __get_user_err() is also updated to use the x modifier for its 8-byte case.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

The LSE atomic code uses asm register variables to ensure that parameters are allocated in specific registers. In the majority of cases we specifically ask for an x register when using 64-bit values, but in a couple of cases we use a w register for a 64-bit value. For asm register variables, the compiler only cares about the register index, with wN and xN having the same meaning. The compiler determines the register size to use based on the type of the variable. Thus, this inconsistency is merely confusing, and not harmful to code generation. For consistency, this patch updates those cases to use the x register alias. There should be no functional change as a result of this patch.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

Our compat SWP emulation holds the compat user address in an unsigned int, which it passes to __user_swpX_asm(). When a 32-bit value is passed in a register, the upper 32 bits of the register are unknown, and we must extend the value to 64 bits before we can use it as a base address. This patch casts the address to unsigned long to ensure it has been suitably extended, avoiding the potential issue and silencing a related warning from clang.

Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
Cc: <stable@vger.kernel.org> # 3.19.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

Our access_ok() simply hands its arguments over to __range_ok(), which implicitly assumes that the addr parameter is 64 bits wide. This isn't necessarily true for compat code, which might pass down a 32-bit address parameter. In these cases, we don't have a guarantee that the address has been zero-extended to 64 bits, and the upper bits of the register may contain unknown values, potentially resulting in a spurious failure. Avoid this by explicitly casting the addr parameter to an unsigned long (as is done on other architectures), ensuring that the parameter is widened appropriately.

Fixes: 0aea86a2 ("arm64: User access library functions")
Cc: <stable@vger.kernel.org> # 3.7.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

When an inline assembly operand's type is narrower than the register it is allocated to, the least significant bits of the register (up to the operand type's width) are valid, and any other bits are permitted to contain any arbitrary value. This aligns with the AAPCS64 parameter passing rules. Our __smp_store_release() implementation does not account for this, and implicitly assumes that operands have been zero-extended to the width of the type being stored to. Thus, we may store unknown values to memory when the value type is narrower than the pointer type (e.g. when storing a char to a long).

This patch fixes the issue by casting the value operand to the same width as the pointer operand in all cases, which ensures that the value is zero-extended as we expect. We use the same union trickery as __smp_load_acquire() and {READ,WRITE}_ONCE() to avoid GCC complaining that pointers are potentially cast to narrower-width integers in unreachable paths. A whitespace issue at the top of __smp_store_release() is also corrected.

No changes are necessary for __smp_load_acquire(). Load instructions implicitly clear any upper bits of the register, and the compiler will only consider the least significant bits of the register as valid, regardless.

Fixes: 47933ad4 ("arch: Introduce smp_load_acquire(), smp_store_release()")
Fixes: 878a84d5 ("arm64: add missing data types in smp_load_acquire/smp_store_release")
Cc: <stable@vger.kernel.org> # 3.14.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
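The union trick referred to above, in sketch form (standalone, 1- and 8-byte cases only; the real implementation covers all sizes and lives in asm/barrier.h):

#include <stdint.h>

#define smp_store_release_sketch(p, v)                                  \
do {                                                                    \
    union { typeof(*(p)) __val; char __c[1]; } __u =                    \
        { .__val = (typeof(*(p)))(v) };                                 \
    switch (sizeof(*(p))) {                                             \
    case 1:                                                             \
        asm volatile ("stlrb %w1, %0" : "=Q" (*(p))                     \
                      : "r" (*(uint8_t *)__u.__c) : "memory");          \
        break;                                                          \
    case 8:                                                             \
        asm volatile ("stlr %1, %0" : "=Q" (*(p))                       \
                      : "r" (*(uint64_t *)__u.__c) : "memory");         \
        break;                                                          \
    }                                                                   \
} while (0)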
-
Committed by Mark Rutland

The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a u8 pointer, and thus the hazard only applies to the first byte of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, as demonstrated with the following test case:

union u {
    unsigned long l;
    unsigned int i[2];
};

unsigned long update_char_hazard(union u *u)
{
    unsigned int a, b;

    a = u->i[1];
    asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL));
    b = u->i[1];

    return a ^ b;
}

unsigned long update_long_hazard(union u *u)
{
    unsigned int a, b;

    a = u->i[1];
    asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL));
    b = u->i[1];

    return a ^ b;
}

The linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when using -O2 or above:

0000000000000000 <update_char_hazard>:
   0:  d2800001  mov  x1, #0x0   // #0
   4:  f9000001  str  x1, [x0]
   8:  d2800000  mov  x0, #0x0   // #0
   c:  d65f03c0  ret

0000000000000010 <update_long_hazard>:
  10:  b9400401  ldr  w1, [x0,#4]
  14:  d2800002  mov  x2, #0x0   // #0
  18:  f9000002  str  x2, [x0]
  1c:  b9400400  ldr  w0, [x0,#4]
  20:  4a000020  eor  w0, w1, w0
  24:  d65f03c0  ret

This patch fixes the issue by passing an unsigned long pointer into the +Q constraint, as we do for our cmpxchg code. This may hazard against more than is necessary, but this is better than missing a necessary hazard.

Fixes: 305d454a ("arm64: atomics: implement native {relaxed, acquire, release} atomics")
Cc: <stable@vger.kernel.org> # 4.4.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Kristina Martsenko

When handling a data abort from EL0, we currently zero the top byte of the faulting address, as we assume the address is a TTBR0 address, which may contain a non-zero address tag. However, the address may be a TTBR1 address, in which case we should not zero the top byte. This patch fixes that. The effect is that the full TTBR1 address is passed to the task's signal handler (or printed out in the kernel log).

When handling a data abort from EL1, we leave the faulting address intact, as we assume it's either a TTBR1 address or a TTBR0 address with tag 0x00. This is true as far as I'm aware; we don't seem to access a tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to forget about address tags, and code added in the future may not always remember to remove tags from addresses before accessing them. So add tag handling to the EL1 data abort handler as well. This also makes it consistent with the EL0 data abort handler.

Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
Cc: <stable@vger.kernel.org> # 3.12.x-
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
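For reference, stripping the tag amounts to a sign-extension from bit 55, so that kernel (TTBR1) addresses keep their leading ones while user (TTBR0) addresses get a zeroed top byte. An illustrative one-liner, not necessarily the exact helper the patch uses:

#define untagged_addr_sketch(addr)    sign_extend64((u64)(addr), 55)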
-
Committed by Kristina Martsenko

When we take a watchpoint exception, the address that triggered the watchpoint is found in FAR_EL1. We compare it to the address of each configured watchpoint to see which one was hit. The configured watchpoint addresses are untagged, while the address in FAR_EL1 will have an address tag if the data access was done using a tagged address. The tag needs to be removed before comparing the address to the watchpoints. Currently we don't remove it, and as a result can report the wrong watchpoint as being hit (specifically, always either the highest TTBR0 watchpoint or the lowest TTBR1 watchpoint). This patch removes the tag.

Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
Cc: <stable@vger.kernel.org> # 3.12.x-
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Kristina Martsenko

When we emulate userspace cache maintenance in the kernel, we can currently send the task a SIGSEGV even though the maintenance was done on a valid address. This happens if the address has a non-zero address tag and happens to not be mapped in.

When we get the address from a user register, we don't currently remove the address tag before performing cache maintenance on it. If the maintenance faults, we end up either in __do_page_fault, where find_vma can't find the VMA if the address has a tag, or in do_translation_fault, where the tagged address will appear to be above TASK_SIZE. In both cases, the address is not mapped in, and the task is sent a SIGSEGV.

This patch removes the tag from the address before using it. With this patch, the fault is handled correctly, the address gets mapped in, and the cache maintenance succeeds.

As a second bug, if cache maintenance (correctly) fails on an invalid tagged address, the address gets passed into arm64_notify_segfault, where find_vma fails to find the VMA due to the tag, and the wrong si_code may be sent as part of the siginfo_t of the segfault. With this patch, the correct si_code is sent.

Fixes: 7dd01aef ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
Cc: <stable@vger.kernel.org> # 4.8.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 May 2017 (3 commits)
-
-
Committed by Laura Abbott

Now that all call sites have been converted, completely decouple cacheflush.h and set_memory.h.

[sfr@canb.auug.org.au: kprobes/x86: merge fix for set_memory.h decoupling]
Link: http://lkml.kernel.org/r/20170418180903.10300fd3@canb.auug.org.au
Link: http://lkml.kernel.org/r/1488920133-27229-17-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Laura Abbott

The set_memory_* functions have moved to set_memory.h. Use that header explicitly.

Link: http://lkml.kernel.org/r/1488920133-27229-4-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
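In practice a converted call site simply pulls in the new header and keeps its calls unchanged, along these lines (illustrative):

#include <linux/set_memory.h>

    set_memory_ro((unsigned long)addr, nr_pages);
    set_memory_x((unsigned long)addr, nr_pages);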
-
Committed by Laura Abbott

Patch series "set_memory_* functions header refactor", v3.

The set_memory_* APIs came out of a desire to have a better way to change memory attributes. Many of these attributes were linked to cache functionality, so the prototypes were put in cacheflush.h. These days the APIs have grown and have a much wider use than just the cache APIs. To support this growth, split set_memory_* and friends off into a separate header file, to avoid growing cacheflush.h with APIs that have nothing to do with caches.

Link: http://lkml.kernel.org/r/1488920133-27229-2-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 May 2017 (3 commits)
-
-
Committed by Eric Auger

This patch adds a new attribute to the GICv3 KVM device's KVM_DEV_ARM_VGIC_GRP_CTRL group. It allows userspace to flush all GICR pending tables into guest RAM.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Eric Auger

Introduce new attributes in the KVM_DEV_ARM_VGIC_GRP_CTRL group:
- KVM_DEV_ARM_ITS_SAVE_TABLES: saves the ITS tables into guest RAM;
- KVM_DEV_ARM_ITS_RESTORE_TABLES: restores them into VGIC internal structures.

We hold the vcpus lock during the save and restore to make sure no vcpu is running. At this stage the functionality is not yet implemented; only the skeleton is put in place.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
[Given we will move the iodev register until setting the base addr]
Reviewed-by: Christoffer Dall <cdall@linaro.org>
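From userspace, the save operation is a side-effect-only attribute write on the ITS device fd, roughly as sketched below (assuming the fd was obtained via KVM_CREATE_DEVICE; error handling elided):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int its_save_tables_sketch(int its_fd)
{
    struct kvm_device_attr attr = {
        .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
        .attr  = KVM_DEV_ARM_ITS_SAVE_TABLES,
    };

    /* KVM_DEV_ARM_ITS_RESTORE_TABLES is driven the same way after migration. */
    return ioctl(its_fd, KVM_SET_DEVICE_ATTR, &attr);
}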
-
Committed by Eric Auger

The ITS KVM device exposes a new KVM_DEV_ARM_VGIC_GRP_ITS_REGS group, which allows userspace to save/restore ITS registers. At this stage the get/set/has operations are not yet implemented.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 05 May 2017 (1 commit)
-
-
Committed by Catalin Marinas

While honouring DMA_ATTR_FORCE_CONTIGUOUS on arm64 (commit 44176bb3: "arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU"), the existing uses of dma_mmap_attrs() and dma_get_sgtable() have been broken by passing a physically contiguous vm_struct with an invalid pages pointer through the common iommu API. Since the coherent allocation with DMA_ATTR_FORCE_CONTIGUOUS uses CMA, this patch simply reuses the existing swiotlb logic for mmap and get_sgtable.

Note that the current implementation of get_sgtable (both swiotlb and iommu) is broken if dma_declare_coherent_memory() is used, since such memory does not have a corresponding struct page. To be addressed in a subsequent patch.

Fixes: 44176bb3 ("arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU")
Reported-by: Andrzej Hajda <a.hajda@samsung.com>
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
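A driver-side sketch of the path this fixes (names invented): a physically contiguous coherent buffer that is later mapped to userspace.

#include <linux/dma-mapping.h>

static int demo_alloc_and_mmap(struct device *dev, struct vm_area_struct *vma,
                               size_t size)
{
    unsigned long attrs = DMA_ATTR_FORCE_CONTIGUOUS;
    dma_addr_t dma_handle;
    void *cpu_addr;

    cpu_addr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, attrs);
    if (!cpu_addr)
        return -ENOMEM;

    /* Previously broken behind an IOMMU for such buffers; now served by the
     * swiotlb mmap/get_sgtable logic. */
    return dma_mmap_attrs(dev, vma, cpu_addr, dma_handle, size, attrs);
}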
-
- 04 May 2017 (1 commit)
-
-
Committed by Christoffer Dall

For some time now we have had a lot of shared functionality between the arm and arm64 KVM support living in arch/arm, which not only requires a horrible inter-arch reference from the Makefile in arch/arm64/kvm, but also creates confusion for newcomers to the code base, as was recently seen on the mailing list. Further, it causes trouble for tools like cscope, which need special attention to index the shared files for arm64 from the arm tree.

Move the shared files into virt/kvm/arm, and move the trace points along with them. When moving the tracepoints we have to modify the way the vgic creates definitions of the trace points, so we take the chance to include the VGIC tracepoints in their very own vgic trace.h file.

Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 03 May 2017 (1 commit)
-
-
Committed by Daniel Borkmann

When the instruction right before the branch destination is a 64-bit load immediate, we currently calculate the wrong jump offset in the ctx->offset[] array, as we only account for one instruction slot for the 64-bit load immediate although it uses two BPF instructions. Fix it up by setting the offset into the right slot after we have incremented the index.

Before (ldimm64 test 1):

[...]
00000020:  52800007  mov   w7, #0x0                 // #0
00000024:  d2800060  mov   x0, #0x3                 // #3
00000028:  d2800041  mov   x1, #0x2                 // #2
0000002c:  eb01001f  cmp   x0, x1
00000030:  54ffff82  b.cs  0x00000020
00000034:  d29fffe7  mov   x7, #0xffff              // #65535
00000038:  f2bfffe7  movk  x7, #0xffff, lsl #16
0000003c:  f2dfffe7  movk  x7, #0xffff, lsl #32
00000040:  f2ffffe7  movk  x7, #0xffff, lsl #48
00000044:  d29dddc7  mov   x7, #0xeeee              // #61166
00000048:  f2bdddc7  movk  x7, #0xeeee, lsl #16
0000004c:  f2ddddc7  movk  x7, #0xeeee, lsl #32
00000050:  f2fdddc7  movk  x7, #0xeeee, lsl #48
[...]

After (ldimm64 test 1):

[...]
00000020:  52800007  mov   w7, #0x0                 // #0
00000024:  d2800060  mov   x0, #0x3                 // #3
00000028:  d2800041  mov   x1, #0x2                 // #2
0000002c:  eb01001f  cmp   x0, x1
00000030:  540000a2  b.cs  0x00000044
00000034:  d29fffe7  mov   x7, #0xffff              // #65535
00000038:  f2bfffe7  movk  x7, #0xffff, lsl #16
0000003c:  f2dfffe7  movk  x7, #0xffff, lsl #32
00000040:  f2ffffe7  movk  x7, #0xffff, lsl #48
00000044:  d29dddc7  mov   x7, #0xeeee              // #61166
00000048:  f2bdddc7  movk  x7, #0xeeee, lsl #16
0000004c:  f2ddddc7  movk  x7, #0xeeee, lsl #32
00000050:  f2fdddc7  movk  x7, #0xeeee, lsl #48
[...]

Also, add a couple of test cases to make sure JITs pass this test. Tested on Cavium ThunderX ARMv8. The added test cases all pass after the fix.

Fixes: 8eee539d ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
Reported-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Xi Wang <xi.wang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
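The bookkeeping being fixed looks roughly like the simplified loop below; the real build_body() also guards the offset writes by JIT pass, so treat this as a sketch rather than the actual arch/arm64/net/bpf_jit_comp.c code:

for (i = 0; i < prog->len; i++) {
    const struct bpf_insn *insn = &prog->insnsi[i];
    int ret;

    ctx->offset[i] = ctx->idx;      /* arm64 index where BPF insn i starts */
    ret = build_insn(insn, ctx);
    if (ret > 0) {                  /* ldimm64 consumed the next BPF slot too */
        i++;
        ctx->offset[i] = ctx->idx;  /* fill the second slot after incrementing */
        continue;
    }
    if (ret)
        return ret;
}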
-