- 19 Mar 2018, 26 commits
-
-
Submitted by Marc Zyngier
Both HYP io mappings call ioremap, followed by create_hyp_io_mappings. Let's move the ioremap call into create_hyp_io_mappings itself, which simplifies the code a bit and allows for further refactoring. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
kvm_vgic_global_state is part of the read-only section, and is usually accessed using PC-relative address generation (adrp + add). It is thus useless to use kern_hyp_va() on it, and actively problematic if kern_hyp_va() becomes non-idempotent. On the other hand, there is no way that the compiler is going to guarantee that such an access is always PC-relative. So let's bite the bullet and provide our own accessor. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
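Such an accessor boils down to forcing the PC-relative computation in inline asm. A minimal sketch, assuming the hyp_symbol_addr name and asm constraints are as we read them from the patch (treat as illustrative):

    #define hyp_symbol_addr(s)                                      \
        ({                                                          \
            typeof(&s) addr;                                        \
            /* Force an adrp/add pair so the address is always      \
             * computed relative to the PC, never loaded from a     \
             * literal pool. */                                     \
            asm("adrp   %0, %1\n"                                   \
                "add    %0, %0, :lo12:%1\n"                         \
                : "=r" (addr) : "S" (&s));                          \
            addr;                                                   \
        })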
-
Submitted by Marc Zyngier
Now that we can dynamically compute the kernel/hyp VA mask, there is no need for a feature flag to trigger the alternative patching. Let's drop the flag and everything that depends on it. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
So far, we're using a complicated sequence of alternatives to patch the kernel/hyp VA mask on non-VHE, and to NOP out the masking altogether on VHE. The newly introduced dynamic patching gives us the opportunity to simplify that code by patching a single instruction with the correct mask (instead of the mind-bending cumulative masking we have at the moment), or even a single NOP on VHE. This also adds some initial code that will allow the patching callback to switch to a more complex patching. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
We lack a way to encode operations such as AND, ORR and EOR that take an immediate value. Doing so is quite involved, and is all about reverse engineering the decoding algorithm described in the pseudocode function DecodeBitMasks(). This has been tested by feeding it all the possible literal values and comparing the output with that of GAS. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
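The underlying property is compact even if the encoder is not: a logical immediate is a 2/4/8/16/32/64-bit element holding a single (circularly) contiguous run of ones, replicated to fill the register. A from-scratch sketch of a validity check for the 64-bit case, written for this log; it is not the kernel's encoder:

    #include <stdbool.h>
    #include <stdint.h>

    /* Rotate v right by one bit within an esize-bit element. */
    static uint64_t ror1(uint64_t v, unsigned int esize)
    {
        uint64_t mask = (esize == 64) ? ~0ULL : (1ULL << esize) - 1;

        v &= mask;
        return ((v >> 1) | (v << (esize - 1))) & mask;
    }

    static bool is_logical_imm64(uint64_t imm)
    {
        unsigned int esize;
        uint64_t elt;

        if (imm == 0 || imm == ~0ULL)
            return false;   /* all-zeros/all-ones are not encodable */

        /* Find the smallest element size at which the pattern repeats. */
        for (esize = 2; esize < 64; esize *= 2) {
            uint64_t mask = (1ULL << esize) - 1;
            uint64_t rep = imm & mask;
            unsigned int i;

            for (i = esize; i < 64; i += esize)
                rep |= (imm & mask) << i;
            if (rep == imm)
                break;
        }

        /*
         * The element's ones are circularly contiguous iff comparing it
         * with a rotate-by-one copy of itself shows exactly two 0<->1
         * transitions.
         */
        elt = (esize == 64) ? imm : imm & ((1ULL << esize) - 1);
        return __builtin_popcountll(elt ^ ror1(elt, esize)) == 2;
    }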
-
Submitted by Marc Zyngier
We're missing a way to generate the encoding of the N immediate, which is a single bit used in a number of instructions that take an immediate. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
We've so far relied on a patching infrastructure that only gave us a single alternative, without any way to provide a range of potential replacement instructions. For a single feature, this is an all-or-nothing thing. It would be interesting to have a more finely grained way of patching the kernel though, where we could dynamically tune the code that gets injected. In order to achieve this, let's introduce a new form of dynamic patching, associating a callback with a patching site. This callback gets the source and target locations of the patching request, as well as the number of instructions to be patched. Dynamic patching is declared with the new ALTERNATIVE_CB and alternative_cb directives: asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback) : "r" (v)); or alternative_cb callback mov x0, #0 alternative_cb_end where callback is the C function computing the alternative. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
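A callback is an ordinary C function that writes the final instruction(s) into the update buffer. A hedged sketch, assuming the callback signature described here (site descriptor, original and update instruction pointers, instruction count); aarch64_insn_gen_nop() is the existing arm64 instruction encoder:

    #include <linux/init.h>
    #include <asm/alternative.h>
    #include <asm/insn.h>

    static void __init example_cb(struct alt_instr *alt, __le32 *origptr,
                                  __le32 *updptr, int nr_inst)
    {
        int i;

        /* Compute the replacement at boot; here we simply NOP the
         * whole patching site when some condition holds. */
        for (i = 0; i < nr_inst; i++)
            updptr[i] = cpu_to_le32(aarch64_insn_gen_nop());
    }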
-
Submitted by Christoffer Dall
We can finally get rid of all calls to the VGICv3 save/restore functions when the AP lists are empty on VHE systems. This requires carefully factoring out trap configuration from saving and restoring state, and carefully choosing what to do on the VHE and non-VHE paths. One of the challenges is that we cannot save/restore the VMCR lazily, because we can only write the VMCR when ICC_SRE_EL1.SRE is cleared when emulating a GICv2-on-GICv3, since otherwise all Group-0 interrupts end up being delivered as FIQ. To solve this problem, and still provide fast performance in the fast path of exiting a VM when no interrupts are pending (which also optimizes the latency for actually delivering virtual interrupts coming from physical interrupts), we orchestrate a dance of only doing the activate/deactivate traps in vgic load/put for VHE systems (which can have ICC_SRE_EL1.SRE cleared when running in the host), and doing the configuration on every round-trip on non-VHE systems. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
The APRs can only have bits set when the guest acknowledges an interrupt in the LR, and can only have a bit cleared when the guest EOIs an interrupt in the LR. Therefore, if we have no LRs with any pending/active interrupts, the APRs cannot change value and there is no need to clear them on every exit from the VM (hint: they will have already been cleared when we exited the guest the last time with the LRs all EOIed). The only cases we need to take care of are when we migrate the VCPU away from a CPU or migrate a new VCPU onto a CPU, or when we return to userspace to capture the state of the VCPU for migration. To make sure this works, factor out the APR save/restore functionality into separate functions called from the VCPU (and by extension VGIC) put/load hooks. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
We can program the GICv2 hypervisor control interface logic directly from the core vgic code, and can instead do the save/restore directly from the flush/sync functions, which can lead to a number of future optimizations. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
We do not have to change the c15 trap setting on each switch to/from the guest on VHE systems, because this setting only affects guest EL1/EL0 (and therefore not the VHE host). The PMU and debug trap configuration can also be done at vcpu load/put instead, because they don't affect how the VHE host kernel can access the debug registers while executing KVM kernel code. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
32-bit registers are not used by a 64-bit host kernel and can be deferred, but we need to rework the accesses to these registers to access the latest values depending on whether or not guest system registers are loaded on the CPU or only reside in memory. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
ELR_EL1 is not used by a VHE host kernel and can be deferred, but we need to rework the accesses to this register to access the latest value depending on whether or not guest system registers are loaded on the CPU or only reside in memory. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
SPSR_EL1 is not used by a VHE host kernel and can be deferred, but we need to rework the accesses to this register to access the latest value depending on whether or not guest system registers are loaded on the CPU or only reside in memory. The handling of accessing the various banked SPSRs for 32-bit VMs is a bit clunky, but this will be improved in the following patches, which will first prepare and subsequently implement deferred save/restore of the 32-bit registers, including the 32-bit SPSRs. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
We are about to defer saving and restoring some groups of system registers to vcpu_put and vcpu_load on supported systems. This means that we need some infrastructure to access system registers which supports either accessing the memory backing of the register or directly accessing the system register, depending on the state of the system when we access the register. We do this by defining read/write accessor functions which can handle both "immediate" and "deferrable" system registers. Immediate registers are always saved/restored in the world-switch path, but deferrable registers are only saved/restored in vcpu_put/vcpu_load when supported, and sysregs_loaded_on_cpu will be set in that case. Note that we don't use the deferred mechanism yet in this patch; we only introduce the infrastructure. This is to improve the convenience of review in the subsequent patches, where it is clear which registers become deferred. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
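A sketch of what the accessor pair looks like under this scheme; the per-register CPU read/write helpers are elided and their names are our own, illustrative choices:

    u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
    {
        u64 val;

        /* Between vcpu_load and vcpu_put on supported (VHE) systems,
         * deferrable registers live in hardware; everywhere else the
         * memory copy is current. */
        if (vcpu->arch.sysregs_loaded_on_cpu &&
            __vcpu_read_sys_reg_from_cpu(reg, &val))    /* illustrative helper */
            return val;

        return __vcpu_sys_reg(vcpu, reg);
    }

    void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
    {
        if (vcpu->arch.sysregs_loaded_on_cpu &&
            __vcpu_write_sys_reg_to_cpu(val, reg))      /* illustrative helper */
            return;

        __vcpu_sys_reg(vcpu, reg) = val;
    }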
-
Submitted by Christoffer Dall
Currently we access the system registers array via the vcpu_sys_reg() macro. However, we are about to change the behavior to sometimes modify the register file directly, so let's change this to two primitives: accessor macros vcpu_write_sys_reg() and vcpu_read_sys_reg(), and a direct array access macro __vcpu_sys_reg(). The accessor macros should be used in places where the code needs to access the currently loaded VCPU's state as observed by the guest. For example, when trapping on cache related registers, a write to a system register should go directly to the VCPU version of the register. The direct array access macro can be used in places where the VCPU is known to never be running (for example userspace access) or for registers which are never context switched (for example all the PMU system registers). This rewrites all users of vcpu_sys_regs to one of the macros described above. No functional change. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
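Typical call sites, following the rules above (the chosen registers are only examples):

    /* Trap handler: must observe/update the running VCPU's state. */
    val = vcpu_read_sys_reg(vcpu, SCTLR_EL1);
    vcpu_write_sys_reg(vcpu, val | SCTLR_ELx_C, SCTLR_EL1);

    /* Userspace ioctl (VCPU known not to be running), or never
     * context-switched PMU state: touch the array directly. */
    __vcpu_sys_reg(vcpu, PMCR_EL0) = val;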
-
Submitted by Christoffer Dall
We currently handle 32-bit accesses to trapped VM system registers using the 32-bit index into the coproc array on the vcpu structure, which is a union of the coproc array and the sysreg array. Since all the 32-bit coproc indices are created to correspond to the architectural mapping between 64-bit system registers and 32-bit coprocessor registers, and because the AArch64 system registers are double the size of the AArch32 coprocessor registers, we can always find the system register entry that we must update by dividing the 32-bit coproc index by 2. This is going to make our lives much easier when we have to start accessing system registers that use deferred save/restore and might have to be read directly from the physical CPU. Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
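In code form, the mapping described above reduces to the following sketch (helper name is ours; which word an odd index selects follows from the union layout and is our reading of the patch):

    /* A 32-bit coproc access at index cp15_idx touches one half of
     * the 64-bit system register at index cp15_idx / 2. */
    static u64 read_coproc_half(struct kvm_vcpu *vcpu, int cp15_idx)
    {
        u64 val = vcpu_read_sys_reg(vcpu, cp15_idx / 2);

        /* Odd 32-bit indices map to the upper word. */
        return (cp15_idx & 1) ? upper_32_bits(val) : lower_32_bits(val);
    }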
-
Submitted by Christoffer Dall
There is no need to have multiple identical functions with different names for saving host and guest state. When saving and restoring state for the host and guest, the state is the same for both contexts, and that's why we have the kvm_cpu_context structure. Delete one version and rename the other into simply save/restore. Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
We are about to handle system registers quite differently between VHE and non-VHE systems. In preparation for that, we need to split some of the handling functions between VHE and non-VHE functionality. For now, we simply copy the non-VHE functions, but we do change the use of static keys for VHE and non-VHE functionality now that we have separate functions. Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
So far this is mostly (see below) a copy of the legacy non-VHE switch function, but we will start reworking these functions in separate directions to work on VHE and non-VHE in the most optimal way in later patches. The only difference after this patch between the VHE and non-VHE run functions is that we omit the branch-predictor variant-2 hardening for QC Falkor CPUs, because this workaround is specific to a series of non-VHE ARMv8.0 CPUs. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
Instead of having multiple calls from the world switch path to the debug logic, each figuring out if the dirty bit is set and if we should save/restore the debug registers, let's just provide two hooks to the debug save/restore functionality: one for switching to the guest context, and one for switching to the host context. We get the benefit of only having to evaluate the dirty flag once on each path, plus we give the compiler some more room to inline some of this functionality. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
We have numerous checks around the code that check whether HCR_EL2 has the RW bit set, in order to figure out if we're running an AArch64 or AArch32 VM. In some cases, directly checking the RW bit (given its unintuitive name) is a bit confusing, and that's not going to improve as we move logic around for the following patches that optimize KVM on AArch64 hosts with VHE. Therefore, introduce a helper, vcpu_el1_is_32bit, and replace existing direct checks of HCR_EL2.RW with the helper. Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
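The helper reduces to a single readable check, directly from the description above (HCR_EL2.RW clear means EL1 is AArch32):

    static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
    {
        return !(vcpu->arch.hcr_el2 & HCR_RW);
    }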
-
Submitted by Christoffer Dall
As we are about to move a bunch of save/restore logic for VHE kernels to the load and put functions, we need some infrastructure to do this. Reviewed-by: Andrew Jones <drjones@redhat.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
We currently have a separate read-modify-write of the HCR_EL2 on entry to the guest for the sole purpose of setting the VF and VI bits, if set. Since this is rarely the case (only when using the userspace IRQ chip and interrupts are in flight), let's get rid of this operation and instead modify the bits in vcpu->arch.hcr[_el2] directly when needed. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Shih-Wei Li
We always set the IMO and FMO bits in the HCR_EL2 when running the guest, regardless of whether we use the vgic or not. By moving these flags to HCR_GUEST_FLAGS we can avoid one of the extra save/restore operations of HCR_EL2 in the world switch code, and we can also soon get rid of the other one. This is safe, because even though the IMO and FMO bits control both taking interrupts to EL2 and remapping ICC_*_EL1 to ICV_*_EL1 when executed at EL1, as long as we ensure that these bits are clear when running the EL1 host, we're OK, because we reset the HCR_EL2 to only have the HCR_RW bit set when returning to EL1 on non-VHE systems. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Shih-Wei Li <shihwei@cs.columbia.edu> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Christoffer Dall
We already have the percpu area for the host cpu state, which points to the VCPU, so there's no need to store the VCPU pointer on the stack on every context switch. We can be a little more clever and just use tpidr_el2 for the percpu offset and load the VCPU pointer from the host context. This has the benefit of being able to retrieve the host context even when our stack is corrupted, and it has a potential performance benefit because we trade a store plus a load for an mrs and a load on a round trip to the guest. This does require us to calculate the percpu offset without including the offset from the kernel mapping of the percpu array to the linear mapping of the array (which is what we store in tpidr_el1), because a PC-relative generated address in EL2 already gives us the hyp alias of the linear mapping of a kernel address. We do this in __cpu_init_hyp_mode() by using kvm_ksym_ref(). The code that accesses ESR_EL2 was previously using an alternative to use the _EL1 accessor on VHE systems, but this was actually unnecessary, as the _EL1 accessor aliases the ESR_EL2 register on VHE, and the _EL2 accessor does the same thing on both systems. Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 26 Feb 2018, 2 commits
-
-
Submitted by Dave Martin
The HCR_EL2.TID3 flag needs to be set when trapping guest access to the CPU ID registers is required. However, the decision about whether to set this bit does not need to be repeated at every switch to the guest. Instead, it's sufficient to make this decision once and record the outcome. This patch moves the decision to vcpu_reset_hcr() and records the choice made in vcpu->arch.hcr_el2. The world switch code can then load this directly when switching to the guest, without the need for conditional logic on the critical path. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Suggested-by: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Submitted by Mark Rutland
We don't currently limit guest accesses to the LOR registers, which we neither virtualize nor context-switch. As such, guests are provided with unusable information/controls, and are not isolated from each other (or the host). To prevent these issues, we can trap register accesses and present the illusion that LORegions are unsupported by the CPU. To do this, we mask ID_AA64MMFR1.LO, and set HCR_EL2.TLOR to trap accesses to the LORC_EL1, LOREA_EL1, LORID_EL1, LORN_EL1 and LORSA_EL1 registers; when trapped, we inject an UNDEFINED exception to EL1, simulating their non-existence. As noted in D7.2.67, when no LORegions are implemented, LoadLOAcquire and StoreLORelease must behave as LoadAcquire and StoreRelease respectively. We can ensure this by clearing LORC_EL1.EN when a CPU's EL2 is first initialized, as the host kernel will not modify this. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 23 Feb 2018, 1 commit
-
-
Submitted by Pratyush Anand
do_task_stat() calls get_wchan(), which in turn does unwind_frame(). unwind_frame() restores frame->pc to its original value in case the function graph tracer has modified a return address (LR) in a stack frame to hook a function return. However, if the function graph tracer has hit a filtered function, we cannot unwind it, as ftrace_push_return_trace() has biased the index (frame->graph) with a 'huge negative' offset (-FTRACE_NOTRACE_DEPTH). Moreover, the arm64 stack walker defines the index (frame->graph) as an unsigned int, which cannot hold a negative value. A similar problem can occur when walk_stackframe() is called from save_stack_trace_tsk() or dump_backtrace(). This patch fixes unwind_frame() to test the index for a negative value and restore the index accordingly before restoring frame->pc.

Reproducer:

cd /sys/kernel/debug/tracing/
echo schedule > set_graph_notrace
echo 1 > options/display-graph
echo wakeup > current_tracer
ps -ef | grep -i agent

The above commands result in:

Unable to handle kernel paging request at virtual address ffff801bd3d1e000
pgd = ffff8003cbe97c00
[ffff801bd3d1e000] *pgd=0000000000000000, *pud=0000000000000000
Internal error: Oops: 96000006 [#1] SMP
[...]
CPU: 5 PID: 11696 Comm: ps Not tainted 4.11.0+ #33
[...]
task: ffff8003c21ba000 task.stack: ffff8003cc6c0000
PC is at unwind_frame+0x12c/0x180
LR is at get_wchan+0xd4/0x134
pc : [<ffff00000808892c>] lr : [<ffff0000080860b8>] pstate: 60000145
sp : ffff8003cc6c3ab0
x29: ffff8003cc6c3ab0 x28: 0000000000000001
x27: 0000000000000026 x26: 0000000000000026
x25: 00000000000012d8 x24: 0000000000000000
x23: ffff8003c1c04000 x22: ffff000008c83000
x21: ffff8003c1c00000 x20: 000000000000000f
x19: ffff8003c1bc0000 x18: 0000fffffc593690
x17: 0000000000000000 x16: 0000000000000001
x15: 0000b855670e2b60 x14: 0003e97f22cf1d0f
x13: 0000000000000001 x12: 0000000000000000
x11: 00000000e8f4883e x10: 0000000154f47ec8
x9 : 0000000070f367c0 x8 : 0000000000000000
x7 : 00008003f7290000 x6 : 0000000000000018
x5 : 0000000000000000 x4 : ffff8003c1c03cb0
x3 : ffff8003c1c03ca0 x2 : 00000017ffe80000
x1 : ffff8003cc6c3af8 x0 : ffff8003d3e9e000
Process ps (pid: 11696, stack limit = 0xffff8003cc6c0000)
Stack: (0xffff8003cc6c3ab0 to 0xffff8003cc6c4000)
[...]
[<ffff00000808892c>] unwind_frame+0x12c/0x180
[<ffff000008305008>] do_task_stat+0x864/0x870
[<ffff000008305c44>] proc_tgid_stat+0x3c/0x48
[<ffff0000082fde0c>] proc_single_show+0x5c/0xb8
[<ffff0000082b27e0>] seq_read+0x160/0x414
[<ffff000008289e6c>] __vfs_read+0x58/0x164
[<ffff00000828b164>] vfs_read+0x88/0x144
[<ffff00000828c2e8>] SyS_read+0x60/0xc0
[<ffff0000080834a0>] __sys_trace_return+0x0/0x4

Fixes: 20380bb3 ("arm64: ftrace: fix a stack tracer's output under function graph tracer") Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Jerome Marchand <jmarchan@redhat.com> [catalin.marinas@arm.com: replace WARN_ON with WARN_ON_ONCE] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
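The shape of the fix in unwind_frame(), condensed from the description above (a sketch: frame->graph becomes a signed int so the biased value can be detected and un-biased before indexing ret_stack):

    #ifdef CONFIG_FUNCTION_GRAPH_TRACER
        if (tsk->ret_stack &&
            frame->pc == (unsigned long)return_to_handler) {
            if (WARN_ON_ONCE(frame->graph == -1))
                return -EINVAL;
            if (frame->graph < -1)
                frame->graph += FTRACE_NOTRACE_DEPTH;

            /* The function graph tracer hooked this return address
             * (LR); put back the original value. */
            frame->pc = tsk->ret_stack[frame->graph--].ret;
        }
    #endif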
-
- 19 Feb 2018, 2 commits
-
-
Submitted by Robin Murphy
In converting __range_ok() into a static inline, I inadvertently made it more type-safe, but without considering the ordering of the relevant conversions. This leads to quite a lot of Sparse noise about the fact that we use __chk_user_ptr() after addr has already been converted from a user pointer to an unsigned long. Rather than just adding another cast for the sake of shutting Sparse up, it seems reasonable to rework the types to make logical sense (although the resulting codegen for __range_ok() remains identical). The only callers this affects directly are our compat traps, where the inferred "user-pointer-ness" of a register value now warrants explicit casting. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Bhupesh Sharma
Since commit e1a50de3 ("arm64: cputype: Silence Sparse warnings"), compilation for the arm64 architecture is broken with the following error messages:

AR arch/arm64/kernel/built-in.o
arch/arm64/kernel/head.S: Assembler messages:
arch/arm64/kernel/head.S:677: Error: found 'L', expected: ')'
arch/arm64/kernel/head.S:677: Error: found 'L', expected: ')'
arch/arm64/kernel/head.S:677: Error: found 'L', expected: ')'
arch/arm64/kernel/head.S:677: Error: junk at end of line, first unrecognized character is `L'
arch/arm64/kernel/head.S:677: Error: unexpected characters following instruction at operand 2 -- `movz x1,:abs_g1_s:0xff00ffffffUL'
arch/arm64/kernel/head.S:677: Error: unexpected characters following instruction at operand 2 -- `movk x1,:abs_g0_nc:0xff00ffffffUL'

This patch fixes the breakage by using the UL() macro correctly when assigning the MPIDR_HWID_BITMASK macro value. Fixes: e1a50de3 ("arm64: cputype: Silence Sparse warnings") Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
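The one-line fix: UL() (built on _AC() from the kernel's const.h headers) expands to a bare constant under __ASSEMBLY__ and appends the UL suffix only in C, so the same definition both assembles and compiles:

    /* Before: breaks head.S, since gas cannot parse the UL suffix */
    #define MPIDR_HWID_BITMASK  0xff00ffffffUL

    /* After: 0xff00ffffff in assembly, 0xff00ffffffUL in C */
    #define MPIDR_HWID_BITMASK  UL(0xff00ffffff)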
-
- 17 Feb 2018, 2 commits
-
-
Submitted by Robin Murphy
Sparse makes a fair bit of noise about our MPIDR mask being implicitly long - let's explicitly describe it as such rather than just relying on the value forcing automatic promotion. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
In many cases, page tables can be accessed concurrently by either another CPU (due to things like fast gup) or by the hardware page table walker itself, which may set access/dirty bits. In such cases, it is important to use READ_ONCE/WRITE_ONCE when accessing page table entries, so that entries cannot be torn, merged or subject to apparent loss of coherence due to compiler transformations. Whilst there are some scenarios where this cannot happen (e.g. pinned kernel mappings for the linear region), the overhead of using READ_ONCE/WRITE_ONCE everywhere is minimal and makes the code an awful lot easier to reason about. This patch consistently uses these macros in the arch code, as well as explicitly namespacing pointers to page table entries from the entries themselves, by adopting a 'p' suffix for the former (as is sometimes done elsewhere in the kernel source). Tested-by: Yury Norov <ynorov@caviumnetworks.com> Tested-by: Richard Ruigrok <rruigrok@codeaurora.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
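The resulting convention, in a sketch of a walker (the function is illustrative; the real patch reworks the existing arch code):

    static int example_walk(struct mm_struct *mm, unsigned long addr)
    {
        pgd_t *pgdp = pgd_offset(mm, addr); /* 'p' suffix: pointer to entry */
        pgd_t pgd = READ_ONCE(*pgdp);       /* entry value, read exactly once */

        if (pgd_none(pgd) || pgd_bad(pgd))
            return -EFAULT;

        /* ...continue with pud/pmd/pte, snapshotting each level the
         * same way, and using WRITE_ONCE() for any updates... */
        return 0;
    }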
-
- 07 Feb 2018, 7 commits
-
-
Submitted by Andrey Konovalov
Right now the fact that KASAN uses a single shadow byte for 8 bytes of memory is scattered all over the code. This change defines KASAN_SHADOW_SCALE_SHIFT early in the asm include files and makes use of this constant where necessary. [akpm@linux-foundation.org: coding-style fixes] Link: http://lkml.kernel.org/r/34937ca3b90736eaad91b568edf5684091f662e3.1515775666.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
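The constant captures the shadow mapping in one place: one shadow byte covers (1 << KASAN_SHADOW_SCALE_SHIFT) = 8 bytes of memory, so the translation is (a sketch mirroring the KASAN mem-to-shadow mapping; the function name is ours):

    static inline void *example_mem_to_shadow(unsigned long addr)
    {
        return (void *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET);
    }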
-
Submitted by Marc Zyngier
A new feature of SMCCC 1.1 is that it offers firmware-based CPU workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides BP hardening for CVE-2017-5715. If the host has some mitigation for this issue, report that we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the host workaround on every guest exit. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier
As we're about to update the PSCI support, and because I'm lazy, let's move the PSCI include file to include/kvm so that both ARM architectures can find it. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
The arm64 futex code has some explicit dereferencing of user pointers when performing atomic operations in response to a futex command. This patch uses masking to limit any speculative futex operations to within the user address space. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
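The masking primitive referred to here, as we read it from this patch series (treat the exact asm as illustrative): a flag-setting check plus a conditional select, followed by a csdb() speculation barrier, yields NULL instead of an out-of-range pointer even on a mispredicted path:

    static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
    {
        void __user *safe_ptr;

        asm volatile(
        "   bics    xzr, %1, %2\n"      /* any bits outside addr_limit? */
        "   csel    %0, %1, xzr, eq\n"  /* if not, keep ptr, else NULL */
        : "=&r" (safe_ptr)
        : "r" (ptr), "r" (current_thread_info()->addr_limit)
        : "cc");

        csdb();     /* no speculative use of the selected value */
        return safe_ptr;
    }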
-
Submitted by Will Deacon
Like we've done for get_user and put_user, ensure that user pointers are masked before invoking the underlying __arch_{clear,copy_*}_user operations. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
access_ok isn't an expensive operation once the addr_limit for the current thread has been loaded into the cache. Given that the initial access_ok check preceding a sequence of __{get,put}_user operations will take the brunt of the miss, we can make the __* variants identical to the full-fat versions, which brings with it the benefits of address masking. The likely cost in these sequences will be from toggling PAN/UAO, which we can address later by implementing the *_unsafe versions. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
A mispredicted conditional call to set_fs could result in the wrong addr_limit being forwarded under speculation to a subsequent access_ok check, potentially forming part of a spectre-v1 attack using uaccess routines. This patch prevents this forwarding from taking place by putting heavy barriers in set_fs after writing the addr_limit. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
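A sketch of the hardened set_fs; the patch text only says "heavy barriers", so the dsb+isb pair after the write is our assumption of the barrier choice:

    static inline void set_fs(mm_segment_t fs)
    {
        current_thread_info()->addr_limit = fs;

        /*
         * Prevent a mispredicted conditional call to set_fs from
         * forwarding a stale addr_limit to a subsequent access_ok.
         * Barrier choice is an assumption based on the description.
         */
        dsb(nsh);
        isb();
    }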
-