- 14 September 2014 (2 commits)
-
-
By Frederic Weisbecker
ARM64 irq work self-IPI support depends on __smp_cross_call pointing to some relevant IRQ controller operations. This information should be available after the call to init_IRQ(). Let's implement arch_irq_work_has_interrupt() accordingly.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
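For context, a minimal sketch of how such a check can be wired up on arm64, assuming the cross-call hook is exported from the SMP code; the exact placement and pointer type are assumptions, not the verbatim patch:

    /* sketch: irq_work self-IPIs only work once an IRQ controller has
     * registered its cross-call hook (e.g. via set_smp_cross_call()) */
    extern void (*__smp_cross_call)(const struct cpumask *, unsigned int);

    static inline bool arch_irq_work_has_interrupt(void)
    {
            return !!__smp_cross_call;
    }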
-
By Peter Zijlstra
The nohz full code needs irq work to trigger its own interrupt so that the subsystem can work even when the tick is stopped. Let's introduce arch_irq_work_has_interrupt(), which archs can override to advertise their support for this ability.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
-
- 01 September 2014 (2 commits)
-
-
By Will Deacon
It turns out that vendors are relying on the format of /proc/cpuinfo, and we've even spotted out-of-tree hacks attempting to make it look identical to the format used by arch/arm/. That means we can't afford to churn this interface in mainline, so revert the recent reformatting of the file for arm64, pending discussions on the list to find out what people actually want.
This reverts commit d7a49086.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Leo Yan
arm64 now defers reloading FPSIMD state, but this optimization introduces a bug after a CPU resumes from low-power mode. Once the CPU has been powered off, software must set that CPU's fpsimd_last_state to NULL so that the FPSIMD state is forcibly reloaded for the thread. Otherwise both conditions can hold at once: the task's fpsimd_state.cpu field contains the id of the current CPU, and the CPU's fpsimd_last_state per-CPU variable points to the task's fpsimd_state, so the kernel skips reloading the context on the way back to userland.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leo Yan <leoy@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
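A rough sketch of the kind of fix described, assuming a CPU PM notifier is used to invalidate the tracking state on resume; the variable and helper names are taken from the commit text and may differ from the actual patch:

    /* sketch: after a power-down the CPU no longer holds anyone's FPSIMD
     * registers, so force a reload on the next return to userland */
    static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
                                      unsigned long cmd, void *v)
    {
            switch (cmd) {
            case CPU_PM_EXIT:
                    if (current->mm)
                            fpsimd_flush_task_state(current);
                    this_cpu_write(fpsimd_last_state, NULL);
                    break;
            }
            return NOTIFY_OK;
    }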
-
- 29 August 2014 (7 commits)
-
-
By Will Deacon
The KSTK_ESP macro is used to determine the user stack pointer for a given task. In particular, this is used to report the '[stack]' VMA in /proc/self/maps, which is used by Android to determine the stack location for children of the main thread. This patch fixes the macro to use user_stack_pointer instead of directly returning sp. This means that we report w13 instead of sp, since the former is used as the stack pointer when executing in AArch32 state.
Cc: <stable@vger.kernel.org>
Reported-by: Serban Constantinescu <Serban.Constantinescu@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
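The fix boils down to routing the macro through the helper that already understands compat tasks; a sketch (the task_pt_regs() plumbing is an assumption):

    /* sketch: report w13 (the AArch32 SP) for compat tasks instead of sp */
    #define KSTK_ESP(tsk)   user_stack_pointer(task_pt_regs(tsk))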
-
By Catalin Marinas
Commit 5f888a1d (ARM64: perf: support dwarf unwinding in compat mode) changed user_stack_pointer() to return the compat SP for 32-bit tasks, but without brackets around the whole definition, which can cause problems at the call sites (noticed with a subsequent fix for KSTK_ESP).
Fixes: 5f888a1d (ARM64: perf: support dwarf unwinding in compat mode)
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
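For illustration, the problem class is an unparenthesised macro body; a hedged sketch of the bracketed definition (the compat field name is an assumption):

    /* sketch: without the outer parentheses the ?: expression can bind
     * incorrectly when the macro expands inside a larger expression */
    #define user_stack_pointer(regs) \
            (!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)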
-
By Christoffer Dall
The architecture specifies that when the processor wakes up from a WFE or WFI instruction, the instruction is considered complete; however, we currently return to EL1 (or EL0) at the WFI/WFE instruction itself. While most guests may not be affected by this, because their local exception handler performs an exception return with the event bit set or an interrupt pending, some guests like UEFI will get wedged due to this little mishap. Simply skip the instruction when we have completed the emulation.
Cc: <stable@vger.kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
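A sketch of the shape of the fix: advance the guest PC past the trapped WFI/WFE once the emulation is complete. Helper names follow KVM/ARM conventions of that era and should be treated as assumptions:

    /* sketch: WFx trap handler that completes the instruction before
     * returning to the guest */
    static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
    {
            if (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EC_WFI_ISS_WFE)
                    kvm_vcpu_on_spin(vcpu);         /* WFE */
            else
                    kvm_vcpu_block(vcpu);           /* WFI */

            /* the architecture treats the WFx as complete on wake-up */
            kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
            return 1;
    }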
-
By Pranavkumar Sawargaonkar
X-Gene U-Boot runs in EL2 mode with the MMU enabled, so we might have stale EL2 TLB entries when we enable the EL2 MMU on each host CPU. This can happen on any ARM/ARM64 board running a bootloader in Hyp-mode (or EL2-mode) with the MMU enabled. This patch ensures that we flush all Hyp-mode (or EL2-mode) TLBs on each host CPU before enabling the Hyp-mode (or EL2-mode) MMU.
Cc: <stable@vger.kernel.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Will Deacon
The current perf_regs code relies on sp and pc sitting just off the end of the pt_regs->regs array. This is ugly and fragile, so this patch checks for these registers explicitly and returns the appropriate field.
Acked-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
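A sketch of the explicit handling described above (enum names assumed from the arm64 perf regs ABI):

    /* sketch: return sp/pc from their named pt_regs fields instead of
     * indexing past the end of regs[] */
    u64 perf_reg_value(struct pt_regs *regs, int idx)
    {
            if (WARN_ON_ONCE((u32)idx >= PERF_REG_ARM64_MAX))
                    return 0;

            if (idx == PERF_REG_ARM64_SP)
                    return regs->sp;
            if (idx == PERF_REG_ARM64_PC)
                    return regs->pc;

            return regs->regs[idx];
    }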
-
By Will Deacon
copy_{to,from}_user return the number of bytes remaining on failure, not an error code. This patch returns -EFAULT when the copy operation didn't complete, rather than exposing the number of bytes not copied directly to userspace.
Signed-off-by: Will Deacon <will.deacon@arm.com>
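The pattern being fixed, as a sketch with illustrative buffer names:

    /* sketch: copy_to_user() returns the number of bytes NOT copied,
     * so map any shortfall to -EFAULT rather than returning it */
    if (copy_to_user(ubuf, &kbuf, sizeof(kbuf)))
            return -EFAULT;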
-
By Will Deacon
I'm not sure what I was on when I wrote this, but when iterating over the hardware watchpoint array (hbp_watch_array), our index is off by ARM_MAX_BRP, so we walk off the end of our thread_struct... ... except, a dodgy condition in the loop means that it never executes at all (bp cannot be NULL). This patch fixes the code so that we remove the bp check and use the correct index for accessing the watchpoint structures.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 27 August 2014 (1 commit)
-
-
By Geoff Levand
Remove an unused local variable from head.S. It seems this was never used, even in the initial commit 9703d9d7 (arm64: Kernel booting and initialisation), and is a leftover from a previous implementation of __calc_phys_offset.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 August 2014 (1 commit)
-
-
By Colin Ian King
Originally found by cppcheck:
  [arch/arm64/crypto/sha2-ce-glue.c:153]: (warning) Assignment of function parameter has no effect outside the function. Did you forget dereferencing it?
Updating data by blocks * SHA256_BLOCK_SIZE at the end of sha2_finup is redundant code and can be removed.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
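The redundant statement, sketched from the cppcheck report (surrounding code omitted):

    /* sketch: 'data' is a by-value parameter of sha2_finup(), so this final
     * advance has no effect outside the function and can be dropped */
    data += blocks * SHA256_BLOCK_SIZE;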
-
- 22 August 2014 (1 commit)
-
-
By Semen Protsenko
The "efi" global data structure contains a "runtime_version" field which must be assigned in order to use it later in Runtime Services virtual calls (the virt_efi_* functions). Before this patch, "runtime_version" was unassigned (0), so each Runtime Service virtual call that checks the revision would fail.
Signed-off-by: Semen Protsenko <semen.protsenko@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
- 20 August 2014 (5 commits)
-
-
By Will Deacon
For some reason, the audit patches didn't make it out of -next this merge window, so revert our temporary hack and let the audit guys deal with fixing up -next.
This reverts commit 2a8f45b0.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Ganapatrao Kulkarni
Now that we support 48-bit physical addressing, update MAX_PHYSMEM_BITS accordingly.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
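The change itself is a one-liner; a sketch, assuming it lives in the arm64 sparsemem header:

    /* sketch: let sparsemem cover the full 48-bit physical address space */
    #define MAX_PHYSMEM_BITS        48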
-
By Leif Lindholm
UEFI provides its own method for marking regions to reserve, via the memory map, which is also used to initialise memblock. So when using the UEFI memory map, ignore any memreserve entries present in the DT.
Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
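A rough sketch of the idea only; the hook point and the efi_enabled() check are assumptions, not the actual patch:

    /* sketch: when the UEFI memory map drives memblock, skip DT /memreserve/ */
    if (efi_enabled(EFI_MEMMAP))
            return;         /* reservations already come from the UEFI memory map */
    /* otherwise fall through to the usual FDT memreserve scan */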
-
By Mark Brown
Currently, when run on an APM platform, the ARMv8 defconfig has no viable options for rootfs other than a ramdisk, which is rather limiting. Since we already have both SATA and the bits needed for NFS root enabled, we just need to enable the relevant drivers, so do that, helping enable direct testing of upstream. If the configuration ends up becoming too big, we can consider modularising some of the drivers and asking people to use an initramfs, but for now this is not an issue.
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Ard Biesheuvel
When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and the embedded EFI stub is executed in place. The EFI stub relocates the Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into the kernel proper. In AArch64, PC-relative symbol references are emitted using adrp/add or adrp/ldr pairs, where the offset into a 4 kB page is resolved using a separate :lo12: relocation. This implicitly assumes that the code will always be executed at the same relative offset with respect to a 4 kB boundary, or the references will point to the wrong address. This means we should link the kernel at a 4 kB aligned base address in order to remain compatible with the base address the UEFI loader uses when doing the initial load of Image. So update the code that generates TEXT_OFFSET to choose a multiple of 4 kB. At the same time, update the code so it chooses from the interval [0..2MB) as the author originally intended.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 August 2014 (2 commits)
-
-
By Will Deacon
arch/arm/ just grew support for the new memfd_create and getrandom syscalls, so add them to our compat layer too.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Ard Biesheuvel
This removes an unfortunately placed semi-colon resulting in all instruction caches being classified as AIVIVT.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
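The bug class, as a hedged sketch; the exact condition and flag are assumptions, the point is the stray semicolon:

    /* buggy sketch: the trailing ';' ends the if, so set_bit() always runs */
    if (l1ip == ICACHE_POLICY_AIVIVT);
            set_bit(ICACHEF_AIVIVT, &__icache_flags);

    /* fixed sketch: only flag genuinely AIVIVT instruction caches */
    if (l1ip == ICACHE_POLICY_AIVIVT)
            set_bit(ICACHEF_AIVIVT, &__icache_flags);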
-
- 12 August 2014 (1 commit)
-
-
By Iyappan Subramanian
This patch adds bindings for the APM X-Gene SoC ethernet driver.
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Ravi Patel <rapatel@apm.com>
Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 August 2014 (2 commits)
-
-
By Andy Lutomirski
The core mm code will provide a default gate area based on FIXADDR_USER_START and FIXADDR_USER_END if !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR). This default is only useful for ia64. arm64, ppc, s390, sh, tile, 64-bit UML, and x86_32 have their own code just to disable it. arm, 32-bit UML, and x86_64 have gate areas, but they have their own implementations. This gets rid of the default and moves the code into ia64. This should save some code on architectures without a gate area: it's now possible to inline the gate_area functions in the default case.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle]
Acked-by: Richard Weinberger <richard@nod.at> [for um]
Acked-by: Will Deacon <will.deacon@arm.com> [for arm64]
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Laura Abbott
Rather than have architectures #define ARCH_HAS_SG_CHAIN in an architecture-specific scatterlist.h, make it a proper Kconfig option and use that instead. At the same time, remove the header files that are now mostly useless and just include asm-generic/scatterlist.h.
[sfr@canb.auug.org.au: powerpc files now need asm/dma.h]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de> [x86]
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [powerpc]
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 August 2014 (1 commit)
-
-
By Nicolas Pitre
The strings used to list IPIs in /proc/interrupts are reused for tracing purposes. While at it, the code is slightly cleaned up so the ipi_types array indices are no longer offset by IPI_RESCHEDULE, whose value is 0 anyway.
Link: http://lkml.kernel.org/p/1406318733-26754-5-git-send-email-nicolas.pitre@linaro.org
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 06 August 2014 (2 commits)
-
-
By Richard Weinberger
Use sigsp() instead of the open-coded variant.
Signed-off-by: Richard Weinberger <richard@nod.at>
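For reference, a sketch of what the generic helper provides (variable names illustrative):

    /* sketch: sigsp() picks the signal stack pointer, switching to the
     * alternate signal stack when SA_ONSTACK applies */
    unsigned long sp = sigsp(regs->sp, ksig);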
-
By Richard Weinberger
Use the more generic functions get_signal() and signal_setup_done() for signal delivery.
Signed-off-by: Richard Weinberger <richard@nod.at>
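A sketch of the generic flow these helpers give the arch code; the frame-setup call is an illustrative placeholder for the arch-specific routine:

    struct ksignal ksig;

    if (get_signal(&ksig)) {
            /* setup_rt_frame() stands in for the arch-specific frame setup */
            int failed = setup_rt_frame(&ksig, regs);
            signal_setup_done(failed, &ksig, 0 /* stepping */);
    }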
-
- 01 August 2014 (2 commits)
-
-
By Mark Rutland
Due to a missing newline in the I-cache policy detection log output, it's possible to get some rather unfortunate output at boot time:
  CPU1: Booted secondary processor
  Detected VIPT I-cache on CPU1CPU2: Booted secondary processor
  Detected VIPT I-cache on CPU2CPU3: Booted secondary processor
  Detected VIPT I-cache on CPU3CPU4: Booted secondary processor
  Detected PIPT I-cache on CPU4CPU5: Booted secondary processor
  Detected PIPT I-cache on CPU5Brought up 6 CPUs
  SMP: Total of 6 processors activated.
This patch adds the missing newline to the format string, cleaning up the output.
Fixes: 59ccc0d4 ("arm64: cachetype: report weakest cache policy")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
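The fix is just the terminating newline in the format string; a sketch with assumed variable names:

    /* sketch: without the trailing \n the next printk concatenates onto this line */
    pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);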
-
By Marc Zyngier
Commit f0a3eaff (ARM64: KVM: fix big endian issue in access_vm_reg for 32bit guest) changed the way we handle CP15 VM accesses, so that all 64bit accesses are done via vcpu_sys_reg. This looks like a good idea as it solves endianness issues in an elegant way, except for one small detail: the register index doesn't refer to the same array! We end up corrupting some random data structure instead. Fix this by reverting to the original code, except for the introduction of a vcpu_cp15_64_high macro that deals with the endianness thing.
Tested on Juno with 32bit SMP guests.
Cc: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 31 July 2014 (4 commits)
-
-
By Marc Zyngier
Commit 72c58395 (arm64: gicv3: Allow GICv3 compilation with older binutils) changed the way we express the GICv3 system registers, but couldn't change the occurrences used by KVM as the code wasn't merged yet. Just fix the accessors.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Will Deacon
This reverts commit a28e3f4b. Ard and Yi Li report that this patch is broken by design, so revert it and let them sort it out for 3.18 instead.
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By byungchul.park
Commit 190f1ca8 ("arm64: add support for kernel mode NEON in interrupt context") introduced a typo in the fpsimd_save_partial_state ENDPROC. This patch fixes the typo.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: byungchul.park <byungchul.park@lge.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
Our break hooks are used to handle brk exceptions from kgdb (and potentially kprobes if that code ever resurfaces), so don't bother calling them if the BRK exception comes from userspace. This prevents userspace from trapping to a kdb shell on systems where kgdb is enabled and active.
Cc: <stable@vger.kernel.org>
Reported-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
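A sketch of the control-flow change; names follow the arm64 debug code of the time, and the SIGTRAP helper is an illustrative placeholder:

    /* sketch: only consult kernel break hooks (kgdb/kprobes) for kernel-mode BRKs */
    static int brk_handler(unsigned long addr, unsigned int esr, struct pt_regs *regs)
    {
            if (user_mode(regs)) {
                    send_user_sigtrap(TRAP_BRKPT);  /* placeholder: deliver SIGTRAP */
                    return 0;
            }

            if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED)
                    return -EFAULT;                 /* no kernel hook claimed this BRK */

            return 0;
    }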
-
- 30 July 2014 (2 commits)
-
-
By Robert Richter
This matches x86 and makes it more convenient to run the arm64 default kernel, as distros like Ubuntu need this option.
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Arun Chandran
Building a kernel with CPU_BIG_ENDIAN fails if there are stale objects from a !CPU_BIG_ENDIAN build. Due to a missing FORCE prerequisite on an if_changed rule in the VDSO Makefile, we attempt to link a stale LE object into the new BE kernel. According to Documentation/kbuild/makefiles.txt, FORCE is required for if_changed rules and forgetting it is a common mistake, so fix it by 'Forcing' the build of the vdso. This patch fixes build errors like these:
  arch/arm64/kernel/vdso/note.o: compiled for a little endian system and target is big endian
  failed to merge target specific data of file arch/arm64/kernel/vdso/note.o
  arch/arm64/kernel/vdso/sigreturn.o: compiled for a little endian system and target is big endian
  failed to merge target specific data of file arch/arm64/kernel/vdso/sigreturn.o
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 29 July 2014 (1 commit)
-
-
By Will Deacon
When running as a kvm guest on a para-virtualised platform, it is useful to have virtio implementations of console, 9pfs and network. This adds these options to the arm64 defconfig, so we can easily run a defconfig kernel build both as host and as a kvm guest.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 28 July 2014 (1 commit)
-
-
By Mikulas Patocka
cryptsetup fails on arm64 when using kernel encryption via an AF_ALG socket. See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
The bug is caused by incorrect handling of unaligned data in arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned on 8 bytes, but not on 16 bytes. It opens an AF_ALG socket and uses the socket to encrypt data in the buffer. The arm64 crypto accelerator causes data corruption or crashes in scatterwalk_pagedone. This patch fixes the bug by passing the residue bytes that were not processed as the last parameter to blkcipher_walk_done.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
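The essence of the fix, sketched with the variable names commonly used in the arm64 AES glue (treat them as assumptions):

    /* sketch: report the unprocessed tail back to the walk instead of
     * claiming that zero bytes remain */
    while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
            /* ... process 'blocks' full blocks with the Crypto Extensions ... */
            err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
    }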
-
- 25 July 2014 (3 commits)
-
-
By Catalin Marinas
GICv3 introduces new system registers accessible with the full msr/mrs syntax (e.g. mrs x0, Sop0_op1_CRm_CRn_op2). However, only recent binutils understand the new syntax. This patch introduces msr_s/mrs_s assembly macros which generate the equivalent instructions above and converts the existing GICv3 code (both drivers/irqchip/ and arch/arm64/kernel/).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
-
By Mark Salter
Under certain loads, this soft lockup has been observed:
  BUG: soft lockup - CPU#2 stuck for 22s! [ip6tables:1016]
  Modules linked in: ip6t_rpfilter ip6t_REJECT cfg80211 rfkill xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw vfat fat efivarfs xfs libcrc32c
  CPU: 2 PID: 1016 Comm: ip6tables Not tainted 3.13.0-0.rc7.30.sa2.aarch64 #1
  task: fffffe03e81d1400 ti: fffffe03f01f8000 task.ti: fffffe03f01f8000
  PC is at __cpu_flush_kern_tlb_range+0xc/0x40
  LR is at __purge_vmap_area_lazy+0x28c/0x3ac
  pc : [<fffffe000009c5cc>] lr : [<fffffe0000182710>] pstate: 80000145
  sp : fffffe03f01fbb70
  x29: fffffe03f01fbb70 x28: fffffe03f01f8000
  x27: fffffe0000b19000 x26: 00000000000000d0
  x25: 000000000000001c x24: fffffe03f01fbc50
  x23: fffffe03f01fbc58 x22: fffffe03f01fbc10
  x21: fffffe0000b2a3f8 x20: 0000000000000802
  x19: fffffe0000b2a3c8 x18: 000003fffdf52710
  x17: 000003ff9d8bb910 x16: fffffe000050fbfc
  x15: 0000000000005735 x14: 000003ff9d7e1a5c
  x13: 0000000000000000 x12: 000003ff9d7e1a5c
  x11: 0000000000000007 x10: fffffe0000c09af0
  x9 : fffffe0000ad1000 x8 : 000000000000005c
  x7 : fffffe03e8624000 x6 : 0000000000000000
  x5 : 0000000000000000 x4 : 0000000000000000
  x3 : fffffe0000c09cc8 x2 : 0000000000000000
  x1 : 000fffffdfffca80 x0 : 000fffffcd742150
The __cpu_flush_kern_tlb_range() function looks like:
  ENTRY(__cpu_flush_kern_tlb_range)
  	dsb	sy
  	lsr	x0, x0, #12
  	lsr	x1, x1, #12
  1:	tlbi	vaae1is, x0
  	add	x0, x0, #1
  	cmp	x0, x1
  	b.lo	1b
  	dsb	sy
  	isb
  	ret
  ENDPROC(__cpu_flush_kern_tlb_range)
The above soft lockup shows the PC at the tlbi insn with:
  x0 = 0x000fffffcd742150
  x1 = 0x000fffffdfffca80
So __cpu_flush_kern_tlb_range has 0x128ba930 tlbi flushes left after it has already been looping for 23 seconds! Looking up one frame at __purge_vmap_area_lazy(), there is:
  ...
  list_for_each_entry_rcu(va, &vmap_area_list, list) {
  	if (va->flags & VM_LAZY_FREE) {
  		if (va->va_start < *start)
  			*start = va->va_start;
  		if (va->va_end > *end)
  			*end = va->va_end;
  		nr += (va->va_end - va->va_start) >> PAGE_SHIFT;
  		list_add_tail(&va->purge_list, &valist);
  		va->flags |= VM_LAZY_FREEING;
  		va->flags &= ~VM_LAZY_FREE;
  	}
  }
  ...
  if (nr || force_flush)
  	flush_tlb_kernel_range(*start, *end);
So if two areas are being freed, the range passed to flush_tlb_kernel_range() may be as large as the vmalloc space. For arm64, this is ~240GB for 4k page size and ~2TB for 64k page size. This patch works around this problem by adding a loop limit. If the range is larger than the limit, use flush_tlb_all() rather than flushing based on individual pages. The limit chosen is arbitrary as the TLB size is implementation specific and not accessible in an architected way. The aim of the arbitrary limit is to avoid soft lockup.
Signed-off-by: Mark Salter <msalter@redhat.com>
[catalin.marinas@arm.com: commit log update]
[catalin.marinas@arm.com: marginal optimisation]
[catalin.marinas@arm.com: changed to MAX_TLB_RANGE and added comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
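The work-around, sketched from the commit description; the exact constant is an assumption (the real limit is arbitrary by design):

    /* sketch: fall back to a full TLB flush when per-page invalidation
     * would take long enough to trigger the soft-lockup detector */
    #define MAX_TLB_RANGE   (1024UL << PAGE_SHIFT)

    static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
    {
            if ((end - start) > MAX_TLB_RANGE) {
                    flush_tlb_all();
                    return;
            }
            __cpu_flush_kern_tlb_range(start, end);  /* per-page tlbi loop as before */
    }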
-
By Andreas Schwab
This fixes the following build failure when building with CONFIG_MODVERSIONS enabled:
  CC [M]  arch/arm64/crypto/aes-glue-ce.o
  ld: cannot find arch/arm64/crypto/aes-glue-ce.o: No such file or directory
  make[1]: *** [arch/arm64/crypto/aes-ce-blk.o] Error 1
  make: *** [arch/arm64/crypto] Error 2
The $(obj)/aes-glue-%.o rule only creates $(obj)/.tmp_aes-glue-ce.o; it should use if_changed_rule instead of if_changed_dep.
Signed-off-by: Andreas Schwab <schwab@suse.de>
[ardb: mention CONFIG_MODVERSIONS in commit log]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-