- 05 Dec 2015, 1 commit
-
-
By Catalin Marinas
When a kernel is built with CONFIG_TRACE_IRQFLAGS the following warning is produced when entering userspace for the first time:

  WARNING: at /work/Linux/linux-2.6-aarch64/kernel/locking/lockdep.c:3519
  Modules linked in:
  CPU: 1 PID: 1 Comm: systemd Not tainted 4.4.0-rc3+ #639
  Hardware name: Juno (DT)
  task: ffffffc9768a0000 ti: ffffffc9768a8000 task.ti: ffffffc9768a8000
  PC is at check_flags.part.22+0x19c/0x1a8
  LR is at check_flags.part.22+0x19c/0x1a8
  pc : [<ffffffc0000fba6c>] lr : [<ffffffc0000fba6c>] pstate: 600001c5
  sp : ffffffc9768abe10
  x29: ffffffc9768abe10 x28: ffffffc9768a8000 x27: 0000000000000000
  x26: 0000000000000001 x25: 00000000000000a6 x24: ffffffc00064be6c
  x23: ffffffc0009f249e x22: ffffffc9768a0000 x21: ffffffc97fea5480
  x20: 00000000000001c0 x19: ffffffc00169a000 x18: 0000005558cc7b58
  x17: 0000007fb78e3180 x16: 0000005558d2e238 x15: ffffffffffffffff
  x14: 0ffffffffffffffd x13: 0000000000000008 x12: 0101010101010101
  x11: 7f7f7f7f7f7f7f7f x10: fefefefefefeff63 x9 : 7f7f7f7f7f7f7f7f
  x8 : 6e655f7371726964 x7 : 0000000000000001 x6 : ffffffc0001079c4
  x5 : 0000000000000000 x4 : 0000000000000001 x3 : ffffffc001698438
  x2 : 0000000000000000 x1 : ffffffc9768a0000 x0 : 000000000000002e
  Call trace:
  [<ffffffc0000fba6c>] check_flags.part.22+0x19c/0x1a8
  [<ffffffc0000fc440>] lock_is_held+0x80/0x98
  [<ffffffc00064bafc>] __schedule+0x404/0x730
  [<ffffffc00064be6c>] schedule+0x44/0xb8
  [<ffffffc000085bb0>] ret_to_user+0x0/0x24
  possible reason: unannotated irqs-off.
  irq event stamp: 502169
  hardirqs last enabled at (502169): [<ffffffc000085a98>] el0_irq_naked+0x1c/0x24
  hardirqs last disabled at (502167): [<ffffffc0000bb3bc>] __do_softirq+0x17c/0x298
  softirqs last enabled at (502168): [<ffffffc0000bb43c>] __do_softirq+0x1fc/0x298
  softirqs last disabled at (502143): [<ffffffc0000bb830>] irq_exit+0xa0/0xf0

This happens because we disable interrupts in ret_to_user before calling schedule() in work_resched. This patch adds the necessary trace_hardirqs_off annotation.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 04 Dec 2015, 4 commits
-
-
By Li Bin
There is no need to worry about module and __init text disappearing. ftrace has a module notifier that is called when a module is being unloaded, before its text goes away; it grabs the ftrace_lock mutex and removes the module's functions from the ftrace list, so ftrace will no longer make any modifications to that module's text. The updates that set functions to be traced or not are done under the ftrace_lock mutex as well. And __init section code should not be modified by ftrace at all, because it is blacklisted in recordmcount.c and ignored by ftrace.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Li Bin
For ftrace on arm64, kstop_machine, which is hugely disruptive to a running system, is not needed to convert NOPs to ftrace calls or back: the instructions to be modified (NOP, B or BL) are all safe for "concurrent modification and execution of instructions", i.e. they can be executed by one thread of execution while being modified by another thread of execution, without requiring explicit synchronization.

Signed-off-by: Li Bin <huawei.libin@huawei.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
Boqun Feng reported a rather nasty ordering issue with spin_unlock_wait on architectures implementing spin_lock with LL/SC sequences and acquire semantics:

  | CPU 1                   CPU 2                 CPU 3
  | ==================      ====================  ==============
  |                                               spin_unlock(&lock);
  |                         spin_lock(&lock):
  |                           r1 = *lock; // r1 == 0;
  |                         o = READ_ONCE(object); // reordered here
  | object = NULL;
  | smp_mb();
  | spin_unlock_wait(&lock);
  |                         *lock = 1;
  | smp_mb();
  | o->dead = true;
  |                         if (o) // true
  |                           BUG_ON(o->dead); // true!!

The crux of the problem is that spin_unlock_wait(&lock) can return on CPU 1 whilst CPU 2 is in the process of taking the lock. This can be resolved by upgrading spin_unlock_wait to a LOCK operation, forcing it to serialise against a concurrent locker and giving it acquire semantics in the process (although it is not at all clear whether this is needed - different callers seem to assume different things about the barrier semantics and architectures are similarly disjoint in their implementations of the macro).

This patch implements spin_unlock_wait using an LL/SC sequence with acquire semantics on arm64. For v8.1 systems with the LSE atomics, the exclusive writeback is omitted, since the spin_lock operation is indivisible and no intermediate state can be observed.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
arm64 relies on the arm_arch_timer for sched_clock, so we can select HAVE_IRQ_TIME_ACCOUNTING and have the core sched-clock code enable the feature at runtime based on the rate.

Reported-by: Mario Smarduch <m.smarduch@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 02 Dec 2015, 2 commits
-
-
By Yury Norov
ARM glibc uses (4 * __getpagesize()) for SHMLBA, which is correct for 4KB pages and works fine for 64KB pages, but the kernel uses a hardcoded 16KB that is too small for 64KB page based kernels. This changes the definition to what user space sees when using 64KB pages.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
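The change itself is essentially a one-liner in arch/arm64/include/asm/shmparam.h (a sketch; with 4KB pages this still evaluates to the old 16KB value, while with 64KB pages it grows to 256KB, matching what compat user space expects):

  /* Matches ARM glibc's (4 * __getpagesize()) for compat (AArch32) tasks */
  #define COMPAT_SHMLBA   (4 * PAGE_SIZE)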
-
By Jisheng Zhang
These functions/variables are not needed after booting, so mark them as __init or __initdata.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Dec 2015, 3 commits
-
-
By Will Deacon
This patch implements the pte_accessible() macro, which can be used to test whether or not a given pte is a candidate for allocation in the TLB.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
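The macro amounts to something like the following (a sketch of the arm64 definition):

  /*
   * A pte is a TLB-allocation candidate if it is valid and young, or,
   * while a deferred TLB flush is pending, if it is merely present
   * (an old pte may still be live in the TLB until that flush).
   */
  #define pte_valid_young(pte) \
      ((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))

  #define pte_accessible(mm, pte) \
      (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid_young(pte))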
-
By Mark Rutland
Callees of __create_mapping may decide to create section mappings if sufficient low bits of the physical and virtual addresses they were passed are zero. While __create_mapping rounds the virtual base address down, it does not similarly round the physical base address down, and hence non-zero bits in the physical address can prevent use of a section mapping, even where a whole next-level table would be used instead.

Round down the physical base address in __create_mapping to enable all callees to always create section mappings when such a mapping is possible.

Cc: Laura Abbott <labbott@fedoraproject.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Mark Rutland
If a caller of __create_mapping provides a PA and VA which have different sub-page offsets, it is not clear which offset they expect to apply to the mapping, and this is indicative of a bad caller.

In some cases, the region we wish to map may validly have a sub-page offset in the physical and virtual addresses. For example, EFI runtime regions have 4K granularity, yet may be mapped by a 64K page kernel. So long as the physical and virtual offsets are the same, the region will be mapped at the expected VAs.

Disallow calls with differing sub-page offsets, and WARN when they are encountered, so that we can detect and fix such cases.

Cc: Laura Abbott <labbott@fedoraproject.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 27 Nov 2015, 6 commits
-
-
By Ard Biesheuvel
Even though initcall return values are typically ignored, the prototype is to return 0 on success or a negative errno value on error. So fix the arm_enable_runtime_services() implementation to return 0 on conditions that are not in fact errors, and return a meaningful error code otherwise.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Ard Biesheuvel
Add NULL return value checks to two invocations of early_memremap() in the UEFI init code. For the UEFI configuration tables, we just warn, since we have a better chance of being able to report the issue in a way that can actually be noticed by a human operator if we don't abort right away. For the UEFI memory map, however, all we can do is panic(), since we cannot proceed without a description of memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Suzuki K. Poulose
ID_AA64DFR0_EL1: BRPs and WRPs are unsigned values. Use the appropriate helpers to extract those fields.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Suzuki K. Poulose
Some of the feature bits have unsigned values and need to be treated accordingly to avoid errors. This patch adds the signedness property to the feature bits and uses the appropriate field extract helpers.

Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Boris Ostrovsky
After commit 8c058b0b ("x86/irq: Probe for PIC presence before allocating descs for legacy IRQs"), early_irq_init() will no longer preallocate descriptors for legacy interrupts if a PIC does not exist, which is the case for Xen PV guests. Therefore we may need to allocate those descriptors ourselves.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
By Suzuki K. Poulose
cpuid_feature_extract_field() extracts a feature value as a signed integer. This can be problematic for features whose values are unsigned, e.g. ID_AA64DFR0_EL1:BRPs. Add an unsigned variant for the unsigned fields.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 Nov 2015, 3 commits
-
-
By Catalin Marinas
This reverts commit 348a65cd. Incorrect page table manipulation that does not respect the ARM ARM recommended break-before-make sequence may lead to TLB conflicts. The contiguous PTE patch makes the system even more susceptible to such errors by changing the mapping from a single page to a contiguous range of pages. An additional TLB invalidation would reduce the risk window; however, the correct fix is to switch to a temporary swapper_pg_dir. Once the correct workaround is done, the reverted commit will be re-applied.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jeremy Linton <jeremy.linton@arm.com>
-
By Will Deacon
Under some unusual context-switching patterns, it is possible to end up with multiple threads from the same mm running concurrently with different ASIDs:

1. CPU x schedules task t with mm p containing ASID a and generation g.
   This task doesn't block and the CPU doesn't context switch, so:

     * per_cpu(active_asid, x) = {g,a}
     * p->context.id = {g,a}

2. Some other CPU generates an ASID rollover. The global generation is
   now (g + 1). CPU x is still running t, with no context switch, and so
   per_cpu(reserved_asid, x) = {g,a}.

3. CPU y schedules task t', which shares mm p with t. The generation
   mismatches, so we take the slowpath and hit the reserved ASID from
   CPU x. p is then updated so that p->context.id = {g + 1,a}.

4. CPU y schedules some other task u, which has an mm != p.

5. Some other CPU generates *another* ASID rollover. The global
   generation is now (g + 2). CPU x is still running t, with no context
   switch, and so per_cpu(reserved_asid, x) = {g,a}.

6. CPU y once again schedules task t', but now *fails* to hit the
   reserved ASID from CPU x because of the generation mismatch. This
   results in a new ASID being allocated, despite the fact that t is
   still running on CPU x with the same mm.

Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised between the two threads.

This patch fixes the problem by updating all of the matching reserved ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps the reserved ASIDs in sync with the mm and avoids the problem.

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Andrey Ryabinin
On KASAN + 16K_PAGES + 48BIT_VA:

  arch/arm64/mm/kasan_init.c: In function ‘kasan_early_init’:
  include/linux/compiler.h:484:38: error: call to ‘__compiletime_assert_95’ declared with attribute error: BUILD_BUG_ON failed: !IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE)
    _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)

Currently KASAN will not work on 16K_PAGES and 48BIT_VA, so forbid such a configuration to avoid the above build failure.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Nov 2015, 7 commits
-
-
By Mark Rutland
The kernel may use a page granularity of 4K, 16K, or 64K depending on configuration.

When mapping EFI runtime regions, we use memrange_efi_to_native to round the physical base address of a region down to a kernel page boundary, and round the size up to a kernel page boundary, adding the residue left over from rounding down the physical base address. We do not round down the virtual base address.

In __create_mapping we account for the offset of the virtual base from a granule boundary, adding the residue to the size before rounding the base down to said granule boundary.

Thus we account for the residue twice, and when the residue is non-zero will cause __create_mapping to map an additional page at the end of the region. Depending on the memory map, this page may be in a region we are not intended/permitted to map, or may clash with a different region that we wish to map. In typical cases, mapping the next item in the memory map will overwrite the erroneously created entry, as we sort the memory map in the stub.

As __create_mapping can cope with base addresses which are not page aligned, we can instead rely on it to map the region appropriately, and simplify efi_virtmap_init by removing the unnecessary code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Mark Rutland
We are missing descriptions for some valid xFSC values in the fault info table (e.g. "TLB conflict abort"), and have erroneous descriptions for reserved values (e.g. "asynchronous external abort", "debug event"). This patch adds the missing xFSC values, and removes erroneous decoding of values reserved by the architecture, as described in ARM DDI 0487A.h. At the same time, fix the unbalanced brackets for the synchronous parity error strings in the table.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Suzuki K. Poulose
In early_alloc we check whether memblock_alloc failed by testing the virtual address of the result, a check that can never trigger because __va() applied to a NULL physical address still yields a non-NULL pointer. This patch fixes it to check the actual (physical) result for failure.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Mark Rutland
If we call __kvm_hyp_panic while a guest context is active, we call __restore_sysregs before acquiring the system register values for the panic, in the process throwing away the PAR_EL1 value at the point of the panic.

This patch modifies __kvm_hyp_panic to stash the PAR_EL1 value prior to restoring host register values, enabling us to report the original values at the point of the panic.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Mark Rutland
Currently __kvm_hyp_panic uses %p for values which are not pointers, such as the ESR value. This can confusingly lead to "(null)" being printed for the value. Use %x instead, and only use %p for host pointers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Marc Zyngier
Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults when a Stage 1 permission fault or device alignment fault should have been reported. This patch implements the workaround (which is to validate that the Stage 1 translation actually succeeds) by using code patching.

Cc: stable@vger.kernel.org
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
By Marc Zyngier
When running a 32bit guest under a 64bit hypervisor, the ARMv8 architecture defines a mapping of the 32bit registers in the 64bit space. This includes banked registers that are being demultiplexed over the 64bit ones.

On exceptions caused by an operation involving a 32bit register, the HW exposes the register number in the ESR_EL2 register. It was so far understood that SW had to distinguish between AArch32 and AArch64 accesses (based on the current AArch32 mode and register number).

It turns out that I misinterpreted the ARM ARM, and the clue is in D1.20.1: "For some exceptions, the exception syndrome given in the ESR_ELx identifies one or more register numbers from the issued instruction that generated the exception. Where the exception is taken from an Exception level using AArch32 these register numbers give the AArch64 view of the register."

Which means that the HW is already giving us the translated version, and that we shouldn't try to interpret it at all (for example, doing an MMIO operation from the IRQ mode using the LR register leads to very unexpected behaviours).

The fix is thus not to perform a call to vcpu_reg32() at all from vcpu_reg(), and to use whatever register number is supplied directly. The only case we need to find out about the mapping is when we actively generate a register access, which only occurs when injecting a fault in a guest.

Cc: stable@vger.kernel.org
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 20 Nov 2015, 1 commit
-
-
By Yang Shi
As previously reported, some userspace applications depend on the bogomips value shown in /proc/cpuinfo. Although there is much less legacy impact on aarch64 than on arm, it does break libvirt.

This patch reverts commit 326b16db ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo"), but with some tweaks due to context change and without the pr_info().

Fixes: 326b16db ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo")
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.12+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 Nov 2015, 1 commit
-
-
By Will Deacon
A newly introduced function in include/net/sock.h passes a const argument to smp_load_acquire:

  static inline int sk_state_load(const struct sock *sk)
  {
      return smp_load_acquire(&sk->sk_state);
  }

This causes an allmodconfig build failure, since our underlying load-acquire implementation does not handle const types correctly:

  include/net/sock.h: In function 'sk_state_load':
  ./arch/arm64/include/asm/barrier.h:71:3: error: read-only variable '___p1' used as 'asm' output
    asm volatile ("ldarb %w0, %1" \

This patch fixes the problem by reusing the trick in READ_ONCE that loads via a non-const member of an anonymous union. This has the advantage of allowing us to use smp_load_acquire on packed structures (e.g. arch_spinlock_t) as well as primitive types.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Daney <david.daney@cavium.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Reported-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
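Trimmed to the 32-bit and 64-bit cases, the resulting macro looks roughly like this; the asm output targets a non-const member of an anonymous union, so const (and packed) types both work:

  #define smp_load_acquire(p) \
  ({ \
      union { typeof(*p) __val; char __c[1]; } __u; \
      compiletime_assert_atomic_type(*p); \
      switch (sizeof(*p)) { \
      case 4: \
          asm volatile ("ldar %w0, %1" \
              : "=r" (*(__u32 *)__u.__c) \
              : "Q" (*p) : "memory"); \
          break; \
      case 8: \
          asm volatile ("ldar %0, %1" \
              : "=r" (*(__u64 *)__u.__c) \
              : "Q" (*p) : "memory"); \
          break; \
      } \
      __u.__val; \
  })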
-
- 18 Nov 2015, 5 commits
-
-
By Laura Abbott
The permissions in mark_rodata_ro trigger a build error with STRICT_MM_TYPECHECKS. Fix this by introducing PAGE_KERNEL_ROX for the same reasons as PAGE_KERNEL_RO. From Ard:

"PAGE_KERNEL_EXEC has PTE_WRITE set as well, making the range writeable under the ARMv8.1 DBM feature, that manages the dirty bit in hardware (writing to a page with the PTE_RDONLY and PTE_WRITE bits both set will clear the PTE_RDONLY bit in that case)"

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
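The new protection is PAGE_KERNEL_RO without PTE_PXN, and crucially without PTE_WRITE (a sketch; exact bit composition per arch/arm64/include/asm/pgtable.h of that era):

  #define PAGE_KERNEL_RO  __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | \
                                   PTE_DIRTY | PTE_RDONLY)
  /* Kernel-executable, and read-only even under ARMv8.1 DBM */
  #define PAGE_KERNEL_ROX __pgprot(_PAGE_DEFAULT | PTE_UXN | \
                                   PTE_DIRTY | PTE_RDONLY)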
-
By Ard Biesheuvel
The asynchronous, merged implementations of AES in CBC, CTR and XTS modes are preferred when available (i.e., when instantiating ablkciphers explicitly). However, the synchronous core AES cipher combined with the generic CBC mode implementation will produce a 'cbc(aes)' blkcipher that is callable asynchronously as well. To prevent this implementation from being used when the accelerated asynchronous implementation is also available, lower its priority to 250 (i.e., below the asynchronous module's priority of 300).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Ard Biesheuvel
As pointed out by Russell King in response to the proposed ARM version of this code, the sequence to switch between the UEFI runtime mapping and current's actual userland mapping (and vice versa) is potentially unsafe, since it leaves a time window between the switch to the new page tables and the TLB flush where speculative accesses may hit on stale global TLB entries.

So instead, use non-global mappings, and perform the switch via the ordinary ASID-aware context switch routines.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Yang Shi
Save and restore FP/LR in the BPF prog prologue and epilogue, and save SP to FP in the prologue, in order to get a correct stack backtrace.

However, the ARM64 JIT used FP (x29) as the eBPF fp register. FP is subject to change during function calls, so it may cause the BPF prog stack base address to change too. Use x25 to replace FP as the BPF stack base register (fp). Since x25 is a callee saved register, it will stay intact across function calls. It is initialized in the BPF prog prologue each time the prog starts to run. Save and restore x25/x26 in the BPF prologue and epilogue to keep them intact for the code outside of BPF. Actually, x26 is unnecessary, but SP requires 16-byte alignment.

So the BPF stack layout looks like:

                                 high
         original A64_SP =>   0:+-----+ BPF prologue
                                |FP/LR|
         current A64_FP =>  -16:+-----+
                                | ... | callee saved registers
                                +-----+
                                |     | x25/x26
         BPF fp register => -80:+-----+
                                |     |
                                | ... | BPF prog stack
                                |     |
                                |     |
         current A64_SP =>      +-----+
                                |     |
                                | ... | Function call stack
                                |     |
                                +-----+
                                 low

CC: Zi Shen Lim <zlim.lnx@gmail.com>
CC: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Lorenzo Pieralisi
The function graph tracer adds instrumentation that is required to trace both entry and exit of a function. In particular the function graph tracer updates the "return address" of a function in order to insert a trace callback on function exit.

Kernel power management functions like cpu_suspend() are called upon power down entry with functions called "finishers" that are in turn called to trigger the power down sequence, but they may not return to the kernel through the normal return path. When the core resumes from low-power it returns to the cpu_suspend() function through the cpu_resume path, which leaves the trace stack frame set up by the function tracer in an inconsistent state upon return to the kernel when tracing is enabled.

This patch fixes the issue by pausing/resuming the function graph tracer on the thread executing cpu_suspend() (i.e. the function call that subsequently triggers the "suspend finishers"), so that the function graph tracer state is kept consistent across functions that enter power down states and never return, by effectively disabling the graph tracer while they are executing.

Fixes: 819e50e2 ("arm64: Add ftrace support")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.16+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 17 Nov 2015, 5 commits
-
-
By Arnd Bergmann
Including ptrace.h brings a definition of BITS_PER_PAGE into device drivers and causes a build warning in allmodconfig builds:

  drivers/block/drbd/drbd_bitmap.c:482:0: warning: "BITS_PER_PAGE" redefined
   #define BITS_PER_PAGE  (1UL << (PAGE_SHIFT + 3))

This uses a slightly different way to express current_pt_regs() that avoids the use of that header and gets away with the already included asm/ptrace.h.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Arnd Bergmann
Including linux/acpi.h from asm/dma-mapping.h causes tons of compile-time warnings, e.g.:

  drivers/isdn/mISDN/dsp_ecdis.h:43:0: warning: "FALSE" redefined
  drivers/isdn/mISDN/dsp_ecdis.h:44:0: warning: "TRUE" redefined
  drivers/net/fddi/skfp/h/targetos.h:62:0: warning: "TRUE" redefined
  drivers/net/fddi/skfp/h/targetos.h:63:0: warning: "FALSE" redefined

However, it looks like the dependency should not even be there, as I do not see why __generic_dma_ops() cares about whether we have an ACPI-based system or not.

The current behavior is to fall back to the global dma_ops when a device has not set its own dma_ops, but only for DT-based systems. This seems dangerous, as a random device might have different requirements regarding IOMMU or coherency, so we should really never have that fallback and just forbid DMA when we have not initialized DMA for a device.

This removes the global dma_ops variable and the special-casing for ACPI, and just returns the dma ops that got set for the device, or the dummy_dma_ops if none were present.

The original code has apparently been copied from arm32, where we rely on it for ISA devices and things like the floppy controller, but we should have no such devices on ARM64.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[catalin.marinas@arm.com: removed acpi_disabled check in arch_setup_dma_ops()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Ard Biesheuvel
When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA and resides at an offset that is not a multiple of 512 MB, the rounding that occurs in __map_memblock() and fixup_executable() results in incorrect regions being mapped.

The following snippet from /sys/kernel/debug/kernel_page_tables shows how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000, the first 2 MB of memory (which may be inaccessible from non-secure EL1 or just reserved by the firmware) is inadvertently mapped into the end of the module region:

  ---[ Modules start ]---
  0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
  ---[ Modules end ]---
  ---[ Kernel Mapping ]---
  0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
  0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
  0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
  0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
  0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
  0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL

The same issue is likely to occur on 16k pages kernels whose load address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.

Fixes: da141706 ("arm64: add better page protections to arm64")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: <stable@vger.kernel.org> # 4.0+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Daniel Borkmann
While recently going over ARM64's BPF code, I noticed that the icache range we're flushing should start at header already and not at ctx.image. Reason is that after b569c1c6 ("net: bpf: arm64: address randomize and write protect JIT code"), we also want to make sure to flush the random-sized trap in front of the start of the actual program (analogous to x86). No operational differences from user side.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Yang Shi
BPF fp should point to the top of the BPF prog stack. The original implementation incorrectly made it point to the bottom. Move A64_SP to fp before reserving BPF prog stack space.

CC: Zi Shen Lim <zlim.lnx@gmail.com>
CC: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Nov 2015, 1 commit
-
-
By Robin Murphy
The iommu-dma layer does its own size-alignment for coherent DMA allocations based on IOMMU page sizes, but we still need to consider CPU page sizes for the cases where a non-cacheable CPU mapping is created. Whilst everything on the alloc/map path seems to implicitly align things enough to make it work, some functions used by the corresponding unmap/free path do not, which leads to problems freeing odd-sized allocations. Either way it's something we really should be handling explicitly, so do that to make both paths suitably robust.

Reported-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 Nov 2015, 1 commit
-
-
By Jisheng Zhang
hw_breakpoint_restore is only used within suspend.c, so it can be declared static.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-