- 04 March 2016, 1 commit

Committed by Mark Rutland
Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on register names, to allow manual assembly of registers unknown to the toolchain, using tables in sysreg.h.

This interacts poorly with commit 42b55734 ("efi/arm64: Check for h/w support before booting a >4 KB granular kernel"), which is currently queued via the tip tree and uses read_cpuid without a SYS_ prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages are selected.

To avoid this issue when the trees are merged, move the required SYS_ prefixing into read_cpuid, and revert all of the updated callsites to pass plain register names. This effectively reverts the bulk of commit 0f54b14e.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
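A minimal sketch of the resulting shape, reconstructed from the description above rather than quoted from the kernel (SYS_* encodings, __stringify and the mrs_s assembler macro all come from kernel headers): the SYS_ token-pasting moves inside the macro, so callers keep passing plain register names.

```c
/*
 * Hedged sketch: paste the SYS_ prefix inside read_cpuid() so callers
 * can write read_cpuid(ID_AA64MMFR0_EL1) with no prefix. __stringify
 * expands SYS_<reg> to its numeric encoding, which mrs_s assembles
 * into a raw MRS instruction.
 */
#define read_cpuid(reg) ({						\
	u64 __val;							\
	asm("mrs_s %0, " __stringify(SYS_ ## reg) : "=r" (__val));	\
	__val;								\
})
```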

- 25 February 2016, 3 commits

Committed by Suzuki K Poulose
Now that we have a clear understanding of the sign of a feature, rename the routines to reflect the sign, so that they are not misused. cpuid_feature_extract_field() now accepts a 'sign' parameter.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
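An illustrative sketch of what the renamed helpers could look like (the kernel's versions live in arch/arm64/include/asm/cpufeature.h and may differ in detail); ID register fields here are assumed 4 bits wide, shifted to the top of the word and back with the appropriate extension:

```c
#include <linux/types.h>

/* Sign-extend a 4-bit field starting at bit 'field'. */
static inline int
cpuid_feature_extract_signed_field(u64 features, int field)
{
	return (s64)(features << (64 - 4 - field)) >> (64 - 4);
}

/* Zero-extend the same field. */
static inline unsigned int
cpuid_feature_extract_unsigned_field(u64 features, int field)
{
	return (u64)(features << (64 - 4 - field)) >> (64 - 4);
}

/* The generic helper now takes the sign explicitly, per the commit. */
static inline int
cpuid_feature_extract_field(u64 features, int field, bool sign)
{
	return sign ? cpuid_feature_extract_signed_field(features, field)
		    : (int)cpuid_feature_extract_unsigned_field(features, field);
}
```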

Committed by Suzuki K Poulose
Add a hook for checking, during early boot, whether a secondary CPU has the features already used by the kernel, based on the boot CPU, and plug in the check for ASID size.

The ID_AA64MMFR0_EL1:ASIDBits field determines the size of the mm context id and is used early in boot to make decisions. The value is picked up from the boot CPU and cannot be delayed until the other CPUs are up. If a secondary CPU has a smaller ASID size than the boot CPU, things will break horribly, and the usual SANITY check is not good enough to prevent the system from crashing. So, crash the system with enough information.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
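A sketch of what such a hook could look like; the function and variable names (verify_cpu_asid_bits, asid_bits, cpu_panic_kernel) are assumed from context, not quoted from the patch:

```c
/*
 * Hedged sketch, assumed names: asid_bits holds the boot CPU's ASID
 * width. A secondary CPU with a narrower ASID space cannot safely
 * come up, so report it loudly and take the CPU down right away --
 * the usual SANITY checks would run too late.
 */
void verify_cpu_asid_bits(void)
{
	u32 asid = get_cpu_asid_bits();	/* see the helper sketched below */

	if (asid < asid_bits) {
		pr_crit("CPU%d: smaller ASID size (%u) than boot CPU (%u)\n",
			smp_processor_id(), asid, asid_bits);
		cpu_panic_kernel();
	}
}
```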

Committed by Suzuki K Poulose
Add a helper to extract ASIDBits on the current CPU.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
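A sketch of the helper, assuming it follows the architectural encoding of ID_AA64MMFR0_EL1.ASIDBits (0 means 8-bit ASIDs, 2 means 16-bit) and reuses the extraction helpers sketched earlier:

```c
/*
 * Hedged sketch: read ID_AA64MMFR0_EL1 on this CPU and translate the
 * ASIDBits field encoding into a width in bits. Unknown encodings
 * fall back conservatively to 8 bits.
 */
static u32 get_cpu_asid_bits(void)
{
	u32 asid;
	int fld = cpuid_feature_extract_unsigned_field(
			read_cpuid(ID_AA64MMFR0_EL1),
			ID_AA64MMFR0_ASID_SHIFT);

	switch (fld) {
	default:
		pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n",
			smp_processor_id(), fld);
		/* fall through */
	case 0:
		asid = 8;
		break;
	case 2:
		asid = 16;
	}

	return asid;
}
```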

- 18 February 2016, 1 commit

Committed by James Morse
Older assemblers may not support newer feature registers. To get around this, sysreg.h provides an 'mrs_s' macro that takes a register encoding and generates the raw instruction.

Change read_cpuid() to use mrs_s in all cases so that new registers don't have to be a special case. Including sysreg.h means we need to move the include and definition of read_cpuid() after the #ifndef __ASSEMBLY__ to avoid syntax errors in vmlinux.lds.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
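The trick, as a simplified sketch of the sysreg.h approach rather than verbatim source: an .irp loop maps register names to numbers, then an assembler macro emits the raw MRS encoding via .inst, so the system register never has to be known to the assembler by name.

```c
/*
 * Simplified sketch: define __reg_num_x0..__reg_num_x30, then a
 * toplevel 'mrs_s' macro that builds the MRS instruction word from
 * the destination register number and the system-register encoding.
 */
asm(
"	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n"
"	.equ	__reg_num_x\\num, \\num\n"
"	.endr\n"
"	.macro	mrs_s, rt, sreg\n"
"	.inst	0xd5200000|(\\sreg)|(__reg_num_\\rt)\n"
"	.endm\n"
);
```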

- 26 November 2015, 1 commit

Committed by Will Deacon
Under some unusual context-switching patterns, it is possible to end up with multiple threads from the same mm running concurrently with different ASIDs:

1. CPU x schedules task t with mm p containing ASID a and generation g. This task doesn't block and the CPU doesn't context switch, so:

   * per_cpu(active_asid, x) = {g,a}
   * p->context.id = {g,a}

2. Some other CPU generates an ASID rollover. The global generation is now (g + 1). CPU x is still running t, with no context switch, and so per_cpu(reserved_asid, x) = {g,a}.

3. CPU y schedules task t', which shares mm p with t. The generation mismatches, so we take the slow path and hit the reserved ASID from CPU x. p is then updated so that p->context.id = {g + 1, a}.

4. CPU y schedules some other task u, which has an mm != p.

5. Some other CPU generates *another* ASID rollover. The global generation is now (g + 2). CPU x is still running t, with no context switch, and so per_cpu(reserved_asid, x) = {g,a}.

6. CPU y once again schedules task t', but now *fails* to hit the reserved ASID from CPU x because of the generation mismatch. This results in a new ASID being allocated, despite the fact that t is still running on CPU x with the same mm.

Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised between the two threads.

This patch fixes the problem by updating all of the matching reserved ASIDs when we hit on the slow path (i.e. in step 3 above). This keeps the reserved ASIDs in sync with the mm and avoids the problem.

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
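A sketch of the slow-path fix (helper name assumed): when a generation mismatch sends us down the slow path, every per-cpu reserved slot still holding the stale {generation, ASID} pair gets bumped to the new value, not just the first match.

```c
/*
 * Hedged sketch, assumed names: 'asid' is the stale {generation, ASID}
 * value, 'newasid' the same ASID tagged with the current generation.
 * Update *all* matching reserved slots, so CPUs that never context
 * switched (like CPU x in the example above) stay in sync with the mm.
 */
static bool check_update_reserved_asid(u64 asid, u64 newasid)
{
	int cpu;
	bool hit = false;

	for_each_possible_cpu(cpu) {
		if (per_cpu(reserved_asids, cpu) == asid) {
			hit = true;
			per_cpu(reserved_asids, cpu) = newasid;
		}
	}

	return hit;
}
```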

- 07 October 2015, 4 commits

Committed by Will Deacon
mm_cpumask isn't actually used for anything on arm64, so remove all the code trying to keep it up to date.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Committed by Will Deacon
switch_mm performs some checks to try and avoid entering the ASID allocator:

(1) If we're switching to the init_mm (no user mappings), then simply set a reserved TTBR0 value with no page table (the zero page).

(2) If prev == next *and* the mm_cpumask indicates that we've run on this CPU before, then we can skip the allocator.

However, there is plenty of redundancy here. With the new ASID allocator, if prev == next, then we know that our ASID is valid and do not need to worry about re-allocation. Consequently, we can drop the mm_cpumask check in (2) and move the prev == next check before the init_mm check, since if prev == next == init_mm then there's nothing to do.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
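A sketch of the simplified flow, not the full kernel function (cpu_set_reserved_ttbr0 and check_and_switch_context are the allocator entry points named by these commits): with the bitmap allocator, prev == next already implies a valid ASID, so that check moves first and the mm_cpumask test disappears.

```c
static inline void
switch_mm(struct mm_struct *prev, struct mm_struct *next,
	  struct task_struct *tsk)
{
	unsigned int cpu = smp_processor_id();

	/* Covers prev == next == &init_mm as well: nothing to do. */
	if (prev == next)
		return;

	/*
	 * init_mm.pgd has no user mappings and belongs in TTBR1_EL1;
	 * just point TTBR0_EL1 at the reserved (zero) page table.
	 */
	if (next == &init_mm) {
		cpu_set_reserved_ttbr0();
		return;
	}

	check_and_switch_context(next, cpu);
}
```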

Committed by Will Deacon
Our current switch_mm implementation suffers from a number of problems:

(1) The ASID allocator relies on IPIs to synchronise the CPUs on a rollover event.

(2) Because of (1), we cannot allocate ASIDs with interrupts disabled and therefore make use of a TIF_SWITCH_MM flag to postpone the actual switch to finish_arch_post_lock_switch.

(3) We run context switch with a reserved (invalid) TTBR0 value, even though the ASID and pgd are updated atomically.

(4) We take a global spinlock (cpu_asid_lock) during context-switch.

(5) We use h/w broadcast TLB operations when they are not required (e.g. in flush_context).

This patch addresses these problems by rewriting the ASID algorithm to match the bitmap-based arch/arm/ implementation more closely. This in turn allows us to remove much of the complications surrounding switch_mm, including the ugly thread flag.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
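A heavily condensed sketch of the allocation step in such a bitmap scheme (names assumed from the arch/arm/ analogue, details omitted): the context id packs a generation in the upper bits and an ASID in the lower bits; rollover bumps the generation and clears the map, and TLB invalidation is deferred to each CPU locally instead of broadcast by IPI.

```c
/*
 * Hedged sketch, assumed names. Called with cpu_asid_lock held when
 * an mm's generation is stale. On rollover: bump the generation,
 * clear the bitmap (re-marking reserved ASIDs) and flag a deferred
 * local TLB flush per cpu -- no IPIs, no broadcast TLBI here.
 */
static u64 new_context(struct mm_struct *mm)
{
	u64 generation = atomic64_read(&asid_generation);
	u64 asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);

	if (asid == NUM_USER_ASIDS) {
		generation = atomic64_add_return(ASID_FIRST_VERSION,
						 &asid_generation);
		flush_context();	/* clears asid_map, sets tlb_flush_pending */
		asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
	}

	__set_bit(asid, asid_map);
	return asid | generation;
}
```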

Committed by Will Deacon
There are a number of places where a single CPU is running with a private page table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs.

This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
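A sketch of the two local variants, close to the kernel's helpers but reproduced from memory rather than quoted: the non-inner-shareable forms ("vmalle1" rather than "vmalle1is", nsh rather than ish barriers) keep the operation on this CPU.

```c
static inline void local_flush_tlb_all(void)
{
	dsb(nshst);		/* order prior page-table writes, local */
	asm("tlbi vmalle1");	/* local-only, not "vmalle1is" */
	dsb(nsh);
	isb();
}

static inline void __local_flush_icache_all(void)
{
	asm("ic iallu");	/* invalidate all I-cache to PoU, local */
	dsb(nsh);
	isb();			/* implied isb: used synchronously */
}
```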

- 27 July 2015, 1 commit

Committed by Will Deacon
Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages.

This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process.

Signed-off-by: Will Deacon <will.deacon@arm.com>

- 12 June 2015, 1 commit

Committed by Catalin Marinas
After secondary CPU boot or hotplug, the active_mm of the idle thread is &init_mm. The init_mm.pgd (swapper_pg_dir) is only meant for TTBR1_EL1 and must not be set in TTBR0_EL1. Since TTBR0_EL1 is already set to the reserved value when active_mm == &init_mm, there is no need to perform any context reset.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>

- 17 September 2012, 1 commit

Committed by Catalin Marinas
The patch adds support for thread creation and context switching. The context-switching CPU-specific code is introduced with the CPU support patch (part of the arch/arm64/mm/proc.S file). AArch64 supports ASID-tagged TLBs, and the ASID can be either 8 or 16 bits wide (detectable via the ID_AA64MMFR0_EL1 register).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>