- 08 December 2011, 1 commit

Committed by Catalin Marinas
With LPAE, TTBRx registers are 64-bit. The ASID is stored in TTBR0 rather than a separate Context ID register. This patch makes the necessary changes to handle context switching on LPAE. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
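A minimal sketch of what the 64-bit layout implies for the context switch (illustrative only; the in-tree code does this in assembly, and the helper below is hypothetical): the page-table base and the 8-bit ASID are combined into one 64-bit TTBR0 value, with the ASID in bits [55:48], instead of the ASID being written to a separate CONTEXTIDR register.

```c
#include <linux/types.h>

/* Hypothetical helper, for illustration only: build the 64-bit TTBR0 value
 * an LPAE context switch would install.  Attribute and alignment bits in
 * the low part of the register are glossed over here. */
static inline u64 lpae_ttbr0_value(u64 pgd_phys, unsigned int asid)
{
	return (pgd_phys & ((1ULL << 48) - 1)) |	/* translation table base */
	       ((u64)(asid & 0xff) << 48);		/* ASID in TTBR0[55:48]   */
}
```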
- 13 September 2011, 1 commit

Committed by Thomas Gleixner
Annotate the low level hardware locks which must not be preempted. In mainline this change documents the low level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Ingo Molnar <mingo@elte.hu>
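As a rough sketch of what such an annotation looks like (the lock name follows arch/arm/mm/context.c, but treat this as illustrative rather than the exact diff): a lock taken on the low-level context-switch path is declared raw so it can never be turned into a sleeping lock on preempt-rt, and it is taken with the raw_* accessors.

```c
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(cpu_asid_lock);	/* was: DEFINE_SPINLOCK(cpu_asid_lock) */

static void allocate_new_asid_sketch(void)
{
	unsigned long flags;

	/* was: spin_lock_irqsave()/spin_unlock_irqrestore() */
	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
	/* ... hand out a new ASID with IRQs off and preemption disabled ... */
	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
}
```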
- 09 June 2011, 2 commits

Committed by Russell King
This reverts commit 45b95235. Will Deacon reports that: In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID") I updated the ASID rollover code to use only the kernel page tables whilst updating the ASID. Unfortunately, the code to restore the user page tables was part of a later patch which isn't yet in mainline, so this leaves the code quite broken. We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW from ARM, so let's revert these until we can properly sort out what we're doing with the context switching. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King
This reverts commit 52af9c6c. Will Deacon reports that: In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID") I updated the ASID rollover code to use only the kernel page tables whilst updating the ASID. Unfortunately, the code to restore the user page tables was part of a later patch which isn't yet in mainline, so this leaves the code quite broken. We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW from ARM, so let's revert these until we can properly sort out what we're doing with the ARM context switching. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 26 May 2011, 2 commits

Committed by Will Deacon
Now that ASID 0 is no longer used as a reserved value, allow it to be allocated to tasks. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Will Deacon
On ARMv7 CPUs that cache first level page table entries (like the Cortex-A15), using a reserved ASID while changing the TTBR or flushing the TLB is unsafe. This is because the CPU may cache the first level entry as the result of a speculative memory access while the reserved ASID is assigned. After the process owning the page tables dies, the memory will be reallocated and may be written with junk values which can be interpreted as global, valid PTEs by the processor. This will result in the TLB being populated with bogus global entries. This patch avoids the use of a reserved context ID in the v7 switch_mm and ASID rollover code by temporarily using the swapper_pg_dir pointed at by TTBR1, which contains only global entries that are not tagged with ASIDs. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
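A conceptual C rendering of the ordering this patch establishes (the in-tree code is the v7 switch_mm written in assembly; the small CP15 wrappers below exist only to make the sequence readable and are not kernel APIs): while the ASID is being changed, TTBR0 temporarily points at the kernel tables reachable through TTBR1 (swapper_pg_dir), which contain only global entries, so a speculative walk cannot pair a user mapping with the wrong ASID.

```c
static inline unsigned long read_ttbr1(void)
{
	unsigned long v;
	asm volatile("mrc p15, 0, %0, c2, c0, 1" : "=r" (v));	/* TTBR1 */
	return v;
}

static inline void write_ttbr0(unsigned long v)
{
	asm volatile("mcr p15, 0, %0, c2, c0, 0" : : "r" (v) : "memory");	/* TTBR0 */
}

static inline void write_contextidr(unsigned long id)
{
	asm volatile("mcr p15, 0, %0, c13, c0, 1" : : "r" (id) : "memory");	/* CONTEXTIDR */
}

/* Sketch of the switch_mm ordering described above. */
static void switch_mm_via_ttbr1_sketch(unsigned long new_pgd, unsigned int new_asid)
{
	write_ttbr0(read_ttbr1());	/* 1. borrow the global-only kernel tables */
	asm volatile("isb" : : : "memory");
	write_contextidr(new_asid);	/* 2. install the new ASID                 */
	asm volatile("isb" : : : "memory");
	write_ttbr0(new_pgd);		/* 3. switch to the new user tables        */
	asm volatile("isb" : : : "memory");
}
```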
- 16 February 2010, 1 commit

Committed by Catalin Marinas
The current ASID allocation algorithm doesn't ensure the notification of the other CPUs when the ASID rolls over. This may lead to two processes using the same ASID (but different generation) or multiple threads of the same process using different ASIDs. This patch adds the broadcasting of the ASID rollover event to the other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid" during the handling of the broadcast, the ASID numbering now starts at "smp_processor_id() + 1". At rollover, the cpu_last_asid will be set to NR_CPUS. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
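A simplified sketch of the rollover broadcast (the names loosely mirror arch/arm/mm/context.c, but this is an illustration rather than the in-tree code): when the counter wraps, the CPU that notices claims an ASID at offset smp_processor_id() + 1 in the new generation, cross-calls the other CPUs so each re-tags the mm it is currently running, and then advances cpu_last_asid past NR_CPUS so later allocations cannot collide with the per-CPU slots.

```c
#include <linux/smp.h>

static unsigned int cpu_last_asid;	/* upper bits act as the generation ("version") */

/* Runs on every other CPU via smp_call_function(): re-tag the mm that CPU is
 * currently running with ASID (generation | smp_processor_id() + 1) and flush
 * its TLB, so no CPU keeps running with the old generation. */
static void reset_context(void *info)
{
}

/* Called by the CPU that detects the ASID counter wrapping. */
static unsigned int asid_rollover(void)
{
	unsigned int asid = cpu_last_asid + smp_processor_id() + 1;

	smp_call_function(reset_context, NULL, 1);	/* wait for the other CPUs */
	cpu_last_asid += NR_CPUS;			/* skip the per-CPU slots  */

	return asid;
}
```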
- 30 October 2009, 1 commit

Committed by Russell King
Errata 411920 indicates that any "invalidate entire instruction cache" operation can fail if the right conditions are present. This is not limited just to the operations in flush.c; it applies elsewhere as well. Place the workaround in the already existing __flush_icache_all() function instead. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
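A sketch of the resulting shape of the helper (close to, but not claimed to be a verbatim copy of, the kernel header; the name of the workaround routine is an assumption here): every "invalidate entire I-cache" request funnels through __flush_icache_all(), so the erratum 411920 workaround lives in exactly one place instead of at each call site.

```c
/* v6_icache_inval_all() stands for the assembly routine that performs the
 * erratum-safe invalidation sequence. */
#ifdef CONFIG_ARM_ERRATA_411920
extern void v6_icache_inval_all(void);
#define __flush_icache_all()	v6_icache_inval_all()
#else
#define __flush_icache_all()						\
	do {								\
		__asm__("mcr p15, 0, %0, c7, c5, 0  @ invalidate I-cache" \
			: : "r" (0));					\
	} while (0)
#endif
```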
- 24 September 2009, 1 commit

Committed by Rusty Russell
Makes code futureproof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
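An illustrative before/after of the API shift being prepared for (not the exact diff): the deprecated operations poked mm->cpu_vm_mask directly, while the newer cpumask_* operations take a struct cpumask pointer obtained through mm_cpumask(), which keeps working if the field later changes representation.

```c
#include <linux/cpumask.h>
#include <linux/mm_types.h>

static void mark_mm_active_on(struct mm_struct *mm, int cpu)
{
	/* deprecated style:  cpu_set(cpu, mm->cpu_vm_mask); */
	cpumask_set_cpu(cpu, mm_cpumask(mm));	/* pointer-based replacement */
}

static bool mm_active_on(struct mm_struct *mm, int cpu)
{
	/* deprecated style:  return cpu_isset(cpu, mm->cpu_vm_mask); */
	return cpumask_test_cpu(cpu, mm_cpumask(mm));
}
```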
- 09 May 2007, 2 commits

Committed by Catalin Marinas
ARMv7 can have VIPT, PIPT or ASID-tagged VIVT I-cache. This patch adds the necessary invalidation of the I-cache when the ASID numbers are re-used. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
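A sketch of where this fits in the rollover path (illustrative; loosely modelled on the flush done in arch/arm/mm/context.c, with surrounding details elided): when ASID numbers are about to be reused, an ASID-tagged VIVT I-cache may still hold lines tagged for the previous owners of those numbers, so the whole I-cache is invalidated in addition to the TLB.

```c
#include <asm/cachetype.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>

static void flush_context_sketch(void)
{
	local_flush_tlb_all();			/* drop translations from the old generation */
	if (icache_is_vivt_asid_tagged())	/* only ASID-tagged VIVT I-caches need this  */
		__flush_icache_all();
}
```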

Committed by Russell King
Close a hole in the ASID version switch, particularly the following scenario:

    CPU0  MM  PID                  CPU1  MM  PID
              idle                       A   pid(A)
                                         A   idle(lazy tlb)
       * new asid version triggered by B *
          B   pid(B)
          A   pid(A)
       * MM A gets new asid version *
          A   idle(lazy tlb)             A   pid(A)
       * CPU1 doesn't see the new ASID *

The result is that CPU1 continues running with the hardware set for the original (stale) ASID value, but mm->context.id contains the new ASID value. The consequence is that the next MM fault on CPU1 updates the page table entries, but flush_tlb_page() fails because of the wrong ASID. There is a related case where a threaded application is allocated a new ASID on one CPU while another of its threads is running on some different CPU; that scenario is not fixed by this commit. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 08 February 2007, 1 commit

Committed by Catalin Marinas
On newer architectures (ARMv6, ARMv7), the depth of the prefetch and branch prediction is implementation defined and there is a small risk of wrong ASID tagging when changing TTBR0 before setting the new context id. The recommended solution is to set a reserved ASID during TTBR changing. This patch reserves ASID 0. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
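A conceptual rendering of the recommended sequence (the in-tree change is in the processor-specific switch_mm assembly; this inline-asm sketch only shows the ordering): the reserved ASID 0 is installed while TTBR0 is being changed, so any speculative fetch in that window is tagged with an ASID no task runs with, and the real ASID is written only once the new tables are live.

```c
static inline void switch_ttbr0_with_reserved_asid(unsigned long ttbr0, unsigned int asid)
{
	unsigned int zero = 0;

	asm volatile(
	"	mcr	p15, 0, %0, c13, c0, 1	@ CONTEXTIDR = 0 (reserved ASID)\n"
	"	mcr	p15, 0, %0, c7, c5, 4	@ ISB\n"
	"	mcr	p15, 0, %1, c2, c0, 0	@ TTBR0 = new translation table base\n"
	"	mcr	p15, 0, %0, c7, c5, 4	@ ISB\n"
	"	mcr	p15, 0, %2, c13, c0, 1	@ CONTEXTIDR = new ASID\n"
	: : "r" (zero), "r" (ttbr0), "r" (asid) : "memory");
}
```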
- 20 September 2006, 1 commit

Committed by Russell King
Rename mmu.c to context.c - it's the ARMv6 ASID context handling code rather than generic "mmu" handling code. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 17 April 2005, 1 commit

Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!