- 04 March 2013, 1 commit
-
-
Committed by Will Deacon

mm->context.id is updated under asid_lock when a new ASID is allocated to an mm_struct. However, it is also read without the lock when a task is being scheduled and checking whether or not the current ASID generation is up-to-date.

If two threads of the same process are being scheduled in parallel and the bottom bits of the generation in their mm->context.id match the current generation (that is, the mm_struct has not been used for ~2^24 rollovers), then the non-atomic, lockless access to mm->context.id may yield the incorrect ASID.

This patch fixes the issue by making mm->context.id an atomic64_t, ensuring that the generation is always read consistently. For code that only requires access to the ASID bits (e.g. TLB flushing by mm), the value is accessed directly, which GCC converts to an ldrb.

Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
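The race exists because the generation and the ASID share the single mm->context.id word but were read non-atomically. Below is a minimal userspace sketch of the fixed layout, assuming an 8-bit ASID in the low byte and the generation in the remaining bits; the struct and helper names are illustrative stand-ins, not the kernel's.

```c
/*
 * Illustrative model only: a 64-bit context ID whose low byte holds the
 * hardware ASID and whose upper bits hold the rollover generation.
 * Loading the whole word atomically keeps the two fields consistent
 * even when asid_lock is not held.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define ASID_MASK 0xffULL                       /* low byte: hardware ASID */

struct mm_context { _Atomic uint64_t id; };     /* stand-in for mm->context */

static uint64_t generation_of(uint64_t id)
{
        return id >> 8;                         /* upper bits: generation */
}

static unsigned int asid_of(struct mm_context *c)
{
        /* Callers that only need the ASID bits can read the low byte alone. */
        return (unsigned int)(atomic_load_explicit(&c->id,
                                                   memory_order_relaxed) & ASID_MASK);
}

int main(void)
{
        struct mm_context ctx = { (3ULL << 8) | 0x2a };  /* generation 3, ASID 0x2a */
        uint64_t snap = atomic_load(&ctx.id);            /* one consistent 64-bit read */

        printf("generation %llu, ASID %#x\n",
               (unsigned long long)generation_of(snap), asid_of(&ctx));
        return 0;
}
```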
-
- 26 November 2012, 1 commit
-
-
Committed by Nicolas Pitre

The kvm_seq value has nothing to do whatsoever with this other KVM. Given that KVM support on ARM is imminent, it's best to rename kvm_seq into something else to clearly identify what it is about, i.e. a sequence number for vmalloc section mappings.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 06 November 2012, 1 commit
-
-
Committed by Will Deacon

ASIDs are allocated to MMU contexts based on a rolling counter. This means that after 255 allocations we must invalidate all existing ASIDs via an expensive IPI mechanism to synchronise all of the online CPUs and ensure that all tasks execute with an ASID from the new generation.

This patch changes the rollover behaviour so that we rely instead on the hardware broadcasting of the TLB invalidation to avoid the IPI calls. This works by keeping track of the active ASID on each core, which is then reserved in the case of a rollover so that currently scheduled tasks can continue to run. For cores without hardware TLB broadcasting, we keep track of pending flushes in a cpumask, so cores can flush their local TLB before scheduling a new mm.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
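To make the new scheme concrete, here is a compact single-file model of the rollover path described above; the constants, per-CPU arrays and function names are illustrative stand-ins, and the reservation of currently active ASIDs at rollover is omitted for brevity. The point it shows is that a rollover bumps the generation and marks every CPU for a local TLB flush instead of sending an IPI.

```c
/* Simplified model of generation-based ASID allocation without rollover IPIs. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS    4
#define ASID_BITS  8
#define NUM_ASIDS  (1u << ASID_BITS)

static uint64_t generation = NUM_ASIDS;      /* advances in steps of NUM_ASIDS  */
static uint64_t next_asid  = 1;              /* ASID 0 is kept reserved         */
static uint64_t active_asid[NR_CPUS];        /* ASID currently live on each CPU */
static bool     tlb_flush_pending[NR_CPUS];  /* replaces the rollover IPI       */

struct mm { uint64_t context_id; };          /* generation | ASID               */

static uint64_t new_context(struct mm *mm)
{
        if (next_asid >= NUM_ASIDS) {        /* rollover: start a new generation */
                generation += NUM_ASIDS;
                next_asid = 1;
                for (int cpu = 0; cpu < NR_CPUS; cpu++)
                        tlb_flush_pending[cpu] = true;   /* flush lazily, no IPI */
        }
        return mm->context_id = generation | next_asid++;
}

static void check_and_switch_context(struct mm *mm, int cpu)
{
        if ((mm->context_id ^ generation) >> ASID_BITS)  /* stale generation?    */
                new_context(mm);
        if (tlb_flush_pending[cpu]) {
                tlb_flush_pending[cpu] = false;
                printf("CPU%d: local TLB flush before running new mm\n", cpu);
        }
        active_asid[cpu] = mm->context_id;               /* remember the live ASID */
}

int main(void)
{
        struct mm a = {0}, b = {0};
        check_and_switch_context(&a, 0);
        check_and_switch_context(&b, 1);
        printf("mm A id=%#llx, mm B id=%#llx\n",
               (unsigned long long)a.context_id, (unsigned long long)b.context_id);
        return 0;
}
```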
-
- 17 April 2012, 3 commits
-
-
Committed by Catalin Marinas

This patch removes the __ARCH_WANT_INTERRUPTS_ON_CTXSW definition for ARMv5 and earlier processors. On such processors, the context switch requires a full cache flush. To avoid high interrupt latencies, this patch defers the mm switching to the post-lock switch hook if the interrupts are disabled.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

The current_mm variable was used to store the new mm between the switch_mm() and switch_to() calls, where an IPI to reset the context could have set the wrong mm. Since the interrupts are disabled during context switch, there is no need for this variable: current->active_mm already points to the current mm when interrupts are re-enabled.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

Since the ASIDs must be unique to an mm across all the CPUs in a system, the __new_context() function needs to broadcast a context reset event to all the CPUs during ASID allocation if a roll-over occurred. Such IPIs cannot be issued with interrupts disabled, and ARM had to define __ARCH_WANT_INTERRUPTS_ON_CTXSW.

This patch changes the check_context() function to check_and_switch_context(), called from switch_mm(). In the case of ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and the interrupts are disabled, it defers the __new_context() and cpu_switch_mm() calls to the post-lock switch hook where the interrupts are enabled. Setting the reserved TTBR0 was also moved to check_and_switch_context() from cpu_v7_switch_mm().

Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
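A toy model of that deferral, with made-up helper names and a single CPU, is sketched below: when the switch is requested with interrupts disabled, only the intent is recorded, and the hardware context is actually switched from the post-lock switch hook once interrupts are enabled again.

```c
/* Toy model of deferring the mm switch to the post-lock switch hook. */
#include <stdbool.h>
#include <stdio.h>

struct mm { const char *name; };

static bool irqs_disabled_flag = true;        /* true on the context-switch path */
static struct mm *deferred_mm;                /* mm waiting to be installed      */

static void cpu_switch_mm(struct mm *mm)      /* stands in for the TTBR0 update  */
{
        printf("switching hardware context to %s\n", mm->name);
}

static void check_and_switch_context(struct mm *next)
{
        if (irqs_disabled_flag) {
                deferred_mm = next;           /* defer: cannot allocate an ASID safely here */
                return;
        }
        cpu_switch_mm(next);
}

static void finish_arch_post_lock_switch(void)
{
        /* Runs after the runqueue lock is dropped and interrupts are re-enabled. */
        irqs_disabled_flag = false;
        if (deferred_mm) {
                cpu_switch_mm(deferred_mm);
                deferred_mm = NULL;
        }
}

int main(void)
{
        struct mm next = { "task B" };
        check_and_switch_context(&next);      /* deferred */
        finish_arch_post_lock_switch();       /* the switch actually happens here */
        return 0;
}
```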
-
- 24 March 2012, 1 commit
-
-
Committed by Will Deacon

The current user mapping for the vectors page is inserted as a "horrible hack vma" into each task via arch_setup_additional_pages. This causes problems with the MM subsystem and vm_normal_page, as described here: https://lkml.org/lkml/2012/1/14/55

Following the suggestion from Hugh in the above thread, this patch uses the gate_vma for the vectors user mapping, therefore consolidating the horrible hack VMAs into one.

Acked-and-Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
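For illustration, here is a rough userspace sketch of the gate-VMA idea with simplified stand-in types: a single static VMA descriptor is handed back for every task instead of a per-task "hack" VMA. The constants and helpers are assumptions for the sketch, not the kernel implementation.

```c
/* Sketch: one shared, static descriptor for the vectors page mapping. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VECTORS_BASE 0xffff0000u
#define PAGE_SIZE    0x1000u

struct vm_area { uint32_t start, end; const char *name; };

static struct vm_area gate_vma = {
        .start = VECTORS_BASE,
        .end   = VECTORS_BASE + PAGE_SIZE,
        .name  = "[vectors]",
};

static struct vm_area *get_gate_vma(void)
{
        return &gate_vma;                 /* same VMA for every task */
}

static bool in_gate_area(uint32_t addr)
{
        return addr >= gate_vma.start && addr < gate_vma.end;
}

int main(void)
{
        printf("%s spans %#x-%#x; covers 0xffff0f60? %s\n",
               get_gate_vma()->name, (unsigned)gate_vma.start,
               (unsigned)gate_vma.end,
               in_gate_area(0xffff0f60) ? "yes" : "no");
        return 0;
}
```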
-
- 02 October 2010, 1 commit
-
-
Committed by Nicolas Pitre

The kernel makes the high vector page visible to user space. This page contains (amongst others) small code segments that can be executed in user space. Make this page visible through ptrace and /proc/<pid>/mem in order to let gdb perform code parsing needed for proper unwinding.

For example, the ERESTART_RESTARTBLOCK handler actually has a stack frame -- it returns to a PC value stored on the user's stack. To unwind after a "sleep" system call was interrupted twice, GDB would have to recognize this situation and understand that stack frame layout -- which it currently cannot do.

We could fix this by hard-coding addresses in the vector page range into GDB, but that isn't really portable as not all of those addresses are guaranteed to remain stable across kernel releases. And having the gdb process make an exception for this page and get its content from its own address space looks strange, and it is not future proof either.

Being located above PAGE_OFFSET, this vma cannot be deleted by user space code.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
-
- 16 February 2010, 1 commit
-
-
Committed by Catalin Marinas

The current ASID allocation algorithm doesn't ensure the notification of the other CPUs when the ASID rolls over. This may lead to two processes using the same ASID (but different generation) or multiple threads of the same process using different ASIDs.

This patch adds the broadcasting of the ASID rollover event to the other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid" during the handling of the broadcast, the ASID numbering now starts at "smp_processor_id() + 1". At rollover, cpu_last_asid will be set to NR_CPUS.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
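As a rough illustration of the numbering convention mentioned above (the names and values here are assumptions, not the kernel code): having each CPU restart at smp_processor_id() + 1 means the ASIDs handed out while the broadcast is still being handled cannot collide, and cpu_last_asid itself jumps straight to NR_CPUS at rollover.

```c
/* Illustrative model of the per-CPU ASID starting points after a rollover. */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

static uint32_t cpu_last_asid = NR_CPUS;   /* value taken at rollover */

static uint32_t first_asid_for_cpu(int cpu)
{
        return (uint32_t)cpu + 1;          /* per-CPU starting point, 1..NR_CPUS */
}

int main(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                printf("CPU%d restarts ASIDs at %u (cpu_last_asid=%u)\n",
                       cpu, first_asid_for_cpu(cpu), cpu_last_asid);
        return 0;
}
```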
-
- 24 September 2009, 1 commit
-
-
Committed by Rusty Russell

Makes code future-proof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_* ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
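The change is essentially a shift in API shape. The sketch below uses simplified userspace stand-ins for the kernel types to show the move from touching mm->cpu_vm_mask directly to going through an mm_cpumask() accessor and pointer-taking cpumask_* helpers.

```c
/* Userspace stand-ins for the cpumask accessor pattern; not the kernel types. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cpumask { uint64_t bits; };
struct mm { struct cpumask cpu_vm_mask; };

/* The accessor hides where the mask lives, so its layout can change later. */
static struct cpumask *mm_cpumask(struct mm *mm)
{
        return &mm->cpu_vm_mask;
}

static void cpumask_set_cpu(int cpu, struct cpumask *m)
{
        m->bits |= 1ULL << cpu;
}

static bool cpumask_test_cpu(int cpu, const struct cpumask *m)
{
        return m->bits & (1ULL << cpu);
}

int main(void)
{
        struct mm mm = { { 0 } };
        cpumask_set_cpu(2, mm_cpumask(&mm));     /* new style: pass a pointer */
        printf("cpu2 in mm mask? %s\n",
               cpumask_test_cpu(2, mm_cpumask(&mm)) ? "yes" : "no");
        return 0;
}
```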
-
- 24 July 2009, 1 commit
-
-
Committed by Catalin Marinas

There is no MMU context switching on MMU-less systems.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 30 November 2008, 1 commit
-
-
Committed by Russell King

... and fix those drivers that were incorrectly relying upon that include.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 September 2008, 1 commit
-
-
Committed by Russell King

Rather than pollute asm/cacheflush.h with the cache type definitions, move them to asm/cachetype.h, and include this new header where necessary.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 August 2008, 1 commit
-
-
Committed by Russell King

Move platform-independent header files to arch/arm/include/asm, leaving those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 July 2008, 1 commit
-
-
Committed by Catalin Marinas

This patch adds the I-cache invalidation in update_mmu_cache if the corresponding vma is marked as executable. It also invalidates the I-cache if a thread migrates to a CPU it never ran on.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
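A schematic model of the two checks follows; the structures and function names are simplified stand-ins rather than the actual kernel interfaces.

```c
/* Model: invalidate the I-cache for executable PTE updates and CPU migration. */
#include <stdio.h>

#define VM_EXEC 0x4u

struct vma  { unsigned int flags; };
struct task { unsigned long cpus_ran_on; };   /* bitmask of CPUs already visited */

static void invalidate_icache(const char *why)
{
        printf("I-cache invalidate: %s\n", why);
}

static void update_mmu_cache(const struct vma *vma)
{
        if (vma->flags & VM_EXEC)                 /* new PTE maps executable code */
                invalidate_icache("executable mapping updated");
}

static void migrate_to_cpu(struct task *t, int cpu)
{
        if (!(t->cpus_ran_on & (1UL << cpu))) {   /* first time on this CPU */
                invalidate_icache("thread migrated to a new CPU");
                t->cpus_ran_on |= 1UL << cpu;
        }
}

int main(void)
{
        struct vma text = { VM_EXEC };
        struct task t = { 1UL << 0 };             /* has only run on CPU0 so far */

        update_mmu_cache(&text);
        migrate_to_cpu(&t, 1);
        return 0;
}
```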
-
- 17 May 2007, 1 commit
-
-
Committed by Russell King

Presently, we check for the minimum ARM architecture that we're building for to determine whether we need ASID support. This is wrong - if we're going to support a range of CPUs which include ARMv6 or higher, we need the ASID. Convert the checks to use a new configuration symbol, and arrange for ARMv6 and higher CPU entries to select it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 May 2007, 1 commit
-
-
Committed by Russell King

Close a hole in the ASID version switch, particularly the following scenario:

    CPU0 MM PID                 CPU1 MM PID
    idle                          A pid(A)
      A pid(A)                    idle(lazy tlb)
            * new asid version triggered by B *
      B pid(B)                    A pid(A)
            * MM A gets new asid version *
      A idle(lazy tlb)            A pid(A)
            * CPU1 doesn't see the new ASID *

The result is that CPU1 continues running with the hardware set for the original (stale) ASID value, but mm->context.id contains the new ASID value. Consequently, the next MM fault on CPU1 updates the page table entries, but flush_tlb_page() fails due to the wrong ASID.

There is a related case where a threaded application is allocated a new ASID on one CPU while another of its threads is running on a different CPU. This scenario is not fixed by this commit.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 May 2007, 1 commit
-
-
Committed by Jeremy Fitzhardinge

Add hooks to allow a paravirt implementation to track the lifetime of an mm. Paravirtualization requires three hooks, but only two are needed in common code. They are:

- arch_dup_mmap, which is called when a new mmap is created at fork
- arch_exit_mmap, which is called when the last process reference to an mm is dropped, which typically happens on exit and exec.

The third hook is activate_mm, which is called from the arch-specific activate_mm() macro/function, and so doesn't need stub versions for other architectures. It's called when an mm is first used.

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: linux-arch@vger.kernel.org
Cc: James Bottomley <James.Bottomley@SteelEye.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
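A hedged sketch of the no-op stubs for the two common-code hooks is below; the userspace scaffolding is added only so it compiles standalone, and a paravirt backend would fill in the bodies.

```c
/* Sketch of the empty stub hooks for architectures without paravirtualization. */
#include <stdio.h>

struct mm_struct { int unused; };

static inline void arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
{
        /* no-op stub: a paravirt backend would register the new mm here */
        (void)oldmm;
        (void)mm;
}

static inline void arch_exit_mmap(struct mm_struct *mm)
{
        /* no-op stub: a paravirt backend would release its tracking state here */
        (void)mm;
}

int main(void)
{
        struct mm_struct parent = {0}, child = {0};

        arch_dup_mmap(&parent, &child);   /* fork path: mappings duplicated */
        arch_exit_mmap(&child);           /* exit/exec path: last reference dropped */
        puts("stub hooks called");
        return 0;
}
```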
-
- 30 June 2006, 1 commit
-
-
Committed by Russell King

Allow section mappings to be set up using ioremap() and torn down with iounmap(). This requires additional support in the MM context switch to ensure that mappings are properly synchronised when mapped in.

Based on an original implementation by Deepak Saxena, reworked and ARMv6 support added by rmk.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
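The synchronisation can be pictured with the small model below; the field and function names are invented for illustration. A global sequence number advances whenever kernel section mappings change, and a task's page tables are brought up to date the next time it is switched in.

```c
/* Model: lazily syncing kernel section mappings at context-switch time. */
#include <stdio.h>

static unsigned int init_mm_seq;          /* bumped by ioremap()/iounmap()       */

struct mm { unsigned int seq; };          /* last kernel seq this mm copied      */

static void ioremap_section(void)
{
        init_mm_seq++;                    /* a new kernel section mapping exists */
}

static void check_vmalloc_seq(struct mm *mm)
{
        if (mm->seq != init_mm_seq) {
                /* copy the kernel section entries into this mm's page table */
                printf("syncing kernel mappings (seq %u -> %u)\n",
                       mm->seq, init_mm_seq);
                mm->seq = init_mm_seq;
        }
}

int main(void)
{
        struct mm task = { 0 };

        ioremap_section();                /* a driver maps a device region */
        check_vmalloc_seq(&task);         /* next context switch syncs it  */
        return 0;
}
```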
-
- 29 June 2006, 1 commit
-
-
Committed by Russell King

Largely based on Hyok Choi's patches, this fixes up the asm-arm header files for MMU-less systems. Over and above Hyok's patches:

- nommu.h merged into mmu.h (it's only a structure)
- nommu_context.h is essentially the same as mmu_context.h, but without the MM switching code, so there's no point having separate files.

Also, in memory.h, there's no point #ifndef'ing PHYS_OFFSET and END_MEM - both CONFIG_DRAM_BASE and CONFIG_DRAM_SIZE will always be set by the configuration scripts. Other files have minor formatting changes, but are essentially the same.

Hyok's original patches were signed off thusly:

Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 November 2005, 1 commit
-
-
Committed by Russell King

atomic.h, bitops.h and mmu_context.h are using likely/unlikely. thread_info.h uses __attribute_const__. Hence these files require linux/compiler.h to be included.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 November 2005, 1 commit
-
-
Committed by Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 November 2005, 1 commit
-
-
Committed by Russell King

Since we do not invalidate TLBs/caches on MM switches, we should not clear the cpu_vm_mask for the CPU.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 April 2005, 1 commit
-
-
Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-