- 18 April 2014, 2 commits
-
-
Committed by Peter Zijlstra
The arc mb() implementation is a compiler barrier(), therefore it all doesn't matter one way or the other. Simply remove the existing definitions and use whatever is generated by the defaults.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-ua48a59wri3ybz1rz8i7uvbr@git.kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
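For context, a compiler-only barrier like the one the removed arc definitions boiled down to emits no instruction at all; it only stops the compiler from reordering or caching memory accesses across it. A minimal sketch of the idea (the kernel's real definition lives in its compiler headers):

```c
/* A compiler barrier: an empty asm statement with a "memory" clobber.
 * It constrains compiler reordering only; the CPU is unaffected. */
#define barrier() __asm__ __volatile__("" : : : "memory")
```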
-
Committed by Peter Zijlstra
Both already use asm-generic/barrier.h as per their include/asm/Kbuild. Remove the stale files.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-c7vlkshl3tblim0o8z2p70kt@git.kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: linux-hexagon@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 26 March 2014, 1 commit
-
-
Committed by Vineet Gupta
With commit 9df62f05 "arch: use ASM_NL instead of ';'" the generic macros can handle the arch-specific newline quirk. Hence we can get rid of the ARC asm macros and use the "C" style macros.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 10 February 2014, 2 commits
-
-
Committed by Tim Chen
This patch allows each architecture to add its own assembly-optimized arch_mcs_spin_lock_contended and arch_mcs_spinlock_uncontended for the MCS lock and unlock functions.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: George Spelvin <linux@horizon.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: "Figo.zhang" <figo1802@gmail.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1390347382.3138.67.camel@schen9-DESK
Signed-off-by: Ingo Molnar <mingo@kernel.org>
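The shape of the override hooks is roughly as follows; this is a paraphrase of the generic MCS code of that era, not a verbatim copy, so treat the details as illustrative. An architecture that defines the macros in its own headers wins; otherwise the generic load-acquire/store-release versions are used:

```c
/* Generic fallbacks, used only if the arch did not provide its own. */
#ifndef arch_mcs_spin_lock_contended
/* spin until our predecessor hands the lock over; acquire ordering */
#define arch_mcs_spin_lock_contended(l)		\
do {						\
	while (!(smp_load_acquire(l)))		\
		cpu_relax();			\
} while (0)
#endif

#ifndef arch_mcs_spin_unlock_contended
/* publish the hand-off to our successor; release ordering */
#define arch_mcs_spin_unlock_contended(l)	\
	smp_store_release((l), 1)
#endif
```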
-
Committed by Tim Chen
We perform a cleanup of the Kbuild files in each architecture. We order the files in each Kbuild in alphabetical order by running the script below.

    for i in arch/*/include/asm/Kbuild
    do
        cat $i | gawk '/^generic-y/ {
            i = 3;
            do {
                for (; i <= NF; i++) {
                    if ($i == "\\") { getline; i = 1; continue; }
                    if ($i != "") hdr[$i] = $i;
                }
                break;
            } while (1);
            next;
        }
        // { print $0; }
        END {
            n = asort(hdr);
            for (i = 1; i <= n; i++)
                print "generic-y += " hdr[i];
        }' > ${i}.sorted;
        mv ${i}.sorted $i;
    done

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "Figo.zhang" <figo1802@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: George Spelvin <linux@horizon.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[ Fixed build bug. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 28 January 2014, 1 commit
-
-
Committed by Chen Gang
Some assemblers use a different character as the newline separator inside a macro (e.g. arc uses '`'), so generic assembly code needs to use ASM_NL (a macro) instead of ';' for it.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
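The mechanism is a one-line indirection; a rough sketch of the pattern (paraphrased, not the exact kernel source):

```c
/* include/linux/linkage.h: default statement separator for asm macros */
#ifndef ASM_NL
#define ASM_NL		;
#endif

/* an arch whose assembler treats ';' as a comment overrides it, e.g. ARC
 * defines ASM_NL as the back-tick character instead. */

/* generic macros can then emit several asm statements portably */
#ifndef ENTRY
#define ENTRY(name)		\
	.globl name ASM_NL	\
	ALIGN ASM_NL		\
	name:
#endif
```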
-
- 14 January 2014, 2 commits
-
-
Committed by Stephen Rothwell
Checkin:

  93ea02bb arch: Clean up asm/barrier.h implementations using asm-generic/barrier.h

... unfortunately left some Kbuild files out of order, which caused unnecessary merge conflicts, in particular with checkin:

  e3fec2f7 lib: Add missing arch generic-y entries for asm-generic/hash.h

Put them back in order to make the upcoming merges cleaner.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Link: http://lkml.kernel.org/r/20140114164420.d296fbcc4be3a5f126c86069@canb.auug.org.au
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Miller <davem@davemloft.net>
-
Committed by Stephen Rothwell
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 January 2014, 2 commits
-
-
Committed by Peter Zijlstra
We're going to be adding a few new barrier primitives, and in order to avoid endless duplication make more aggressive use of asm-generic/barrier.h. Change asm-generic/barrier.h such that it allows partial barrier definitions and fills out the rest with defaults. There are a few architectures (m32r, m68k) that could probably do away with their barrier.h file entirely but are kept for now due to their unconventional nop() implementation.

Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Link: http://lkml.kernel.org/r/20131213150640.846368594@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
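The filling-out works through a simple #ifndef cascade; a condensed sketch of the pattern, not the full header:

```c
/* asm-generic/barrier.h, condensed: an arch defines only what it must,
 * everything left undefined falls back to a sensible default. */
#ifndef mb
#define mb()		barrier()
#endif

#ifndef rmb
#define rmb()		mb()
#endif

#ifndef wmb
#define wmb()		mb()
#endif

#ifndef smp_mb
#ifdef CONFIG_SMP
#define smp_mb()	mb()
#else
#define smp_mb()	barrier()
#endif
#endif
```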
-
Committed by Peter Zijlstra
Move the barrier functions that depend on the atomic implementation into the atomic implementation.

Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [for arch/arc bits]
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20131213150640.786183683@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
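Concretely, the barriers in question were the smp_mb__before/after_* helpers of that era (they predate the later smp_mb__before_atomic() consolidation). A sketch of what "moving them into the atomic implementation" looks like on an architecture whose atomic RMW ops already imply full ordering; treat it as illustrative rather than any particular arch's header:

```c
/* asm/atomic.h (sketch): the strength of these barriers is a property of
 * the atomic ops themselves, so they now live next to the atomics.  Here
 * the atomics are assumed to be fully ordered, so a compiler barrier
 * suffices; a weaker arch would use smp_mb() instead. */
#define smp_mb__before_atomic_dec()	barrier()
#define smp_mb__after_atomic_dec()	barrier()
#define smp_mb__before_atomic_inc()	barrier()
#define smp_mb__after_atomic_inc()	barrier()
```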
-
- 23 December 2013, 2 commits
-
-
Committed by Vineet Gupta
The interface is confusing: it feels like we are getting "sender" info, whereas it is actually the "receiver", which can very well be retrieved by smp_processor_id() if need be.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
The current IPI sending callstack needlessly involves cpumask:

  arch_send_call_function_single_ipi(cpu) / smp_send_reschedule(cpu)
      ipi_send_msg(cpumask_of(cpu))          --> [cpu to cpumask]
          plat_smp_ops.ipi_send(callmap)
              for_each_cpu(callmap)          --> [cpumask to cpu]
                  do_plat_specific_ipi_PER_CPU

Given that the current backends are not capable of 1:N IPIs, let's simplify the interface for now by keeping "a" cpu all along.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 18 December 2013, 1 commit
-
-
Committed by David S. Miller
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 November 2013, 1 commit
-
-
Committed by Kirill A. Shutemov
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [for arch/arc bits]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 November 2013, 1 commit
-
-
Committed by Thomas Gleixner
No point in having this bit defined by the architecture.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130917183629.090698799@linutronix.de
-
- 12 November 2013, 1 commit
-
-
Committed by Mischa Jonker
This adds basic perf support for ARC700 cores. Most PERF_COUNT_HW* events are supported now.

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 06 November 2013, 8 commits
-
-
Committed by Vineet Gupta
- Add mm_cpumask setting (aggregating only, unlike some other arches), used to restrict the TLB flush cross-calling
- Add cross-calling versions of the TLB flush routines (thanks to Noam)

Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
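A hedged sketch of the scheme (illustrative, not the literal arc code; the helper names are made up): owning CPUs are recorded at context-switch time and bits are only ever set, so the mask may over-approximate but never misses a CPU, and the TLB shootdown is then cross-called only to that mask:

```c
/* record that this CPU has run the mm; aggregating only, never cleared */
static inline void mark_mm_user(struct mm_struct *mm, unsigned int cpu)
{
	cpumask_set_cpu(cpu, mm_cpumask(mm));
}

/* IPI trampoline so the prototype matches smp_call_func_t */
static void ipi_flush_tlb_mm(void *arg)
{
	local_flush_tlb_mm((struct mm_struct *)arg);
}

void flush_tlb_mm(struct mm_struct *mm)
{
	/* cross-call only the CPUs that ever ran this mm */
	on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
}
```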
-
Committed by Vineet Gupta
- Track a per-CPU ASID counter
- mm-per-cpu ASID (multiple threads, or mm migrated around)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Chen Gang
get_hw_config_num_irq() may be called by the normal (non-init) function iss_model_init_smp(), which is the 'init_smp' function pointer, which in turn may be called by first_lines_of_secondary(), which also needs to be normal. The related warning (with allmodconfig):

  MODPOST vmlinux.o
  WARNING: vmlinux.o(.text+0x5814): Section mismatch in reference from the function iss_model_init_smp() to the function .init.text:get_hw_config_num_irq()
  The function iss_model_init_smp() references the function __init get_hw_config_num_irq().
  This is often because iss_model_init_smp lacks a __init annotation or the annotation of get_hw_config_num_irq is wrong.

Signed-off-by: Chen Gang <gang.chen@asianux.com>
-
Committed by Chen Gang
first_lines_of_secondary() is an '__init' function, but it may be called by __cpu_up(), via _cpu_up(), via cpu_up(), which is a normal exported-symbol function. So it is recommended to remove '__init'. The related warning (with allmodconfig):

  MODPOST vmlinux.o
  WARNING: vmlinux.o(.text+0x315c): Section mismatch in reference from the function __cpu_up() to the function .init.text:first_lines_of_secondary()
  The function __cpu_up() references the function __init first_lines_of_secondary().
  This is often because __cpu_up lacks a __init annotation or the annotation of first_lines_of_secondary is wrong.

Signed-off-by: Chen Gang <gang.chen@asianux.com>
-
Committed by Chen Gang
They don't have '__init' in the definition, but do have '__init' in the declaration. And the normal function start_kernel_secondary() may call setup_processor(), which calls arc_init_IRQ(). So '__init' needs to be removed from both of them. The related warning (with allmodconfig):

  MODPOST vmlinux.o
  WARNING: vmlinux.o(.text+0x3084): Section mismatch in reference from the function start_kernel_secondary() to the function .init.text:setup_processor()
  The function start_kernel_secondary() references the function __init setup_processor().
  This is often because start_kernel_secondary lacks a __init annotation or the annotation of setup_processor is wrong.

Signed-off-by: Chen Gang <gang.chen@asianux.com>
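All three of these warnings come down to the same pattern; a minimal illustration (generic names, not the arc functions) of why the annotation has to go:

```c
#include <linux/init.h>

/* BAD: __init places the body in .init.text, which is freed after boot:
 *
 *	static void __init cpu_hw_setup(void) { ... }
 *
 * but a CPU-hotplug path that lives in regular .text may call it long
 * after boot, so modpost flags the reference.  FIX: drop __init so the
 * function stays resident: */
static void cpu_hw_setup(void)
{
	/* per-CPU hardware init would go here */
}

/* normal (non-__init) code reachable at any time, e.g. via cpu_up() */
void bring_up_secondary(void)
{
	cpu_hw_setup();		/* now a .text -> .text reference: no mismatch */
}
```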
-
Committed by Vineet Gupta
Lockdep required a small fix to the stacktrace API, which was incorrectly unwinding out of __switch_to for the current call frame.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Emulation not being enabled is treated as if the fixup failed, so there is no need for special #ifdef checks.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Having them be different seems an obscure configuration.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 10 October 2013, 4 commits
-
-
Committed by Rob Herring
Now that prom.h is optional, all the empty prom.h headers can be removed.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Grant Likely <grant.likely@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Rob Herring
HAVE_ARCH_DEVTREE_FIXUPS appears to always be needed except on sparc, but it is only used for /proc/device-tree, and sparc does not enable /proc/device-tree. So this option is redundant. Remove the option and always enable it. This has the side effect of fixing /proc/device-tree on arches such as arm64 which failed to define this option.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Grant Likely <grant.likely@linaro.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
-
Committed by Rob Herring
Convert arc to use the common of_flat_dt_match_machine function.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Rob Herring
Use the common unflatten_and_copy_device_tree function to copy the built-in FDT out of the init section.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Grant Likely <grant.likely@linaro.org>
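The common helper keeps the arch side to a single call; a hedged sketch of how the setup path ends up looking (the surrounding code is illustrative):

```c
#include <linux/of_fdt.h>

void __init setup_arch(char **cmdline_p)
{
	/* earlier boot code has pointed initial_boot_params at the built-in FDT */

	/* copy the blob out of the discardable init section and unflatten it
	 * into the live struct device_node tree */
	unflatten_and_copy_device_tree();

	/* ... rest of arch setup ... */
}
```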
-
- 27 September 2013, 2 commits
-
-
Committed by Vineet Gupta
Some ARC SMP systems lack the native atomic R-M-W (LLOCK/SCOND) insns and can only use the atomic EX insn (reg with mem) to build higher-level R-M-W primitives. This includes a SystemC based SMP simulation model. So rwlocks need to use a protecting spinlock for the atomic cmp-n-exchange operation that updates the reader(s)/writer count.

The spinlock operation itself looks as follows:

  mov reg, 1            ; 1=locked, 0=unlocked
  retry:
  EX reg, [lock]        ; load existing, store 1, atomically
  BREQ reg, 1, retry    ; if already locked, retry

In single-threaded simulation, SystemC alternates between the 2 cores with "N"-insn-each based scheduling. Additionally, for an insn with a global side effect, such as EX writing to shared mem, a core switch is enforced too. Given that, with 2 cores doing a repeated EX on the same location, Linux often got into a livelock, e.g. when both cores were fiddling with the tasklist lock (gdbserver / hackbench) for read/write respectively, as the sequence diagram below shows:

           core1                              core2
          --------                           --------
  1.  spin lock   [EX r=0, w=1] - LOCKED
  2.  rwlock(Read)              - LOCKED
  3.  spin unlock [ST 0]        - UNLOCKED
                                     spin lock  [EX r=0,w=1] - LOCKED
              -- resched core 1 ----
  5.  spin lock   [EX r=1] - ALREADY-LOCKED
              -- resched core 2 ----
  6.                                 rwlock(Write) - READER-LOCKED
  7.                                 spin unlock [ST 0]
  8.                                 rwlock failed, retry again
  9.                                 spin lock  [EX r=0, w=1]
              -- resched core 1 ----
  10. spinlock locked in #9, retry #5
  11. spin lock   [EX gets 1]
              -- resched core 2 ----
              ...
              ...

The fix was to unlock using the EX insn too (step 7), to trigger another SystemC scheduling pass which would let core1 proceed, eliding the livelock.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
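For reference, the resulting unlock path looks roughly like the sketch below, modelled on the non-LLOCK/SCOND arc spinlock described above rather than copied from it; the key point is that the store is done with EX, a globally visible atomic op, which forces the simulator to reschedule the other core:

```c
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	unsigned int val = __ARCH_SPIN_LOCK_UNLOCKED__;	/* 0 */

	smp_mb();

	__asm__ __volatile__(
	"	ex  %0, [%1]	\n"	/* atomically swap 0 into the lock word */
	: "+r" (val)
	: "r" (&(lock->slock))
	: "memory");

	smp_mb();
}
```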
-
Committed by Vineet Gupta
Anton reported:

  | LTP tests syscalls/process_vm_readv01 and process_vm_writev01 fail
  | similarly in one testcase test_iov_invalid -> lvec->iov_base.
  | Testcase expects errno EFAULT and return code -1,
  | but it gets return code 1 and ERRNO is 0 which means success.

Essentially the test case was passing a pointer of -1, which access_ok() was not catching. It was doing [@addr + @sz <= TASK_SIZE], which would pass for @addr == -1. Fixed that by rewriting it as [@addr <= TASK_SIZE - @sz].

Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
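The wrap-around is easy to reproduce in isolation; a small standalone illustration of the two checks (the TASK_SIZE value is made up for the demo):

```c
#include <stdio.h>

#define TASK_SIZE 0x60000000UL		/* illustrative user-space limit */

static int old_ok(unsigned long addr, unsigned long sz)
{
	return addr + sz <= TASK_SIZE;	/* addr + sz wraps for addr == -1 */
}

static int new_ok(unsigned long addr, unsigned long sz)
{
	return sz <= TASK_SIZE && addr <= TASK_SIZE - sz;
}

int main(void)
{
	unsigned long bad = (unsigned long)-1;	/* the pointer the LTP test passes */

	printf("old=%d new=%d\n", old_ok(bad, 16), new_ok(bad, 16));
	/* prints old=1 new=0: the old check wrongly accepts the address */
	return 0;
}
```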
-
- 25 September 2013, 1 commit
-
-
Committed by Peter Zijlstra
In order to prepare for per-arch implementations of preempt_count, move the required bits into an asm-generic header and use this for all archs.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-h5j0c1r3e3fk015m30h8f1zx@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
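The generic header keeps the count in thread_info unless an architecture overrides it (x86 later moved it to a per-CPU variable). A paraphrased sketch of the accessors, not the verbatim header:

```c
/* asm-generic/preempt.h (sketch) */
static __always_inline int preempt_count(void)
{
	return current_thread_info()->preempt_count;
}

static __always_inline volatile int *preempt_count_ptr(void)
{
	return &current_thread_info()->preempt_count;
}

static __always_inline void preempt_count_set(int pc)
{
	*preempt_count_ptr() = pc;
}
```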
-
- 12 September 2013, 1 commit
-
-
Committed by Noam Camus
Commit 05b016ec "ARC: Setup Vector Table Base in early boot" moved the interrupt vector table setup out of arc_init_IRQ(), which is called for all CPUs, to the entry point of the boot cpu only, breaking booting of the others. Fix this by adding the same to the entry point of non-boot CPUs too.

read_arc_build_cfg_regs() printing the IVT Base Register didn't help the cause, since it prints a synthetic value if zero, which is totally bogus; fix that to print the exact register.

[vgupta: Remove the now stale comment from the header of arc_init_IRQ and also add the commentary for halt-on-reset]

Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: <stable@vger.kernel.org> #3.11
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 September 2013, 3 commits
-
-
Committed by Vineet Gupta
  ------------>8--------------------
  WARNING: vmlinux.o(.text+0x708): Section mismatch in reference from the function read_arc_build_cfg_regs() to the function .init.text:read_decode_cache_bcr()
  WARNING: vmlinux.o(.text+0x702): Section mismatch in reference from the function read_arc_build_cfg_regs() to the function .init.text:read_decode_mmu_bcr()
  ------------>8--------------------

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Mischa Jonker
Cast usecs to u64 to ensure that the (usecs * 4295 * HZ) multiplication is 64 bit. Initially, the (usecs * 4295 * HZ) part was done as a 32 bit multiplication, with the result cast to 64 bit. This led to some bits falling off, causing a "DMA initialization error" in the stmmac Ethernet driver, due to a premature timeout.

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
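The bits that "fall off" are easy to see in a standalone demo; the numbers below are illustrative, but the shape of the bug is the one described (32-bit multiply first, widening too late):

```c
#include <stdio.h>
#include <stdint.h>

#define HZ 100

int main(void)
{
	uint32_t usecs = 200000;	/* e.g. a 200 ms polling timeout */

	/* broken: the whole product is computed in 32 bits, then widened */
	uint64_t loops_bad  = (uint64_t)(usecs * 4295 * HZ);

	/* fixed: cast first, so the multiplication is done in 64 bits */
	uint64_t loops_good = (uint64_t)usecs * 4295 * HZ;

	printf("bad=%llu good=%llu\n",
	       (unsigned long long)loops_bad,
	       (unsigned long long)loops_good);
	/* bad is a tiny wrapped value, so the delay ends almost immediately,
	 * which is how the stmmac driver hit its premature timeout */
	return 0;
}
```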
-
Committed by Mischa Jonker
Some drivers require these, and ARC didn't have them yet.

Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 31 August 2013, 5 commits
-
-
Committed by Vineet Gupta
This helps remove the asid-to-mm reverse map.

While mm->context.id contains the ASID assigned to a process, our ASID allocator also used the asid_mm_map[] reverse map. In a new allocation cycle (mm->ASID >= @asid_cache), the round-robin ASID allocator used this to check if the new @asid_cache belonged to some mm2 (from the previous cycle). If so, it could locate that mm using the ASID reverse map and mark that mm as having an unallocated ASID, to force it to refresh at the time of switch_mm().

However, for SMP, the reverse map has to be maintained per CPU, so it becomes 2-dimensional; hence we got rid of it.

With the reverse map gone, it is NOT possible to reach out to the current assignee. So we track the ASID allocation generation/cycle and on every switch_mm() check if the current generation of the CPU ASID is the same as the mm's ASID; if not, it is refreshed.

(Based loosely on the arch/sh implementation)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
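A hedged sketch of the generation check (simplified and loosely modelled on the description above and on arch/sh; the field widths and names are illustrative, not the literal arc code): the allocation cycle lives in the upper bits of the same word as the ASID, so a single compare replaces the reverse map:

```c
#define ASID_MASK	0x000000ffUL		/* hardware ASID field (illustrative) */
#define CYCLE_MASK	(~ASID_MASK)		/* allocation generation */
#define FIRST_CYCLE	(ASID_MASK + 1)

struct mm_ctx { unsigned long asid; };		/* 0 == never allocated */

static unsigned long asid_cache = FIRST_CYCLE;	/* per-CPU in the SMP version */

static void get_new_mmu_context(struct mm_ctx *ctx)
{
	/* already holds an ASID handed out in the current cycle: keep it */
	if (!((ctx->asid ^ asid_cache) & CYCLE_MASK))
		return;

	/* never allocated, or stale from a previous cycle: take the next one
	 * (the real code flushes the TLB and starts a new cycle when the low
	 * ASID_MASK bits roll over) */
	ctx->asid = ++asid_cache;
}
```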
-
Committed by Vineet Gupta
ASID allocation changes, part 2: use the fact that switch_mm() and activate_mm() are exactly the same code now, while acknowledging the semantic difference in a comment.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
ASID allocation changes, part 1. This patch does 2 things:

(1) get_new_mmu_context() NOW moves mm->ASID to a new value ONLY if it was from a previous allocation cycle/generation OR if mm had no ASID allocated (whereas before it would unconditionally move to a new ASID). Callers desiring an unconditional update of the ASID, e.g. local_flush_tlb_mm() (for the parent's address space invalidation at fork), need to first force the parent to an unallocated ASID.

(2) get_new_mmu_context() always sets the MMU PID reg with the unchanged/new ASID value.

The gains are:
- consolidation of all ASID allocation logic into get_new_mmu_context()
- avoiding code duplication in switch_mm() for PID reg setting
- enables a future change to fold activate_mm() into switch_mm()

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
The asm code already has the SW and HW ASID values, so they can be passed to the printing routine.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-