- 09 May 2016, 2 commits
-
-
Committed by Noam Camus

NPS uses a special mapping right below TASK_SIZE. Hence we need to lower STACK_TOP so that the user stack won't overlap the NPS special mapping.

Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
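As an illustration only (the constants below are assumed, not the real NPS values), the change amounts to decoupling STACK_TOP from TASK_SIZE:

```
/* Sketch: sizes are assumed for illustration, not actual NPS numbers */
#define TASK_SIZE	0x60000000UL	/* end of user virtual address space */
#define NPS_SPECIAL_SZ	0x00002000UL	/* hypothetical special-mapping window */

/* Before: STACK_TOP == TASK_SIZE, so the stack abutted the window */
/* After: the user stack tops out below the NPS special mapping */
#define STACK_TOP	(TASK_SIZE - NPS_SPECIAL_SZ)
```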
-
Committed by Noam Camus

On ARC, the lower 2G of address space is translated and used for:
- user vaddr space (regions 0 to 5)
- unused kernel-user gutter (region 6)
- kernel vaddr space (region 7)

where each region simply represents 256MB of address space.

The kernel vaddr space of 256MB is used to implement vmalloc and modules. So far this was enough, but not on the EZChip system with 4K CPUs (given that the per-cpu mechanism uses vmalloc for allocating chunks). So allow VMALLOC_SIZE to be configurable by expanding down into the unused kernel-user gutter region, which at the default 256M was excessive anyway.

Also use _BITUL() to fix a build error, since PGDIR_SIZE cannot use "1UL" as it is called from assembly code in mm/tlbex.S.

Signed-off-by: Noam Camus <noamc@ezchip.com>
[vgupta: rewrote changelog, debugged bootup crash due to int vs. hex]
Acked-by: Vineet Gupta <vgupta@synopsys.com>
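The _BITUL() detail is worth a sketch: a header shared with mm/tlbex.S is preprocessed by the assembler, which rejects the C-only "UL" literal suffix. _BITUL() (from include/uapi/linux/const.h) expands via _AC(), so the suffix disappears under __ASSEMBLY__. The shift value below follows the 11:8:13 split described above:

```
#include <linux/const.h>

#define PGDIR_SHIFT	21	/* from the 11:8:13 split: 8 + 13 */

/* #define PGDIR_SIZE	(1UL << PGDIR_SHIFT)   breaks in asm: "UL" suffix */
#define PGDIR_SIZE	_BITUL(PGDIR_SHIFT)	/* usable from both C and asm */
```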
-
- 11 March 2016, 1 commit
-
-
Committed by Adam Buchbinder

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 16 November 2015, 1 commit
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 29 October 2015, 1 commit
-
-
Committed by Vineet Gupta

This is the first working implementation of 40-bit physical address extension on ARCv2.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 28 October 2015, 2 commits
-
-
Committed by Vineet Gupta

That way a single flip of phys_addr_t to 64-bit ensures all places dealing with physical addresses get correct data.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

- Move the verbosity knob from .data to .bss by using inverted logic
- No need to read out the PD1 descriptor
- Clip the non-PFN bits of PD0 to avoid clipping inside the loop

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 17 October 2015, 7 commits
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

This frees up some bits to hold more high-level info, such as PAE being present, without increasing the size of the already bloated cpuinfo struct.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Implement the TLB flush routine to evict a specific Super TLB entry, vs. moving to a new ASID on every such flush.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

MMUv4 in HS38x cores supports Super Pages, which are the basis for Linux THP support.

Normal and Super pages can co-exist (of course not overlap) in the TLB, with a new bit "SZ" in the TLB page descriptor to distinguish between them. Super Page size is configurable in hardware (4K to 16M), but fixed once the RTL is built.

The exact THP size a Linux configuration will support is a function of:
- MMU page size (typically 8K, RTL fixed)
- software page walker address split between PGD:PTE:PFN (typically 11:8:13, but can be changed with 1 line)

So for the defaults above, the THP size supported is 8K * 256 = 2M.

The default page walker is 2 levels, PGD:PTE:PFN, which in the THP regime reduces to 1 level (as the PTE is folded into the PGD and canonically referred to as the PMD). Thus the THP PMD accessors are implemented in terms of the PTE (just like sparc).

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
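A sketch of the arithmetic, using the standard Linux names for the derived quantities:

```
/* 8K MMU page, 11:8:13 PGD:PTE:PFN split (the defaults named above) */
#define PAGE_SHIFT	13			/* 8K page */
#define PMD_SHIFT	(PAGE_SHIFT + 8)	/* 8 PTE index bits fold into the PMD */
#define HPAGE_PMD_SIZE	(1UL << PMD_SHIFT)	/* 8K * 256 = 2M THP size */
```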
-
- 22 June 2015, 1 commit
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 19 June 2015, 1 commit
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 13 October 2014, 1 commit
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 06 November 2013, 3 commits
-
-
Committed by Vineet Gupta

- Add mm_cpumask setting (aggregating only, unlike some other arches), used to restrict the TLB flush cross-calling
- Cross-calling versions of the TLB flush routines (thanks to Noam), as sketched below

Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
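A minimal sketch of the cross-calling shape (the argument-packaging struct is an assumption; on_each_cpu_mask() is the stock kernel helper):

```
#include <linux/mm.h>
#include <linux/smp.h>

struct tlb_args {				/* assumed packaging of the args */
	struct vm_area_struct *ta_vma;
	unsigned long ta_start, ta_end;
};

static inline void ipi_flush_tlb_range(void *arg)
{
	struct tlb_args *ta = arg;

	local_flush_tlb_range(ta->ta_vma, ta->ta_start, ta->ta_end);
}

void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
		     unsigned long end)
{
	struct tlb_args ta = {
		.ta_vma = vma,
		.ta_start = start,
		.ta_end = end,
	};

	/* mm_cpumask only aggregates (no CPU is ever cleared), so every
	 * CPU that ever ran this mm gets the IPI, and no other */
	on_each_cpu_mask(mm_cpumask(vma->vm_mm), ipi_flush_tlb_range, &ta, 1);
}
```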
-
Committed by Vineet Gupta

- Track a per-CPU ASID counter
- mm-per-cpu ASID (multiple threads, or an mm migrated around)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Fixes the following build warning:

```
--------------->8----------------------
arch/arc/mm/tlb.c: In function ‘do_tlb_overlap_fault’:
arch/arc/mm/tlb.c:688:13: warning: array subscript is above array bounds [-Warray-bounds]
    (pd0[n] & PAGE_MASK)) {
             ^
--------------->8----------------------
```

While at it, remove the useless last iteration of the outer loop when reading a TLB SET for duplicate entries.

Suggested-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
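A sketch of the loop shape after the fix (variable and helper names assumed): the last outer iteration had an empty inner loop anyway, and dropping it also keeps pd0[] from being indexed past its end.

```
/* scan one TLB set for ways holding duplicate translations */
for (way = 0; way < mmu->ways - 1; way++)	/* was: way < mmu->ways */
	for (n = way + 1; n < mmu->ways; n++)
		if ((pd0[way] & PAGE_MASK) == (pd0[n] & PAGE_MASK))
			/* hypothetical handler for the duplicate-entry path */
			handle_duplicate_entry(set, way, n);
```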
-
- 31 August 2013, 3 commits
-
-
Committed by Vineet Gupta

This helps remove the ASID-to-mm reverse map.

While mm->context.id contains the ASID assigned to a process, our ASID allocator also used the asid_mm_map[] reverse map. In a new allocation cycle (mm->ASID >= @asid_cache), the round-robin ASID allocator used this to check if the new @asid_cache belonged to some mm2 (from a prev cycle). If so, it could locate that mm using the ASID reverse map, and mark that mm as having an unallocated ASID, to force it to refresh at the time of switch_mm().

However, for SMP, the reverse map has to be maintained per CPU, so it becomes 2-dimensional; hence get rid of it.

With the reverse map gone, it is NOT possible to reach out to the current assignee. So we track the ASID allocation generation/cycle and, on every switch_mm(), check if the current generation of the CPU ASID is the same as the mm's ASID; if not, it is refreshed.

(Based loosely on the arch/sh implementation)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
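A sketch of the generation check on switch_mm() (field and mask names assumed, loosely following the arch/sh scheme referenced above):

```
extern unsigned long asid_cache;		/* CPU-wide allocation counter */
#define MM_CTXT_CYCLE_MASK	(~0UL << 8)	/* assumed: upper bits = generation */

static inline void get_mmu_context(struct mm_struct *mm)
{
	/* ASID minted in the current allocation cycle? still valid, keep it */
	if (!((mm->context.asid ^ asid_cache) & MM_CTXT_CYCLE_MASK))
		return;

	get_new_mmu_context(mm);	/* stale generation: refresh the ASID */
}
```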
-
Committed by Vineet Gupta

ASID allocation changes/1

This patch does 2 things:

(1) get_new_mmu_context() NOW moves mm->ASID to a new value ONLY if it was from a prev allocation cycle/generation OR if the mm had no ASID allocated (vs. before, when it would unconditionally move to a new ASID). Callers desiring an unconditional update of the ASID, e.g. local_flush_tlb_mm() (for the parent's address space invalidation at fork), need to first force the parent to an unallocated ASID.

(2) get_new_mmu_context() always sets the MMU PID reg with the unchanged/new ASID value.

The gains are:
- consolidation of all ASID alloc logic into get_new_mmu_context()
- avoiding code duplication in switch_mm() for PID reg setting
- enables a future change to fold activate_mm() into switch_mm()

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
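The caller-side consequence of (1) can be sketched as follows (MM_CTXT_NO_ASID is an assumed sentinel):

```
#define MM_CTXT_NO_ASID	0UL	/* assumed sentinel: "no ASID allocated" */

/* at fork, the parent's uTLB entries must be invalidated */
void local_flush_tlb_mm(struct mm_struct *mm)
{
	/* force get_new_mmu_context() out of its "already valid" fast path */
	mm->context.asid = MM_CTXT_NO_ASID;

	/* refresh right away if this mm is currently live on the CPU */
	if (current->mm == mm)
		get_new_mmu_context(mm);
}
```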
-
Committed by Vineet Gupta

Asm code already has the SW and HW ASID values, so they can be passed to the printing routine.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 30 August 2013, 3 commits
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

This reorganizes the current TLB operations into pseudo-ops to better pair with MMUv4's native Insert/Delete operations.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

With the previous commit freeing up PTE bits, reassign them so as to:
- Match the bit to its h/w counterpart where possible (e.g. MMUv2 GLOBAL/PRESENT; this avoids a shift in create_tlb())
- Avoid holes in the _PAGE_xxx definitions

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 29 August 2013, 1 commit
-
-
Committed by Vineet Gupta

The current ARC VM code has 13 flags in the Page Table entry: some software (accessed/dirty/non-linear-maps) and the rest hardware specific. With an 8k MMU page, we need 19 bits for addressing the page frame, so the remaining 13 bits is just about enough to accommodate the current flags.

In MMUv4 there are 2 additional flags, SZ (normal or super page) and WT (cache access mode write-thru), and additionally the PFN is 20 bits (vs. 19 before for 8k). Thus these can't be held in the current PTE without making each entry 64-bit wide.

It seems there is some scope for compressing the current PTE flags (and freeing up a few bits). Currently the PTE contains fully orthogonal, distinct access permissions for kernel and user mode (Kr, Kw, Kx; Ur, Uw, Ux) which can be folded into one set (R, W, X). The translation of 3 PTE bits into 6 TLB bits (when programming the MMU) can be done based on the following prerequisites/assumptions:

1. For kernel-mode-only translations (vmalloc: 0x7000_0000 to 0x7FFF_FFFF), the PTE additionally has the PAGE_GLOBAL flag set (and user space entries can never be global). Thus such a PTE can translate to Kr, Kw, Kx (as appropriate) and zero for the user mode counterparts.

2. For non-global entries, the PTE flags can be used to create mirrored K and U TLB bits. This is true after commit a950549c "ARC: copy_(to|from)_user() to honor usermode-access permissions", which ensured that user-space translations _MUST_ have the same access permissions for both U/K mode accesses, so that copy_{to,from}_user() plays fair with fault-based CoW break and such...

There is no such thing as a free lunch - the cost is slightly inflated TLB-miss handlers.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
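A sketch of the 3-to-6 bit expansion done while programming a TLB entry; the _PAGE_xxx flags follow the ARC naming convention, while the shift names are illustrative:

```
/* fold the single R/W/X set back out into Kr/Kw/Kx + Ur/Uw/Ux TLB bits */
static unsigned int pte_to_tlb_perms(unsigned long pte)
{
	unsigned int kperm = pte & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXECUTE);
	unsigned int uperm;

	if (pte & _PAGE_GLOBAL)
		uperm = 0;	/* rule 1: kernel-only (vmalloc) translation */
	else
		uperm = kperm;	/* rule 2: mirror K perms into U perms */

	return (kperm << KERNEL_PERM_SHIFT) | (uperm << USER_PERM_SHIFT);
}
```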
-
- 27 June 2013, 1 commit
-
-
Committed by Paul Gortmaker

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit, so if we remove the __cpuinit from arch-specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

This removes all the arch/arc uses of the __cpuinit macros from all C files. Currently arc does not have any __CPUINIT used in assembly files.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 22 June 2013, 4 commits
-
-
Committed by Vineet Gupta

Similar to ARM/SH.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

* Move the various sub-system defines/types into relevant files/functions (reduces compilation time)
* Move CPU-specific stuff out of asm/tlb.h into asm/mmu.h

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 23 May 2013, 1 commit
-
-
Committed by Vineet Gupta

The VM_EXEC check in update_mmu_cache() was getting optimized away because of a stupid error in the definition of the macro addr_not_cache_congruent().

The intention was to have the equivalent of the following:

    if (a || (1 ? b : 0))

but we ended up with the following:

    if (a || 1 ? b : 0)

And because the precedence of '||' is higher than that of '?:', gcc was optimizing away the evaluation of <a>.

Nasty repercussions:
1. For non-aliasing configs it would mean some extraneous dcache flushes for non-code pages if U/K mappings were not congruent.
2. For aliasing configs, some needed dcache flushes for code pages might be missed if U/K mappings were congruent.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
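The precedence trap is easy to reproduce standalone:

```
#include <stdio.h>

#define COND_BROKEN(b)	1 ? (b) : 0	/* expansion lacks outer parens */
#define COND_FIXED(b)	(1 ? (b) : 0)

int main(void)
{
	int a = 1, b = 0;

	/* parses as (a || 1) ? b : 0: the condition is always true, so the
	 * whole expression is just b and the test of 'a' is optimized away */
	printf("%d\n", a || COND_BROKEN(b));	/* prints 0 */

	/* parses as a || (1 ? b : 0), the intended a || b */
	printf("%d\n", a || COND_FIXED(b));	/* prints 1 */
	return 0;
}
```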
-
- 10 May 2013, 2 commits
-
-
Committed by Vineet Gupta

This is the meat of the series, which prevents any dcache alias creation by always keeping the U and K mappings of a page congruent. If a mapping already exists and the other tries to access the page, the prev one is flushed to the physical page (wback+inv).

Essentially flush_dcache_page()/copy_user_highpage() create the K-mapping of a page, but try to defer flushing unless a U-mapping exists. When the page is actually mapped to userspace, update_mmu_cache() flushes the K-mapping (in certain cases this can be optimised out).

Additionally flush_cache_mm(), flush_cache_range(), flush_cache_page() handle the purging of stale userspace mappings on exit/munmap...

flush_anon_page() handles the existing U-mapping for an anon page before the kernel reads it via the GUP path.

Note that while not complete, this is enough to boot a simple dynamically linked Busybox based rootfs.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
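A sketch of the deferred-flush decision in flush_dcache_page() (the use of the arch page flag and the low-level helper signature are assumptions):

```
#include <linux/mm.h>
#include <linux/pagemap.h>

void flush_dcache_page(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (mapping && !mapping_mapped(mapping)) {
		/* K-mapping written but no U-mapping yet: defer the flush,
		 * just remember that the kernel view may be dirty */
		set_bit(PG_arch_1, &page->flags);	/* flag use assumed */
	} else {
		/* a U-mapping exists: wback+inv the K-mapping now so both
		 * congruent views agree with the physical page */
		__flush_dcache_page(page_to_phys(page),
				    (unsigned long)page_address(page));
	}
}
```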
-
Committed by Vineet Gupta

This preps the low-level dcache flush helpers to take a vaddr argument in addition to the existing paddr, to properly flush the VIPT dcache.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 07 May 2013, 3 commits
-
-
Committed by Vineet Gupta

flush_dcache_page() is the MM hook to ensure that a page has consistent views between the kernel and userspace. Thus it is called when:

* the kernel writes to a page which at some later point could get mapped to userspace (so the kernel mapping needs to be flushed-n-inv'd)
* the kernel is about to read from a page with possible userspace mappings (so the userspace mappings need to be made coherent with the kernel ones)

However, for a non-aliasing VIPT dcache, any userspace mapping will always be congruent to the kernel mapping. Thus the d-cache need not be flushed at all (or rather, flushing can be delayed indefinitely).

The only reason it does need to be flushed is when mapping code pages. Since the icache doesn't snoop the dcache, those dirty dcache lines need to be written back to memory and the icache lines invalidated, so that icache line fetches will get the right data.

Decent gains on LMBench fork/exec/sh and File I/O micro-benchmarks.

(1) FPGA @ 80 MHZ

```
Processor, Processes - times in microseconds - smaller is better
------------------------------------------------------------------------------
Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                             call  I/O stat clos TCP  inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
3.9-rc6-a Linux 3.9.0-r   80 4.79 8.72 66.7 116. 239. 8.39 30.4 4798 14.K 34.K
3.9-rc6-b Linux 3.9.0-r   80 4.79 8.62 65.4 111. 239. 8.35 29.0 3995 12.K 30.K
3.9-rc7-c Linux 3.9.0-r   80 4.79 9.00 66.1 106. 239. 8.61 30.4 2858 10.K 24.K
                                                                ^^^^ ^^^^ ^^^

File & VM system latencies in microseconds - smaller is better
-------------------------------------------------------------------------------
Host                 OS   0K File      10K File     Mmap    Prot   Page   100fd
                        Create Delete Create Delete Latency Fault  Fault  selct
--------- ------------- ------ ------ ------ ------ ------- ----- ------- -----
3.9-rc6-a Linux 3.9.0-r  317.8  204.2 1122.3  375.1  3522.0 4.288    20.7 126.8
3.9-rc6-b Linux 3.9.0-r  298.7  223.0 1141.6  367.8  3531.0 4.866    20.9 126.4
3.9-rc7-c Linux 3.9.0-r  278.4  179.2  862.1  339.3  3705.0 3.223    20.3 126.6
                         ^^^^^  ^^^^^  ^^^^^                ^^^^
```

(2) Customer Silicon @ 500 MHz (166 MHz mem)

```
------------------------------------------------------------------------------
Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                             call  I/O stat clos TCP  inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
abilis-ba Linux 3.9.0-r  497 0.71 1.38 4.58 12.0 35.5 1.40 3.89 2070 5525 13.K
abilis-ca Linux 3.9.0-r  497 0.71 1.40 4.61 11.8 35.6 1.37 3.92 1411 4317 10.K
                                                                ^^^^ ^^^^ ^^^
```

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
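For the non-aliasing case the whole hook reduces to bookkeeping plus a code-page flush at map-in time; a sketch (the flag and the sync helper names are assumptions):

```
/* non-aliasing VIPT: U and K mappings are always congruent, so data
 * coherency is free; only the I$-vs-D$ disconnect for code pages remains */
void flush_dcache_page(struct page *page)
{
	/* defer indefinitely: just note that D$ may hold dirty lines */
	clear_bit(PG_dc_clean, &page->flags);	/* flag name assumed */
}

void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr,
		      pte_t *ptep)
{
	unsigned long paddr = pte_val(*ptep) & PAGE_MASK;
	struct page *page = pfn_to_page(pte_pfn(*ptep));

	if ((vma->vm_flags & VM_EXEC) &&
	    !test_and_set_bit(PG_dc_clean, &page->flags)) {
		/* code page was dirty: write back D$, invalidate I$ */
		__sync_icache_dcache(paddr, vaddr & PAGE_MASK, PAGE_SIZE);
	}
}
```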
-
Committed by Vineet Gupta

The ARC icache doesn't snoop the dcache, thus executable pages need to be made coherent before mapping into userspace in flush_icache_page().

However, the ARC700 CDU (hardware cache flush module) requires both the vaddr (index in cache) as well as the paddr (tag match) to correctly identify a line in the VIPT cache. A typical ARC700 SoC has an aliasing icache, thus the paddr-only-based flush_icache_page() API couldn't be implemented efficiently. It had to loop through all possible alias indexes and perform the invalidate operation (of course the cache op would only succeed at the index(es) where the tag matches - typically only 1, but the cost of visiting all the cache bins needs to be paid nevertheless).

It turns out, however, that the vaddr (along with the paddr) is available in update_mmu_cache(), which hence better suits the ARC icache flush semantics. With both vaddr+paddr, exactly one flush operation per line is done, as sketched below.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
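The cost asymmetry can be sketched as follows; cache_line_inv() and the color count are hypothetical stand-ins for the CDU aux-register sequence:

```
#define CACHE_COLORS	4	/* illustrative: aliasing icache, 4 colors */

/* paddr-only API: every possible alias index must be visited; the tag
 * matches in only one bin, but each probe pays the full op cost */
static void icache_line_inv_paddr_only(unsigned long paddr)
{
	unsigned long offset = paddr & (PAGE_SIZE - 1);
	int color;

	for (color = 0; color < CACHE_COLORS; color++)
		cache_line_inv(paddr, color * PAGE_SIZE + offset);
}

/* with the vaddr available in update_mmu_cache(): one op per line */
static void icache_line_inv(unsigned long paddr, unsigned long vaddr)
{
	cache_line_inv(paddr, vaddr);
}
```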
-
Committed by Noam Camus

Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 09 April 2013, 1 commit
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 16 February 2013, 1 commit
-
-
Committed by Vineet Gupta

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-