- 12 Feb, 2016  1 commit

Authored by Vineet Gupta
MMUv4 supports 2 concurrent page sizes: Normal and Super (4K to 16M).

So far Linux supported a single Super page size for a given Normal page, depending on the software page-walking address split. e.g. an 11:8:13 address split for an 8K page meant the Super page was 2^(8+13) = 2M (given that the THP size has to be 2^PMD_SHIFT).

Now we turn this around by allowing multiple Super page sizes in Kconfig (currently 2M and 16M only) and forcing the page-walker address split to PGDIR_SHIFT and PAGE_SHIFT.

For configs without a Super page, things are the same as before, and PGDIR_SHIFT can be hacked to get a non-default address split.

The motivation for this change is a customer who needs a 16M Super page and 8K Normal page combo.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
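As an illustration of the arithmetic above — a minimal standalone sketch, not code from the patch; the values simply mirror the example's 8K page with an 11:8:13 split:

    #include <stdio.h>

    /* Illustrative values only: an 8K Normal page with an 11:8:13
     * software page-walk split, as in the example above. The real
     * values live in the ARC pgtable headers and Kconfig.
     */
    #define PAGE_SHIFT   13UL                 /* 8K Normal page        */
    #define PMD_SHIFT    (PAGE_SHIFT + 8)     /* 8 bits of PTE index   */
    #define HPAGE_SIZE   (1UL << PMD_SHIFT)   /* Super/THP page size   */

    int main(void)
    {
        printf("Super page size: %lu MB\n", HPAGE_SIZE >> 20);  /* 2 MB */
        return 0;
    }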
- 29 Oct, 2015  1 commit

Authored by Vineet Gupta
This is the first working implementation of 40-bit physical address extension on ARCv2.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- 28 Oct, 2015  2 commits

Authored by Vineet Gupta
Before we plug in highmem support, some of the code needs to be ready for it:
 - copy_user_highpage() needs to be using the kmap_atomic API
 - mk_pte() can't assume page_address()
 - do_page_fault() can't assume VMALLOC_END is the end of the kernel vaddr space

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
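The first bullet amounts to mapping pages through kmap_atomic() instead of relying on page_address(). A minimal sketch of what a highmem-safe page copy looks like — generic shape only, not the actual ARC implementation, which also deals with cache aliasing:

    #include <linux/highmem.h>
    #include <linux/mm.h>

    /* Sketch only: with highmem, a struct page may have no permanent kernel
     * mapping, so page_address() can return NULL; kmap_atomic() creates a
     * temporary mapping instead. Cache-alias handling (which is what the
     * vaddr/vma arguments are for) is omitted here.
     */
    static void example_copy_user_highpage(struct page *to, struct page *from,
                                           unsigned long u_vaddr,
                                           struct vm_area_struct *vma)
    {
        void *kfrom = kmap_atomic(from);
        void *kto   = kmap_atomic(to);

        copy_page(kto, kfrom);

        kunmap_atomic(kto);
        kunmap_atomic(kfrom);
    }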
Authored by Alexey Brodkin
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- 17 Oct, 2015  1 commit

Authored by Vineet Gupta
MMUv4 in HS38x cores supports Super Pages, which are the basis for Linux THP support.

Normal and Super pages can co-exist (of course not overlap) in the TLB, with a new bit "SZ" in the TLB page descriptor to distinguish between them. Super Page size is configurable in hardware (4K to 16M), but fixed once the RTL is built.

The exact THP size a Linux configuration will support is a function of:
 - MMU page size (typically 8K, RTL fixed)
 - software page walker address split between PGD:PTE:PFN (typically 11:8:13, but can be changed with 1 line)

So for the above default, the THP size supported is 8K * 256 = 2M.

The default Page Walker is 2 levels, PGD:PTE:PFN, which in the THP regime reduces to 1 level (as the PTE is folded into the PGD and canonically referred to as the PMD). Thus the THP PMD accessors are implemented in terms of PTE (just like sparc).

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
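With the PTE level folded into the PGD, the THP PMD accessors become thin wrappers over the existing PTE ones. A sketch of that pattern — the exact set of helpers in the ARC header differs; this only shows the shape shared with sparc:

    /* Sketch: a pmd_t carrying a huge mapping is laid out like a pte_t,
     * so THP accessors can simply convert and reuse the PTE helpers.
     */
    static inline pte_t pmd_pte(pmd_t pmd)
    {
        return __pte(pmd_val(pmd));
    }

    static inline pmd_t pte_pmd(pte_t pte)
    {
        return __pmd(pte_val(pte));
    }

    #define pmd_write(pmd)      pte_write(pmd_pte(pmd))
    #define pmd_young(pmd)      pte_young(pmd_pte(pmd))
    #define pmd_dirty(pmd)      pte_dirty(pmd_pte(pmd))
    #define pmd_mkdirty(pmd)    pte_pmd(pte_mkdirty(pmd_pte(pmd)))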
- 09 Oct, 2015  2 commits

Authored by Vineet Gupta
Needed for THP, but will also come in handy for fast GUP later.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Authored by Vineet Gupta
No semantic changes.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- 22 Jun, 2015  1 commit

Authored by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- 13 Feb, 2015  1 commit

Authored by Alexey Brodkin
We used to calculate the page address differently in 2 cases:

1. In virt_to_page(x) we do
     mem_map + (x - CONFIG_LINUX_LINK_BASE) >> PAGE_SHIFT

2. In pte_page(x) we do
     mem_map + (pte_val(x) - PAGE_OFFSET) >> PAGE_SHIFT

That leads to problems if PAGE_OFFSET != CONFIG_LINUX_LINK_BASE - different pages will be selected depending on where and how we calculate the page address.

In particular, in STAR 9000853582, when gdb attempted to read the memory of another process it got an improper page in get_user_pages(), because this is exactly one of the places where we look up a page via pte_page().

The fix is trivial - we need to calculate the page address the same way in both cases.

Cc: <stable@vger.kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
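A minimal sketch of the idea behind the fix — illustrative only, the real patch touches the ARC page.h/pgtable.h macros: route both lookups through one common conversion so they can never disagree.

    /* Sketch only: __addr_to_pfn() is a hypothetical helper name used for
     * illustration; PTE flag bits are ignored here for brevity.
     */
    #define __addr_to_pfn(addr)  (((unsigned long)(addr) - PAGE_OFFSET) >> PAGE_SHIFT)

    #define virt_to_page(kaddr)  (mem_map + __addr_to_pfn(kaddr))
    #define pte_page(pte)        (mem_map + __addr_to_pfn(pte_val(pte)))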
- 12 Feb, 2015  1 commit

Authored by Kirill A. Shutemov
LKP has triggered a compiler warning after my recent patch "mm: account pmd page tables to the process":

  mm/mmap.c: In function 'exit_mmap':
  >> mm/mmap.c:2857:2: warning: right shift count >= width of type [enabled by default]

The code:

  > 2857    WARN_ON(mm_nr_pmds(mm) >
  > 2858            round_up(FIRST_USER_ADDRESS, PUD_SIZE) >> PUD_SHIFT);

In this, on tile, we have FIRST_USER_ADDRESS defined as 0. round_up() has the same type -- int. PUD_SHIFT.

I think the best way to fix it is to define FIRST_USER_ADDRESS as unsigned long. On every arch for consistency.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
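Concretely, the change amounts to giving the constant an unsigned long type in each arch's pgtable.h; the 0UL spelling below is the usual way to do that and is shown here as a sketch rather than a quote of the patch:

    /* Before: plain int 0, so the round_up()/>> expression is int-typed
     * and the shift by PUD_SHIFT can exceed the width of the type.
     */
    /* #define FIRST_USER_ADDRESS   0 */

    /* After: force unsigned long so the shift is always well defined */
    #define FIRST_USER_ADDRESS      0UL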
- 11 Feb, 2015  1 commit

Authored by Kirill A. Shutemov
We've replaced the remap_file_pages(2) implementation with emulation. Nobody creates non-linear mappings anymore.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 30 Aug, 2013  1 commit

Authored by Vineet Gupta
With the previous commit freeing up PTE bits, reassign them so as to:
 - Match the bit to its h/w counterpart where possible (e.g. MMUv2 GLOBAL/PRESENT; this avoids a shift in create_tlb())
 - Avoid holes in the _PAGE_xxx definitions

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
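The benefit of lining the software flags up with the hardware layout is that the TLB-refill path can feed the PTE to the MMU without shifting bits around. An illustrative layout — the bit positions below are made up for the sketch; the real ones live in the ARC pgtable.h:

    /* Illustrative only: software _PAGE_xxx flags sit at the same positions
     * as the MMU's TLB descriptor bits, so create_tlb() needs no shifts,
     * and the definitions are contiguous (no holes).
     */
    #define _PAGE_CACHEABLE   (1 << 0)   /* matches h/w Cached bit     */
    #define _PAGE_EXECUTE     (1 << 1)   /* matches h/w Execute bit    */
    #define _PAGE_WRITE       (1 << 2)   /* matches h/w Write bit      */
    #define _PAGE_READ        (1 << 3)   /* matches h/w Read bit       */
    #define _PAGE_ACCESSED    (1 << 4)   /* software: page referenced  */
    #define _PAGE_DIRTY       (1 << 5)   /* software: page modified    */
    #define _PAGE_GLOBAL      (1 << 8)   /* matches h/w Global bit     */
    #define _PAGE_PRESENT     (1 << 9)   /* matches h/w Valid bit      */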
- 29 Aug, 2013  1 commit

Authored by Vineet Gupta
The current ARC VM code has 13 flags in the Page Table entry: some software (accessed/dirty/non-linear-maps) and the rest hardware specific.

With an 8K MMU page, we need 19 bits for addressing the page frame, so the remaining 13 bits are just about enough to accommodate the current flags.

In MMUv4 there are 2 additional flags, SZ (normal or super page) and WT (cache access mode write-thru) - and additionally the PFN is 20 bits (vs. 19 before for 8K). Thus these can't be held in the current PTE w/o making each entry 64-bit wide.

It seems there is some scope for compressing the current PTE flags (and freeing up a few bits). Currently the PTE contains fully orthogonal, distinct access permissions for kernel and user mode (Kr, Kw, Kx; Ur, Uw, Ux) which can be folded into one set (R, W, X). The translation of 3 PTE bits into 6 TLB bits (when programming the MMU) can be done based on the following prerequisites/assumptions:

1. For kernel-mode-only translations (vmalloc: 0x7000_0000 to 0x7FFF_FFFF), the PTE additionally has the PAGE_GLOBAL flag set (and user space entries can never be global). Thus such a PTE can translate to Kr, Kw, Kx (as appropriate) and zero for the user mode counterparts.

2. For non-global entries, the PTE flags can be used to create mirrored K and U TLB bits. This is true after commit a950549c "ARC: copy_(to|from)_user() to honor usermode-access permissions" which ensured that user-space translations _MUST_ have the same access permissions for both U/K mode accesses, so that copy_{to,from}_user() play fair with fault-based CoW break and such...

There is no such thing as a free lunch - the cost is slightly inflated TLB-Miss handlers.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
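A sketch of that 3-bit to 6-bit expansion, written in C rather than the actual TLB-miss assembly; the bit positions and the pte_to_tlb_perms() helper name are made up for illustration:

    /* Illustrative only: expand the compressed R/W/X PTE permissions into
     * the six Kr/Kw/Kx/Ur/Uw/Ux bits that the TLB entry wants. The real
     * work happens in the ARC TLB-miss handler.
     */
    static unsigned int pte_to_tlb_perms(unsigned long pte_flags)
    {
        unsigned int rwx    = pte_flags & 0x7;   /* compressed R|W|X      */
        unsigned int kernel = rwx << 3;          /* Kr/Kw/Kx              */
        unsigned int user   = rwx;               /* Ur/Uw/Ux              */

        if (pte_flags & _PAGE_GLOBAL)            /* kernel-only (vmalloc) */
            return kernel;                       /* user bits stay zero   */

        return kernel | user;                    /* mirrored K and U bits */
    }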
- 29 Jun, 2013  1 commit

Authored by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
- 22 Jun, 2013  1 commit

Authored by Vineet Gupta
* Move the various sub-system defines/types into relevant files/functions (reduces compilation time)
* Move CPU specific stuff out of asm/tlb.h into asm/mmu.h

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- 23 May, 2013  1 commit

Authored by Vineet Gupta
This manifested as grep failing pseudo-randomly:

    [ARCLinux]$ ip address show lo | grep inet
    [ARCLinux]$ ip address show lo | grep inet
    [ARCLinux]$ ip address show lo | grep inet
    [ARCLinux]$
    [ARCLinux]$ ip address show lo | grep inet
        inet 127.0.0.1/8 scope host lo

The ARC700 MMU provides fully orthogonal permission bits per page: Ur, Uw, Ux, Kr, Kw, Kx.

The user mode page permission templates used to have all kernel mode access bits enabled. This caused a tricky race condition observed with uClibc buffered file read and UNIX pipes.

1. Read access to an anon mapped page in libc .bss: write-protected zero_page mapped; TLB entry installed with Ur + K[rwx].

2. grep calls libc:getc() -> buffered read layer calls read(2) with the internal read buffer in the same .bss page. The read() call is on STDIN which has been redirected to a pipe: read(2) => sys_read() => pipe_read() => copy_to_user()

3. Since the page has kernel-write permission (despite being user-mode write-protected), copy_to_user() succeeds w/o taking an MMU TLB-Miss Exception (page-fault for ARC). Core-MM is unaware that the kernel erroneously wrote to the reserved read-only zero-page (BUG #1).

4. Control returns to userspace which now does a write to the same .bss page. Since Linux MM is not aware that the page has been modified by the kernel, it simply reassigns a new writable zero-init page to the mapping, losing the prior write by the kernel - effectively zeroing out the libc read buffer under the hood - hence grep doesn't see the right data (BUG #2).

The fix is to make all kernel-mode access permissions mirror the user-mode ones. Note that the kernel still has full access to pages when accessed directly (w/o MMU) - this fix ensures that kernel-mode access in the copy_to_from() path uses the same faulting access model as for pure user accesses, to keep MM fully aware of page state.

The issue is pseudo-random because it only shows up if the TLB entry installed in #1 is present at the time of #3. If it is evicted out, due to TLB pressure or some-such, then copy_to_user() does take a TLB Miss Exception, with routine write-to-anon CoW processing installing a fresh page for kernel writes which is also usable as-is in userspace.

Further, the issue was dormant for so long because it depends on where the libc internal read buffer (in .bss) is mapped at runtime. If it happens to reside in the file-backed data mapping of libc (in the page-aligned slack space trailing the file-backed data), the loader zero-padding the slack space does the early CoW page replacement, setting things up at the very beginning itself. With gcc 4.8 based builds, the libc buffer got pushed out to a real anon mapping, which triggers the issue.

Reported-by: Anton Kolesov <akolesov@synopsys.com>
Cc: <stable@vger.kernel.org> # 3.9
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- 10 May, 2013  1 commit

Authored by Vineet Gupta
Enforce congruency of userspace shared mappings.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- 16 Feb, 2013  2 commits

Authored by Vineet Gupta
ARC common code to enable an SMP system + ISS-provided SMP extensions.

ARC700 natively lacks SMP support, hence some of the core features are only enabled if SoCs have the necessary h/w pixie-dust. This includes:
 - Inter Processor Interrupts (IPI)
 - Cache coherency
 - load-locked/store-conditional
 ...

The low level exception handling would be completely broken in SMP because we don't have hardware assisted stack switching. Thus a fair bit of this code is repurposing the MMU_SCRATCH reg for event handler prologues to keep them re-entrant.

Many thanks to Rajeshwar Ranga for his initial "major" contributions to the SMP port (back in 2008), and to Noam Camus and Gilad Ben-Yossef for help with resurrecting that in the 3.2 kernel (2012).

Note that this platform code is again a singleton design pattern - so multiple SMP platforms won't build at the moment - this deficiency is addressed in subsequent patches within this series.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rajeshwar Ranga <rajeshwar.ranga@gmail.com>
Cc: Noam Camus <noamc@ezchip.com>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Authored by Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>