1. 29 December 2018, 1 commit
  2. 10 June 2016, 1 commit
    • ARM: 8578/1: mm: ensure pmd_present only checks the valid bit · 62453188
      By Will Deacon
      In a subsequent patch, pmd_mknotpresent will clear the valid bit of the
      pmd entry, resulting in a not-present entry from the hardware's
      perspective. Unfortunately, pmd_present simply checks for a non-zero pmd
      value and will therefore continue to return true even after a
      pmd_mknotpresent operation. Since pmd_mknotpresent is only used for
      managing huge entries, this is only an issue for the 3-level case.
      
      This patch fixes the 3-level pmd_present implementation to take into
      account the valid bit. For bisectability, the change is made before the
      fix to pmd_mknotpresent.
      
      [catalin.marinas@arm.com: comment update regarding pmd_mknotpresent patch]
      
      Fixes: 8d962507 ("ARM: mm: Transparent huge page support for LPAE systems.")
      Cc: <stable@vger.kernel.org> # 3.11+
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Steve Capper <Steve.Capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
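      A minimal standalone sketch of the behaviour change described above
      (not the actual kernel diff; the descriptor layout is simplified and
      all names here are illustrative):

        #include <stdint.h>
        #include <stdio.h>

        typedef uint64_t pmdval_t;              /* LPAE descriptors are 64-bit */
        #define PMD_SECT_VALID ((pmdval_t)1)    /* bit 0: hardware valid bit   */

        /* Old check: any non-zero pmd counts as present. */
        static int pmd_present_old(pmdval_t pmd) { return pmd != 0; }

        /* Fixed check: only the valid bit decides presence. */
        static int pmd_present_new(pmdval_t pmd) { return !!(pmd & PMD_SECT_VALID); }

        int main(void)
        {
            pmdval_t huge = 0x40000000000fd1ULL;      /* hypothetical huge-page pmd */
            pmdval_t inv  = huge & ~PMD_SECT_VALID;   /* after pmd_mknotpresent()   */

            /* old reports 1 (stale "present"), new reports 0 (correct) */
            printf("old=%d new=%d\n", pmd_present_old(inv), pmd_present_new(inv));
            return 0;
        }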
  3. 04 July 2015, 1 commit
  4. 12 February 2015, 1 commit
  5. 11 February 2015, 1 commit
  6. 10 October 2014, 1 commit
    • arm: mm: introduce special ptes for LPAE · bd951303
      By Steve Capper
      We need a mechanism to tag ptes as special; this indicates that no
      attempt should be made to access the underlying struct page * associated
      with the pte.  It is used by fast_gup when operating on ptes, as
      fast_gup has no means to access VMAs (which also contain this
      information) locklessly.
      
      The L_PTE_SPECIAL bit is already allocated for LPAE, this patch modifies
      pte_special and pte_mkspecial to make use of it, and defines
      __HAVE_ARCH_PTE_SPECIAL.
      
      This patch also excludes special ptes from the icache/dcache sync logic.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dann Frazier <dann.frazier@canonical.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
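      A compilable sketch of the accessors this patch wires up (kernel
      style but self-contained; the software bit position chosen here is
      illustrative, not necessarily the one pgtable-3level.h reserves):

        #include <stdint.h>

        typedef uint64_t pteval_t;                 /* LPAE ptes are 64-bit */
        typedef struct { pteval_t pte; } pte_t;    /* kernel-style wrapper */
        #define pte_val(p)      ((p).pte)

        #define L_PTE_SPECIAL   ((pteval_t)1 << 56)  /* spare software bit */

        /* fast_gup tests this and, when set, never derives a struct page * */
        static inline int pte_special(pte_t pte)
        {
            return !!(pte_val(pte) & L_PTE_SPECIAL);
        }

        static inline pte_t pte_mkspecial(pte_t pte)
        {
            pte_val(pte) |= L_PTE_SPECIAL;
            return pte;
        }

        #define __HAVE_ARCH_PTE_SPECIAL   /* advertise the helpers to core mm */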
  7. 10 February 2014, 1 commit
    • ARM: 7954/1: mm: remove remaining domain support from ARMv6 · b6ccb980
      By Will Deacon
      CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is
      because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do
      not have hardware thread registers. The lack of these registers requires
      the kernel to update the vectors page at each context switch in order to
      write a new TLS pointer. This write must be done via the userspace
      mapping, since aliasing caches can lead to expensive flushing when using
      kmap. Finally, this requires the vectors page to be mapped r/w for
      kernel and r/o for user, which has implications for things like put_user
      which must trigger CoW appropriately when targeting user pages.
      
      The upshot of all this is that a v6/v7 kernel makes use of domains to
      segregate kernel and user memory accesses. This has the nasty
      side-effect of making device mappings executable, which has been
      observed to cause subtle bugs on recent cores (e.g. Cortex-A15
      performing a speculative instruction fetch from the GIC and acking an
      interrupt in the process).
      
      This patch solves this problem by removing the remaining domain support
      from ARMv6. A new memory type is added specifically for the vectors page
      which allows that page (and only that page) to be mapped as user r/o,
      kernel r/w. All other user r/o pages are mapped also as kernel r/o.
      Patch co-developed with Russell King.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
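      For context, the ARMv6 short-descriptor access-permission encodings
      (APX:AP) that make such a mapping possible; the encodings follow the
      ARM ARM, while the row comments relating them to this patch are
      editorial:

        #include <stdio.h>

        struct ap_encoding { unsigned apx, ap; const char *kernel, *user; };

        static const struct ap_encoding ap_table[] = {
            { 0, 1, "r/w", "none" },   /* ordinary kernel memory          */
            { 0, 2, "r/w", "r/o"  },   /* the new vectors-page mapping    */
            { 0, 3, "r/w", "r/w"  },   /* ordinary user memory            */
            { 1, 2, "r/o", "r/o"  },   /* other user r/o pages after this */
        };

        int main(void)
        {
            for (unsigned i = 0; i < sizeof ap_table / sizeof *ap_table; i++)
                printf("APX=%u AP=%u%u -> kernel %-4s user %s\n",
                       ap_table[i].apx, ap_table[i].ap >> 1, ap_table[i].ap & 1,
                       ap_table[i].kernel, ap_table[i].user);
            return 0;
        }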
  8. 11 December 2013, 1 commit
  9. 29 October 2013, 1 commit
    • ARM: 7858/1: mm: make UACCESS_WITH_MEMCPY huge page aware · a3a9ea65
      By Steven Capper
      The memory pinning code in uaccess_with_memcpy.c does not check
      for HugeTLB or THP pmds, and will enter an infinite loop should
      a __copy_to_user or __clear_user occur against a huge page.
      
      This patch adds detection code for huge pages to pin_page_for_write.
      As this code can be executed in a fast path it refers to the actual
      pmds rather than the vma. If a HugeTLB or THP is found (they have
      the same pmd representation on ARM), the page table spinlock is
      taken to prevent modification whilst the page is pinned.
      
      On ARM, huge pages are only represented as pmds, thus no huge pud
      checks are performed. (For huge puds one would lock the page table
      in a similar manner as in the pmd case).
      
      Two helper functions are introduced; pmd_thp_or_huge will check
      whether or not a page is huge or transparent huge (which have the
      same pmd layout on ARM), and pmd_hugewillfault will detect whether
      or not a page fault will occur on write to the page.
      
      Running the following test (with the chunking from read_zero
      removed):
       $ dd if=/dev/zero of=/dev/null bs=10M count=1024
      Gave:  2.3 GB/s backed by normal pages,
             2.9 GB/s backed by huge pages,
             5.1 GB/s backed by huge pages, with page mask=HPAGE_MASK.
      
      After some discussion, it was decided not to adopt the HPAGE_MASK,
      as this would have a significant detrimental effect on the overall
      system latency due to page_table_lock being held for too long.
      This could be revisited if split huge page locks are adopted.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
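      A simplified, self-contained model of the two helpers (bit positions
      are hypothetical; the point is the logic, not the exact layout):

        #include <stdbool.h>
        #include <stdint.h>

        typedef uint64_t pmdval_t;

        #define PMD_SECT_VALID    ((pmdval_t)1 << 0)   /* hardware valid bit    */
        #define PMD_TABLE_BIT     ((pmdval_t)1 << 1)   /* set => points to a pt */
        #define PMD_SECT_RDONLY   ((pmdval_t)1 << 7)   /* illustrative          */
        #define PMD_SECT_YOUNG    ((pmdval_t)1 << 10)  /* illustrative          */

        /* HugeTLB and THP share the block-pmd representation on ARM: a
         * valid entry that does not point at a next-level page table. */
        static bool pmd_thp_or_huge(pmdval_t pmd)
        {
            return (pmd & PMD_SECT_VALID) && !(pmd & PMD_TABLE_BIT);
        }

        /* A write faults if the page is old or read-only, so the pinning
         * fast path must bail out to the normal copy path. */
        static bool pmd_hugewillfault(pmdval_t pmd)
        {
            return !(pmd & PMD_SECT_YOUNG) || (pmd & PMD_SECT_RDONLY);
        }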
  10. 09 November 2012, 2 commits
    • ARM: mm: introduce present, faulting entries for PAGE_NONE · 26ffd0d4
      By Will Deacon
      PROT_NONE mappings apply the page protection attributes defined by _P000
      which translate to PAGE_NONE for ARM. These attributes specify an XN,
      RDONLY pte that is inaccessible to userspace. However, on kernels
      configured without support for domains, such a pte *is* accessible to
      the kernel and can be read via get_user, allowing tasks to read
      PROT_NONE pages via syscalls such as read/write over a pipe.
      
      This patch introduces a new software pte flag, L_PTE_NONE, that is set
      to identify faulting, present entries.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
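      A sketch of the mechanism (bit positions illustrative): the software
      pte stays "present" as far as core mm is concerned, while the
      L_PTE_NONE marker makes the low-level code clear the hardware valid
      bit when the entry is written out:

        typedef unsigned long long pteval_t;

        #define L_PTE_VALID  ((pteval_t)1 << 0)    /* hardware valid bit  */
        #define L_PTE_NONE   ((pteval_t)1 << 57)   /* software: PROT_NONE */

        /* model of the fixup applied when a pte reaches the hardware table */
        static pteval_t hw_pte(pteval_t sw)
        {
            if (sw & L_PTE_NONE)
                sw &= ~L_PTE_VALID;   /* present to Linux, faults in hardware */
            return sw;
        }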
    • ARM: mm: introduce L_PTE_VALID for page table entries · dbf62d50
      By Will Deacon
      For long-descriptor translation table formats, the ARMv7 architecture
      defines the last two bits of the second- and third-level descriptors to
      be:
      
      	x0b	- Invalid
      	01b	- Block (second-level), Reserved (third-level)
      	11b	- Table (second-level), Page (third-level)
      
      This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
      create ptes directly. However, when determining whether a given pte
      value is present in the low-level page table accessors, we only need to
      check the least significant bit of the descriptor, allowing us to write
      faulting, present entries which are required for PROT_NONE mappings.
      
      This patch introduces L_PTE_VALID, which can be used to test whether a
      pte should fault, and updates the low-level page table accessors
      accordingly.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
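      In macro form, the distinction this patch draws (a sketch that
      follows the encoding table above):

        typedef unsigned long long pteval_t;

        #define L_PTE_VALID     ((pteval_t)1 << 0)  /* hardware: fault if clear */
        #define L_PTE_PRESENT   ((pteval_t)3 << 0)  /* 11b: a page descriptor   */

        /* Presence only needs the least significant bit, so clearing just
         * that bit yields a faulting-but-present entry for PROT_NONE. */
        #define pte_valid(pteval)  (!!((pteval) & L_PTE_VALID))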
  11. 08 December 2011, 1 commit
  12. 06 October 2011, 1 commit