1. 10 Feb 2014, 1 commit
    • ARM: 7954/1: mm: remove remaining domain support from ARMv6 · b6ccb980
      Committed by Will Deacon
      CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is
      because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do
      not have hardware thread registers. The lack of these registers requires
      the kernel to update the vectors page at each context switch in order to
      write a new TLS pointer. This write must be done via the userspace
      mapping, since aliasing caches can lead to expensive flushing when using
      kmap. Finally, this requires the vectors page to be mapped r/w for
      kernel and r/o for user, which has implications for things like put_user
      which must trigger CoW appropriately when targeting user pages.
      
      The upshot of all this is that a v6/v7 kernel makes use of domains to
      segregate kernel and user memory accesses. This has the nasty
      side-effect of making device mappings executable, which has been
      observed to cause subtle bugs on recent cores (e.g. Cortex-A15
      performing a speculative instruction fetch from the GIC and acking an
      interrupt in the process).
      
      This patch solves this problem by removing the remaining domain support
      from ARMv6. A new memory type is added specifically for the vectors page
      which allows that page (and only that page) to be mapped as user r/o,
      kernel r/w. All other user r/o pages are also mapped as kernel r/o.
      Patch co-developed with Russell King.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
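      For context, a minimal sketch of the ARM short-descriptor access
      permissions that make a "user r/o, kernel r/w" mapping possible
      (table per the ARMv7 VMSA with SCTLR.AFE=0; the macro name below is
      illustrative, not taken from the patch):

        /*
         * Short-descriptor AP[2], AP[1:0] access permissions (AFE = 0):
         *
         *   AP[2] AP[1:0]   PL1 (kernel)   PL0 (user)
         *     0     00      no access      no access
         *     0     01      read/write     no access
         *     0     10      read/write     read-only   <-- vectors page
         *     0     11      read/write     read/write
         *     1     01      read-only      no access
         *     1     11      read-only      read-only
         *
         * The "0 10" row is what the vectors page needs: the kernel can
         * rewrite the TLS slot at context switch while userspace may
         * only read it, with no domain trickery required.
         */
        #define PTE_AP_KERNEL_RW_USER_RO  0x2u  /* AP[2]=0, AP[1:0]=10 */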
  2. 11 Dec 2013, 1 commit
  3. 29 Oct 2013, 1 commit
    • ARM: 7858/1: mm: make UACCESS_WITH_MEMCPY huge page aware · a3a9ea65
      Committed by Steven Capper
      The memory pinning code in uaccess_with_memcpy.c does not check
      for HugeTLB or THP pmds, and will enter an infinite loop should
      a __copy_to_user or __clear_user occur against a huge page.
      
      This patch adds detection code for huge pages to pin_page_for_write.
      As this code can be executed in a fast path, it refers to the actual
      pmds rather than the vma. If a HugeTLB or THP pmd is found (they have
      the same pmd representation on ARM), the page table spinlock is
      taken to prevent modification whilst the page is pinned.
      
      On ARM, huge pages are only represented at the pmd level, so no huge
      pud checks are performed. (For huge puds, one would lock the page
      table in a similar manner to the pmd case.)
      
      Two helper functions are introduced: pmd_thp_or_huge checks whether
      a page is a HugeTLB or transparent huge page (the two share the
      same pmd layout on ARM), and pmd_hugewillfault detects whether a
      write to the page will fault.
      
      Running the following test (with the chunking from read_zero
      removed):
       $ dd if=/dev/zero of=/dev/null bs=10M count=1024
      Gave:  2.3 GB/s backed by normal pages,
             2.9 GB/s backed by huge pages,
             5.1 GB/s backed by huge pages, with page mask=HPAGE_MASK.
      
      After some discussion, it was decided not to adopt the HPAGE_MASK,
      as this would have a significant detrimental effect on the overall
      system latency due to page_table_lock being held for too long.
      This could be revisited if split huge page locks are adopted.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
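      A minimal sketch of the shape of the check added to
      pin_page_for_write (paraphrased from the description above, not
      quoted from the diff; the surrounding pgd/pud walk and the normal
      pte path are elided):

        pmd_t *pmd = pmd_offset(pud, addr);   /* pud from the usual walk */

        if (unlikely(pmd_thp_or_huge(*pmd))) {
                spinlock_t *ptl = &current->mm->page_table_lock;

                spin_lock(ptl);
                /* Re-check under the lock: fall back to the faulting
                 * slow path if the entry changed or a write would fault. */
                if (unlikely(!pmd_thp_or_huge(*pmd) ||
                             pmd_hugewillfault(*pmd))) {
                        spin_unlock(ptl);
                        return 0;
                }
                *ptep = NULL;   /* no pte level for a huge mapping */
                *ptlp = ptl;    /* caller unlocks once the copy is done */
                return 1;
        }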
  4. 09 Nov 2012, 2 commits
    • ARM: mm: introduce present, faulting entries for PAGE_NONE · 26ffd0d4
      Committed by Will Deacon
      PROT_NONE mappings apply the page protection attributes defined by _P000,
      which translate to PAGE_NONE for ARM. These attributes specify an XN,
      RDONLY pte that is inaccessible to userspace. However, on kernels
      configured without support for domains, such a pte *is* accessible to
      the kernel and can be read via get_user, allowing tasks to read
      PROT_NONE pages via syscalls such as read/write over a pipe.
      
      This patch introduces a new software pte flag, L_PTE_NONE, that is set
      to identify faulting, present entries.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
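      A minimal sketch of the mechanism (the bit position and behaviour
      shown here are illustrative, not the exact definitions):

        /* Software flag in the Linux view of the pte. */
        #define L_PTE_NONE      (_AT(pteval_t, 1) << 11)

        /*
         * When the hardware pte is written (set_pte_ext), an entry with
         * L_PTE_NONE set is emitted with its hardware valid encoding
         * cleared, so any access -- a userspace load/store or a kernel
         * get_user()/put_user() -- faults, while the Linux pte still
         * records that a mapping exists.
         */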
    • ARM: mm: introduce L_PTE_VALID for page table entries · dbf62d50
      Committed by Will Deacon
      For long-descriptor translation table formats, the ARMv7 architecture
      defines the last two bits of the second- and third-level descriptors to
      be:
      
      	x0b	- Invalid
      	01b	- Block (second-level), Reserved (third-level)
      	11b	- Table (second-level), Page (third-level)
      
      This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
      create ptes directly. However, when determining whether a given pte
      value is present in the low-level page table accessors, we only need to
      check the least significant bit of the descriptor, allowing us to write
      faulting, present entries which are required for PROT_NONE mappings.
      
      This patch introduces L_PTE_VALID, which can be used to test whether a
      pte should fault, and updates the low-level page table accessors
      accordingly.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
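      A sketch of the resulting split, consistent with the encodings
      quoted above (macro bodies assumed rather than quoted from the
      diff):

        #define L_PTE_VALID     (_AT(pteval_t, 1) << 0)  /* x1b: walked by hardware */
        #define L_PTE_PRESENT   (_AT(pteval_t, 3) << 0)  /* 11b: page/table entry   */

        /*
         * "Is a mapping recorded here?" and "will an access fault?" are
         * now separate questions: a faulting, present entry keeps bit 1
         * set (present to Linux) with bit 0 clear (invalid to the
         * hardware walker), as needed for PROT_NONE.
         */
        #define pte_present(pte)        (pte_val(pte) & L_PTE_PRESENT)
        #define pte_valid(pte)          (pte_val(pte) & L_PTE_VALID)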
  5. 08 Dec 2011, 1 commit
  6. 06 Oct 2011, 1 commit