1. 08 Jan 2018 (1 commit)
  2. 09 Sep 2016 (1 commit)
    • arm64: simplify sysreg manipulation · adf75899
      Mark Rutland authored
      A while back we added {read,write}_sysreg accessors to handle accesses
      to system registers, without the usual boilerplate asm volatile,
      temporary variable, etc.
      
      This patch makes use of these across arm64 to make code shorter and
      clearer. For sequences with a trailing ISB, the existing isb() macro is
      also used so that asm blocks can be removed entirely.
      
      A few uses of inline assembly for msr/mrs are left as-is. Those
      manipulating sp_el0 for the current thread_info value have special
      clobber requirements.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      adf75899
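      The accessor pattern this commit applies can be shown with a small,
      freestanding sketch (AArch64 GCC/Clang assumed). The macro bodies below
      approximate the kernel's read_sysreg()/write_sysreg()/isb() helpers
      rather than quoting them, and the tcr_el1 usage example is illustrative
      only.

      #include <stdint.h>

      /* Simplified stand-ins for the arm64 <asm/sysreg.h> helpers. */
      #define read_sysreg(r) ({                                       \
              uint64_t __val;                                         \
              asm volatile("mrs %0, " #r : "=r" (__val));             \
              __val;                                                  \
      })

      #define write_sysreg(v, r) do {                                 \
              uint64_t __val = (uint64_t)(v);                         \
              asm volatile("msr " #r ", %0" : : "r" (__val));         \
      } while (0)

      #define isb()   asm volatile("isb" : : : "memory")

      /* Before: open-coded asm, temporary variable, explicit ISB. */
      static inline void set_tcr_bit_old(uint64_t bit)
      {
              uint64_t reg;

              asm volatile("mrs %0, tcr_el1" : "=r" (reg));
              reg |= bit;
              asm volatile("msr tcr_el1, %0" : : "r" (reg));
              asm volatile("isb" : : : "memory");
      }

      /* After: the same sequence via the accessors and the isb() macro. */
      static inline void set_tcr_bit_new(uint64_t bit)
      {
              write_sysreg(read_sysreg(tcr_el1) | bit, tcr_el1);
              isb();
      }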
  3. 29 Jun 2016 (1 commit)
  4. 06 May 2016 (1 commit)
    • arm64: Ensure pmd_present() returns false after pmd_mknotpresent() · 5bb1cc0f
      Catalin Marinas authored
      Currently, pmd_present() only checks for a non-zero value, returning
      true even after pmd_mknotpresent() (which only clears the type bits).
      This patch converts pmd_present() to using pte_present(), similar to the
      other pmd_*() checks. As a side effect, it will return true for
      PROT_NONE mappings, though they are not yet used by the kernel with
      transparent huge pages.
      
      For consistency, also change pmd_mknotpresent() to only clear the
      PMD_SECT_VALID bit, even though the PMD_TABLE_BIT is already 0 for block
      mappings (no functional change). The unused PMD_SECT_PROT_NONE
      definition is removed as transparent huge pages use the pte page prot
      values.
      
      Fixes: 9c7e535f ("arm64: mm: Route pmd thp functions through pte equivalents")
      Cc: <stable@vger.kernel.org> # 3.15+
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5bb1cc0f
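      The before/after behaviour can be modelled outside the kernel with a
      short self-contained program. The bit positions follow the arm64
      descriptor layout, but the macro names and the test in main() are
      illustrative, not the in-tree definitions.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative arm64-style descriptor bits (not the kernel headers). */
      #define PTE_VALID       (1ULL << 0)
      #define PTE_PROT_NONE   (1ULL << 58)    /* software bit */
      #define PMD_SECT_VALID  (1ULL << 0)
      #define PMD_TYPE_MASK   (3ULL << 0)

      typedef uint64_t pmd_t;

      /* Old behaviour: "present" meant "any bit set". */
      static bool pmd_present_old(pmd_t pmd) { return pmd != 0; }

      /* New behaviour: route through a pte_present()-style check. */
      static bool pmd_present_new(pmd_t pmd)
      {
              return pmd & (PTE_VALID | PTE_PROT_NONE);
      }

      /* Old: clear both type bits; new: clear only the valid bit. */
      static pmd_t pmd_mknotpresent_old(pmd_t pmd) { return pmd & ~PMD_TYPE_MASK; }
      static pmd_t pmd_mknotpresent_new(pmd_t pmd) { return pmd & ~PMD_SECT_VALID; }

      int main(void)
      {
              /* A block (huge page) mapping: valid bit plus an output address. */
              pmd_t pmd = PTE_VALID | 0x40200000ULL;

              printf("old: present after mknotpresent = %d (the bug)\n",
                     pmd_present_old(pmd_mknotpresent_old(pmd)));
              printf("new: present after mknotpresent = %d\n",
                     pmd_present_new(pmd_mknotpresent_new(pmd)));
              return 0;
      }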
  5. 21 Apr 2016 (1 commit)
  6. 22 Dec 2015 (1 commit)
    • arm64: hugetlb: add support for PTE contiguous bit · 66b3923a
      David Woods authored
      The arm64 MMU supports a Contiguous bit which is a hint that the TTE
      is one of a set of contiguous entries which can be cached in a single
      TLB entry.  Supporting this bit adds new intermediate huge page sizes.
      
      The set of huge page sizes available depends on the base page size.
      Without using contiguous pages the huge page sizes are as follows.
      
       4KB:   2MB  1GB
      64KB: 512MB
      
      With a 4KB granule, the contiguous bit groups together sets of 16 pages
      and with a 64KB granule it groups sets of 32 pages.  This enables two new
      huge page sizes in each case, so that the full set of available sizes
      is as follows.
      
       4KB:  64KB   2MB  32MB  1GB
      64KB:   2MB 512MB  16GB
      
      If a 16KB granule is used then the contiguous bit groups 128 pages
      at the PTE level and 32 pages at the PMD level.
      
      If the base page size is set to 64KB then 2MB pages are enabled by
      default.  It is possible in the future to make 2MB the default huge
      page size for both 4KB and 64KB granules.
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Reviewed-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: David Woods <dwoods@ezchip.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      66b3923a
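      The size table above follows from two inputs per granule: the PMD block
      size (a page table holds granule/8 entries, so one PMD block covers
      granule²/8 bytes) and the contiguous-entry counts quoted in the commit
      message. A short stand-alone program checks the arithmetic; the helper
      names are illustrative.

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      /* A PMD block maps (granule / 8) PTEs, i.e. (granule / 8) * granule bytes. */
      static uint64_t pmd_block_size(uint64_t granule)
      {
              return (granule / 8) * granule;
      }

      static void show(uint64_t granule, unsigned cont_ptes, unsigned cont_pmds)
      {
              uint64_t pte_cont = cont_ptes * granule;       /* contiguous PTEs  */
              uint64_t pmd      = pmd_block_size(granule);   /* single PMD block */
              uint64_t pmd_cont = cont_pmds * pmd;           /* contiguous PMDs  */

              printf("%2" PRIu64 "KB granule: PTE-cont %5" PRIu64 "KB,"
                     " PMD %4" PRIu64 "MB, PMD-cont %6" PRIu64 "MB\n",
                     granule >> 10, pte_cont >> 10, pmd >> 20, pmd_cont >> 20);
      }

      int main(void)
      {
              /* Contiguous-entry counts per granule, as described above. */
              show(4ULL << 10, 16, 16);       /* 64KB, 2MB, 32MB  */
              show(16ULL << 10, 128, 32);     /* 2MB, 32MB, 1GB   */
              show(64ULL << 10, 32, 32);      /* 2MB, 512MB, 16GB */
              return 0;
      }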
  7. 20 Oct 2015 (1 commit)
  8. 09 Oct 2015 (1 commit)
  9. 27 Jul 2015 (1 commit)
    • arm64: Add support for hardware updates of the access and dirty pte bits · 2f4b829c
      Catalin Marinas authored
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit
      cleared in the page table, instead of raising an access flag fault the
      CPU sets the actual page table entry bit. To ensure that kernel
      modifications to the page tables do not inadvertently revert a change
      introduced by hardware updates, the exclusive monitor (ldxr/stxr) is
      adopted in the pte accessors.
      
      When TCR_EL1.HD is enabled, a write access to a memory location with the
      DBM (Dirty Bit Management) bit set in the corresponding pte
      automatically clears the read-only bit (AP[2]). The DBM bit maps onto
      the Linux PTE_WRITE bit; to check whether a writable (DBM set) page
      is dirty, the kernel tests the PTE_RDONLY bit. In order to allow
      read-only and dirty pages, the kernel needs to preserve the software
      dirty bit. The hardware dirty status is transferred to the software
      dirty bit in ptep_set_wrprotect() (using load/store exclusive loop) and
      pte_modify().
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2f4b829c
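      The race that the exclusive-monitor loop closes can be sketched in
      portable C, with a compare-and-swap standing in for ldxr/stxr. The bit
      assignments and the helper name below are illustrative; this is a model
      of the idea, not the kernel's ptep_set_wrprotect().

      #include <stdbool.h>
      #include <stdint.h>

      /* Illustrative pte bits (the real values live in pgtable-hwdef.h). */
      #define PTE_RDONLY      (1ULL << 7)     /* AP[2], cleared by hw on write when DBM is set */
      #define PTE_WRITE       (1ULL << 51)    /* maps onto the hardware DBM bit */
      #define PTE_DIRTY       (1ULL << 55)    /* software dirty bit */

      typedef uint64_t pteval_t;

      /* Write-protect a pte without losing a concurrent hardware AF/DBM
       * update: re-read and retry if the pte changed under us. */
      static void pte_wrprotect_atomic(pteval_t *ptep)
      {
              pteval_t old = __atomic_load_n(ptep, __ATOMIC_RELAXED);
              pteval_t new;

              do {
                      new = old;
                      /* Writable (DBM set) and not read-only means the CPU
                       * marked it dirty; preserve that in the software bit. */
                      if ((old & PTE_WRITE) && !(old & PTE_RDONLY))
                              new |= PTE_DIRTY;
                      new |= PTE_RDONLY;      /* make the page read-only */
                      new &= ~PTE_WRITE;      /* clear DBM as well       */
              } while (!__atomic_compare_exchange_n(ptep, &old, new, false,
                                                    __ATOMIC_RELAXED,
                                                    __ATOMIC_RELAXED));
      }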
  10. 15 Apr 2015 (1 commit)
  11. 23 Mar 2015 (1 commit)
  12. 16 Jan 2015 (1 commit)
  13. 23 Jul 2014 (4 commits)
  14. 09 May 2014 (1 commit)
  15. 03 Apr 2014 (1 commit)
  16. 13 Mar 2014 (1 commit)
  17. 07 Dec 2013 (1 commit)
  18. 18 Oct 2013 (1 commit)
  19. 03 Sep 2013 (1 commit)
  20. 14 Jun 2013 (2 commits)
  21. 07 Jun 2013 (1 commit)
  22. 16 Nov 2012 (1 commit)
  23. 17 Sep 2012 (1 commit)