1. 28 Jul 2015 (1 commit)
  2. 27 Jul 2015 (3 commits)
    • arm64: force CONFIG_SMP=y and remove redundant #ifdefs · 4b3dc967
      Authored by Will Deacon
      Nobody seems to be producing !SMP systems anymore, so this is just
      becoming a source of kernel bugs, particularly if people want to use
      coherent DMA with non-shared pages.
      
      This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
      code in the process.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Add support for hardware updates of the access and dirty pte bits · 2f4b829c
      Authored by Catalin Marinas
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit
      cleared in the page table, instead of raising an access flag fault the
      CPU sets the actual page table entry bit. To ensure that kernel
      modifications to the page tables do not inadvertently revert a change
      introduced by hardware updates, the exclusive monitor (ldxr/stxr) is
      adopted in the pte accessors.
      
      When TCR_EL1.HD is enabled, a write access to a memory location whose pte
      has the DBM (Dirty Bit Management) bit set automatically clears the
      read-only bit (AP[2]). The DBM bit maps onto the Linux PTE_WRITE bit, so
      to check whether a writable (DBM set) page is dirty, the kernel tests the
      PTE_RDONLY bit. To allow pages that are read-only yet dirty, the kernel
      needs to preserve the software dirty bit. The hardware dirty status is
      transferred to the software dirty bit in ptep_set_wrprotect() (using a
      load/store exclusive loop) and in pte_modify().
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
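The hardware-dirty-to-software-dirty transfer described above can be sketched in plain C. This is a userspace simulation, not the kernel's code: the bit positions, the name `pte_wrprotect_sketch`, and the omission of the real ldxr/stxr exclusive loop (which guards against racing the hardware update) are all simplifications for illustration.

```c
#include <stdint.h>

/* Hypothetical bit layout for illustration only; the real arm64
 * positions live in arch/arm64/include/asm/pgtable-hwdef.h. */
#define PTE_WRITE  (1ULL << 57)  /* software bit; maps onto hardware DBM */
#define PTE_DIRTY  (1ULL << 55)  /* software dirty bit */
#define PTE_RDONLY (1ULL << 7)   /* hardware AP[2]; cleared by HW on write */

typedef uint64_t pte_t;

/* Sketch of the idea in ptep_set_wrprotect(): before marking the entry
 * read-only, transfer the hardware dirty status (PTE_WRITE set and
 * PTE_RDONLY cleared by the CPU) into the software dirty bit so that
 * the dirty information is not lost. */
pte_t pte_wrprotect_sketch(pte_t pte)
{
    if ((pte & PTE_WRITE) && !(pte & PTE_RDONLY))
        pte |= PTE_DIRTY;       /* hardware-dirty -> software dirty */
    pte &= ~PTE_WRITE;          /* drop DBM/write permission */
    pte |= PTE_RDONLY;          /* make the entry read-only */
    return pte;
}
```

In the real accessors this read-modify-write runs inside an exclusive-monitor loop so a concurrent hardware update of AP[2] cannot be silently overwritten.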
    • arm64: move update_mmu_cache() into asm/pgtable.h · cba3574f
      Authored by Will Deacon
      Mark Brown reported an allnoconfig build failure in -next:
      
        Today's linux-next fails to build an arm64 allnoconfig due to "mm:
        make GUP handle pfn mapping unless FOLL_GET is requested" which
        causes:
      
        >       arm64-allnoconfig
        > ../mm/gup.c:51:4: error: implicit declaration of function
          'update_mmu_cache' [-Werror=implicit-function-declaration]
      
      Fix the error by moving the function to asm/pgtable.h, as is the case
      for most other architectures.
      Reported-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 15 Apr 2015 (1 commit)
  4. 27 Feb 2015 (1 commit)
    • arm64: enable PTE type bit in the mask for pte_modify · 6910fa16
      Authored by Feng Kan
      Caught during Trinity testing. pte_modify() did not allow the PTE
      type bits to be modified, which caused the test to hang the system.
      A pte could not transition from an inaccessible page (type b00) to a
      valid page (type b11) because the mask excluded the type bits. This
      happens when a large block of mmapped memory is set to PROT_NONE and
      then a small piece is broken off and set to PROT_WRITE | PROT_READ,
      causing a huge-page split.
      Signed-off-by: Feng Kan <fkan@apm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
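The mask problem above is easy to demonstrate with a toy pte_modify. Everything here is a hypothetical encoding for illustration (the constants and the name `pte_modify_sketch` are not the kernel's); the point is that the set of bits taken from the new protection value must include the type bits, or b00 can never become b11.

```c
#include <stdint.h>

typedef uint64_t pte_t;
typedef uint64_t pgprot_t;

/* Hypothetical encoding: the low two bits are the PTE type
 * (b00 = invalid/PROT_NONE, b11 = valid page), as on arm64. */
#define PTE_TYPE_MASK  0x3ULL
#define PTE_TYPE_PAGE  0x3ULL
#define PTE_ADDR_MASK  (~0xFFFULL)   /* page-frame bits, preserved */
#define PTE_PROT_BITS  0xFFCULL     /* permission bits, replaced */

/* pte_modify keeps the page frame and swaps in new protection bits.
 * The fix: the replaced mask must also cover the type bits, otherwise
 * a PROT_NONE entry (type b00) can never become valid (b11). */
pte_t pte_modify_sketch(pte_t pte, pgprot_t newprot)
{
    const uint64_t mask = PTE_PROT_BITS | PTE_TYPE_MASK; /* with the fix */
    return (pte & ~mask) | (newprot & mask);
}
```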
  5. 12 Feb 2015 (1 commit)
  6. 11 Feb 2015 (1 commit)
  7. 28 Jan 2015 (1 commit)
  8. 12 Jan 2015 (1 commit)
  9. 24 Dec 2014 (1 commit)
    • arm64: mm: Add pgd_page to support RCU fast_gup · 5d96e0cb
      Authored by Jungseok Lee
      This patch adds pgd_page definition in order to keep supporting
      HAVE_GENERIC_RCU_GUP configuration. In addition, it changes pud_page
      expression to align with pmd_page for readability.
      
      The introduction of pgd_page resolves the following build breakage
      under the 4KB page + 4-level memory management combination.
      
      mm/gup.c: In function 'gup_huge_pgd':
      mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
        head = pgd_page(orig);
        ^
      mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
        head = pgd_page(orig);
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      [catalin.marinas@arm.com: remove duplicate pmd_page definition]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
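The shape of the missing accessor can be sketched in userspace. This simulation assumes a toy `mem_map` and hypothetical constants; it mirrors the usual pattern (entry value, mask out the physical address, shift to a pfn, index into `mem_map`) rather than reproducing the kernel's exact definition.

```c
#include <stdint.h>

/* Userspace simulation with hypothetical names: a pgd entry stores a
 * physical address, and pgd_page() translates it to its struct page,
 * in the style of a pfn_to_page-based definition. */
#define PAGE_SHIFT 12
#define PHYS_MASK  0x0000FFFFFFFFF000ULL

struct page { int dummy; };
static struct page mem_map[16];           /* toy mem_map */

typedef struct { uint64_t pgd; } pgd_t;

#define pgd_val(p)       ((p).pgd)
#define pfn_to_page(pfn) (&mem_map[(pfn)])
/* The accessor: entry -> physical address -> pfn -> struct page */
#define pgd_page(pgd)    pfn_to_page((pgd_val(pgd) & PHYS_MASK) >> PAGE_SHIFT)
```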
  10. 11 Dec 2014 (1 commit)
  11. 10 Oct 2014 (2 commits)
  12. 01 Oct 2014 (1 commit)
  13. 08 Sep 2014 (1 commit)
  14. 24 Jul 2014 (1 commit)
    • arm64: Fix barriers used for page table modifications · 7f0b1bf0
      Authored by Catalin Marinas
      The architecture specification states that both DSB and ISB are required
      between page table modifications and subsequent memory accesses using the
      corresponding virtual address. When TLB invalidation takes place, the
      tlb_flush_* functions already have the necessary barriers. However, there are
      other functions like create_mapping() for which this is not the case.
      
      The patch adds the DSB+ISB instructions in the set_pte() function for
      valid kernel mappings. The invalid pte case is handled by tlb_flush_*
      and the user mappings in general have a corresponding update_mmu_cache()
      call containing a DSB. Even when update_mmu_cache() isn't called, the
      kernel can still cope with an unlikely spurious page fault by
      re-executing the instruction.
      
      In addition, the set_pmd() and set_pud() functions gain an ISB for
      architecture compliance when block mappings are created.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
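The barrier placement logic can be modelled in host C, with `dsb()`/`isb()` stubbed out as counters since the real instructions are AArch64-only. The bit positions, names, and the exact condition are illustrative stand-ins, not the kernel's definitions; the sketch only shows *which* stores get the synchronisation.

```c
#include <stdint.h>

typedef uint64_t pte_t;
#define PTE_VALID (1ULL << 0)
#define PTE_USER  (1ULL << 6)   /* user (EL0) mapping */

static int dsb_count, isb_count;
static void dsb(void) { dsb_count++; }  /* stand-in for the DSB instruction */
static void isb(void) { isb_count++; }  /* stand-in for the ISB instruction */

void set_pte_sketch(pte_t *ptep, pte_t pte)
{
    *ptep = pte;
    /* Only valid *kernel* mappings need the synchronisation here:
     * invalid ptes are covered by tlb_flush_*, and user mappings get a
     * DSB via update_mmu_cache() (or recover via a spurious fault). */
    if ((pte & PTE_VALID) && !(pte & PTE_USER)) {
        dsb();
        isb();
    }
}
```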
  15. 23 Jul 2014 (5 commits)
  16. 17 Jul 2014 (1 commit)
  17. 04 Jul 2014 (1 commit)
  18. 18 Jun 2014 (1 commit)
  19. 29 May 2014 (1 commit)
    • arm64: mm: fix pmd_write CoW brokenness · ceb21835
      Authored by Will Deacon
      Commit 9c7e535f ("arm64: mm: Route pmd thp functions through pte
      equivalents") changed the pmd manipulator and accessor functions to
      convert the target pmd to a pte, process it with the pte functions, then
      convert it back. Along the way, we gained support for PTE_WRITE, however
      this is completely ignored by set_pmd_at, and so we fail to set the
      PMD_SECT_RDONLY for PMDs, resulting in all sorts of lovely failures (like
      CoW not working).
      
      Partially reverting the offending commit (by making use of
      PMD_SECT_RDONLY explicitly for pmd_{write,wrprotect,mkwrite} functions)
      leads to further issues because pmd_write can then return potentially
      incorrect values for page table entries marked as RDONLY, leading to
      BUG_ON(pmd_write(entry)) tripping under some THP workloads.
      
      This patch fixes the issue by routing set_pmd_at through set_pte_at,
      which correctly takes the PTE_WRITE flag into account. Given that
      THP mappings are always anonymous, the additional cache-flushing code
      in __sync_icache_dcache won't impose any significant overhead as the
      flush will be skipped.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
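The routing described above can be sketched as follows. This is a simplified userspace model, not the arm64 code: bit positions and the `_sketch` helpers are hypothetical, and the pte-side logic is reduced to the one property that matters here (a non-writable entry must come out read-only in hardware terms).

```c
#include <stdint.h>

typedef uint64_t pte_t;
typedef uint64_t pmd_t;
#define PTE_WRITE  (1ULL << 57)
#define PTE_RDONLY (1ULL << 7)

static pte_t pmd_pte(pmd_t pmd) { return pmd; }  /* same layout in this model */

/* The pte path knows that "PTE_WRITE clear" must translate into a
 * hardware read-only entry; the broken set_pmd_at stored the value
 * verbatim and ignored PTE_WRITE entirely. */
static void set_pte_at_sketch(pte_t *ptep, pte_t pte)
{
    if (!(pte & PTE_WRITE))
        pte |= PTE_RDONLY;
    *ptep = pte;
}

/* The fix: route the pmd store through the pte path instead of
 * duplicating (and diverging from) its permission handling. */
void set_pmd_at_sketch(pmd_t *pmdp, pmd_t pmd)
{
    set_pte_at_sketch(pmdp, pmd_pte(pmd));
}
```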
  20. 16 May 2014 (1 commit)
  21. 10 May 2014 (1 commit)
  22. 09 May 2014 (3 commits)
    • arm64: mm: Create gigabyte kernel logical mappings where possible · 206a2a73
      Authored by Steve Capper
      We have the capability to map 1GB level 1 blocks when using a 4K
      granule.
      
      This patch adjusts the create_mapping logic such that, when mapping
      physical memory at boot, we attempt to use a 1GB block if both the VA
      and PA start and end are 1GB aligned. This reduces the levels of
      lookup required to resolve a kernel logical address and reduces TLB
      pressure on cores that support 1GB TLB entries.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Tested-by: Jungseok Lee <jays.lee@samsung.com>
      [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
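The alignment test behind "use a 1GB block if both the VA and PA start and end are 1GB aligned" can be sketched directly. The function name is hypothetical; the 1GB level-1 block size for a 4K granule is real.

```c
#include <stdint.h>
#include <stdbool.h>

#define PUD_SIZE (1ULL << 30)        /* 1GB level-1 block with 4K pages */
#define PUD_MASK (~(PUD_SIZE - 1))

/* A level-1 block mapping is usable only if the virtual address, the
 * physical address, and the remaining size are all 1GB aligned. */
bool can_use_1gb_block(uint64_t va, uint64_t pa, uint64_t size)
{
    return (va & ~PUD_MASK) == 0 &&
           (pa & ~PUD_MASK) == 0 &&
           size >= PUD_SIZE;
}
```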
    • arm64: Clean up the default pgprot setting · a501e324
      Authored by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, it is unlikely to
      see UP ARMv8 hardware and even if we do, the Shareability bit is no
      longer assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
    • arm64: Introduce execute-only page access permissions · bc07c2c6
      Authored by Catalin Marinas
      The ARMv8 architecture allows execute-only user permissions by clearing
      the PTE_UXN and PTE_USER bits. The kernel, however, can still access
      such pages, so execute-only page permissions do not protect against
      read(2)/write(2) etc. accesses. Systems requiring such protection must
      implement/enable features like SECCOMP.
      
      This patch changes the arm64 __P100 and __S100 protection_map[] macros
      to the new __PAGE_EXECONLY attributes. A side effect is that
      pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER
      isn't set. To work around this, the check is done on the PTE_NG bit via
      the pte_valid_ng() macro. VM_READ is also checked now for page faults.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
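The reason the valid-user check has to move off PTE_USER can be shown with two toy predicates. Bit positions and the `_sketch` names are illustrative; the key fact from the commit is that an execute-only entry clears PTE_USER yet remains a user mapping, and user mappings are always non-global (PTE_NG set).

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t pte_t;
#define PTE_VALID (1ULL << 0)
#define PTE_USER  (1ULL << 6)
#define PTE_NG    (1ULL << 11)  /* non-global: set for all user mappings */

/* Old check: an exec-only pte (PTE_USER clear) falls through the crack. */
bool pte_valid_user_sketch(pte_t pte)
{
    return (pte & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER);
}

/* New check on the nG bit: covers exec-only user mappings as well. */
bool pte_valid_ng_sketch(pte_t pte)
{
    return (pte & (PTE_VALID | PTE_NG)) == (PTE_VALID | PTE_NG);
}
```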
  23. 05 May 2014 (1 commit)
  24. 24 Mar 2014 (1 commit)
    • arm64: Remove pgprot_dmacoherent() · 196adf2f
      Authored by Catalin Marinas
      Since this macro is identical to pgprot_writecombine() and is only used
      in a single place, remove it completely to avoid confusion. On ARMv7+
      processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a.
      writecombine) to avoid mismatched hardware attribute aliases (with the
      kernel linear mapping as Normal Cacheable).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  25. 15 Mar 2014 (1 commit)
  26. 13 Mar 2014 (2 commits)
  27. 28 Feb 2014 (1 commit)
    • arm64: mm: Add double logical invert to pte accessors · 84fe6826
      Authored by Steve Capper
      Page table entries on ARM64 are 64 bits, and some pte functions such as
      pte_dirty return a bitwise-and of a flag with the pte value. If the
      flag to be tested resides in the upper 32 bits of the pte, then we run
      into the danger of the result being dropped if downcast.
      
      For example:
      	gather_stats(page, md, pte_dirty(*pte), 1);
      where pte_dirty(*pte) is downcast to an int.
      
      This patch adds a double logical invert to all the pte_ accessors to
      ensure predictable downcasting.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
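The truncation hazard and the `!!` fix are easy to demonstrate. The flag position is illustrative (any bit above bit 31 shows the problem); the `_broken`/`_fixed` names are not the kernel's, and the downcast that callers like gather_stats() perform is modelled by the int return type.

```c
#include <stdint.h>

typedef uint64_t pte_t;
/* A flag in the upper 32 bits, as with arm64's software dirty bit;
 * the exact position is illustrative. */
#define PTE_DIRTY (1ULL << 55)

/* Without !!, the bitwise-and result lives entirely in bit 55 and is
 * silently dropped when the value is downcast to int. */
int pte_dirty_broken(pte_t pte) { return (int)(pte & PTE_DIRTY); }

/* The double logical invert collapses the result to 0 or 1 before any
 * downcast can occur, making the accessor safe to truncate. */
int pte_dirty_fixed(pte_t pte)  { return !!(pte & PTE_DIRTY); }
```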
  28. 31 Jan 2014 (2 commits)
  29. 29 Nov 2013 (1 commit)