1. 11 Feb, 2015 (1 commit)
  2. 28 Jan, 2015 (1 commit)
  3. 12 Jan, 2015 (1 commit)
  4. 24 Dec, 2014 (1 commit)
    • arm64: mm: Add pgd_page to support RCU fast_gup · 5d96e0cb
      Committed by Jungseok Lee
      This patch adds a pgd_page definition in order to keep supporting the
      HAVE_GENERIC_RCU_GUP configuration. In addition, it changes the pud_page
      expression to align with pmd_page for readability.

      The introduction of pgd_page resolves the following build breakage with
      the 4KB page + 4-level page table combination.
      
      mm/gup.c: In function 'gup_huge_pgd':
      mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
        head = pgd_page(orig);
        ^
      mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
        head = pgd_page(orig);
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      [catalin.marinas@arm.com: remove duplicate pmd_page definition]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5d96e0cb
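      A minimal sketch, not necessarily the exact mainline code, of the kind of
      pgd_page definition described above, assuming the usual arm64 helpers
      pfn_to_page(), __phys_to_pfn() and PHYS_MASK:

          /* illustrative only; the real definition lives in asm/pgtable.h */
          #define pgd_page(pgd)	pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))
          /* pud_page expressed the same way, aligning with pmd_page */
          #define pud_page(pud)	pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK))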
  5. 11 Dec, 2014 (1 commit)
  6. 10 Oct, 2014 (2 commits)
  7. 01 Oct, 2014 (1 commit)
  8. 08 Sep, 2014 (1 commit)
  9. 24 Jul, 2014 (1 commit)
    • arm64: Fix barriers used for page table modifications · 7f0b1bf0
      Committed by Catalin Marinas
      The architecture specification states that both DSB and ISB are required
      between page table modifications and subsequent memory accesses using the
      corresponding virtual address. When TLB invalidation takes place, the
      tlb_flush_* functions already have the necessary barriers. However, there are
      other functions like create_mapping() for which this is not the case.
      
      The patch adds the DSB+ISB instructions in the set_pte() function for
      valid kernel mappings. The invalid pte case is handled by tlb_flush_*
      and the user mappings in general have a corresponding update_mmu_cache()
      call containing a DSB. Even when update_mmu_cache() isn't called, the
      kernel can still cope with an unlikely spurious page fault by
      re-executing the instruction.
      
      In addition, the set_pmd() and set_pud() functions gain an ISB for
      architecture compliance when block mappings are created.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
      7f0b1bf0
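      A rough sketch of the approach described in this commit, assuming the
      arm64 dsb()/isb() barrier macros and a pte_valid_not_user() helper; the
      names and exact placement are illustrative:

          static inline void set_pte(pte_t *ptep, pte_t pte)
          {
                  *ptep = pte;
                  /*
                   * Only valid kernel mappings need the barriers here; invalid
                   * ptes are covered by TLB maintenance and user mappings by
                   * the DSB in update_mmu_cache().
                   */
                  if (pte_valid_not_user(pte)) {
                          dsb(ishst);
                          isb();
                  }
          }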
  10. 23 Jul, 2014 (5 commits)
  11. 17 Jul, 2014 (1 commit)
  12. 04 Jul, 2014 (1 commit)
  13. 18 Jun, 2014 (1 commit)
  14. 29 May, 2014 (1 commit)
    • arm64: mm: fix pmd_write CoW brokenness · ceb21835
      Committed by Will Deacon
      Commit 9c7e535f ("arm64: mm: Route pmd thp functions through pte
      equivalents") changed the pmd manipulator and accessor functions to
      convert the target pmd to a pte, process it with the pte functions, then
      convert it back. Along the way, we gained support for PTE_WRITE; however,
      this is completely ignored by set_pmd_at, so we fail to set
      PMD_SECT_RDONLY for read-only PMDs, resulting in all sorts of lovely
      failures (like CoW not working).
      
      Partially reverting the offending commit (by making use of
      PMD_SECT_RDONLY explicitly for pmd_{write,wrprotect,mkwrite} functions)
      leads to further issues because pmd_write can then return potentially
      incorrect values for page table entries marked as RDONLY, leading to
      BUG_ON(pmd_write(entry)) tripping under some THP workloads.
      
      This patch fixes the issue by routing set_pmd_at through set_pte_at,
      which correctly takes the PTE_WRITE flag into account. Given that
      THP mappings are always anonymous, the additional cache-flushing code
      in __sync_icache_dcache won't impose any significant overhead as the
      flush will be skipped.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ceb21835
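      A sketch of the fix described above: route the pmd write through the pte
      path so PTE_WRITE is honoured. pmd_pte() is assumed to be the existing
      pmd-to-pte conversion helper introduced by commit 9c7e535f:

          static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
                                        pmd_t *pmdp, pmd_t pmd)
          {
                  /* set_pte_at() applies the PTE_WRITE/read-only logic */
                  set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
          }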
  15. 16 May, 2014 (1 commit)
  16. 10 May, 2014 (1 commit)
  17. 09 May, 2014 (3 commits)
    • arm64: mm: Create gigabyte kernel logical mappings where possible · 206a2a73
      Committed by Steve Capper
      We have the capability to map 1GB level 1 blocks when using a 4K
      granule.
      
      This patch adjusts the create_mapping logic such that, when mapping
      physical memory on boot, we attempt to use a 1GB block if both the VA and
      PA start and end are 1GB aligned. This reduces both the number of lookup
      levels required to resolve a kernel logical address and the TLB pressure
      on cores that support 1GB TLB entries.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Tested-by: Jungseok Lee <jays.lee@samsung.com>
      [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      206a2a73
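      A simplified sketch of the alignment test described above, as it might
      appear at the pud level of create_mapping(); the surrounding loop and
      helper names are assumed, only PROT_SECT_NORMAL_EXEC comes from the
      commit text:

          /* use a 1GB block only if VA, PA and the remaining size all line up */
          if (((addr | next | phys) & ~PUD_MASK) == 0)
                  set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
          else
                  alloc_init_pmd(pud, addr, next, phys);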
    • arm64: Clean up the default pgprot setting · a501e324
      Committed by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, UP ARMv8 hardware
      is unlikely to appear, and even if it does, the Shareability bit is no
      longer assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      a501e324
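      As an illustration of "predefined values", the protections can be built
      from fixed PROT_* constants instead of the runtime pgprot_default; a
      sketch with assumed bit names, not the verbatim result of this patchset:

          #define PROT_DEFAULT	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
          #define PROT_NORMAL	(PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL))
          #define PAGE_KERNEL	__pgprot(PROT_NORMAL | PTE_DIRTY | PTE_WRITE)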
    • arm64: Introduce execute-only page access permissions · bc07c2c6
      Committed by Catalin Marinas
      The ARMv8 architecture allows execute-only user permissions by clearing
      the PTE_UXN and PTE_USER bits. The kernel, however, can still access
      such a page, so execute-only page permission does not protect against
      read(2)/write(2) and similar accesses. Systems requiring such protection must
      implement/enable features like SECCOMP.
      
      This patch changes the arm64 __P100 and __S100 protection_map[] macros
      to the new __PAGE_EXECONLY attributes. A side effect is that
      pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER
      isn't set. To work around this, the check is done on the PTE_NG bit via
      the pte_valid_ng() macro. VM_READ is also checked now for page faults.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bc07c2c6
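      A sketch of what an execute-only protection value looks like under this
      scheme: valid and not-global, kernel-execute-never, but with neither
      PTE_USER nor PTE_UXN set; macro spelling assumed:

          #define __PAGE_EXECONLY	__pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN)

          /* PTE_USER is clear, so "valid user pte" is detected via the nG bit */
          #define pte_valid_ng(pte) \
                  ((pte_val(pte) & (PTE_VALID | PTE_NG)) == (PTE_VALID | PTE_NG))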
  18. 05 May, 2014 (1 commit)
  19. 24 Mar, 2014 (1 commit)
    • arm64: Remove pgprot_dmacoherent() · 196adf2f
      Committed by Catalin Marinas
      Since this macro is identical to pgprot_writecombine() and is only used
      in a single place, remove it completely to avoid confusion. On ARMv7+
      processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a.
      writecombine) to avoid mismatched hardware attribute aliases (with the
      kernel linear mapping as Normal Cacheable).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      196adf2f
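      For reference, a sketch of the Normal NonCacheable attribute selected by
      pgprot_writecombine(), which coherent DMA mappings now use directly;
      macro details assumed:

          #define pgprot_writecombine(prot) \
                  __pgprot_modify(prot, PTE_ATTRINDX_MASK, \
                                  PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)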
  20. 15 Mar, 2014 (1 commit)
  21. 13 Mar, 2014 (2 commits)
  22. 28 Feb, 2014 (1 commit)
    • arm64: mm: Add double logical invert to pte accessors · 84fe6826
      Committed by Steve Capper
      Page table entries on ARM64 are 64 bits, and some pte functions such as
      pte_dirty return a bitwise-and of a flag with the pte value. If the
      flag to be tested resides in the upper 32 bits of the pte, then we run
      into the danger of the result being dropped if downcast.
      
      For example:
      	gather_stats(page, md, pte_dirty(*pte), 1);
      where pte_dirty(*pte) is downcast to an int.
      
      This patch adds a double logical invert to all the pte_ accessors to
      ensure predictable downcasting.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      84fe6826
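      A small self-contained illustration of the truncation problem and the !!
      fix; the flag name and bit position are made up for the example rather
      than taken from the kernel headers:

          #include <stdint.h>
          #include <stdio.h>

          #define PTE_DIRTY (UINT64_C(1) << 55)	/* flag in the upper 32 bits */

          /* returning int drops the upper bits: always 0 for this flag */
          static int pte_dirty_broken(uint64_t pte) { return pte & PTE_DIRTY; }
          /* double logical invert collapses the result to 0 or 1 first */
          static int pte_dirty_fixed(uint64_t pte) { return !!(pte & PTE_DIRTY); }

          int main(void)
          {
                  uint64_t pte = PTE_DIRTY;
                  printf("broken: %d, fixed: %d\n",
                         pte_dirty_broken(pte), pte_dirty_fixed(pte));
                  return 0;
          }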
  23. 31 Jan, 2014 (2 commits)
  24. 29 Nov, 2013 (2 commits)
  25. 06 Nov, 2013 (1 commit)
  26. 29 Jun, 2013 (1 commit)
  27. 14 Jun, 2013 (4 commits)
    • ARM64: mm: THP support. · af074848
      Committed by Steve Capper
      Bring Transparent HugePage support to arm64. The size of a
      transparent huge page depends on the normal page size. A
      transparent huge page is always represented as a pmd.
      
      If PAGE_SIZE is 4KB, THPs are 2MB.
      If PAGE_SIZE is 64KB, THPs are 512MB.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      af074848
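      The size relationship follows from a THP being a pmd-level block mapping;
      schematically, with assumed macro names:

          #define HPAGE_SHIFT	PMD_SHIFT		/* 21 with 4KB pages, 29 with 64KB pages */
          #define HPAGE_SIZE	(1UL << HPAGE_SHIFT)	/* 2MB or 512MB respectively */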
    • ARM64: mm: HugeTLB support. · 084bd298
      Committed by Steve Capper
      Add huge page support to ARM64; different huge page sizes are
      supported depending on the size of normal pages:
      
      PAGE_SIZE is 4KB:
         2MB - (pmds) these can be allocated at any time.
      1024MB - (puds) usually allocated on bootup with the command line
               with something like: hugepagesz=1G hugepages=6
      
      PAGE_SIZE is 64KB:
       512MB - (pmds) usually allocated on bootup via command line.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      084bd298
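      A minimal user-space sketch of mapping one such huge page once the pool
      has been reserved; the 2MB size assumes a 4KB base page kernel:

          #define _GNU_SOURCE
          #include <stdio.h>
          #include <sys/mman.h>

          int main(void)
          {
                  /* request a single 2MB huge page from the reserved pool */
                  void *p = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
                  if (p == MAP_FAILED)
                          perror("mmap");
                  return 0;
          }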
    • ARM64: mm: Move PTE_PROT_NONE bit. · 59911ca4
      Committed by Steve Capper
      Under ARM64, PTEs can be broadly categorised as follows:
         - Present and valid: Bit #0 is set. The PTE is valid and memory
           access to the region may fault.
      
         - Present and invalid: Bit #0 is clear and bit #1 is set.
           Represents present memory with PROT_NONE protection. The PTE
           is an invalid entry, and the user fault handler will raise a
           SIGSEGV.
      
         - Not present (file or swap): Bits #0 and #1 are clear.
           Memory represented has been paged out. The PTE is an invalid
           entry, and the fault handler will try and re-populate the
           memory where necessary.
      
      Huge PTEs are block descriptors that have bit #1 clear. If we wish
      to represent PROT_NONE huge PTEs we then run into a problem as
      there is no way to distinguish between regular and huge PTEs if we
      set bit #1.
      
       To resolve this ambiguity, this patch moves PTE_PROT_NONE from
      bit #1 to bit #2 and moves PTE_FILE from bit #2 to bit #3. The
      number of swap/file bits is reduced by 1 as a consequence, leaving
      60 bits for file and swap entries.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      59911ca4
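      Schematically, the resulting software bit assignments in the low bits of
      the descriptor, per the description above (macro spelling assumed):

          #define PTE_VALID	(_AT(pteval_t, 1) << 0)
          #define PTE_PROT_NONE	(_AT(pteval_t, 1) << 2)	/* was bit 1, which huge PTEs need clear */
          #define PTE_FILE	(_AT(pteval_t, 1) << 3)	/* was bit 2 */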
    • ARM64: mm: Make PAGE_NONE pages read only and no-execute. · 072b1b62
      Committed by Steve Capper
      Consider the following code sequence:
      
      	my_pte = pte_modify(entry, myprot);
      	x = pte_write(my_pte);
      	y = pte_exec(my_pte);
      
      If myprot comes from a PROT_NONE page, then x and y will both be
      true, which is undesirable behaviour.
      
      This patch sets the no-execute and read-only bits for PAGE_NONE
      such that the code above will return false for both x and y.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      072b1b62
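      A sketch of a PAGE_NONE definition with the read-only and execute-never
      bits set, so that pte_write() and pte_exec() on a pte_modify() result
      both come back false; the exact composition is assumed:

          #define PAGE_NONE	__pgprot(_PAGE_DEFAULT | PTE_PROT_NONE | PTE_RDONLY | PTE_PXN | PTE_UXN)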