1. 13 Oct 2015, 2 commits
    • arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro · 03875ad5
      Committed by yalin wang
      This patch adds kc_offset_to_vaddr() and kc_vaddr_to_offset(). The default
      versions do not work on arm64 because some kernel addresses, such as module
      and vmemmap addresses, lie below PAGE_OFFSET (see the sketch below).
      Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      03875ad5
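      A minimal sketch of the idea, assuming the macros live in
      arch/arm64/include/asm/pgtable.h and that VA_START marks the bottom of the
      kernel VA range (the exact constant is an assumption): /proc/kcore offsets
      are taken relative to the start of the kernel address space rather than
      PAGE_OFFSET, so module and vmemmap addresses still yield valid offsets.

        /* arm64 overrides for the generic kcore helpers (sketch only); the
         * generic versions assume PAGE_OFFSET is the lowest kernel address,
         * which does not hold on arm64. */
        #define kc_vaddr_to_offset(v)   ((v) & ~VA_START)
        #define kc_offset_to_vaddr(o)   ((o) | VA_START)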
    • arm64: add KASAN support · 39d114dd
      Committed by Andrey Ryabinin
      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
      
      One eighth of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the shadow
      region were taken from the vmalloc area.
      
      At the early boot stage, the whole shadow region is populated with just one
      physical page (kasan_zero_page). Later, this page is reused as a read-only
      zero shadow for memory that KASan does not currently track (vmalloc).
      After the physical memory has been mapped, pages for the shadow memory are
      allocated and mapped.
      
      Functions like memset/memmove/memcpy perform a lot of memory accesses, so
      it is important to catch a bad pointer passed to one of them. The
      compiler's instrumentation cannot do this because these functions are
      written in assembly, so KASan replaces them with manually instrumented
      variants. The original functions are declared as weak symbols so that the
      strong definitions in mm/kasan/kasan.c can replace them; they also have
      '__'-prefixed aliases, so the non-instrumented variants can still be called
      when needed. Some files are built without KASan instrumentation
      (e.g. mm/slub.c); for those, the original mem* functions are replaced
      (via #define) with the prefixed variants to disable memory access checks
      (see the sketch below).
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39d114dd
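      A minimal, self-contained sketch of the weak-symbol/alias scheme described
      above (the real arm64 routines are written in assembly; everything here is
      illustrative rather than the kernel source):

        #include <stddef.h>

        /* Non-instrumented implementation, always reachable as __memcpy(). */
        void *__memcpy(void *dst, const void *src, size_t n)
        {
                char *d = dst;
                const char *s = src;

                while (n--)
                        *d++ = *s++;
                return dst;
        }

        /* Weak default: a strong, instrumented definition (mm/kasan/kasan.c in
         * the kernel) overrides this symbol when KASan is enabled. */
        void *memcpy(void *dst, const void *src, size_t n)
                __attribute__((weak, alias("__memcpy")));

        /* Files built without instrumentation redirect calls back to the
         * non-instrumented variant, e.g.:
         *   #define memcpy(dst, src, len) __memcpy(dst, src, len)
         */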
  2. 09 Oct 2015, 2 commits
  3. 07 Oct 2015, 2 commits
  4. 02 Oct 2015, 1 commit
  5. 14 Sep 2015, 3 commits
  6. 28 Jul 2015, 1 commit
  7. 27 Jul 2015, 3 commits
    • arm64: force CONFIG_SMP=y and remove redundant #ifdefs · 4b3dc967
      Committed by Will Deacon
      Nobody seems to be producing !SMP systems anymore, so this is just
      becoming a source of kernel bugs, particularly if people want to use
      coherent DMA with non-shared pages.
      
      This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
      code in the process.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4b3dc967
    • arm64: Add support for hardware updates of the access and dirty pte bits · 2f4b829c
      Committed by Catalin Marinas
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit
      cleared in the page table, instead of raising an access flag fault the
      CPU sets the actual page table entry bit. To ensure that kernel
      modifications to the page tables do not inadvertently revert a change
      introduced by hardware updates, the exclusive monitor (ldxr/stxr) is
      adopted in the pte accessors.
      
      When TCR_EL1.HD is enabled, a write access to a memory location with the
      DBM (Dirty Bit Management) bit set in the corresponding pte automatically
      clears the read-only bit (AP[2]). This DBM bit maps onto the Linux
      PTE_WRITE bit, and to check whether a writable (DBM set) page is dirty,
      the kernel tests the PTE_RDONLY bit. In order to allow read-only and dirty
      pages, the kernel needs to preserve the software dirty bit. The hardware
      dirty status is transferred to the software dirty bit in
      ptep_set_wrprotect() (using a load/store-exclusive loop) and in
      pte_modify() (see the sketch below).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2f4b829c
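      A conceptual sketch of the write-protect path described above. The kernel
      uses an ldxr/stxr inline-asm loop rather than the compiler builtins shown
      here, and the bit positions below are illustrative assumptions; only the
      logic follows the commit text: fold the hardware dirty state (PTE_RDONLY
      cleared on a DBM page) into the software dirty bit, then write-protect,
      without losing a concurrent hardware update.

        typedef unsigned long pteval_t;           /* stand-in type */
        #define PTE_RDONLY  (1UL << 7)            /* AP[2] */
        #define PTE_DIRTY   (1UL << 55)           /* software dirty (assumed) */
        #define PTE_WRITE   (1UL << 51)           /* DBM (assumed) */

        static inline void sketch_ptep_set_wrprotect(pteval_t *ptep)
        {
                pteval_t old, new;

                do {
                        old = __atomic_load_n(ptep, __ATOMIC_RELAXED);
                        new = old;
                        if (!(old & PTE_RDONLY))  /* hardware cleared AP[2]: dirty */
                                new |= PTE_DIRTY; /* keep it as software dirty */
                        new |= PTE_RDONLY;        /* write-protect the page */
                        new &= ~PTE_WRITE;        /* clear DBM so hw stops updating */
                } while (!__atomic_compare_exchange_n(ptep, &old, new, 0,
                                                      __ATOMIC_RELAXED,
                                                      __ATOMIC_RELAXED));
        }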
    • arm64: move update_mmu_cache() into asm/pgtable.h · cba3574f
      Committed by Will Deacon
      Mark Brown reported an allnoconfig build failure in -next:
      
        Today's linux-next fails to build an arm64 allnoconfig due to "mm:
        make GUP handle pfn mapping unless FOLL_GET is requested" which
        causes:
      
        >       arm64-allnoconfig
        > ../mm/gup.c:51:4: error: implicit declaration of function
          'update_mmu_cache' [-Werror=implicit-function-declaration]
      
      Fix the error by moving the function to asm/pgtable.h, as is the case
      for most other architectures.
      Reported-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      cba3574f
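      A sketch of the function after the move, assuming it still only needs to
      publish the page table update (the exact body at the time is recalled from
      memory and should be treated as an assumption):

        /* In arch/arm64/include/asm/pgtable.h rather than asm/tlbflush.h, so
         * generic code such as mm/gup.c always sees a declaration. */
        static inline void update_mmu_cache(struct vm_area_struct *vma,
                                            unsigned long addr, pte_t *ptep)
        {
                /*
                 * set_pte() has no barrier for user mappings, so make sure the
                 * page table write is visible before returning to userspace.
                 */
                dsb(ishst);
        }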
  8. 15 Apr 2015, 1 commit
  9. 27 Feb 2015, 1 commit
    • arm64: enable PTE type bit in the mask for pte_modify · 6910fa16
      Committed by Feng Kan
      Caught during Trinity testing. pte_modify() did not allow the PTE type bit
      to be modified, which caused the test to hang the system. The PTE could not
      transition from an inaccessible page (type b00) to a valid page (type b11)
      because the mask did not allow it. This happens when a large block of
      mmapped memory is set to PROT_NONE and then a small piece is broken off
      and set to PROT_WRITE | PROT_READ, causing a huge page split (see the
      sketch below).
      Signed-off-by: Feng Kan <fkan@apm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6910fa16
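      A sketch of the resulting pte_modify(), with PTE_VALID (the type bit) added
      to the mask so a PROT_NONE entry can become valid again; the exact bit list
      is reconstructed from the commit text and should be treated as an
      assumption.

        static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
        {
                /* Bits pte_modify() may change; PTE_VALID is the new addition. */
                const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY |
                                      PTE_PROT_NONE | PTE_VALID | PTE_WRITE;

                pte_val(pte) = (pte_val(pte) & ~mask) |
                               (pgprot_val(newprot) & mask);
                return pte;
        }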
  10. 12 Feb 2015, 1 commit
  11. 11 Feb 2015, 1 commit
  12. 28 Jan 2015, 1 commit
  13. 12 Jan 2015, 1 commit
  14. 24 Dec 2014, 1 commit
    • arm64: mm: Add pgd_page to support RCU fast_gup · 5d96e0cb
      Committed by Jungseok Lee
      This patch adds a pgd_page definition in order to keep supporting the
      HAVE_GENERIC_RCU_GUP configuration. In addition, it rewrites the pud_page
      expression to align with pmd_page for readability (see the sketch below).
      
      The introduction of pgd_page resolves the following build breakage with
      the 4KB page + 4-level page table combination.
      
      mm/gup.c: In function 'gup_huge_pgd':
      mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
        head = pgd_page(orig);
        ^
      mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
        head = pgd_page(orig);
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      [catalin.marinas@arm.com: remove duplicate pmd_page definition]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5d96e0cb
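      A sketch of the added definition, assuming the usual arm64 helpers
      (pfn_to_page, __phys_to_pfn, PHYS_MASK); pud_page is shown in the same
      style for comparison with pmd_page:

        /* Added so gup_huge_pgd() in the generic RCU fast_gup code compiles. */
        #define pgd_page(pgd)  pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))

        /* pud_page rewritten in the same form for readability. */
        #define pud_page(pud)  pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK))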
  15. 11 Dec 2014, 1 commit
  16. 10 Oct 2014, 2 commits
  17. 01 Oct 2014, 1 commit
  18. 08 Sep 2014, 1 commit
  19. 24 Jul 2014, 1 commit
    • arm64: Fix barriers used for page table modifications · 7f0b1bf0
      Committed by Catalin Marinas
      The architecture specification states that both DSB and ISB are required
      between page table modifications and subsequent memory accesses using the
      corresponding virtual address. When TLB invalidation takes place, the
      tlb_flush_* functions already have the necessary barriers. However, there are
      other functions like create_mapping() for which this is not the case.
      
      The patch adds the DSB+ISB instructions in the set_pte() function for
      valid kernel mappings. The invalid pte case is handled by tlb_flush_*
      and the user mappings in general have a corresponding update_mmu_cache()
      call containing a DSB. Even when update_mmu_cache() isn't called, the
      kernel can still cope with an unlikely spurious page fault by
      re-executing the instruction.
      
      In addition, the set_pmd() and set_pud() functions gain an ISB for
      architecture compliance when block mappings are created (see the sketch
      below).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
      7f0b1bf0
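      A sketch of the set_pte() change described above, assuming a
      pte_valid_not_user()-style predicate (the helper name is an assumption):
      the barriers are only issued for valid kernel mappings, since invalid ptes
      are covered by tlb_flush_* and user mappings by update_mmu_cache().

        static inline void set_pte(pte_t *ptep, pte_t pte)
        {
                *ptep = pte;

                /* DSB makes the write visible to the table walker; ISB ensures
                 * subsequent instructions see the new mapping. */
                if (pte_valid_not_user(pte)) {
                        dsb(ishst);
                        isb();
                }
        }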
  20. 23 Jul 2014, 5 commits
  21. 17 Jul 2014, 1 commit
  22. 04 Jul 2014, 1 commit
  23. 18 Jun 2014, 1 commit
  24. 29 May 2014, 1 commit
    • arm64: mm: fix pmd_write CoW brokenness · ceb21835
      Committed by Will Deacon
      Commit 9c7e535f ("arm64: mm: Route pmd thp functions through pte
      equivalents") changed the pmd manipulator and accessor functions to
      convert the target pmd to a pte, process it with the pte functions, then
      convert it back. Along the way, we gained support for PTE_WRITE, however
      this is completely ignored by set_pmd_at, and so we fail to set the
      PMD_SECT_RDONLY for PMDs, resulting in all sorts of lovely failures (like
      CoW not working).
      
      Partially reverting the offending commit (by making use of
      PMD_SECT_RDONLY explicitly for pmd_{write,wrprotect,mkwrite} functions)
      leads to further issues because pmd_write can then return potentially
      incorrect values for page table entries marked as RDONLY, leading to
      BUG_ON(pmd_write(entry)) tripping under some THP workloads.
      
      This patch fixes the issue by routing set_pmd_at through set_pte_at,
      which correctly takes the PTE_WRITE flag into account. Given that
      THP mappings are always anonymous, the additional cache-flushing code
      in __sync_icache_dcache won't impose any significant overhead as the
      flush will be skipped.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ceb21835
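      A sketch of the fix, assuming the existing pmd_pte() conversion helper:
      set_pmd_at() is routed through set_pte_at() so that the PTE_WRITE bit is
      taken into account when the hardware read-only bit is derived.

        static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
                                      pmd_t *pmdp, pmd_t pmd)
        {
                /* Reuse the pte path; it honours PTE_WRITE/PTE_RDONLY. */
                set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
        }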
  25. 16 May 2014, 1 commit
  26. 10 May 2014, 1 commit
  27. 09 May 2014, 2 commits
    • arm64: mm: Create gigabyte kernel logical mappings where possible · 206a2a73
      Committed by Steve Capper
      We have the capability to map 1GB level 1 blocks when using a 4K
      granule.
      
      This patch adjusts the create_mapping logic so that, when mapping physical
      memory at boot, we attempt to use a 1GB block if the VA and PA start and
      end are all 1GB aligned (see the sketch below). This reduces the number of
      lookup levels required to resolve a kernel logical address and reduces TLB
      pressure on cores that support 1GB TLB entries.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Tested-by: Jungseok Lee <jays.lee@samsung.com>
      [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      206a2a73
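      A simplified sketch of the alignment check described above, condensed from
      the create_mapping()/alloc_init_pud() path (names and details are
      assumptions): a 1GB (PUD-level) block is used only with a 4K granule and
      only when the virtual range and the physical address are all 1GB aligned.

        static void sketch_alloc_init_pud(pud_t *pud, unsigned long addr,
                                          unsigned long next, phys_addr_t phys)
        {
                /* 4K granule only, and addr/next/phys must all be 1GB aligned. */
                if (PAGE_SHIFT == 12 && ((addr | next | phys) & ~PUD_MASK) == 0)
                        set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
                else
                        alloc_init_pmd(pud, addr, next, phys);
        }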
    • arm64: Clean up the default pgprot setting · a501e324
      Committed by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, we are unlikely to
      see UP ARMv8 hardware and, even if we do, the Shareability bit is no longer
      assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      a501e324