1. 18 Nov 2015 (1 commit)
  2. 17 Nov 2015 (1 commit)
    • arm64: mm: use correct mapping granularity under DEBUG_RODATA · 4fee9f36
      Authored by Ard Biesheuvel
      When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA
      and resides at an offset that is not a multiple of 512 MB, the rounding
      that occurs in __map_memblock() and fixup_executable() results in
      incorrect regions being mapped.
      
      The following snippet from /sys/kernel/debug/kernel_page_tables shows
      how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000,
      the first 2 MB of memory (which may be inaccessible from non-secure EL1
      or just reserved by the firmware) is inadvertently mapped into the end of
      the module region.
      
        ---[ Modules start ]---
        0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
        ---[ Modules end ]---
        ---[ Kernel Mapping ]---
        0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
        0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
        0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
        0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
        0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
        0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL
      
      The same issue is likely to occur on 16k pages kernels whose load
      address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to
      SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.
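
      To make the rounding problem concrete, here is a small stand-alone
      arithmetic sketch (values taken from the example above; treating
      SECTION_SIZE as 512 MB and SWAPPER_BLOCK_SIZE as the 64 KB page size
      for a 64k-page kernel is a stated assumption, and round_down simply
      mirrors the kernel macro):

        #include <stdio.h>
        #include <stdint.h>

        #define SZ_64K   (64ULL << 10)   /* assumed SWAPPER_BLOCK_SIZE (64k pages) */
        #define SZ_2M    (2ULL << 20)
        #define SZ_512M  (512ULL << 20)  /* assumed SECTION_SIZE (64k pages) */

        #define round_down(x, a)  ((x) & ~((a) - 1))

        int main(void)
        {
                uint64_t dram_base = 0x40000000ULL;      /* base of DRAM          */
                uint64_t kernel_pa = dram_base + SZ_2M;  /* kernel loaded 2 MB up */

                /* SECTION_SIZE rounding drags in the first 2 MB of DRAM...       */
                printf("round to SECTION_SIZE:       0x%llx\n",
                       (unsigned long long)round_down(kernel_pa, SZ_512M));

                /* ...SWAPPER_BLOCK_SIZE rounding starts at the kernel itself.    */
                printf("round to SWAPPER_BLOCK_SIZE: 0x%llx\n",
                       (unsigned long long)round_down(kernel_pa, SZ_64K));
                return 0;
        }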
      
      Fixes: da141706 ("arm64: add better page protections to arm64")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Cc: <stable@vger.kernel.org> # 4.0+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 12 Nov 2015 (1 commit)
  4. 09 Nov 2015 (2 commits)
  5. 20 Oct 2015 (1 commit)
  6. 09 Oct 2015 (1 commit)
    • arm64: Mark kernel page ranges contiguous · 348a65cd
      Authored by Jeremy Linton
      With 64k pages, the next larger segment size is 512M. The Linux
      kernel also uses different protection flags to cover its code and data.
      Because of this requirement, the vast majority of the kernel code and
      data structures end up being mapped with 64k pages instead of the larger
      pages common with a 4k page kernel.
      
      Recent ARM processors support a contiguous bit in the
      page tables which allows a single TLB entry to cover a range
      larger than a single PTE if that range is mapped into physically
      contiguous RAM.
      
      So, for the kernel it is a good idea to set this flag. Some basic
      micro-benchmarks show it can significantly reduce the number of
      L1 dTLB refills.
      
      Add a boot option to enable/disable CONT marking, and fix a
      bug found by Steve Capper.
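
      As a rough illustration of the contiguous-hint idea (not the patch
      itself; PTE_CONT, CONT_PTES and the pte helpers follow arm64 naming,
      while the function and its alignment check are illustrative):

        /*
         * Mark a naturally aligned run of CONT_PTES entries with PTE_CONT so
         * a single TLB entry can cover the whole run.  Sketch only.
         */
        static void __init mark_cont_run(pte_t *ptep, unsigned long addr,
                                         phys_addr_t phys, pgprot_t prot)
        {
                unsigned long run = CONT_PTES * PAGE_SIZE;
                int i;

                /* Both the VA and the PA must be aligned to the whole run. */
                if ((addr | phys) & (run - 1))
                        return;

                for (i = 0; i < CONT_PTES; i++, ptep++, phys += PAGE_SIZE)
                        set_pte(ptep, pfn_pte(__phys_to_pfn(phys),
                                              __pgprot(pgprot_val(prot) | PTE_CONT)));
        }
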
      Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
      [catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 07 Oct 2015 (1 commit)
  8. 28 Jul 2015 (1 commit)
    • arm64: mm: mark create_mapping as __init · c53e0baa
      Authored by Mark Rutland
      Currently create_mapping is marked with __ref, apparently because it
      refers to early_alloc. However, create_mapping has no logic to prevent
      erroneous use of early_alloc after it has been freed, and is only ever
      called by __init functions anyway. Thus the __ref marker is misleading
      and unnecessary.
      
      Instead, this patch marks create_mapping as __init, resulting in
      warnings if it is used from a non-__init function, and allowing its
      memory to be reclaimed.
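
      A minimal sketch of the distinction being relied on here, using a
      hypothetical helper rather than the real create_mapping signature:

        #include <linux/init.h>

        /* Hypothetical boot-time-only helper, shown only for the annotations. */
        static void __init build_boot_tables(void)
        {
                /* ... populate early page tables via early_alloc() ... */
        }

        /*
         * Because it is __init, the function lives in .init.text: modpost
         * warns if non-init code references it, and free_initmem() reclaims
         * it after boot.  A __ref annotation would only silence the warning
         * without preventing a late caller from reaching freed early_alloc
         * state.
         */
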
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  9. 27 Jul 2015 (1 commit)
  10. 01 Jul 2015 (1 commit)
  11. 02 Jun 2015 (1 commit)
  12. 15 Apr 2015 (1 commit)
  13. 23 Mar 2015 (1 commit)
  14. 19 Mar 2015 (1 commit)
  15. 30 Jan 2015 (1 commit)
  16. 28 Jan 2015 (3 commits)
    • arm64: mm: use *_sect to check for section maps · a1c76574
      Authored by Mark Rutland
      The {pgd,pud,pmd}_bad family of macros have slightly fuzzy
      cross-architecture semantics, and seem to imply a populated entry that
      is not a next-level table, rather than a particular type of entry (e.g.
      a section map).
      
      In arm64 code, for those cases where we care about whether an entry is a
      section mapping, we can instead use the {pud,pmd}_sect macros to
      explicitly check for this case. This helps to document precisely what we
      care about, making the code easier to read, and allows for future
      relaxation of the *_bad macros to check for other "bad" entries.
      
      To that end this patch updates the table dumping and initial table setup
      to check for section mappings with {pud,pmd}_sect, and adds/restores
      BUG_ON(*_bad(*p)) checks after we've handled the *_sect and *_none
      cases so as to catch remaining "bad" cases.
      
      In the fault handling code, show_pte is left with *_bad checks as it
      only cares about whether it can walk the next level table, and this path
      is used for both kernel and userspace fault handling. The former case
      will be followed by a die() where we'll report the address that
      triggered the fault, which can be useful context for debugging.
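
      A sketch of the resulting check pattern (the walker shown is
      illustrative; pmd_none, pmd_sect and pmd_bad are the arm64 helpers
      discussed above):

        static void __init note_pmd(pmd_t *pmdp)
        {
                pmd_t pmd = *pmdp;

                if (pmd_none(pmd))
                        return;         /* nothing mapped at this address   */

                if (pmd_sect(pmd))
                        return;         /* section map: no next-level table */

                /* Anything left must be a valid next-level table entry. */
                BUG_ON(pmd_bad(pmd));

                /* ... walk the pte level ... */
        }
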
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: drop unnecessary cache+tlb maintenance · a3bba370
      Authored by Mark Rutland
      In paging_init, we call flush_cache_all, but this is backed by Set/Way
      operations which may not achieve anything in the presence of cache line
      migration and/or system caches. If the caches are already in an
      inconsistent state at this point, there is nothing we can do (short of
      flushing the entire physical address space by VA) to empty architected
      and system caches. As such, flush_cache_all only serves to mask other
      potential bugs. Hence, this patch removes the boot-time call to
      flush_cache_all.
      
      Immediately after the cache maintenance we flush the TLBs, but this is
      also unnecessary. Before enabling the MMU, the TLBs are invalidated, and
      thus are initially clean. When changing the contents of active tables
      (e.g. in fixup_executable() for DEBUG_RODATA) we perform the required
      TLB maintenance following the update, and therefore no additional
      maintenance is required to ensure the new table entries are in effect.
      Since activating the MMU we will not have modified system register
      fields permitted to be cached in a TLB, and therefore do not need
      maintenance for any cached system register fields. Hence, the TLB flush
      is unnecessary.
      
      Shortly after the unnecessary TLB flush, we update TTBR0 to point to an
      empty zero page rather than the idmap, and flush the TLBs. This
      maintenance is necessary to remove the global idmap entries from the
      TLBs (as they would conflict with userspace mappings), and is retained.
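
      In outline, the end of paging_init() after this change looks roughly
      like the following (a sketch; the real function also builds the
      memory map, fixes up permissions and sets up the zero page first):

        void __init paging_init(void)
        {
                /* ... map_mem(), fixup_executable(), zero page setup ... */

                /*
                 * TTBR0 covered only the idmap so far; park it on the empty
                 * zero page and invalidate the TLBs so the stale global
                 * idmap entries cannot conflict with later userspace
                 * mappings.
                 */
                cpu_set_reserved_ttbr0();
                flush_tlb_all();
        }
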
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64:mm: free the useless initial page table · 523d6e9f
      Authored by zhichang.yuan
      For a 64K page system, after mapping a PMD section, the corresponding
      initial page table is no longer needed and that page can be freed.
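
      A rough sketch of the idea (the helper is illustrative; pmd_table()
      and the two-argument memblock_free(phys, size) form are as they
      existed in kernels of that era, and TLB maintenance for the old entry
      is omitted):

        static void __init make_section(pmd_t *pmdp, phys_addr_t phys,
                                        pmdval_t prot_sect)
        {
                pmd_t old_pmd = *pmdp;

                /* Rewrite the entry as a section (block) mapping. */
                set_pmd(pmdp, __pmd(phys | prot_sect));

                /* The pte table the old entry pointed at is now unused. */
                if (!pmd_none(old_pmd) && pmd_table(old_pmd)) {
                        phys_addr_t table = __pa(pte_offset_kernel(&old_pmd, 0));

                        memblock_free(table, PAGE_SIZE);
                }
        }
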
      Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
      [catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  17. 22 Jan 2015 (2 commits)
  18. 14 Jan 2015 (1 commit)
    • arm64: remove broken cachepolicy code · 26a945ca
      Authored by Mark Rutland
      The cachepolicy kernel parameter was intended to aid in the debugging of
      coherency issues, but it is fundamentally broken for several reasons:
      
       * On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary
         CPUs may therefore differ w.r.t. the attributes they apply to
         MT_NORMAL memory, resulting in a loss of coherency.
      
       * The cache maintenance using flush_dcache_all (based on Set/Way
         operations) is not guaranteed to empty a given CPU's cache hierarchy
         while said CPU has caches enabled; it cannot empty the caches of
         other coherent PEs, nor is it guaranteed to flush data to the PoC
         even when caches are disabled.
      
       * The TLBs are not invalidated around the modification of MAIR_EL1 and
         TCR_EL1, as required by the architecture (as both are permitted to be
         cached in a TLB). This may result in CPUs using attributes other than
         those expected for some memory accesses, resulting in a loss of
         coherency.
      
       * Exclusive accesses are not architecturally guaranteed to function as
         expected on memory marked as Write-Through or Non-Cacheable. Thus
         changing the attributes of MT_NORMAL away from the (architecturally
         safe) defaults may cause uses of these instructions (e.g. atomics) to
         behave erratically.
      
      Given this, the cachepolicy code cannot be used for debugging purposes
      as it alone is likely to cause coherency issues. This patch removes the
      broken cachepolicy code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  19. 13 Jan 2015 (1 commit)
  20. 12 Jan 2015 (2 commits)
  21. 25 Nov 2014 (1 commit)
  22. 13 Nov 2014 (1 commit)
  23. 07 Nov 2014 (1 commit)
  24. 25 Oct 2014 (1 commit)
  25. 16 Sep 2014 (1 commit)
  26. 23 Jul 2014 (1 commit)
    • arm64: mm: Implement 4 levels of translation tables · c79b954b
      Authored by Jungseok Lee
      This patch implements 4 levels of translation tables, since 3 levels
      of page tables with 4KB pages cannot support the 40-bit physical
      address space described in [1], due to the following issue.
      
      The restriction is that the kernel logical memory map with 4KB pages
      and 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the
      RAM region from 544GB to 1024GB described in [1]. Specifically, the
      ARM64 kernel fails to create a mapping for this region in the map_mem
      function, since __phys_to_virt for this region overflows the virtual
      address range.
      
      If an SoC design follows [1], RAM beyond the first 32GB is placed
      starting at 544GB. Even a 64GB system is expected to use the region
      from 544GB to 576GB for just 32GB of RAM. Naturally, this leads to
      enabling 4 levels of page tables, to avoid hacking __virt_to_phys
      and __phys_to_virt.
      
      However, it is recommended that 4 levels of page tables be enabled
      only if the memory map is too sparse or there is about 512GB of RAM.
      
      References
      ----------
      [1]: Principles of ARM Memory Maps, White Paper, Issue C
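
      To make the __phys_to_virt overflow concrete, here is a small
      stand-alone arithmetic sketch; the 0xffffffc000000000 base comes from
      the text above, while treating the 544GB DRAM base from [1] as
      PHYS_OFFSET is an assumption made only for illustration:

        #include <stdio.h>
        #include <stdint.h>

        #define GB           (1024ULL * 1024 * 1024)
        #define PAGE_OFFSET  0xffffffc000000000ULL  /* 4KB + 3 levels: 256GB linear map */
        #define PHYS_OFFSET  (544 * GB)             /* assumed DRAM base per [1] */

        static uint64_t phys_to_virt(uint64_t phys)
        {
                return phys - PHYS_OFFSET + PAGE_OFFSET;
        }

        int main(void)
        {
                /* Last byte the 256GB linear map can reach: 800GB - 1. */
                printf("phys 800GB-1  -> va 0x%016llx\n",
                       (unsigned long long)phys_to_virt(800 * GB - 1));

                /* RAM up at 1024GB wraps past 2^64 into the user range. */
                printf("phys 1024GB-1 -> va 0x%016llx\n",
                       (unsigned long long)phys_to_virt(1024 * GB - 1));
                return 0;
        }
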
      Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
      Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
      Acked-by: Kukjin Kim <kgene.kim@samsung.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Reviewed-by: Steve Capper <steve.capper@linaro.org>
      [catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
      [catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
      [catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
  27. 09 May 2014 (2 commits)
    • arm64: mm: Create gigabyte kernel logical mappings where possible · 206a2a73
      Authored by Steve Capper
      We have the capability to map 1GB level 1 blocks when using a 4K
      granule.
      
      This patch adjusts the create_mapping logic such that, when mapping
      physical memory at boot, we attempt to use a 1GB block if both the VA
      and PA start and end are 1GB aligned. This both reduces the number of
      levels of lookup required to resolve a kernel logical address and
      reduces TLB pressure on cores that support 1GB TLB entries.
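
      A sketch of the alignment test this describes (the helper name is
      illustrative; PUD_MASK is the arm64 constant for 1GB level 1 blocks
      with a 4K granule):

        /* Use a 1GB PUD block only when VA, PA and the span are 1GB aligned. */
        static bool __init can_use_1G_block(unsigned long addr, unsigned long next,
                                            phys_addr_t phys)
        {
                if (PAGE_SHIFT != 12)   /* 1GB level 1 blocks need the 4K granule */
                        return false;

                return ((addr | next | phys) & ~PUD_MASK) == 0;
        }
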
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Tested-by: Jungseok Lee <jays.lee@samsung.com>
      [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Clean up the default pgprot setting · a501e324
      Authored by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, we are unlikely
      to see UP ARMv8 hardware, and even if we do, the Shareability bit is no
      longer assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
  28. 04 May 2014 (1 commit)
  29. 01 May 2014 (1 commit)
  30. 08 Apr 2014 (2 commits)
  31. 05 Feb 2014 (1 commit)
  32. 28 Aug 2013 (1 commit)
  33. 14 Jun 2013 (1 commit)
    • ARM64: mm: Restore memblock limit when map_mem finished. · f6bc87c3
      Authored by Steve Capper
      In paging_init the memblock limit is set to restrict any addresses
      returned by early_alloc to fit within the initial direct kernel
      mapping in swapper_pg_dir. This allows map_mem to allocate puds,
      pmds and ptes from the initial direct kernel mapping.
      
      The limit stays low after paging_init() though, meaning any
      bootmem allocations will be from a restricted subset of memory.
      Gigabyte huge pages, for instance, are normally allocated from
      bootmem as their order (18) is too large for the default buddy
      allocator (MAX_ORDER = 11).
      
      This patch restores the memblock limit when map_mem has finished,
      allowing gigabyte huge pages (and other objects) to be allocated
      from all of bootmem.
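
      In outline, the change has the following shape (a sketch; the limit
      value and its placement follow the description above rather than the
      exact diff):

        static void __init map_mem(void)
        {
                /* early_alloc() must stay inside the initial swapper_pg_dir map. */
                memblock_set_current_limit(PHYS_OFFSET + PGDIR_SIZE);

                /* ... create the linear mapping for every memblock region ... */

                /* Everything is mapped: let later bootmem allocations go anywhere. */
                memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
        }
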
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>