1. 28 Jun, 2016 1 commit
  2. 16 Apr, 2016 1 commit
  3. 15 Apr, 2016 2 commits
  4. 25 Mar, 2016 1 commit
    • arm64: consistently use p?d_set_huge · c661cb1c
      Mark Rutland authored
      Commit 324420bf ("arm64: add support for ioremap() block
      mappings") added new p?d_set_huge functions which do the hard work to
      generate and set a correct block entry.
      
      These differ from open-coded huge page creation in the early page table
      code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
      retained by mk_sect_prot() for any valid prot), but are otherwise
      identical (and cannot fail on arm64).
      
      For simplicity and consistency, make use of these in the initial page
      table creation code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c661cb1c
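      The following is a hedged sketch of the early PMD-level mapping loop once it
      calls pmd_set_huge() directly, as the commit above describes. The helper names
      (alloc_init_pte, the pgtable_alloc callback) and exact signatures are
      approximations of the arm64 mm code of that era, not the verbatim patch.

        static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
                                   phys_addr_t phys, pgprot_t prot,
                                   phys_addr_t (*pgtable_alloc)(void))
        {
                pmd_t *pmd = pmd_offset(pud, addr);
                unsigned long next;

                do {
                        next = pmd_addr_end(addr, end);
                        /*
                         * Use a section (block) entry when the whole PMD range is
                         * covered and VA/PA are both section aligned; pmd_set_huge()
                         * sets the P?D_TYPE_SECT bits for us.
                         */
                        if (((addr | next | phys) & ~SECTION_MASK) == 0)
                                pmd_set_huge(pmd, phys, prot);
                        else
                                alloc_init_pte(pmd, addr, next, phys, prot, pgtable_alloc);
                        phys += next - addr;
                } while (pmd++, addr = next, addr != end);
        }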
  5. 26 Feb, 2016 1 commit
  6. 24 Feb, 2016 1 commit
    • arm64: add support for kernel ASLR · f80fb3a3
      Ard Biesheuvel authored
      This adds support for KASLR, based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f80fb3a3
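      As a rough illustration of the non-FULL module placement described above, the
      standalone fragment below picks a page-aligned base inside
      [_etext - 128 MB, _stext + 128 MB). The 21-bit scaling of the seed and the
      symbol values are assumptions for demonstration, not the exact upstream arithmetic.

        #include <stdint.h>
        #include <stdio.h>

        #define SZ_128M    (128ULL << 20)
        #define PAGE_MASK  (~0xfffULL)              /* assuming 4 KB pages */

        /* Choose a module base so the whole kernel text stays within the
         * +/-128 MB branch range of the 128 MB module region. */
        static uint64_t pick_module_base(uint64_t stext, uint64_t etext, uint64_t seed)
        {
                uint64_t range = SZ_128M - (etext - stext); /* usable slack */
                uint64_t base  = etext - SZ_128M;           /* lowest candidate */

                /* consume 21 bits of the seed to index into the slack */
                base += (range * (seed & ((1ULL << 21) - 1))) >> 21;
                return base & PAGE_MASK;                    /* page align */
        }

        int main(void)
        {
                uint64_t stext = 0xffff000008080000ULL;     /* made-up addresses */
                uint64_t etext = stext + (12ULL << 20);

                printf("module base: 0x%llx\n",
                       (unsigned long long)pick_module_base(stext, etext, 0x123456789ULL));
                return 0;
        }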
  7. 19 Feb, 2016 4 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Ard Biesheuvel authored
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory being retained than was passed
      as a mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a7f8de16
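      A minimal sketch of the mem= handling described above, assuming the usual
      memblock API; the real patch may sequence things slightly differently:

        /* fragment of arm64_memblock_init(), illustrative only */
        if (memory_limit != (phys_addr_t)ULLONG_MAX) {
                /* generic clipping removes memory from the top down ... */
                memblock_enforce_memory_limit(memory_limit);
                /*
                 * ... so add the running kernel image back in case it was
                 * loaded high up and got clipped.  This can retain more
                 * memory than the mem= value asked for.
                 */
                memblock_add(__pa(_text), (u64)(_end - _text));
        }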
    • arm64: move kernel image to base of vmalloc area · f9040773
      Ard Biesheuvel authored
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f9040773
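      One way the read-only, non-executable linear alias of the kernel text could be
      expressed when the linear map is built; create_mapping() and the PAGE_KERNEL_RO
      prot here are assumptions based on the description above, not necessarily the
      patch as merged:

        static void __init map_kernel_alias(phys_addr_t kernel_start, phys_addr_t kernel_end)
        {
                /*
                 * The image is executed via its mapping at the base of the
                 * vmalloc area; its linear alias only needs to be readable
                 * (e.g. for hibernation), never written or executed.
                 */
                create_mapping(kernel_start, __phys_to_virt(kernel_start),
                               kernel_end - kernel_start, PAGE_KERNEL_RO);
        }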
    • arm64: decouple early fixmap init from linear mapping · 157962f5
      Ard Biesheuvel authored
      Since the early fixmap page tables are populated using pages that are
      part of the static footprint of the kernel, they are covered by the
      initial kernel mapping, and we can refer to them without using __va/__pa
      translations, which are tied to the linear mapping.
      
      Since the fixmap page tables are disjoint from the kernel mapping up
      to the top level pgd entry, we can refer to bm_pte[] directly, and there
      is no need to walk the page tables and perform __pa()/__va() translations
      at each step.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      157962f5
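      A sketch of early_fixmap_init() after this change: the statically allocated
      bm_* tables are referenced directly, with no __pa()/__va() round-trips.
      fixmap_pud()/fixmap_pmd() stand for small local helpers that return the
      relevant entry in the static tables; treat the exact names as approximate.

        static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
        static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
        static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;

        void __init early_fixmap_init(void)
        {
                unsigned long addr = FIXADDR_START;
                pgd_t *pgd = pgd_offset_k(addr);

                /*
                 * The bm_* tables are part of the kernel image, so they are
                 * covered by the initial kernel mapping and can be written
                 * directly: no page table walk through the linear map needed.
                 */
                pgd_populate(&init_mm, pgd, bm_pud);
                pud_populate(&init_mm, fixmap_pud(addr), bm_pmd);
                pmd_populate_kernel(&init_mm, fixmap_pmd(addr), bm_pte);
        }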
    • arm64: add support for ioremap() block mappings · 324420bf
      Ard Biesheuvel authored
      This wires up the existing generic huge-vmap feature, which allows
      ioremap() to use PMD or PUD sized block mappings. It also adds support
      to the unmap path for dealing with block mappings, which will allow us
      to unmap the __init region using unmap_kernel_range() in a subsequent
      patch.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      324420bf
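      The p?d_set_huge() helpers introduced here (and reused later by commit
      c661cb1c) are roughly of this shape; consider it a sketch of the idea rather
      than the exact merged code:

        int pud_set_huge(pud_t *pud, phys_addr_t phys, pgprot_t prot)
        {
                /* install a PUD-sized block entry (1 GB with 4 KB pages) */
                BUG_ON(phys & ~PUD_MASK);
                set_pud(pud, __pud(phys | PUD_TYPE_SECT | pgprot_val(mk_sect_prot(prot))));
                return 1;
        }

        int pmd_set_huge(pmd_t *pmd, phys_addr_t phys, pgprot_t prot)
        {
                /* install a PMD-sized block entry (2 MB with 4 KB pages) */
                BUG_ON(phys & ~PMD_MASK);
                set_pmd(pmd, __pmd(phys | PMD_TYPE_SECT | pgprot_val(mk_sect_prot(prot))));
                return 1;
        }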
  8. 16 Feb, 2016 11 commits
  9. 12 Dec, 2015 1 commit
    • arm64: mm: ensure that the zero page is visible to the page table walker · 32d63978
      Will Deacon authored
      In paging_init, we allocate the zero page, memset it to zero and then
      point TTBR0 to it in order to avoid speculative fetches through the
      identity mapping.
      
      In order to guarantee that the freshly zeroed page is indeed visible to
      the page table walker, we need to execute a dsb instruction prior to
      writing the TTBR.
      
      Cc: <stable@vger.kernel.org> # v3.14+, for older kernels need to drop the 'ishst'
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      32d63978
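      The fix boils down to one barrier between zeroing the page and installing it;
      a hedged sketch of the relevant fragment of paging_init(), with surrounding
      code elided:

        void *zero_page = early_alloc(PAGE_SIZE);   /* early_alloc() returns zeroed memory */

        empty_zero_page = virt_to_page(zero_page);

        /*
         * The zeroing used normal cacheable stores; a DSB is required so the
         * page table walker is guaranteed to observe the zeroed page before
         * TTBR0 is pointed at it.
         */
        dsb(ishst);

        /* point TTBR0 at the zero page to block speculative walks via the idmap */
        cpu_set_reserved_ttbr0();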
  10. 11 Dec, 2015 1 commit
  11. 10 Dec, 2015 1 commit
  12. 01 Dec, 2015 2 commits
  13. 26 Nov, 2015 1 commit
    • Revert "arm64: Mark kernel page ranges contiguous" · 667c2759
      Catalin Marinas authored
      This reverts commit 348a65cd.
      
      Incorrect page table manipulation that does not respect the ARM ARM
      recommended break-before-make sequence may lead to TLB conflicts. The
      contiguous PTE patch makes the system even more susceptible to such
      errors by changing the mapping from a single page to a contiguous range
      of pages. An additional TLB invalidation would reduce the risk window,
      however, the correct fix is to switch to a temporary swapper_pg_dir.
      Once the correct workaround is done, the reverted commit will be
      re-applied.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Jeremy Linton <jeremy.linton@arm.com>
      667c2759
  14. 25 Nov, 2015 1 commit
  15. 18 Nov, 2015 1 commit
  16. 17 Nov, 2015 1 commit
    • arm64: mm: use correct mapping granularity under DEBUG_RODATA · 4fee9f36
      Ard Biesheuvel authored
      When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA
      and resides at an offset that is not a multiple of 512 MB, the rounding
      that occurs in __map_memblock() and fixup_executable() results in
      incorrect regions being mapped.
      
      The following snippet from /sys/kernel/debug/kernel_page_tables shows
      how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000,
      the first 2 MB of memory (which may be inaccessible from non-secure EL1
      or just reserved by the firmware) is inadvertently mapped into the end of
      the module region.
      
        ---[ Modules start ]---
        0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
        ---[ Modules end ]---
        ---[ Kernel Mapping ]---
        0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
        0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
        0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
        0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
        0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
        0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL
      
      The same issue is likely to occur on 16k pages kernels whose load
      address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to
      SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.
      
      Fixes: da141706 ("arm64: add better page protections to arm64")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Cc: <stable@vger.kernel.org> # 4.0+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4fee9f36
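      The substance of the fix is the rounding granularity used for the executable
      kernel region in __map_memblock()/fixup_executable(); a sketch, with variable
      names approximated:

        /*
         * Round to the block size the swapper page tables actually map with.
         * Rounding to SECTION_SIZE (512 MB on a 64 KB page kernel) pulls in
         * unrelated memory around the image when the load address is not
         * section aligned, which is how the stray module-region mapping
         * shown above was created.
         */
        unsigned long kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE);
        unsigned long kernel_x_end   = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE);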
  17. 12 Nov, 2015 1 commit
  18. 09 Nov, 2015 2 commits
  19. 20 Oct, 2015 1 commit
  20. 09 Oct, 2015 1 commit
    • arm64: Mark kernel page ranges contiguous · 348a65cd
      Jeremy Linton authored
      With 64k pages, the next larger segment size is 512M. The Linux
      kernel also uses different protection flags to cover its code and data.
      Because of this requirement, the vast majority of the kernel code and
      data structures end up being mapped with 64k pages instead of the larger
      pages common with a 4k page kernel.
      
      Recent ARM processors support a contiguous bit in the
      page tables which allows a single TLB entry to cover a range larger
      than a single PTE if that range is mapped into physically contiguous
      RAM.
      
      So, for the kernel it's a good idea to set this flag. Some basic
      micro benchmarks show it can significantly reduce the number of
      L1 dTLB refills.
      
      Add boot option to enable/disable CONT marking, as well as fix a
      bug found by Steve Capper.
      Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
      [catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      348a65cd
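      One way the contiguous-hint marking can be expressed, assuming the PTE_CONT
      and CONT_* definitions from the arm64 headers; this is an illustration of the
      idea, not the reverted patch itself:

        /*
         * If a naturally aligned run of CONT_PTES entries (e.g. 32 x 64 KB =
         * 2 MB on a 64 KB page kernel) maps physically contiguous RAM with
         * identical attributes, every PTE in the run may carry PTE_CONT so
         * the TLB can cache the whole range as a single entry.
         */
        if (((addr | phys) & ~CONT_MASK) == 0 && (end - addr) >= CONT_SIZE)
                prot = __pgprot(pgprot_val(prot) | PTE_CONT);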
  21. 07 Oct, 2015 1 commit
  22. 28 Jul, 2015 1 commit
    • arm64: mm: mark create_mapping as __init · c53e0baa
      Mark Rutland authored
      Currently create_mapping is marked with __ref, apparently because it
      refers to early_alloc. However, create_mapping has no logic to prevent
      erroneous use of early_alloc after it has been freed, and is only ever
      called by __init functions anyway. Thus the __ref marker is misleading
      and unnecessary.
      
      Instead, this patch marks create_mapping as __init, resulting in
      warnings if it is used from a non-__init function, and allowing its
      memory to be reclaimed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c53e0baa
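      The change itself is just the annotation on the definition; roughly, with the
      parameter list approximated:

        /*
         * Was:  static void __ref  create_mapping(...)
         * Now:  static void __init create_mapping(...)
         *
         * __ref merely silences section-mismatch checking; __init restores the
         * warnings for calls from non-__init code and lets the function's text
         * be reclaimed after boot.
         */
        static void __init create_mapping(phys_addr_t phys, unsigned long virt,
                                          phys_addr_t size, pgprot_t prot);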
  23. 27 Jul, 2015 1 commit
  24. 01 Jul, 2015 1 commit